21st EUROPEAN SYMPOSIUM ON COMPUTER AIDED PROCESS ENGINEERING
COMPUTER-AIDED CHEMICAL ENGINEERING
Advisory Editors: R. Gani and E.N. Pistikopoulos
Volume 1: Distillation Design in Practice (L.M. Rose)
Volume 2: The Art of Chemical Process Design (G.L. Wells and L.M. Rose)
Volume 3: Computer Programming Examples for Chemical Engineers (G. Ross)
Volume 4: Analysis and Synthesis of Chemical Process Systems (K. Hartmann and K. Kaplick)
Volume 5: Studies in Computer-Aided Modelling, Design and Operation
Part A: Unit Operations (I. Pallai and Z. Fonyó, Editors)
Part B: Systems (I. Pallai and G.E. Veress, Editors)
Volume 6: Neural Networks for Chemical Engineers (A.B. Bulsari, Editor)
Volume 7: Material and Energy Balancing in the Process Industries - From Microscopic Balances to Large Plants (V.V. Veverka and F. Madron)
Volume 8: European Symposium on Computer Aided Process Engineering-10 (S. Pierucci, Editor)
Volume 9: European Symposium on Computer Aided Process Engineering-11 (R. Gani and S.B. Jørgensen, Editors)
Volume 10: European Symposium on Computer Aided Process Engineering-12 (J. Grievink and J. van Schijndel, Editors)
Volume 11: Software Architectures and Tools for Computer Aided Process Engineering (B. Braunschweig and R. Gani, Editors)
Volume 12: Computer Aided Molecular Design: Theory and Practice (L.E.K. Achenie, R. Gani and V. Venkatasubramanian, Editors)
Volume 13: Integrated Design and Simulation of Chemical Processes (A.C. Dimian)
Volume 14: European Symposium on Computer Aided Process Engineering-13 (A. Kraslawski and I. Turunen, Editors)
Volume 15: Process Systems Engineering 2003 (Bingzhen Chen and A.W. Westerberg, Editors)
Volume 16: Dynamic Model Development: Methods, Theory and Applications (S.P. Asprey and S. Macchietto, Editors)
Volume 17: The Integration of Process Design and Control (P. Seferlis and M.C. Georgiadis, Editors)
Volume 18: European Symposium on Computer-Aided Process Engineering-14 (A. Barbosa-Póvoa and H. Matos, Editors)
Volume 19: Computer Aided Property Estimation for Process and Product Design (M. Kontogeorgis and R. Gani, Editors)
Volume 20: European Symposium on Computer-Aided Process Engineering-15 (L. Puigjaner and A. Espuña, Editors)
Volume 21: 16th European Symposium on Computer Aided Process Engineering and 9th International Symposium on Process Systems Engineering (W. Marquardt and C. Pantelides)
Volume 22: Multiscale Modelling of Polymer Properties (M. Laso and E.A. Perpète)
Volume 23: Chemical Product Design: Towards a Perspective through Case Studies (K.M. Ng, R. Gani and K. Dam-Johansen, Editors)
Volume 24: 17th European Symposium on Computer Aided Process Engineering (V. Plesu and P.S. Agachi, Editors)
Volume 25: 18th European Symposium on Computer Aided Process Engineering (B. Braunschweig and X. Joulia, Editors)
Volume 26: 19th European Symposium on Computer Aided Process Engineering (Jacek Jeżowski and Jan Thullie, Editors)
Volume 27: 10th International Symposium on Process Systems Engineering (Rita Maria de Brito Alves, Claudio Augusto Oller do Nascimento and Evaristo Chalbaud Biscaia, Editors)
Volume 28: 20th European Symposium on Computer Aided Process Engineering (S. Pierucci and G. Buzzi Ferraris, Editors)
COMPUTER-AIDED CHEMICAL ENGINEERING, 29
21st EUROPEAN SYMPOSIUM ON COMPUTER AIDED PROCESS ENGINEERING PART – A
Edited by
E.N. Pistikopoulos Imperial College London, UK
M.C. Georgiadis Aristotle University of Thessaloniki, Greece
A.C. Kokossis National Technical University of Athens, Greece
Amsterdam – Boston – Heidelberg – London – New York – Oxford Paris – San Diego – San Francisco – Singapore – Sydney – Tokyo
Elsevier
Radarweg 29, PO Box 211, 1000 AE Amsterdam, The Netherlands
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, UK
First edition 2011
Copyright © 2011 Elsevier B.V. All rights reserved.
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means electronic, mechanical, photocopying, recording or otherwise without the prior written permission of the publisher.
Permissions may be sought directly from Elsevier's Science & Technology Rights Department in Oxford, UK: phone (+44) (0) 1865 843830; fax (+44) (0) 1865 853333; email: [email protected]. Alternatively you can submit your request online by visiting the Elsevier web site at http://elsevier.com/locate/permissions, and selecting Obtaining permission to use Elsevier material.
Notice No responsibility is assumed by the publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein.
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library
Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress
ISBN (Part A): 978-0-444-53711-9
ISBN (Set): 978-0-444-53895-6
For information on all Elsevier publications visit our web site at elsevierdirect.com
Printed and bound in Great Britain 11 12 10 9 8 7 6 5 4 3 2 1
Contents
Preface
xxxiii
Members of the International Scientific Committee
xxxv
Multiscale Modeling
Detailed Mathematical Modelling of Liquid-Liquid Extraction Columns Moutasem Jaradat, Menwer Attarakih and Hans-Jörg Bart
1
Multi-Scale modelling of a membrane reforming power cycle with CO2 capture Øivind Wilhelmsen, Rahul Anantharaman, David Berstad and Kristin Jordal
6
Modeling the liquid back mixing characteristics for a kinetically controlled reactive distillation process Mayank Shah, Edwin Zondervan, Anton A. Kiss, Andre B. de Haan
11
Application of computer-aided multi-scale modelling framework – Aerosol case study Martina Heitzig, Christopher Gregson, Gürkan Sin, Rafiqul Gani
16
Sensitivity of shrinkage and collapse functions involved in pore formation during drying Seddik Khalloufi, Cristhian Almeida-Rivera, Jo Jansen, Marcel Van-Der-Vaart, and Peter Bongers
21
A reduced-order approach of Distributed parameter models using Proper orthogonal decomposition M. Valbuena, D. Sarabia, C. de Prada
26
A Process Unit Modeling Framework within a Heterogeneous Simulation Environment Ingo Thomas
31
Mathematical description of mass transfer in supercritical-carbon-dioxide-drying processes Cristhian Almeida-Rivera, Seddik Khalloufi, Jo Jansen and Peter Bongers
36
Three-moments conserving sectional techniques for the solution of coagulation and breakage population balances Margaritis Kostoglou, Michalis C. Georgiadis
41
Modelling and Simulation of Forced Convection Drying of Electric Insulators Cristea Vasile-Mircea, Goga Firuta, Mogos Liviu Mihai
46
Comprehensive Mathematical Modeling of Controlled Radical Copolymerization in Tubular Reactors Mariano Asteasuain, Daniel Covan, Claudia Sarmoria, Adriana Brandolin, Carolina Leite de Araujo, José Carlos Pinto
51
An Efficient High Resolution FEM for PDE Systems Duc Hoang Minh, Harvey Arellano-Garcia, Lorenz T. Biegler
56
Simulation of Reactive Absorption: Model Validation for CO2-MEA system Chinmay Kale, Inga Tönnies, Hans Hasse, Andrzej Górak
61
A CFD-Population Balance Model for the Simulation of Kühni Extraction Column Mark W. Hlawitschka, Moutasem Jaradat, Fang Chen, Menwer M. Attarakih, Jörg Kuhnert, Hans-Jörg Bart
66
CFD Study on the Application of Rotary Kiln in Pyrolysis Ka-Leung Lam, Adetoyese O. Oyedun, Chi-Wai Hui
71
Towards a Generic Simulation Environment for Multiscale Modelling based on Tool Integration Yang Zhao, Cheng Jiang, Aidong Yang
76
Integral Formulation of the Population Balance Equation using the Cumulative QMOM Menwer Attarakih, M. Jaradat, M. Hlawitschka, H.-J. Bart, J. Kuhnert
81
Integration of Generic Multi-dimensional Model and Operational Policies for Batch Cooling Crystallization Noor Asma Fazli Abdul Samad, Ravendra Singh, Gürkan Sin, Krist V. Gernaey, Rafiqul Gani
86
A Multi-scale Systems Approach to Granulation Process Design Rohit Ramachandran
91
Multi-scale modeling of activated sludge floc structure formation in wastewater bioreactors Irina D. Ofiţeru, Micol Bellucci, Vasile Lavric, Cristian Picioreanu, Thomas P. Curtis
96
A Multi-layered Ontology for Physical-Chemical-Biological Processes Heinz A Preisig
101
Modeling and simulation of a gas cleaning section in a Cu/Ni metallurgical plant Mirnes Alic, Tor Anders Hauge, Bernt Lie
106
A novel approach to the biomass pyrolysis step and product lumping Daniele Bernocco, Paolo Greppi, Elisabetta Arato
111
Towards a rigorous model of electrodialysis processes Matthias Johannink, Adel Mhamdi, Wolfgang Marquardt
116
Stochastic Monte Carlo Simulations as an Efficient Multi-Scale Modeling Tool for the Prediction of Multi-Variate Distributions Dimitrios Meimaroglou, Costas Kiparissides
121
Modeling of a batch emulsion copolymerization reactor in the presence of a chain transfer agent: estimability analysis, parameters identification and experimental validation. B. Benyahia, M. A. Latifi, C. Fonteix, F. Pla
126
Multiscale modeling of chemical vapor deposition of silicon Nikolaos Cheimarios, Sokratis Garnelis, George Kokkoris, Andreas G. Boudouvis
131
3D Cellular automata for modeling of spray freeze drying process S. Ivanov, A. Troyankin, P. Gurikov, A. Kolnoochenko, N. Menshutina
136
Spatially 3D simulation of a catalytic monolith by coupling of 1D channel model with CFD Jan Štěpánek, Petr Kočí, Milan Kubíček, František Plát, Miloš Marek
141
A generic framework for stochastic dynamic simulation of chemical engineering systems using free/opensource software Carl Sandrock and Philip de Vaal
146
Modelling of micro- and nano-patterned electrodes for the study and control of spillover processes in catalysis I. Bonis, S. Valiño-Pazos, I.S. Fragkopoulos, C. Theodoropoulos
151
Multiscale Modeling of a Silicon Solar Wafer Manufacturing Process Ruochen Liu, German Oliveros, Seetharaman Sridhar, B. Erik Ydstie
156
Process modelling and model reduction for chemical engineering applications Bogdan Dorneanu, Johan Grievink, Costin S. Bildea
161
General-purpose graphics processing units application for diffusion simulation using cellular automata A. Kolnoochenko, P. Gurikov, N. Menshutina
166
Mercury Transformation Modelling with Bromine Addition in Coal Derived Flue Gases Kevin J. Hughes, Lin Ma, Richard T.J. Porter and Mohamed Pourkashanian
171
Synthesis and Design
Optimal design of multiple dividing wall columns based on genetic programming Fernando I. Gómez-Castro, Mario A. Rodríguez-Ángeles, Juan G. Segovia-Hernández, Claudia Gutiérrez-Antonio, Abel Briones-Ramírez
176
Retrofit design of a pharmaceutical batch process considering green chemistry and engineering principles Alireza Banimostafa, Stavros Papadokonstantakis, Konrad Hungerbühler
181
Design and control of an energy integrated biodiesel process Anton A. Kiss, Costin Sorin Bildea
186
A systematic approach towards applicability of reactive distillation Anton A. Kiss, Prachi Singh, Cornald J. G. van Strien
191
Strategies for the Robust Simulation of Thermally Coupled Distillation Sequences Miguel A. Navarro, José A. Caballero, Ignacio E. Grossmann
196
Spatiotemporal pattern formation in an electrochemical membrane reactor during deep CO removal from reformate gas Richard Hanke-Rauschenbach, Sebastian Kirsch and Kai Sundmacher
201
Optimization of Design and Operation of Reverse Osmosis Based Desalination Process Using MINLP Approach Incorporating Fouling Effect Kamal M. Sassi, Iqbal. M. Mujtaba
206
Logic-Sequential Approach to the Synthesis of Complex Thermally Coupled Distillation Systems. José A. Caballero, Ignacio E. Grossmann
211
Computer Aided Design and Analysis of Continuous Pharmaceutical Manufacturing Processes Fani Boukouvala, Rohit Ramachandran, Aditya Vanarase, Fernando J. Muzzio, Marianthi G. Ierapetritou
216
Phenomena-based Process Synthesis and Design to achieve Process Intensification Philip Lutze, Rafiqul Gani, John M. Woodley
221
A Novel Process Design for the Hydroformylation of Higher Alkenes. Michael Müller, Victor Alejandro Merchan, Harvey Arellano-Garcia, Reinhard Schomäcker, Günter Wozny
226
Flowsheet Optimization by Memetic Algorithms Maren Urselmann, Sebastian Engell
231
Biomass to chemicals: Design of an extractive reaction process for the production of 5-hydroxymethylfurfural Ana I. Torres, Prodromos Daoutidis, Michael Tsapatsis
236
A strategy to extend reactive distillation column performance under catalyst deactivation Rui M. Filipe, Henrique A. Matos, Augusto Q. Novais
241
Separation Circuits Analysis and Design, Using Sensitivity Analysis Freddy Lucay, Mario E. Mellado, Luis A. Cisternas, Edelmira D. Gálvez
246
Feasibility of reactive pressure swing batch distillation in a double column configuration Gabor Modla
251
Lipid Processing Technology: Building a Multilevel Modeling Network Carlos A. Diaz-Tovar, Azizul A. Mustaffa, Amol Hukkerikar, Alberto Quaglia, Gürkan Sin, Georgios Kontogeorgis, Bent Sarup, Rafiqul Gani
256
Enhancement of Productivity of Distillate Fractions by Crude Oil Hydrotreatment: Development of Kinetic Model for the Hydrotreating Process Aysar T. Jarullah, Iqbal M. Mujtaba, and Alastair S. Wood
261
Modeling and design of reacting systems with phase transfer catalysis Chiara Piccolo, George Hodges, Patrick M. Piccione, Rafiqul Gani
266
A systematic methodology for the design of continuous active pharmaceutical ingredient production processes Albert E. Cervera, Rafiqul Gani, Søren Kiil, Tommy Skovby, Krist V. Gernaey
271
Synthesis tool for separation processes in the pharmaceutical industry Ana I. C. Morão, Edwin Zondervan, Gerard Krooshof, Rob Geertman, André B. de Haan
276
New algorithm for the determination of product sequences in azeotropic batch distillation Laszlo Hegely, Peter Lang
281
Designing multi-product biopharmaceutical facilities using evolutionary algorithms Ana S. Simaria, Ying Gao, Richard Turner and Suzanne S. Farid
286
Ravendra Singh, Raquel Rozada-Sanchez, Tim Wrate, Frans Muller, Krist V. Gernaey, Rafiqul Gani
291
Modified Case Based Reasoning cycle for Expert Knowledge Acquisition during Process design. Eduardo Roldán, Stéphane Negny, Jean Marc Le Lann, Guillermo Cortés
296
Integrating process simulation and MINLP methods for the optimal design of absorption cooling systems Juan A. Reyes-Labarta, Robert Brunet, José A. Caballero, Dieter Boer, Laureano Jiménez
301
A method for the design and planning operations of heap leaching circuits Jorcy Y. Trujillo, Mario E. Mellado, Edelmira D. Gálvez, Luis A. Cisternas
306
A data mining approach for efficient systems optimization under uncertainty using stochastic search methods Garyfallos Giannakoudis, Athanasios I. Papadopoulos, Panos Seferlis, Spyros Voutetakis
311
Integrated Design of a Reactor and a Gas-Expanded Solvent Eirini Siougkrou, Amparo Galindo and Claire S. Adjiman
316
Computer Aided Flowsheet Design using Group Contribution Methods Susilpa Bommareddy, Mario R. Eden, Rafiqul Gani
321
A Business Process Model for Process Design that Incorporates Independent Protection Layer Considerations Tetsuo Fuchino, Yukiyasu Shimada, Teiji Kitajima, Kazuhiro Takeda, Rafael Batres, Yuji Naka
326
Conceptual design of glycerol etherification processes Elena Vlad, Costin Sorin Bildea, Elena Zaharia, Grigore Bozga.
331
Dynamic Conceptual Design under Market Uncertainty and Price Volatility Davide Manca, Andrea Fini, Mirko Oliosi
336
Analysis of separation possibilities of multicomponent mixtures Laszlo Szabo, Sandor Nemeth, Ferenc Szeifert
341
A Computer tool for the development of poly(lactic acid) synthesis process from renewable feedstock for biomanufacturing Guillermo A. R. Martinez, Astrid J. R. Lasprilla, Betânia H. Lunelli, André L. Jardini, Rubens Maciel Filho
346
Robust optimisation methodology for the process synthesis of continuous technologies Mayank P. Patel, Nilay Shah, Robert Ashe
351
A Shortcut Design for Kaibel Columns Based on Minimum Energy Diagrams Maryam Ghadrdan, Ivar J. Halvorsen, Sigurd Skogestad
356
A superstructure optimization approach for optimal refinery water network systems synthesis with membrane-based regenerators Cheng Seong Khor, Nilay Shah
361
New generalised double-column system for batch heteroazeotropic distillation Ferenc Denes, Peter Lang, Xavier Joulia
366
Design of an Optimal Biorefinery Mehboob Nawaz, Edwin Zondervan, John Woodley and Rafiqul Gani
371
A Novel Design Concept for the Oxidative Coupling of Methane Using Hybrid Reactors Stanislav Jašo, Harvey Arellano-Garcia, Günter Wozny
377
Comparison of Extractive and Pressure-Swing Batch Distillation for Acetone-Methanol Separation Gabor Modla and Peter Lang
382
Constructive nonlinear dynamics for reactor network synthesis with guaranteed robust stability Xiao Zhao, Wolfgang Marquardt
387
Systems Analysis of Benign Hydrogen Peroxide Synthesis in Supercritical CO2 Deborah B. Bacik, Wei Yuan, Christopher B. Roberts, Mario R. Eden
392
Design of pervaporation modules based on computational process modelling Patrick Schiffmann, Jens-Uwe Repke
397
Surrogate-based VSA Process Optimization for Post-Combustion CO2 Capture M. M. Faruque Hasan, I. A. Karimi, S. Farooq, A. Rajendran, M. Amanullah
402
Design of flexible process flow sheets with a large number of uncertain parameters Mihael Kasaš, Zdravko Kravanja, Zorka Novak Pintarič
407
A Design methodology for Internally Heat-Integrated Distillation Columns (IHIDiC) with side condensers and side reboilers (SCSR) Sankari Maddu, Ranjan K Malik
412
Development of a synthesis tool for Gas-To-Liquid complexes. Jan van Schijndel, Nort Thijssen, Govert Baak, Abhijeet Avhale, Jerome Ellepola, Johan Grievink
417
Pareto-Navigation in Chemical Engineering Norbert Asprion, Sergej Blagov, Oliver Ryll, Richard Welke, Anton Winterfeld, Agnes Dittel, Michael Bortz, Karl-Heinz Küfer, Jakob Burger, Andreas Scheithauer, Hans Hasse
422
Optimization and Control
Integration of ontology and knowledge-based optimization in process synthesis applications Franjo Cecelja, Antonis Kokossis, Du Du
427
Feasibility analysis of black-box processes using an adaptive sampling kriging based method Fani Boukouvala, Fernando J. Muzzio, Marianthi G. Ierapetritou
432
Multiobjective Optimization for Plastic Sheet Production M. Rivera-Toledo, G. Meneses-Castellanos, and A. Flores-Tlacuahuac
437
Systematic identification and robust control design for uncertain time delay processes Jakob K. Huusom, Niels K. Poulsen, Sten B. Jørgensen, John B. Jørgensen
442
Control and dynamic optimization of a BTX dividing-wall column Anton A. Kiss, Rohit R. Rewagad
447
Process Dynamic Optimization Using ROMeo Flavio Manenti, Guido Buzzi-Ferraris, Sauro Pierucci, Maurizio Rovaglio, Harpreet Gulati
452
Model based optimisation of a cyclic reactor for the production of hydrogen Filip Logist, Joost Lauwers, Benoît Trigaux, Jan F. Van Impe
457
Multi-objective optimisation approach to optimal experiment design in dynamic bioprocesses using ACADO toolkit Filip Logist, Dries Telen, Eva Van Derlinden, Jan F. Van Impe
462
A disturbance estimation approach for online model-based redesign of experiments in the presence of systematic errors F. Galvanin, M. Barolo, G. Pannocchia and F. Bezzo
467
A Semidefinite Programming Approach to Portfolio Optimization Raquel J. Fonseca, Wolfram Wiesemann, Berç Rustem
472
Increase the catalytic cracking process efficiency by implementation an optimal control structure. Case study Cristina Popa, Cristian Pătrășcioiu
477
Experimental Evaluation of a Robust NMPC Strategy for an Unstable Nonlinear Process Udo Schubert, Andreas Lange, Harvey Arellano-Garcia, Günter Wozny
482
Economic Plantwide Control of C4 Isomerization Process Rahul Jagtap, Sonam Goenka, Nitin Kaistha
487
Application of Graphic Processing Unit in Model Predictive Control Arash Sadrieh, Parisa A. Bahri
492
Statistical Process Control of Multivariate Systems with Autocorrelation Tiago J. Rato, Marco S. Reis
497
Implementation of model predictive controller in a pharmaceutical development plant Stéphane Hattou, Marie-Véronique Le Lann, Karlheinz Preuss, Boris Roussel, Michel Cabassud
502
A Hybrid Branch-and-Cut Approach for the Capacitated Vehicle Routing Problem Chrysanthos E. Gounaris, Panagiotis P. Repoussis, Christos D. Tarantilis, and Christodoulos A. Floudas
507
Design of robust PID controller for processes with stochastic uncertainties Pham L. T. Duong, Moonyong Lee
512
MPC vs. PID. The advanced control solution for an industrial heat integrated fluid catalytic cracking plant Mihaela Iancu, Mircea V. Cristea, Paul S. Agachi
517
Plantwide Control of a Cumene Manufacture Process Vivek Gera, Nitin Kaistha, Mehdi Panahi, Sigurd Skogestad
522
A robust optimization based approach to the general solution of mp-MILP problems Martina Wittmann-Hohlbein, Efstratios N. Pistikopoulos
527
A deterministic optimization approach for the unit commitment problem Marian G. Marcovecchio, Augusto Q. Novais, Ignacio E. Grossmann
532
Tight Convex and Concave Relaxations via Taylor Models for Global Dynamic Optimization Ali M. Sahlodin and Benoît Chachuat
537
Simulation-based dynamic optimization of discretely controlled continuous processes Mariano De Paula, Ernesto Martínez
542
Evaluation of Steady State Multiplicity for the Anaerobic Degradation of Solid Organic Waste Mihaela Sbarciog, Andres Donoso-Bravo, Alain Vande Wouwer
547
Towards global optimization of combined distillation-crystallization processes for the separation of closely boiling mixtures Martin Ballerstein, Achim Kienle, Christian Kunde, Dennis Michaels, Robert Weismantel
552
Time Optimal Control of Particle Size Distribution in Emulsion Polymerization Ahmad Mansour, Ala Eldin Bouaswaig, Sebastian Engell
557
Multi-objective optimization of three-phase batch extractive distillation Alien Arias Barreto, Ivonne Rodriguez Donis, V. Gerbaud, X. Joulia
562
Integrating Graph-based Representation and Genetic Algorithm for Large-Scale Optimization: Refinery Crude Oil Scheduling Manojkumar Ramteke, Rajagopalan Srinivasan
567
Self-adaptive Differential Evolution with Taboo List for Constrained Optimization Problems and Its Application to Pooling Problems Haibo Zhang and G. P. Rangaiah
572
Disturbance Estimation via Moving Horizon Estimation for In-flight Model-based Wind Estimation Anna Voelker, Konstantinos Kouramas, Christos Panos, Efstratios N. Pistikopoulos
577
Deterministic global optimization of kinetic models of metabolic networks: outer approximation vs. spatial branch and bound Carlos Pozo, Gonzalo Guillén-Gosálbez, Albert Sorribas, Laureano Jiménez
582
Optimal Grade Transitions in an Industrial Slurry-Phase Catalytic Olefin Polymerization Loop-Reactor Series Vassileios Touloupides, Vassileios Kanellopoulos, Christos Chatzidoukas, and Costas Kiparissides
587
Nonlinear State Estimation with Delayed Measurements. Application to Polymer Processes Ruben Galdeano, Mariano Asteasuain, Mabel C. Sanchez
592
Optimal controlled variable selection using a nonlinear simulation-optimization framework Mahdi Sharifzadeh, Nina F. Thornhill
597
Branch-and-Sandwich: An Algorithm for Optimistic Bi-Level Programming Problems Polyxeni M. Kleniati, Claire S. Adjiman
602
Comparison of Gradient Estimation Methods for Real-time Optimization Bala Srinivasan, Grégory François and Dominique Bonvin
607
Multiobjective optimization of the pulp/water storage towers in design of paper production systems Aino Ropponen, Miika Rajala, Risto Ritala
612
Combined nonlinear model reduction and multiparametric nonlinear programming for nonlinear model predictive control Pedro Rivotti, Romain S.C. Lambert, Luis Dominguez, Efstratios N. Pistikopoulos
617
Multi-Model MPC for Nonlinear Systems: Case Study of a Complex pH Neutralization Process Weiting Tang, M. Nazmul Karim
622
Integrated Design and Control of Pressure Swing Adsorption Systems Harish Khajuria, Efstratios N. Pistikopoulos
628
A robust MILP-based approach to vehicle routing problems with uncertain demands A. Aguirre, M. Coccola, M. Zamarripa, C. Méndez and A. Espuña
633
An Improved Formulation for the Process Control Structure Selection based on Economics Problem Andreas Psaltis, Ioannis K. Kookos, Costas Kravaris
638
Software application for intelligent control of a bioprocess. Case study Cristina Tănase, Mihai Caramihai, Camelia Ungureanu, Gheorghe Sârbu, Ana Aurelia Chirvase, Ovidiu Muntean
643
Integration of a multilevel control system in an ontological information environment Edrisi Muñoz, Antonio Espuña, Luis Puigjaner
648
Control Structure Selection with Regard to Stationary and Dynamic Performance with Application to A Ternary Distillation Column Le Chi Pham, Sebastian Engell
653
The Coulomb Glass – Modeling and Computational Experience with a Large Scale 0-1 QP Problem Ray Pörn, Otto Nissfolk, Fredrik Jansson and Tapio Westerlund
658
Reliable optimal control of a fed-batch fermentation process using ant colony optimisation and bootstrap aggregated neural network models Jie Zhang, Yiting Feng, Mahmood Hilal Al-Mahrouqi
663
Integrated process and control design by the normal vector approach: Application to the Tennessee-Eastman process Diego A. Muñoz, Johannes Gerhard, Ralf Hannemann, Wolfgang Marquardt
668
Calibration of a polyethylene plant model for grade change optimisations Niklas Andersson, Per-Ola Larsson, Johan Åkesson, Staffan Haugwitz, Bernt Nilsson
673
Membrane process optimization for hydrogen peroxide ultrapurification Ricardo Abejón, Aurora Garea, Angel Irabien
678
Dynamic optimization of porous media combustor using a greybox neural model and NMPC technique Luis Henríquez-Vargas, Valeri Bubnovich and Francisco Cubillos
683
Monte Carlo Assessment of the Arrival Cost Evaluation Method in Moving Horizon Estimation for Chemical Processes Rincón Cuellar, F.D., Hirota, W.H., Giudici, R., Le Roux, G.A.C.
688
Adaptive Advanced Control of a Copolymerization System Nádson M. N. Lima, Lamia Zuñiga Liñan, Flavio Manenti, Rubens Maciel Filho, Marcelo Embiruçu, Maria R. Wolf Maciel
693
Control of processes with multiple steady states using MPC and RBF neural networks Alex Alexandridis, Haralambos Sarimveis
698
Simulation Optimization of Cost, Safety and Displacements in a Construction Design Eleftherios-Stamatios Telis, George Besseris, Constantinos Stergiou
703
Methodologies for input-output data exchange between LabVIEW® and MATLAB®/Simulink® software for Real Time Control of a Pilot Scale Distillation Process Alexandre J. S. Chambel, Carla I.C. Pinheiro, José Borges, João M. Silva
708
Plant-wide optimisation and control of a multi-scale pharmaceutical process Mayank P. Patel, Nilay Shah, Robert Ashe
713
Optimization of Hybrid Reactive Distillation-Pervaporation System Vinay Amte
718
Dynamic Modeling and Optimization of Flash Separators for Highly-Viscous Polymerization Processes Prokopis Pladis, Vassileios Kanellopoulos, Apostolos Baltsas and Costas Kiparissides
723
Role of MPC in Building Climate Control Samuel Prívara, Zdeněk Váňa, Jiří Cigler, Frauke Oldewurtel and Josef Komárek
728
Efficient Computation of First- and Second-Order Sensitivities Using an Internal Forward Differentiation Scheme T. Barz, L. Zhu, G. Wozny, H. Arellano-Garcia
733
A novel approximation technique for online and multi-parametric model predictive control Romain S.C. Lambert, Pedro Rivotti, E.N. Pistikopoulos
738
Multi-Parametric Model Predictive Control of an Automated Integrated Fuel Cell Testing Unit Chrysovalantou Ziogou, Christos Panos, Konstantinos I. Kouramas, Simira Papadopoulou, Michael C. Georgiadis, Spyros Voutetakis, Efstratios N. Pistikopoulos
743
Use of commercial structured databases as innovative solution for FEED projects Fabio Ferrari, Lorenzo Selmi
748
Controlled Variables from Optimal Operation Data Johannes Jäschke, Sigurd Skogestad
753
Optimization of IMC-PID Tuning Parameters for Adaptive Control: Part 1 Chih-Wei Chua, B. Erik Ydstie, Nikolaos V. Sahinidis
758
System identification using wavelet analysis Zdeněk Váňa, Samuel Prívara, Jiří Cigler and Heinz A. Preisig
763
Robust Reallocation and Upgrade of Sensor Networks for Fault Diagnosis Suryanarayana Kolluri and Mani Bhushan
768
Explicit/Multi-Parametric Model Predictive Control of a Solid Oxide Fuel Cell Kostas Kouramas, Petar S. Varbanov, Michael C. Georgiadis, Jiří J. Klemeš, Efstratios N. Pistikopoulos
773
A Reformulation Scheme for Parameter Estimation of Hybrid Systems Ines Mynttinen and Pu Li
778
Dynamic optimization of bioreactors using probabilistic tendency models and Bayesian active learning Ernesto Martínez, Mariano Cristaldi, Ricardo Grau, Joao Lopes
783
Plantwide Control Design of a Postcombustion CO2 Capture Process Marc-Oliver Schach, Rüdiger Schneider, Henning Schramm, Jens-Uwe Repke
788
A theoretically rigorous approach to soft sensor development using Principal Components Analysis C.K. Naveen Karthik, Shankar Narasimhan
793
Approximate Multi-Parametric Programming based B&B Algorithm for MINLPs Taoufiq Gueddar and Vivek Dua
798
Experimental Comparison of Type-1 and Type-2 Fuzzy Logic Controllers for the Control of Level and Temperature in a Vessel B. Cosenza, M. Galluzzo
803
Simulation-based Dynamic Optimization under Uncertainty of an Industrial Biological Process Guillermo A. Durand, Aníbal M. Blanco, Fernando D. Mele, J. Alberto Bandoni
808
Parallel Solution of Large-Scale Dynamic Optimization Problems Carl D. Laird, Angelica V. Wong, Johan Akesson
813
Optimization of simulated moving bed chromatography with fractionation and feedback incorporating an enrichment step Suzhou Li, Yoshiaki Kawajiri, Jörg Raisch, Andreas Seidel-Morgenstern
818
Tuning a Distillation Column Simulator Kurt E. Häggblom and Ramkrishna K. Ghosh
823
A Comparative Study of MPC-Based Control Configurations of an Industrial Bioreactor to Produce Ethanol Aarón Romo-Hernández, Salvador Hernández, Arturo Sánchez, Héctor Hernández-Escoto
828
Control of an azeotropic distillation process to acetonitrile production Andrea Ruiz Ruiz, Nelson Borda Beltrán, Alexander Leguizamón R., Javier R. Guevara L., Ivan D. Gil C.
833
Optimal Temperature Tracking of a Solid State Fermentation Reactor C. González-Figueredo, O.R. Ayala, S. Aguilar, O. Aroche, A. Loukianov, A. Sánchez
839
Receding Nonlinear Kalman (RNK) Filter for Nonlinear Constrained State Estimation Raghunathan Rengaswamy, Shankar Narasimhan, Vidyashankar Kuppuraj
844
Free Radicals Copolymerization Optimization, System: Acrylonitrile-Vinyl Acetate in CSTR S.V. Vallecillo-Gómez, J.C. Tapia-Picazo, A. Bonilla-Petriciolet, G.G. DeAlba-Pérez-de-Gracia
849
Convex optimization for shape manipulation of multidimensional crystal particles Naim Bajcinca, Ricardo Perl, Kai Sundmacher
855
A Worst-Case Observer for Impurities in Enantioseparation by Preferential Crystallization Steffen Hofmann, Matthias Eicke, Martin Peter Elsner, Andreas Seidel-Morgenstern, Jörg Raisch
860
Production Operations
A Simulated Annealing Approach for the Bi-Objective Design and Scheduling of Multipurpose Batch Plants Nelson Chibeles-Martins, Tânia Pinto-Varela, Ana Paula Barbósa-Póvoa, A. Q. Novais
865
Robust Logistics Network Modeling and Design against Uncertainties Yoshiaki Shimizu, Hideaki Fushimi, Takeshi Wada
870
Operating Procedure Synthesis Subject to Restricted State Transition Using Differential Evolution Yoshiaki Shimizu
875
MILP Formulation for Resource-Constrained Project Scheduling Problems Thomas S. Kyriakidis, Georgios M. Kopanos, Michael C. Georgiadis
880
Self-learning of fault diagnosis identification José Luis de la Mata, Manuel Rodríguez
885
Complex Network Optimization in FMCG Ali Mehdizadeh, Nilay Shah, Peter M.M. Bongers, Cristhian Almeida-Rivera
890
Freshwater Production by MSF Desalination Process: Coping with Variable Demand by Flexible Design and Operation Ebrahim A. Hawaidi and Iqbal M. Mujtaba
895
Optimal run length in Factory operations to reduce overall costs Peter Bongers, Cristhian Almeida-Rivera
900
Batch sizing in multi-stage, multi-product batch production systems Norbert Trautmann, Philipp Baumann, Nadine Saner, Tobias Schäfer
905
Decision Support System for Multiproduct Pipeline and Inventory Management Systems Susana Relvas, Ana Paula F.D. Barbosa-Póvoa, Henrique A. Matos, Pedro Pinto
910
Ice Cream Scheduling: Modeling the Intermediate Storage Martijn A.H. van Elzakker, Edwin Zondervan, Cristhian Almeida-Rivera, Ignacio E. Grossmann, Peter M.M. Bongers
915
Production Optimization and Scheduling across a Steel Plant Iiro Harjunkoski, Sleman Saliba, Matteo Biondi
920
Simultaneous Optimization of Planning and Scheduling in an Oil Refinery Edwin Zondervan, Tijn P.J. van Boekel, Jan C. Fransoo, André B. de Haan
925
Efficient Scheduling of Batch Plants Using Reachability Tree Search for Timed Automata with Lower Bound Computations Subanatarajan Subbiah, Christian Schoppmeyer, Sebastian Engell
930
Robust Market Launch Planning for a Multi-Echelon Pharmaceutical Supply Chain Klaus Reinholdt Nyhuus Hansen, Martin Grunow, Rafiqul Gani
935
A new Coordination Heuristic for Plant-wide Planning and Scheduling Chaojun Xu, Christian Staud, Guido Sand, Sebastian Engell
940
Optimization of Closed-Loop Supply Chains under Uncertain Quality of Returns M Isabel Gomes, Luis J Zeballos, Ana P Barbosa-Povoa, Augusto Q Novais
945
Integrated Refinery Planning under Product Demand Uncertainty Edith Ejikeme-Ugwu, Songsong Liu and Meihong Wang
950
Modelling and dynamic optimisation for optimal operation of industrial tubular reactor for propane cracking Mehdi Berreni and Meihong Wang
955
An Efficient Mathematical Framework for Detailed Production Scheduling in Food Industries: The Ice-cream Production Line Georgios M. Kopanos, Luis Puigjaner, Michael C. Georgiadis, Peter M. M. Bongers
960
Corporate Production Planning for Industrial Gas Supply Chains under Low-Demand Conditions Matteo D’Isanto, Flavio Manenti, Nadson M. N. Lima, Lamia Zuniga Linan
965
Standards for Continual Scheduling of Batch Operations Charles Siletti, Demetri Petrides, Dimitri Vardalis
970
New Scheduling Approach for Shared Resources and Mixed Storage Policies Pedro M. Castro, Luis J. Zeballos, Carlos A. Méndez
975
Optimal Scheduling of Multi-Level Tree-Structure Pipeline Networks Diego C. Cafaro, Jaime Cerdá
980
New Tools for the Detailed Scheduling of Refined Products Pipelines Vanina G. Cafaro, Diego C. Cafaro, Carlos A. Méndez, Jaime Cerdá
985
A rigorous mathematical formulation to Automated Wet-Etch Station scheduling with multiple material-handling robots in Semiconductor Manufacturing Systems Adrián M. Aguirre, Carlos A. Méndez, Pedro M. Castro
990
A MILP Planning Model for a Real-world Multiproduct Pipeline Network Suelen N. Boschetto, Leandro Magatão, Flávio Neves-Jr, Ana P.F.D. Barbosa-Póvoa
995
Improving supply chain management in a competitive environment M. Zamarripa, A. M. Aguirre, C. A. Méndez and A. Espuña
1000
Optimal Scheduling of Biodiesel Plants through Property-based Integration with Oil Refineries Vasiliki Kazantzi, Stella Bezergianni, Rene’ Elms, Fadwa Eljack, and Mahmoud M. El-Halwagi
1005
Integration of financial statement analysis in the optimal design and operation of supply chain networks Pantelis Longinidis, Michael C. Georgiadis, Panagiotis Tsiakis
1010
Integrated production planning and scheduling optimization of multi-site, multi-product process industry Nikisha K. Shah, Marianthi G. Ierapetritou
1015
Simulation-based reactive scheduling in tomato processing plant with raw material uncertainty Alexandros Koulouris, Ioanna Kotelida
1020
Scenario-Based Strategic Supply Chain Design and Analysis for the Forest Biorefinery Behrang Mansoornejad, Efstratios N. Pistikopoulos, Paul Stuart
1025
The Role of Supply Chain Analysis in Market-Driven Product Portfolio Selection for the Forest Biorefinery Virginie Chambost, Behrang Mansoornejad and Paul Stuart
1030
Real-time Process Management in Particulate and Pharmaceutical Systems Arun Giridhar, Intan Hamdan, Girish Joglekar, Venkat Venkatasubramanian, Gintaras V. Reklaitis
1035
Modeling Next Generation Feedstock Development for Chemical Process Industry Selen Cremaschi
1040
Prediction of the Permeability and Filtration Performance of Packed Beds Mishal Islam, Xiaodong Jia, Michael Fairweather, Richard Williams
1045
Study of Closed Operation Modes of Batch Distillation Columns Laszlo Hegely, Peter Lang
1050
Dynamic failure assessment of incidents reported in the Greek Petrochemical Industry Eftychia C. Marcoulaki, Myrto Konstandinidou, Ioannis A. Papazoglou
1055
A continuous-time MILP to compute schedules with minimum changeover times for a make-and-pack production Philipp Baumann, Norbert Trautmann
1060
An Evaluation Method for Plant Alarm System Based on a Two-Layer Cause-Effect Model Naoki Kimura, Kazuhiro Takeda, Masaru Noda, Takashi Hamaguchi
1065
Generating cause-implication graphs for process systems via blended hazard identification methods Erzsébet Németh, Benjamin J. Seligmann, Kim Hockings, Jim Oakley, Con O'Brien, Katalin M. Hangos, Ian T. Cameron
1070
Integrated Supply Chain Planning for Multinational Pharmaceutical Enterprises Naresh Susarla, I A Karimi
1075
Data Mining and Decision Making Tool Development for an Industrial Dual Sequential Batch Reactor Soledad Gutiérrez, Adrián Ferrari, Alejandra Benítez
1080
A Novel CP Approach for Scheduling an Automated Wet-Etch Station Juan M. Novas, Gabriela P. Henning
1085
Agent-based coordination framework for disruption management in a chemical supply chain Behzad Behdani, Zofia Lukszo, Arief Adhitya, Rajagopalan Srinivasan
1090
Recipe-driven dynamic hybrid simulation of batch processes: a combined optimization/simulation approach Gilles Hétreux, Anthony Ramaroson, Jean-Marc Le Lann
1095
Recipe-based Batch Process Engineering Tool for Development Workflow Jae Hyun Cho, Junghwan Kim, Il Moon
1100
Superstructure Approach to Batch Process Scheduling by S-graph Representation B. Bertok, R. Adonyi, F. Friedler, L.T. Fan
1105
Training & Education
The TriLab and ilough-Lab portal - Systematic evaluation of the use of remote and virtual laboratories in engineering education Mahmoud Abdulwahed, Zoltan K Nagy
1110
Long Distance Operator Training Yiannis Bessiris, Dionyssia Kyriakopoulou, Fadi Ghajar, Curtis Steuckrath
1115
Modularization within the framework of the course Computer-Aided Plant Design Łukasz Hady, Günter Wozny
1120
Academic performance and success rate: A challenge problem for the PSE community Moisès Graells and Antonio Espuña
1125
Is it possible to improve creativity? If yes, how do we do it? Seungnam Kim, Woorim Moon, Woosik Kim, Seonjoo Park and Il Moon
1130
Use of Advanced Educational Technologies in a Process Simulation Course Mordechai Shacham
1135
MOSAIC, an environment for web-based modeling in the documentation level Stefan Kuntsche, Harvey Arellano-Garcia, Günter Wozny
1140
Addressing interdisciplinary process engineering design, construction and operations through 4D virtual environments Ian Cameron, Caroline Crosthwaite, David Shallcross, Roger Hadgraft, Jo Dalvean, Nicoleta Maynard, Moses Tade, John Kavanagh, Grant Lukey
1145
Integrating Alternate Reality Games and Social Media in Engineering Education Sonia Zheleva, Toshko Zhelev
1150
Environmental Systems Engineering
Supply Chain Design and Planning with Environmental Impacts: An RTN approach Tânia Pinto-Varela, Ana Paula F. D. Barbosa-Póvoa and Augusto Q. Novais
1155
Modelling the Natural Gas Pipeline Internal Corrosion Rate Resulting from Hydrate Formation E.O. Obanijesu, M.K. Akindeju, P. Vishnu, and M.O. Tade
1160
Multilevel strategies for the retrofit of a large industrial water system Hella Tokos, Zorka Novak Pintarič, Yongrong Yang, Zdravko Kravanja
1165
Synthesis of water integration networks in eco-industrial parks Eusiel Rubio-Castro, José María Ponce-Ortega, Mahmoud M. El-Halwagi, Medardo Serna-González, and Arturo Jiménez-Gutiérrez
1170
Eco Industrial Parks for Water and Heat Management Marianne Boix, Ludovic Montastruc, Luc Pibouleau, Catherine Azzaro-Pantel, Serge Domenech
1175
Effect of Demister Separation Efficiency on the Freshwater Purity in MSF Desalination Process Ebrahim A. Hawaidi and Iqbal M. Mujtaba
1180
Evaluation of CO2 absorption-desorption cycle by dynamic modeling and simulation Ana-Maria Cormos, Jozsef Gaspar, Paul-Serban Agachi
1185
CO2 Sustainable Recovery Network Cluster for Carbon Capture and Sequestration J. Duque, A.P.F.D. Barbosa-Póvoa, A.Q.Novais
1190
Minimization of the life cycle impact of chemical supply chain networks under demand uncertainty Rubén Ruiz-Femenia, José A. Caballero and Laureano Jiménez
1195
Design of an electric and electronic equipment recovery network in Portugal – Costs vs. Sustainability Pedro Furtado, Maria Isabel Gomes, Ana Paula Barbosa-Povoa
1200
Multiscale whole-systems design and analysis of CO2 capture and transport networks Niall Mac Dowell, Ahmed Alhajaj, Murthy Konda and Nilay Shah
1205
On the model based optimization of secreting mammalian cell cultures via minimal glucose provision Alexandros Kiparissides, Efstratios N. Pistikopoulos, Athanasios Mantalaris
1210
A systematic methodology for the synthesis of unit process chains using Life Cycle Assessment and Industrial Ecology Principles Léda Gerber, Jérôme Mayer, François Maréchal
1215
Integrating Economic, Environmental and Social Indicators for Sustainable Supply Chains Peng Cheng Wang, Iskandar Halim, Arief Adhitya, Rajagopalan Srinivasan
1220
Evaluating the reactivity of limestone utilized in Flue Gas Desulfurization. An application of the Danckwerts theory for particles reacting in acidic environments and agitated vessels with Archimedes number less than 40 Cataldo De Blasio, Claudio Carletti, Lauri Järvinen, Tapio Westerlund
1225
Sustainability in Chemical Processes: Application of different environmental methodologies to evaluate process alternatives Acácio Nobre Mendes, Ana Carvalho, Henrique A. Matos
1230
Design and Simulation of Eco-Efficient Biodiesel Manufacture Sandra Couto, Teresa M. Mata, António A. Martins, Bruna Moura, Joana Magalhães, Nidia S. Caetano
1235
New Environmentally-Conscious Design Approach and Evaluation Tool for Chemical Processes Carmen M. Torres, Mamdouh Gadalla, Josep M. Mateo, Laureano Jiménez
1241
Optimal Reactor Design for the Hydroformylation of Long Chain Alkenes in Biphasic Liquid Systems Andreas Peschel, Benjamin Hentschel, Hannsjörg Freund, Kai Sundmacher
1246
Optimal design of real world industrial wastewater treatment networks B. Galán, I.E. Grossmann
1251
A Mixed-Integer Programming Model for Pollution Trading Vicente Rico-Ramirez, Francisco Lopez-Villarreal, Salvador Hernandez-Castro and Urmila M. Diwekar
1256
Modelling and process integration of carbon dioxide capture using membrane contactors J. Albo, J. Cristóbal and A. Irabien
1261
Increasing the Understanding of the BP Texas City Refinery Accident Davide Manca, Sara Brambilla, Alessandro Villa
1266
Integrating process simulation, multi-objective optimization and LCA for the development of sustainable processes: application to biotechnological plants Robert Brunet, Kartik S. Kumar, Gonzalo Guillén-Gosálbez, Laureano Jiménez
1271
Multi-objective optimization of integrated bioethanol-sugar supply chains considering different LCA metrics simultaneously Andrei Kostin, Fernando D. Mele, Gonzalo Guillén-Gozálbez
1276
Determination of biorestoration strategies in eutrophic water bodies through the formulation of an optimal control problem based on a 3D ecological model Vanina Estrada, Sabrina Belén Rodriguez Reartes, M. Soledad Diaz
1281
Integration of Carbon Footprint Minimization into the Process Design of SWRO Desalination Pre-treatment Matan Beery, Günter Wozny, Jens-Uwe Repke
1286
Optimization of a Sequencing Batch Reactor process for waste water treatment using a two step nitrification model M. N. Cruz Bournazou, K. Hooshiar, H. Arellano-Garcia, G. Lyberatos, C. Kravaris, G. Wozny
1291
Optimization of solar assisted reverse osmosis plants considering economic and environmental concerns Raquel Salcedo-Díaz, Gonzalo Guillén-Gosálbez, Laureano Jiménez, Ekaterina Antipova
1296
Bioprocess Systems Engineering
Dynamic modelling of the margarine production process Peter Bongers, Cristhian Almeida-Rivera
1301
Microbial Strain Design for Biochemical Production Using Mixed-integer Programming Techniques Joonhoon Kim, Jennifer L. Reed, and Christos T. Maravelias
1306
A Comprehensive Multi-Scale Modeling of Heterogeneities in Mammalian Cell Culture Processes Srinivas Karra, Brian Sager and M. Nazmul Karim
1311
Population balance modelling of homogeneous and heterogeneous cellulose hydrolysis Philip Engel, Benjamin Bonhage, Douglas Pernik, Roberto Rinaldi, Patrick Schmidt, Helene Wulfhorst, Antje C. Spiess
1316
Predicting microbial growth kinetics with the use of genetic circuit models Michalis Koutinas, Alexandros Kiparissides, Victor de Lorenzo, Vitor A.P. Martins dos Santos, Efstratios N. Pistikopoulos, Athanasios Mantalaris
1321
A combined growth kinetics, metabolism and gene expression model for 3D ESC bioprocesses David Yeo, Alexandros Kiparissides, Efstratios Pistikopoulos, and Athanasios Mantalaris
1326
Toward Online Control of Glycosylation in MAbs Melissa M. St. Amand, Anne S. Robinson, Babatunde A. Ogunnaike
1331
Population balance modelling of influenza virus replication during vaccine production – Influence of apoptosis Thomas Müller, Robert Dürr, Britta Isken, Josef Schulze-Horsel, Udo Reichl, Achim Kienle
1336
Assessment of Jatropha Curcas bioprocess for fuel production using LCA and CAPE Sayed Gillani, Caroline Sablayrolles, Jean-Pierre Belaud, Mireille Montrejaud-Vignoles, Jean Marc Le Lann
1341
Methodological Approach for Modeling of Multi-enzyme in-pot Processes Paloma A. Santacoloma, Alicia Roman-Martinez, Gürkan Sin, Krist V. Gernaey, and John M. Woodley
1346
Systematic Data and Knowledge Utilization to Speed up Bioprocess Design Jun Zhang, Anthony Hunter, Yuhong Zhou
1351
Integration of stochastic simulation with advanced multivariate and visualisation analyses for rapid prediction of facility fit issues in biopharmaceutical processes Adam Stonier, Dave Pain, Ashley Westlake, Nicholas Hutchinson, Nina F Thornhill, Suzanne S. Farid
1356
Standards for Continual Scheduling of Batch Operations Charles Siletti, Demetri Petrides, Dimitri Vardalis
1361
Optimizing cyanobacteria metabolic network for ethanol production Cecilia Paulo, Jimena Di Maggio, Vanina Estrada, M. Soledad Diaz
1366
Dynamic process monitoring and fault detection in a batch fermentation process: comparative performance assessment between MPCA and BDPCA Isaac Monroy, Kris Villez, Moisès Graells, Venkat Venkatasubramanian
1371
Techno-Economic Assessment and Risk Analysis of Biorefinery Processes Eemeli Hytönen, Paul Stuart
1376
BIOCORE – A systems integration paradigm in the real-life development of a lignocellulosic biorefinery Aikaterini D. Mountraki, Athanassios Nikolakopoulos, Bouchra Benjelloun Mlayah, Antonis C. Kokossis
1381
Prediction of activation of metabolic pathways via dynamic optimization Gundian M. De Hijas-Liste, Eva Balsa-Canto, Julio R. Banga
1386
Non linear identification of Spirulina maxima growth and characteristics Márcia P. Vega, José W. Silva and Maria A.C.L. Oliveira
1391
Real-time optimization for lactic acid production from sucrose fermentation by Lactobacillus plantarum Betânia H. Lunelli, Delba N. C. Melo, Edvaldo R. de Morais, Igor R. S. Victorino, Eduardo C. Vasco de Toledo, Maria Regina Wolf Maciel, Rubens Maciel Filho
1396
Model-based Dynamic Optimisation of Microbial Processes for the High-Yield Production of Biopolymers with Tailor-made Molecular Properties Giannis Penloglou, Christos Chatzidoukas, Avraam Roussos, Costas Kiparissides
1401
Systematic Procedure for Integrated Process Operation: Reverse Electro-Enhanced Dialysis (REED) during Lactic Acid Fermentation Oscar Andrés Prado-Rubio, Sten Bay Jørgensen and Gunnar Jonsson
1406
Bioprocessing of exopolysaccharides (EPS): CFD optimization of bioreactor conditions Serafim Vlaev, Konstantza Tonova, Kostantsa Pavlova, Mohammed Elqotbi
1411
Simultaneous design and scheduling of a plant for producing ethanol and derivatives Yanina Fumero, Gabriela Corsano, Jorge M. Montagna
1416
Glycerol metabolic conversion to succinic acid using Actinobacillus succinogenes: a metabolic network-based analysis Michael Binns, Anestis Vlysidis, Colin Webb, Constantinos Theodoropoulos, Pedro de Atauri, Marta Cascante
1421
Design and Operation of a Continuous Reactor for Acid Pretreatment of Lignocellulosic Biomass Mauricio Sales-Cruz, Edgar Ramírez-Jiménez, Teresa López-Arenas
1426
Viscosity Prediction of Compounds Derived from Castor Oil: Parameter Optimization Teresa López-Arenas, Gloria Aca-Aca, Oscar Sánchez-Daza, Mauricio Sales-Cruz
1431
Global sensitivity analysis in bioreactor networks Maria Paz Ochoa, Patricia M. Hoch
1436
Graph Theory Augmented Recursive MILP Approach for Identifying Multiple Minimal Reaction Sets in Metabolic Networks Sudhakar Jonnalagadda and Rajagopalan Srinivasan
1441
Model-driven design based on sensitivity analysis for a synthetic biology application Nikolaos Anesiadis, William R. Cluett, Radhakrishnan Mahadevan
1446
Simulations of hydrodynamic stress in stirred-tank bioreactors using CFD technology Y. Verkholaz, P. Lavrov, E. Guseva, N. Menshutina, J. Boudrant
1451
A framework for model-based optimization of bioprocesses under uncertainty: Identifying critical parameters and operating variables. Ricardo Morales-Rodriguez, Anne S. Meyer, Krist V. Gernaey, Gürkan Sin
1455
Robust optimal control of a biochemical reactor with multiple objectives Filip Logist, Boris Houska, Moritz Diehl, Jan F. Van Impe
1460
System inversion of multidimensional population balance systems Henrique Menarin and Naim Bajcinca
1465
Implementation and initial evaluation of a decision support platform for selecting production routes of biomass-derived chemicals Marinella Tsakalova, Ta-Chen Lin, Aidong Yang, Antonis C. Kokossis
1470
Biomedical Systems Engineering
Multi-Scale Modeling of PLGA Microparticle Drug Delivery Systems Ashlee N. Ford, Daniel W. Pack, Richard D. Braatz
1475
Computational Molecular Design of Drug Delivery Vehicles for Anti-HIV Microbicides Taylor Wilson, Amber Markey, Kyle V. Camarda, Sarah Kieweg
1480
Towards in silico models of decomplexification in human endotoxemia Jeremy D. Scheff, Pantelis Mavroudis, Steve E. Calvano, Stephen F. Lowry, Ioannis P. Androulakis
1485
Physiologically Based Pharmacokinetic Modeling and Predictive Control: An integrated approach for optimal drug administration Pantelis Sopasakis, Panagiotis Patrinos, Stefania Giannikou, Haralambos Sarimveis
1490
A Novel Physiologically Based Compartmental Model for Volatile Anaesthesia Alexandra Krieger, Nicki Panoskaltsis, Athanasios Mantalaris, Michael C. Georgiadis, Efstratios N. Pistikopoulos
1495
Modelling of the Insulin Delivery System for patients with Type 1 Diabetes Mellitus Stamatina Zavitsanou, Nicki Panoskaltsis, Athanasios Mantalaris, Michael C. Georgiadis, Efstratios N. Pistikopoulos
1500
Towards a high-fidelity model for model based optimisation of drug delivery systems in acute myeloid leukemia Eleni Pefani, Nicki Panoskaltsis, Athanasios Mantalaris, Michael C. Georgiadis, Efstratios N. Pistikopoulos
1505
From Chemical Process Diagnosis to Cancer Prognosis: An Integrated Approach for Diagnosis and Sensor/Marker Selection Lyamine Hedjazi, Marie-Véronique Le Lann, Tatiana Kempowsky-Hamon, Joseph Aguilar-Martin, Florence Dalenc, Gilles Favre, Laurène Despenes, Sébastien Elgue
1510
Computational Investigation of Vascular Surgical Interventions on Popliteal Artery Aneurysms D. Papadimitriou, A.H. Alexopoulos, T. Gerasimidis and C. Kiparissides
1515
A Minimal Exercise Extension for Models of the Glucoregulatory System Alain Bock, Grégory François, Thierry Prud'homme, Denis Gillet
1520
Three Dimensional Simulation and Experimental Investigation of Intrathecal Drug Delivery in the Spinal Canal and the Brain Ying Hsu, Timothy J. Harris Jr, H.D.M. Hettiarachchi, Richard Penn, Andreas A. Linninger
1525
A Computational Model of Cerebral Vasculature, Brain Tissue, and Cerebrospinal Fluid Nicholas M. Vaičaitis, Brian J. Sweetman, Andreas A. Linninger
1530
Systems engineers’ role in biomedical research Andreas A. Linninger
1535
Physiologically-Based Pharmacokinetic Modeling: Parameter Estimation for Cyclosporin A Eric Lueshen, Cierra Hall, Andrej Mošať and Andreas Linninger
1543
Disease Classification through Integer Optimisation Chrysanthi Ainali, Frank Nestle, Lazaros G. Papageorgiou, Sophia Tsoka
1548
Optimal design of chitosan-based scaffolds for controlled drug release using dynamic optimization Belmiro P.M. Duarte, Nuno M.C. Oliveira, Maria J.C. Moura
1553
Insulin Administration for People with Type 1 diabetes Dimitri Boiroux, Daniel Aaron Finan, Niels Kjølstad Poulsen, Henrik Madsen and John Bagterp Jørgensen
1558
A Variational Bayesian Approach for Dosage Regimen Individualization J. M. Laínez, L. Mockus, G. Blau, S. Orçun, and G.V. Reklaitis
1563
Development of a fuzzy expert system for the control of glycemia in type 1 diabetic patients Leonardo Nobile, Bartolomeo Cosenza, Marco Amato, Valentina Guarnotta, Carla Giordano, Aldo Galluzzo, Mosè Galluzzo
1568
Materials & Molecular Systems Engineering
Controlling Particle Size in a Novel Spinning Disc Continuous Stir Tank and Settler Reactor for the Continuous Synthesis of Titania M.K. Akindeju and P.H. Ong
1573
Simultaneous Design of Ionic Liquids and Azeotropic Separation Processes Brock C. Roughton, John White, Kyle V. Camarda, and Rafiqul Gani
1578
GPU-Based Parallel Calculation Method for Molecular Weight Distribution of Batch Free Radical Polymerization Zhiqiang Chen, Xi Chen, Zhen Yao, Zhijiang Shao
1583
Chemicals-Based Formulation Design: Virtual Experimentations Elisa Conte, Rafiqul Gani
1588
Simultaneous prediction of phase behaviour and second derivative properties with a group contribution approach (SAFT-γ Mie) Vasileios Papaioannou, Thomas Lafitte, Claire S. Adjiman, Amparo Galindo and George Jackson
1593
A Lattice Boltzmann Method for Non Ideal Gases Based on the Gradient Theory of Interfaces E.S. Kikkinides, M.E. Kainourgiakis, A.G. Yiotis and A.K. Stubos
1598
Towards robust fabrication of non-periodic nanoscale systems via directed self assembly Richard Lakerveld, George Stephanopoulos, Paul I. Barton
1603
Models driven conception of a Computer Aided Mixture Design tool Juliette Heintz, Vincent Gerbaud, Jean-Pierre Belaud
1608
Iterative learning control of a reactive polymer composite moulding process using batch-wise updated linearised models Jie Zhang, Nikos G. Pantelelis
1613
CFD Modelling of the Demister in the Multi Stage Flash Desalination plant Hala Al-Fulaij, Andrea Cipollina, Giorgio Micale, David Bogle, Hisham Ettouney
1618
Predicting a Variety of Constant Pure Compound Properties by the Targeted QSPR Method Mordechai Shacham, Neima Brauner
1623
PSE in Pharmaceutical Process Development Krist V. Gernaey, Albert E. Cervera and John M. Woodley
1628
Molecular Design of Biofuel Additives for Optimization of Fuel Characteristics Subin Hada, Charles C. Solvason, Mario R. Eden
1633
Online estimation of crystal size distribution (CSD) within industrial gibbsite precipitation plants Jan K. Hurst, Parisa A. Bahri, Ali Nooraii
1638
Energy Systems Engineering
Simulation of Water Gas Shift Membrane Reactors by a Two-dimensional Model M. De Falco, V. Piemonte, A. Basile
1643
Potential Impacts and Modelling of the Heat Loss due to Copper Chelation in Natural Gas Processing and Transport D.J. Hunt, M.K. Akindeju, E.O. Obanijesu, V.K. Pareek and M.O Tade
1648
Optimal biorefinery planning considering simultaneously economic and environmental objectives José Ezequiel Santibáñez-Aguilar, J. Betzabe González-Campos, José María Ponce-Ortega, Medardo Serna-González
1653
Reduce Costs and Energy Consumption of Deethanizing and Depropanizing Fractionation Steps in NGL Recovery Process Nguyen Van Duc Long and Moonyong Lee
1658
Ethanol from corn: screening options and power supply improvement to ethanol plant in Italy Marco Soldà, Franjo Cecelja, Aidong Yang, Piyalap Manakit
1663
A Mixed-Integer Programming Approach to Infrastructure Planning for Chemical Centres: A Case Study in the UK Pei Liu, Alan Whitaker, Efstratios N. Pistikopoulos, Zheng Li, Yong Chen
1668
A Multi-Objective Optimization Method to integrate Heat Pumps in Industrial Processes Helen Becker, Giulia Spinato, François Maréchal
1673
Techno-economical and environmental evaluations of IGCC power generation process with carbon capture and storage (CCS) Calin-Cristian Cormos, Ana-Maria Cormos, Paul Serban Agachi
1678
Reynolds Number Effects on Particle Dispersion and Deposition in Turbulent Square Duct Flows J.F.W. Adams, J. Yao and M. Fairweather
1683
Multiscale Modeling of Biorefineries Seyed Ali Hosseini, Nilay Shah
1688
Towards Second Generation Bioethanol: Supply Chain Design and Capacity Planning Andrea Zamboni, Sara Giarola, Fabrizio Bezzo
1693
Optimization of lignocellulosic based diesel Mariano Martín, Ignacio E. Grossmann
1698
Using Low-Grade Heat for Solvent Extraction based Efficient Water Desalination Kary Thanapalan and Vivek Dua
1703
Impact of hydrogen injection in natural gas infrastructures Guillermo Hernández-Rodríguez, Luc Pibouleau, Catherine Azzaro-Pantel, Serge Domenech
1708
Optimal Design and Operation of Distributed Energy Systems E. D. Mehleri, H. Sarimveis, N. C. Markatos, L. G. Papageorgiou
1713
Process Synthesis with Heat and Power Integration of Thermochemical Coal, Biomass, and Natural Gas Hybrid Energy Processes Richard C. Baliban, Josephine A. Elia, Christodoulos A. Floudas
1718
A Novel Catalytic Strategy for the Production of Liquid Fuels from Ligno-cellulosic Biomass Carlos A. Henao, Drew J. Braden, Christos T. Maravelias, James A. Dumesic
1723
Optimizing the Lignocellulosic Biomass-to-Ethanol Supply Chain: A Case Study for the Midwestern United States W. Alex Marvin, Lanny D. Schmidt, Saif Benjaafar, Douglas G. Tiffany, Prodromos Daoutidis
1728
Modeling and Simulation of the Production of Lead and Elementary Sulphur from Lead Sulphide Concentrates Giulia Bozzano, Mario Dente, Sauro Pierucci, Massimo Maccagni
1733
Strategic Planning of Petroleum Supply Chains Leão José Fernandes , Susana Relvas, Ana Paula Barbosa-Póvoa
1738
Network generation and analysis of complex biomass conversion systems Srinivas Rangarajan, Ted Kaminski, Eric Van Wyk, Aditya Bhan, Prodromos Daoutidis
1743
Fractional-order transfer functions applied to the modeling of hydrogen PEM fuel cells Vitor V. Lopes, Carmen M. Rangel, Augusto Q. Novais
1748
An Integrated Approach to Optimal Pipeline Routing, Design, Operation and Maintenance Eftychia C. Marcoulaki, Ioannis A. Papazoglou, Nathalie Pixopoulou
1753
General Methodology for Exergy Balance in a Process Simulator Ali Ghannadzadeh, Raphaële Thery-Hetreux, Olivier Baudouin, Philippe Baudet, Pascal Floquet, Xavier Joulia
1758
Process Modelling of Entrained Flow Gasification Ruwaida A. Rasid, Peter J. Heggs, Kevin J. Hughes and Mohamed Pourkashanian
1763
Modeling post-combustion CO2 capture with amine solvents Grégoire Léonard, Georges Heyen
1768
Modelling biomass and biofuels supply chains Christiana Papapostolou, Emilia Kondili, John K. Kaldellis
1773
Design and performance optimization of hybrid energy systems E. Kondili, J. K. Kaldellis
1778
Recurrent neural network prediction of steam production in a Kraft recovery boiler Matthieu Sainlez, Georges Heyen
1784
Improved Wind Power Forecasting with ARIMA Models Bri-Mathias Hodge, Austin Zeiler, Duncan Brooks, Gary Blau, Joseph Pekny, Gintaras Reklaitis
1789
Power reduction in air separation units for oxy-combustion processes based on exergy analysis Chao Fu, Truls Gundersen
1794
An MILP Model for the Strategic Design of the UK Bioethanol Supply Chain Ozlem Akgul, Nilay Shah, Lazaros G. Papageorgiou
1799
Long-Term Planning of Wind Farm Siting in the Electricity Grid Jingjie Xiao, Bri-Mathias S. Hodge, Andrew L. Liu, Joseph F. Pekny, Gintaras V. Reklaitis
1804
Optimal location of gasification plants for electricity production in rural areas Mar Pérez-Fortes, Pol Arranz-Piera, José Miguel Laínez, Enric Velo and Luis Puigjaner
1809
Multi-objective optimization of the electricity production from coal burning Jorge Cristóbal, Gonzalo Guillén-Gosálbez, Laureano Jiménez, Angel Irabien
1814
Detailed Operation Scheduling and Control for Renewable Energy Powered Microgrids Miguel Zamarripa, Juan C. Vasquez, Josep M. Guerrero, Moisès Graells
1819
Optimization of mixed-refrigerant system in LNG liquefaction process Kyungjae Tak, Wonsub Lim, Kwangho Choi, Daeho Ko, Il Moon
1824
BOG Handling Method for Energy Saving in LNG Receiving Terminal Chansaem Park, Youngsub Lim, Sangho Lee, Chonghun Han
1829
Oil Well Drilling Process - Simulation and Experimental Multi-Objective Studies Márcia Peixoto Vega, Marcela Galdino de Freitas, Claudia Miriam Scheid and André Leibsohn Martins
1834
Economic MPC for Power Management in the SmartGrid Tobias Gybel Hovgaard, Kristian Edlund, John Bagterp Jørgensen
1839
Fisher information based time-series segmentation of streaming process data for monitoring and supporting on-line parameter estimation in energy systems László Dobos, János Abonyi
1844
NMPC for Oil Reservoir Production Optimization Carsten Völcker, John Bagterp Jørgensen, Per Grove Thomsen, Erling Halfdan Stenby
1849
Optimization of LNG plants – challenges and strategies Magnus G. Jacobsen, Sigurd Skogestad
1854
Site-wide process integration for low grade heat recovery Ankur Kapil, Igor Bulatov, Robin Smith, Jin-Kuk Kim
1859
Novel optimization method for retrofitting heat exchanger networks with intensified heat transfer Ming Pan, Igor Bulatov, Robin Smith, Jin-Kuk Kim
1864
A CFD-process model of steam generation in a power plant by a thermosyphon system Penelope J. Edge, Peter J. Heggs, Mohamed Pourkashanian, Alan Williams
1869
Techno-Economic Analysis for Ethylene and Methanol Production from the Oxidative Coupling of Methane Process Daniel Salerno, Harvey Arellano-Garcia, Günter Wozny
1874
The Effects of Electricity Storage on Large Scale Wind Integration Shisheng Huang, Bri-Mathias S. Hodge, Jingjie Xiao, Gintaras V. Reklaitis, Joseph F. Pekny
1879
Co-production of ethanol, hydrogen and biogas using agro-wastes. Conceptual plant design and NPV analysis for mid-size agricultural sectors Arturo Sanchez, Victor Sevilla-Guitron, Gabriela Magaña, Paulina Melgoza, Hector Hernandez
1884
Energy Systems Analysis for a Renewable Transportation Sector Dharik S. Mallapragada, Navneet R. Singh, Rakesh Agrawal
1889
Prediction of Conversion of a Packed Bed of Fuel Particles on a Forward Acting Grate by the Discrete Particle Method (DPM) Bernhard Peters, Algis Dziugys
1894
Monitor and diagnosis of LNG plant fractionation process using k-mean clustering and principal component analysis Hahyung Pyun, Daeyoun Kim, Kyungjin Kim, Chonghun Han
1899
Design of Integrated Gasification Combined Cycle plant with Carbon Capture and Storage based on co-gasification of coal and biomass Victoria Maxim, Calin-Cristian Cormos, Paul Serban Agachi
1904
Low Temperature Process Design: Challenges and Approaches for using Exergy Efficiencies Danahe Marmolejo-Correa, Truls Gundersen
1909
Optimization of sustainable energy planning with consideration of uncertainties in learning rates and external cost factors Seunghyok Kim, Jamin Koo, En Sup Yoon
1914
Analysis of Integrated Gasification Combined Cycle (IGCC) Power Plant Based on Climate Change Scenarios with Respect to CO2 Capture Ratio Kyungtae Park, Kyusang Han and En Sup Yoon
1919
SynFlex: A Computational Framework for Synthesis of Flexible Heat Exchanger Networks M. Escobar, J.O. Trierweiler, and I.E. Grossmann
1924
Modeling and Optimization of Supercritical Phase Fischer-Tropsch Synthesis Wei Yuan, Gregory C. Vaughan, Christopher B. Roberts, Mario R. Eden
1929
Optimization of pipeline unloading operations in an LPG terminal S.Arun Srikanth, Sridharakumar Narasimhan, Shankar Narasimhan
1934
Assessment for Carbon Capture and Storage Opportunities: Greek Case Study Christos Ioakimidis, Nikolaos Koukouzas, Anna Chatzimichali, Sergio Casimiro, Grigorios Itskos
1939
Methodology for Maximising the Use of Renewables with Variable Availability Andreja Nemet, Jiří J. Klemeš, Petar S. Varbanov
1944
Exergy-based methods for computer-aided design of energy conversion systems George Tsatsaronis, Tatiana Morosuk
1949
Computational support as efficient sophisticated approach in waste-to-energy systems Petr Stehlík
1954
Regional Optimizer (RegiOpt) – Sustainable energy technology network solutions for regions K.H. Kettl, N. Niemetz, N. Sandor, M. Eder, I. Heckl, M. Narodoslawsky
1959
Improving Energy Efficiency of a Dyes Intermediates Synthesis Plant. A Developing Country Specific Case Study Zsófia Fodor, Paul Krajnik, Petar Sabev Varbanov, Jiří Jaromír Klemeš
1964
Evaluation of Design Issues and Automation Infrastructure in a Solar-Hydrogen Production Unit at CERTH in Thessaloniki Chrysovalantou Ziogou, Dimitris Ipsakis, Fotis Stergiopoulos, Simira Papadopoulou, Stella Bezergianni, Spyros Voutetakis
1969
Optimal Operation of a Concentrated Solar Thermal Cogeneration Plant Amin Ghobeity, Alexander Mitsos
1974
The role of energy consumption in batch process scheduling Mate Hegyhati, Ferenc Friedler
1979
Modeling Fluid Flow of Vipertex Enhanced Heat Transfer Tubes David J. Kukulka and Rick Smith
1984
Energy targeting in heat integrated water networks with isothermal mixing Santanu Bandyopadhyay, Gopal Chandra Sahu
1989
Design of renewable energy systems incorporating uncertainties through pinch analysis Santanu Bandyopadhyay
1994
Sustainable LCA-based MIP Synthesis of Biogas Processes Lidija Čuček, Rozalija Drobež, Bojan Pahor, Zdravko Kravanja
1999
Energy, Water and Process Technologies Integration for the Simultaneous Production of Ethanol and Food from the entire Corn Plant Lidija Čuček, Mariano Martín, Ignacio E. Grossmann, Zdravko Kravanja
2004
Ontology-Driven Design of an Energy Management System Karel Macek, Karel Mařík, Petr Stluka
2009
Synthesis of Flexible Palm Oil-Based Regional Energy Supply Chain Dominic C. Y. Foo, Raymond R. Tan, Hon Loong Lam, Mustafa Kamal, Jiří J. Klemeš
2014
Index
2019
ESCAPE-21 - PREFACE
This book includes papers presented at the 21st European Symposium on Computer-Aided Process Engineering (ESCAPE-21) held at Porto Carras Resort, Chalkidiki, Greece from 29 May to 1 June 2011. The ESCAPE series constitutes the major European annual event which serves as a global forum for engineers, scientists, researchers, managers and students to present and discuss progress being made in the area of Process Systems Engineering. Previous events took place in Lyon, France, 2008 (ESCAPE-18), Cracow, Poland, 2009 (ESCAPE-19) and Ischia, Italy, 2010 (ESCAPE-20). European industries are bringing innovations into our lives, whether in the form of new technologies to address environmental problems, new products to make our homes more comfortable and energy efficient, or new therapies to improve the health and well-being of European citizens. The technical theme of ESCAPE-21 hence recognizes the continuous and increasingly expanding importance of and need for a systems approach in tackling such industrial and societal grand challenges, featuring the following strands:

Core Process Systems Engineering
- Multi-scale Modeling
- Synthesis and Design
- Optimization and Control
- Production Operations
- Training and Education

Grand Challenges – Domain-driven PSE
- Environmental Systems Engineering
- Bioprocess Systems Engineering
- Biomedical Systems Engineering
- Materials and Molecular Systems Engineering
- Energy Systems Engineering
More than 670 abstracts from almost 60 countries were originally submitted to the symposium. Of these, 399 were finally selected for oral and poster presentations and included in this book. All papers have been peer reviewed – we are indeed grateful to the members of the international scientific committee for their evaluations, comments and recommendations. We are also extremely grateful to the authors for their outstanding contributions. We hope that this book will serve as a valuable reference document to the scientific and industrial community and that it will contribute to the progress of process systems engineering.

Efstratios N. Pistikopoulos
Michael C. Georgiadis
Antonis Kokossis
ESCAPE-21 co-chairmen
Members of the International Scientific Committee

Ali Abbas, University of Sydney, Australia
Claire Adjiman, Imperial College London, UK
Rakesh Agrawal, Purdue University, USA
Yiannis Androulakis, Rutgers University, USA
Adisa Azapagic, University of Manchester, UK
Miguel Bagajewicz, University of Oklahoma, USA
Julio Banga, CSIC, Spain
Ana Barbosa, IST - Technical Uni of Lisbon, Portugal
David Bogle, University College London, UK
Peter Bongers, Unilever / DTU, The Netherlands
Ian Cameron, The University of Sydney, Australia
Benoit Chachuat, Imperial College London, UK
Panagiotis Christofides, University of California LA, USA
Prodromos Daoutidis, University of Minnesota, USA
Mario Eden, Auburn University, USA
Sebastian Engell, University of Dortmund, Germany
Antonio Espuna, UPC, Spain
Panagiota Foteinou, UCSB-MIT-Caltech/ARO, USA
Ferenc Friedler, University of Pannonia, Hungary
Rafiqul Gani, DTU, Denmark
Mahmoud El Halwagi, Texas A&M University, USA
Chonghun Han, Seoul National University, South Korea
Georges Heyen, Université de Liège, Belgium
Marianthi Ierapetritou, Rutgers University, USA
George Jackson, Imperial College London, UK
Christian Jallut, Université Claude Bernard, Lyon, France
Sten Jorgensen, DTU, Denmark
Emilia Kondili, Technology and Education Institute of Peiraia
Xavier Joulia, INPT-ENSIACET, Toulouse, France
Costas Kravaris, University of Patras, Greece
Yiannis Kookos, University of Patras, Greece
Iftekhar Karimi, National University of Singapore, Singapore
Jiri Klemes, University of Pannonia, Hungary
Andrzej Kraslawski, University of Lappeenranta, Finland
Zdravko Kravanja, University of Maribor, Slovenia
Jay Lee, KAIST, Republic of Korea
Andreas Linninger, University of Illinois at Chicago, USA
Sandro Macchietto, Imperial College London, UK
Sakis Mantalaris, Imperial College London, UK
Costas Maranas, Pennsylvania State University, USA
Christos Maravelias, University of Wisconsin, USA
Francois Marechal, EPFL, Switzerland
Wolfgang Marquardt, RWTH-Aachen, Germany
Il Moon, Yonsei University, Korea
Iqbal Mujtaba, University of Bradford, UK
Costas Pantelides, PSE Ltd, UK
Lazaros Papageorgiou, University College London, UK
Sauro Pierucci, University Polytechnic of Milano, Italy
Valentin Plesu, Technical Uni of Bucharest, Romania
Luis Puigjaner, UPC, Spain
Rex Reklaitis, Purdue University, USA
Jose Romagnoli, Louisiana State University, USA
Nick Sahinidis, Carnegie Mellon University, USA
Nilay Shah, Imperial College London, UK
Sigurd Skogestad, NTNU, Norway
Raja Srinivasan, National University of Singapore, Singapore
Paul Stuart, Ecole Polytechnique Montreal, Canada
Doros Theodorou, National Technical University of Athens
Harris Sarimveis, National Technical University of Athens
Tapio Westerlund, Abo Akademi, Finland
Panos Seferlis, Aristotle University of Thessaloniki & Chemical Process Engineering Research Institute, Greece
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Detailed Mathematical Modelling of Liquid-Liquid Extraction Columns
Moutasem Jaradat,1,3 Menwer Attarakih2,3 and Hans-Jörg Bart1,3
1 Chair of Separation Science and Technology, TU Kaiserslautern, POB 3049, 67653 Kaiserslautern, Germany
2 Faculty of Eng. Tech., Chem. Eng. Dept., Al-Balqa Applied University, POB 15008, 11134 Amman, Jordan
3 Centre of Mathematical and Computational Modelling, TU Kaiserslautern, Germany
Abstract
A comprehensive bivariate population balance model for the dynamic and steady state simulation of extraction columns is developed. The model is programmed using Visual Digital FORTRAN and then integrated into the LLECMOD program [23]. As a case study, the simulation tool LLECMOD is used to simulate the steady state performance of pulsed packed and sieve plate columns. Two chemical test systems recommended by the EFCE are used in the simulation. Model predictions are successfully validated against steady state and dynamic experimental data, with good agreement achieved.
Keywords: LLECMOD, Extraction Columns, Population Balance, Simulation.
1. Introduction
Liquid-liquid extraction is an important separation process encountered in many chemical process industries [1]. Different kinds of liquid-liquid columns are used in industry; these can be classified into two main categories: agitated and non-agitated columns. Non-agitated (packed and sieve plate) columns are frequently used in liquid–liquid extraction operations due to their high throughput, high separation efficiency and insensitivity towards contamination of the interface, which has led to their wide applicability, particularly in the extraction of radioactive materials. These columns use the difference in density between the two phases to bring them into contact, and thus do not require external energy. Van Dijck [2] devised the use of external energy in the form of pulsing in sieve plate columns, which has found wide application in nuclear fuel reprocessing. These columns have a clear advantage over other mechanical contactors when processing corrosive or radioactive solutions. The absence of moving mechanical parts in such columns obviates the need for frequent repair and servicing. The internals (packing/perforated plates) reduce axial mixing, increase drop coalescence and breakage rates (resulting in increased mass transfer rates), and affect the mean residence time of the dispersed phase. The performance of these columns can be enhanced by mechanical pulsation of the continuous phase. This is a result of an increase in shear forces and a consequent reduction in the size of dispersed droplets, so that the interfacial area, and hence the mass transfer rate, is increased [3]. To shed more light on the extraction behaviour in pulsed packed and sieve plate columns, the hydrodynamics as well as the mass transfer characteristics must be well understood. Our present knowledge of the design and performance of extraction columns is still far from satisfactory. The reason is mainly the complex behaviour of the hydrodynamics and mass transfer [4]. It is obvious that the changes in
the characteristics (holdup, Sauter diameter, etc.) of the drop population along the column have to be considered in order to conveniently describe the behaviour of the column. The dispersed phase in liquid-liquid extraction undergoes changes and continuously loses its identity as the drops break and coalesce. Accordingly, detailed modelling on a discrete level is needed using the population balance equation as a mathematical framework. Multivariate non-equilibrium population balance models have emerged as an effective tool for the study of the complex coupled hydrodynamics and mass transfer in liquid-liquid extraction columns. The development of computational tools to model industrial processes has increased in the last decades. However, to the best of the authors' knowledge, there are no comprehensive non-equilibrium population balance models that describe in sufficient detail the behaviour of extraction columns. The main objective of this work is to develop a model that is capable of describing the dynamic and steady state behaviour of pulsed packed and sieve tray extraction columns. The models of both columns are integrated into the existing program LLECMOD [23], which can also simulate agitated extraction columns (RDC and Kühni). LLECMOD can simulate the steady state and dynamic behaviour of extraction columns taking into account the dispersed phase inlet (light or heavy phase dispersed) and the direction of mass transfer (from continuous to dispersed phase and vice versa) [5]. Therefore, scale-up and simulation of agitated and non-agitated extraction columns based on population balance modelling can now be carried out successfully.
2. Mathematical model
Mathematical modelling of pulsed extraction columns has been considered by many researchers [6-13]. An empirical model for predicting the hydrodynamics in pulsed sieve plate columns was proposed by Kumar and Hartland [7]. A stagewise model for the transient behaviour of a sieve-plate extraction column taking into account the back flow and assuming constant hold-up was developed by Blass and Zimmerman [8]. Hufnagl et al. [9] evaluated a differential model of a Kühni column. Steiner et al. [10] modelled a packed column using a differential contact model without axial mixing. Weinstein et al. [11] evaluated the differential model of a Kühni column. An improved dynamic model considering the influence of drop size distribution was developed by Xiaojin et al. [12]. Several population balance models have been proposed by various authors: Garg and Pratt [13] developed a population balance model for a pulsed sieve-plate extraction column taking into account experimentally determined values for drop breakage and coalescence. Casamatta et al. [14] proposed a population balance model as described by Gourdon et al. [15]. Al Khani et al. [16] and Milot et al. [17] applied this model for dynamic and steady state simulations of a pulsed sieve-plate extraction column. Recently, extensive work has been done on the population balance modelling of extraction columns by many researchers [15, 18-23].
2.1. The population balance model
The general spatially distributed population balance model describing the coupled hydrodynamics and mass transfer can be written as [21]:

$$\frac{\partial f_{d,c_y}(\psi)}{\partial t} + \frac{\partial \left[ u_y\, f_{d,c_y}(\psi) \right]}{\partial z} + \sum_{i=1}^{2} \frac{\partial \left[ \dot{\zeta}_i\, f_{d,c_y}(\psi) \right]}{\partial \zeta_i} = \frac{\partial}{\partial z}\!\left[ D_y\, \frac{\partial f_{d,c_y}(\psi)}{\partial z} \right] + \frac{Q_y^{in}}{A_c\, v_y^{in}}\, f_y^{in}(d, c_y; t)\, \delta(z - z_y) + b\{\psi\} \qquad (1)$$
In this equation, the components of the vector ψ = [d, c_y, z, t] are the droplet internal coordinates (diameter and solute concentration), the external coordinate z and the time t. The velocity vector along the internal coordinates is ζ̇ = [ḋ, ċ_y]. The source term b{ψ} represents the net number of droplets produced by breakage and coalescence per unit volume and unit time in the coordinate range [ζ, ζ + ∂ζ]. The droplet axial dispersion is characterized by the dispersion coefficient D_y. The second term on the right hand side is the rate at which droplets enter the LLEC with volumetric flow rate Q_y^in, perpendicular to the column cross-sectional area A_c, at location z_y with an inlet number density f_y^in. The dispersed phase velocity u_y is relative to the walls of the column [23].
2.2. Model parameters
Equation (1) is general for any type of extraction column. What makes the equation specific is the internal geometry of the column, as reflected by the required correlations for hydrodynamics and mass transfer. Experimental correlations are used for the estimation of the turbulent energy dissipation and the slip velocities of the moving droplets, along with the interaction frequencies of breakage and coalescence. In this work, correlations for packed and sieve plate columns concerning droplet velocity, coalescence and mass transfer are taken from the work of Henschke [24]. The slowing factor and the droplet breakage frequency are taken from the work of Garthe [25].
2.3. Numerical solution
The resulting model is composed of a system of integro-partial differential and algebraic equations that are dominated by convection, and hence calls for a specialized discretization approach. The model is solved using optimized and efficient numerical algorithms developed by Attarakih et al. [19, 21, 22].
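To make the structure of Eq. (1) concrete, the following minimal sketch integrates a strongly simplified, single-droplet-class version of its transport part (convection, axial dispersion and the inlet source; breakage/coalescence and mass transfer omitted) by the method of lines. It is an illustration only: the velocity, dispersion coefficient and grid are assumed values, and this is not the specialized conservative discretization of Attarakih et al. used in LLECMOD.

```python
# Illustrative method-of-lines sketch of the transport part of Eq. (1)
# (single droplet class, no breakage/coalescence, no mass transfer).
import numpy as np
from scipy.integrate import solve_ivp

H, N = 4.4, 88                # column height [m] (from the case study), axial cells (assumed)
z = np.linspace(0.0, H, N)
dz = z[1] - z[0]
u_y, D_y = 0.02, 1.0e-3       # droplet velocity [m/s], dispersion coefficient [m2/s] (assumed)
z_y = 0.85                    # dispersed-phase inlet location [m] (from the case study)
source = np.where(np.abs(z - z_y) < dz / 2, 1.0 / dz, 0.0)   # discretized delta(z - z_y)

def rhs(t, f):
    conv = -u_y * np.diff(f, prepend=0.0) / dz               # first-order upwind convection
    disp = np.zeros_like(f)
    disp[1:-1] = D_y * (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dz**2   # central axial dispersion
    return conv + disp + source

sol = solve_ivp(rhs, (0.0, 600.0), np.zeros(N), method="BDF")
print("outlet number density (arbitrary units):", sol.y[-1, -1])
```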
3. LLECMOD program
The aforementioned mathematical models, in particular those for pulsed extraction columns, are programmed in LLECMOD using Visual Digital FORTRAN. Recent correlations for fluid dynamics and mass transfer are now available and have been extensively validated against experimental data collected from pilot and industrial columns. The graphical interface of the LLECMOD program contains the main input window and sub-windows for parameter and correlation inputs. The main window contains all correlations and operating conditions, which can be selected using drop-down menus. The basic feature of this program is to provide an easy tool for the simulation of coupled hydrodynamics and mass transfer in liquid-liquid extraction columns based on the population balance approach, for both transient and steady state conditions. Details about LLECMOD can be found in [23].
4. Results and discussion
To completely specify the model, the following geometry is used for a pilot plant scale LLEC (packed pulsed column): column height H = 4.4 m, inlet of the dispersed phase z_y = 0.85 m, inlet of the continuous phase z_x = 3.8 m, column diameter d = 0.08 m; the inlet feed is normally distributed with a mean of 3.2 mm and a standard deviation of 0.5 mm. The two EFCE test systems (toluene-acetone-water and butyl acetate-acetone-water) are used. The direction of mass transfer is from the continuous to the dispersed phase. The inlet solute concentrations in the continuous and dispersed phases are taken for toluene-acetone-water as 5.73 and 0 % and for the second system (butyl acetate-acetone-water) as 5.22 and 0 %, respectively. The pulsation intensity is a·f = 1 cm/s, and the total flow rates are Qc = 40 l/h for the continuous phase and Qd = 48 l/h for the dispersed phase.
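As a small aside, the inlet condition quoted above (a normally distributed feed with mean 3.2 mm and standard deviation 0.5 mm) can be constructed as follows; the diameter grid bounds are assumptions made only for this sketch.

```python
# Minimal sketch of the inlet droplet number density used in the case study.
import numpy as np

d = np.linspace(0.5e-3, 6.0e-3, 60)      # droplet diameter grid [m] (assumed bounds)
mu, sigma = 3.2e-3, 0.5e-3               # mean and standard deviation from the text
f_in = np.exp(-0.5 * ((d - mu) / sigma) ** 2)
f_in /= np.trapz(f_in, d)                # renormalize the discretized density to unit area
print("mean inlet diameter [mm]:", 1e3 * np.trapz(d * f_in, d))   # ~3.2
```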
Fig. 1: Simulated mean droplet diameter along the column height compared to the experimental data [25]. Left panel: toluene–acetone-water; right panel: butyl acetate-acetone-water.
Fig. (1) shows the variation of the mean droplet diameter along the column height compared to the experimental data for both chemical systems. A fairly good agreement between the experimental and simulated profiles is achieved for both systems. A comparison between the simulated holdup profiles along the column height and the experimental data [25] is shown in Fig. (2). Again, a very good agreement is achieved for both test systems.
Fig. 2: Simulated holdup profiles along the column height compared to the experimental data [25]. Left panel: toluene–acetone-water; right panel: butyl acetate-acetone-water.
Fig. (3) shows the simulated and experimental solute concentration profiles in both phases as a function of column height. The agreement between simulation and experiment is excellent for both test systems.
Fig. 3: Simulated solute concentration profiles in both phases along the column height compared to the experimental data [25]. Left panel: toluene–acetone-water; right panel: butyl acetate-acetone-water.
LLECMOD also provides dynamic simulations to describe the transient behaviour of extraction columns. Using the LLECMOD program, the transient column behaviour can be investigated numerically. To analyse the dynamic behaviour of the column, step and exponential changes can be applied to the inlet variables to obtain the dynamic step response of the model. In the transient module, step and exponential changes can be applied to the inlet solute concentration in the dispersed phase (Cy,in) and the inlet solute concentration in the continuous phase (Cx,in). The dynamic evolution of the solute concentration in the extract, along with the experimental data, will be discussed in a separate publication. It is obvious that LLECMOD is able to capture the dynamic behaviour of the extraction column with good accuracy.
5. Conclusions
The present non-equilibrium bivariate population balance model can be considered an effective tool to describe the steady state and dynamic behaviour of the hydrodynamics and mass transfer in extraction columns. In this work, pulsed packed and sieve plate extraction columns are considered. The transient and steady state performance of a pulsed packed extraction column is studied using the present model as an alternative to the commonly used models (backmixing and dispersion models). The simulation results from the present model are found to be in good agreement with the available experimental data.
References
[1] T.C Lo et al. (Eds.), 1983, Handbook of Solvent Extraction, J. Wiley & Sons, New York.
[2] W. J. D. Van Dijck, 1935, U.S. Patent 2,011,186.
[3] H. R. C. Pratt and G. W. Stevens, 1992, In: J. D. Thornton, Ed., “Science and Practice in Liquid-Liquid Extraction,” Oxford University Press, New York, 491–589.
[4] G. Luo et al., 1998, Chem. Eng. Technol., 21, 10, 823–827.
[5] M. Jaradat et al., 2010, Chem. Eng. J., 165, 2, 379-387.
[6] S. Mohanty, 2000, Rev. Chem. Eng., 16, 3, 199–248.
[7] A. Kumar and S. Hartland, 1995, Ind. Eng. Chem. Res., 34, 11, 3925–3932.
[8] E. Blass and H. Zimmerman, 1982, Verfahrenstechnik, 16, 9, 682-690.
[9] H. Hufnagl et al., 1991, Chem. Eng. Technol., 14, 301–306.
[10] L. Steiner et al., 1995, Chem. Eng. Res. Des., 73, 5, 542-550.
[11] O. Weinstein et al., 1998, Chem. Eng. Sci., 53, 2, 325–339.
[12] T. Xiaojin et al., 2005, Chem. Eng. Sci., 60, 4409–4421.
[13] M. O. Garg and H. R. C. Pratt, 1984, AIChE J., 30, 3, 432–441.
[14] G. Casamatta and A. Vogelpohl, 1985, Ger. Chem. Eng., 8, 96-103.
[15] C. Gourdon et al., 1994, in: J.C. Godfrey and M.J. Slater (Eds.), Liquid-Liquid Extraction Equipment, Wiley, Chichester, 137-226.
[16] S. D. Al Khani et al., 1989, Chem. Eng. Sci., 44, 6, 1295-1305.
[17] J.F. Milot et al., 1990, Chem. Eng. J., 45, 2, 111-122.
[18] T. Kronberger et al., 1995, Comput. Chem. Eng., 19, 639-644.
[19] M. Attarakih et al., 2004, Chem. Eng. Sci., 59, 2567-2592.
[20] M. Attarakih et al., 2004b, Chem. Eng. Sci., 59, 2547-2565.
[21] M. Attarakih et al., 2006, Chem. Eng. Sci., 61, 113-123.
[22] M. Attarakih et al., 2006b, Chem. Eng. Tech., 29, 435-441.
[23] M. Attarakih et al., 2008, Open Chem. Eng. J., 2, 10-34.
[24] M. Henschke, 2004, Auslegung pulsierter Siebboden-Extraktionskolonnen, Shaker Verlag, Aachen.
[25] G. Garthe, 2006, Dissertation, TU München, Germany.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Multi-Scale modelling of a membrane reforming power cycle with CO2 capture Øivind Wilhelmsen, Rahul Anantharaman, David Berstad and Kristin Jordal SINTEF Energy Research, Sem Sælands vei 11, 7034 Trondheim, Norway
Abstract This work presents the initial investigations of an Integrated Reforming Combined Cycle (IRCC) process with CO2 capture using a membrane reformer. A geometrically generic 1-dimensional model of a membrane reformer has been implemented in Matlab 7.9. This model includes detailed balance equations for energy, momentum and mass in all three sections of the membrane reformer. Widely accepted empirical relations have been used to take into account the mass and energy transport across the membrane as functions of the conditions inside the chemical reactor. The reactor model has been integrated into an overall steady state IRCC process simulation model developed in HYSYS and GTPro. The work shows that multi-scale modelling is necessary to capture the behaviour of the process. The overall cycle efficiency of the process was 46.83 % with 85 % CO2 capture. Keywords: Carbon Capture Storage, Integrated Reforming Combined Cycle, Hydrogen Membrane Reactor, Multi-scale modelling
1. Introduction
CO2 capture in power plants has been identified as an important technology to mitigate climate change. A key drawback of power plants with CO2 capture is the relatively large energy penalty associated with capture and the subsequent efficiency drop (10-15% points for natural gas based cycles). Integrated Reforming Combined Cycles (IRCC) for pre-combustion capture of CO2 using aMDEA typically have an energy penalty of 13% points, of which the reformers and shift reactors contribute 6% points and the CO2 capture unit 2% points. Incorporating the reforming, shift and CO2 separation units into a single unit using a Hydrogen Membrane Reactor (HMR) can potentially improve the overall efficiency of an IRCC plant with CO2 capture, since intermediate heating and cooling steps are eliminated. Due to the complexity of the membrane reformer, a sufficiently detailed unit model is necessary to provide realistic cycle studies. In this work, we therefore investigate the potential of an IRCC cycle with an integrated HMR (Figure 1), employing a steady-state one-dimensional membrane reformer unit model. Since the reforming is endothermic, hot exhaust gas from a gas turbine is used as the heating utility, flowing co-currently with the feed. Nitrogen is used as sweep gas, and both hydrogen and heat are transferred across the membrane in the membrane reactor.
2. Multi-scale modelling and numerical approach
The HMR illustrated in Figure 2 was implemented in Matlab 7.9 and then linked to HYSYS and GTPro using Excel. This section describes the balance equations of the HMR unit model. The model is generic, meaning that it can be applied both to the flat-plate membrane reactor to the left in Figure 2 and to the tubular membrane reactor displayed to the right. Heat is transferred from the hot exhaust gas (1) to the reacting
feed gas mixture (2). The section containing the exhaust gas is insulated and no heat is assumed lost to the ambient. The following reactions are assumed to take place near the surface of the catalyst pellets:

CH4 + H2O ⇌ CO + 3H2 (Eq. 1)
CO + H2O ⇌ CO2 + H2 (Eq. 2)
CH4 + 2H2O ⇌ CO2 + 4H2 (Eq. 3)

The overall production of hydrogen is endothermic and the reactions need energy from the hot exhaust gas to produce hydrogen. The reaction kinetics are modelled by the equations proposed by Xu and Froment [1]. Both heat and hydrogen are assumed to be transferred through the membrane.
Figure 1: Illustration of the power-cycle process.

The main assumption of the one-dimensional model is plug flow in all sections. The parameters taking into account the geometry of the reactor are displayed in Table 1. The tubular reactor is assumed to have a length L and radii R1, R2 and R3 for sections 1, 2 and 3, respectively (Figure 2). The flat-plate reactor is assumed to have a width W and heights H1, H2 and H3. The number of components is Nc and the number of reactions Nr. The energy balance of the exhaust gas section is:

$$\frac{dT_1}{dz} = \frac{-\gamma_1\, J_{q,1\to 2}}{\sum_{i=1}^{N_{c,1}} F_{1,i}\, C_{p,i}} \qquad \text{(Eq. 4)}$$
Here, the subscript i denotes each component, subscript j each reaction and 1,2,3 the different sections. T is the temperature, Jq the heat flux, Fi the flow rate of component i and Cp the heat capacity.
Figure 2: Illustration of a membrane reactor. 1: The hot exhaust gas. 2: The feed gas mixture. 3: The permeate. A flat-plate membrane reactor configuration (left) and a tubular membrane reactor configuration (right).
Table 1: Geometrical parameters

Parameter   Tubular         Flat-plate
γ1          2πR3            W
γ2          2πR2            W
γ3          πR3² − πR2²     W·H2
The momentum balance of the exhaust gas section was omitted due to an insignificant contribution to the simulations within a relative accuracy of 5E-6. This was validated by including the momentum balance from [2]. The energy balance of the feed gas section is:

$$\frac{dT_2}{dz} = \frac{\gamma_1 J_{q,1\to 2} - \gamma_2 J_{q,2\to 3} + \gamma_3 \rho_B \sum_{j=1}^{N_r} \eta_j\, r_j \left(-\Delta H_j\right) - \max\!\left(0, -J_{H_2}\right) \gamma_2 \left(h_{2,H_2} - h_{3,H_2}\right)}{\sum_{i=1}^{N_{c,2}} F_{2,i}\, C_{p,i}} \qquad \text{(Eq. 5)}$$
Here, ρB is the density of the catalyst, ηj the effectiveness factor of reaction j, rj the reaction rate of reaction j and ΔHj the enthalpy of reaction j. hH2 denotes the intensive enthalpy of hydrogen and JH2 is the flux of hydrogen through the membrane. The momentum balance is taken into account by the Hicks equation, which is described in [2], and the mole balances for the feed gas section are:
$$\frac{dF_{2,i}}{dz} = \gamma_3\, \rho_B \sum_{j=1}^{N_r} \eta_j\, r_j\, \nu_{i,j} - \delta_{i,H_2}\, \gamma_2\, J_{H_2} \qquad \text{(Eq. 6)}$$

Here, δi,H2 denotes the Kronecker delta. Only hydrogen is assumed to permeate through the membrane. The mole balances on the permeate side are:

$$\frac{dF_{3,i}}{dz} = \delta_{i,H_2}\, \gamma_2\, J_{H_2} \qquad \text{(Eq. 7)}$$

The momentum balance for the permeate is neglected for the same reasons as in the exhaust gas section. This assumption has been assessed by including the same momentum balance for the permeate as in [3]. The ideal gas law is used as the equation of state, giving an expression for the velocities needed for the momentum balance in the feed section and also in the correlations for heat transfer across the tube walls. Finally, the energy balance of the permeate is:

$$\frac{dT_3}{dz} = \frac{\gamma_2 J_{q,2\to 3} + \max\!\left(0, J_{H_2}\right) \gamma_2 \left(h_{2,H_2} - h_{3,H_2}\right)}{\sum_{i=1}^{N_{c,3}} F_{3,i}\, C_{p,i}} \qquad \text{(Eq. 8)}$$
Representative values for the effectiveness factors are found by including the mass balances for the catalyst pellets, which are assumed to be spherical. The detailed balance equations are described more closely in [4]. The heat transfer coefficients and thermophysical models were modelled using semi-empirical expressions, also found in [4]. The heat flux from the exhaust to the feed gas section was taken into account by a constant
overall heat transfer coefficient. The hydrogen flux model used in this work is identical to that of [5].
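The balances above form an initial-value problem in the axial coordinate z that can be marched co-currently from the inlet. The sketch below shows this structure for strongly simplified forms of Eqs. (4)-(7): the lumped rate expression, the hydrogen-flux expression and every numerical value are placeholders (the paper uses the Xu and Froment kinetics [1] and the flux model of [5]), so only the solution structure, not the numbers, is meaningful.

```python
# Illustrative co-current marching solution of simplified HMR balances (Eqs. 4-7).
import numpy as np
from scipy.integrate import solve_ivp

gamma1, gamma2, gamma3 = 0.5, 0.4, 0.01   # geometry factors (assumed; cf. Table 1)
U = 1000.0       # overall heat transfer coefficient, exhaust -> feed [W/m2/K] (assumed)
Cp = 35.0        # lumped molar heat capacity [J/mol/K] (assumed)
F1, F2 = 200.0, 100.0   # total molar flows in sections 1 and 2 [mol/s] (assumed)

def rhs(z, y):
    T1, T2, F_CH4, F_H2, F_H2p = y
    Jq = U * (T1 - T2)                     # heat flux across the wall (drives Eq. 4)
    r = 4.0 * F_CH4                        # placeholder lumped reforming rate
    J_H2 = 1.0e-2 * max(F_H2, 0.0)         # placeholder hydrogen membrane flux
    dT1 = -gamma1 * Jq / (F1 * Cp)                          # Eq. 4
    dT2 = (gamma1 * Jq - gamma3 * r * 2.06e5) / (F2 * Cp)   # Eq. 5, endothermic reforming
    dF_CH4 = -gamma3 * r                                    # Eq. 6 with nu = -1
    dF_H2 = 3.0 * gamma3 * r - gamma2 * J_H2                # Eq. 6 with nu = +3, minus permeation
    dF_H2p = gamma2 * J_H2                                  # Eq. 7, permeate gains what crosses
    return [dT1, dT2, dF_CH4, dF_H2, dF_H2p]

y0 = [873.0, 700.0, 10.0, 0.0, 0.0]        # inlet temperatures [K] and flows [mol/s] (assumed)
sol = solve_ivp(rhs, (0.0, 10.0), y0, max_step=0.1)   # 10 m reactor length (assumed)
print("CH4 conversion:", 1.0 - sol.y[2, -1] / y0[2])
```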
3. Results and discussion The overall process outlined in Figure 1 was modelled in HYSYS and GTPro. The Matlab HMR model was linked to HYSYS using Excel. This allowed the membrane reactor to be solved at the scale of the balance equations while the surrounding process was solved at a larger scale in HYSYS and GTPro. Exhaust gas from the hydrogen fired gas turbine (around 600 °C) was used as the heating medium. Nitrogen from a cryogenic ASU was used as the sweep gas. The nitrogen also acts as the necessary gas turbine fuel diluent for hydrogen rich combustion.
Figure 3: Temperature profiles and conversion of methane in the reactor (left and right).

Figure 3 shows the temperatures and the methane conversion in the membrane reactor. In accordance with De Falco et al. [5], the methane conversion is far from unity at these conditions. With 30-50% methane conversion, integration of the process unit in a power generation process involves additional processing, such as an auto-thermal reformer or an oxy-combustion power island downstream of the membrane reformer. In the process modelled in this work, the retentate from the membrane reformer is sent to an oxygen-blown auto-thermal reformer (ATR) and a two-stage shift reactor to convert unconverted methane and CO to H2 and CO2. An aMDEA-based capture unit is designed to capture 95% of the CO2. An advantage of integrating the HMR with the ATR in this case is the relatively higher partial pressure of CO2 in the syngas stream and hence the lower efficiency penalty for CO2 capture. The H2-rich syngas is then mixed with the retentate and fed to a H2-fired gas turbine. The temperature of the exhaust gas from the turbine after heat exchange in the reformer is around 520 °C and is increased to 580 °C using duct burning. The overall performance of the process is presented in Table 2. Note that the overall CO2 capture ratio is 85% due to duct burning and a significant amount of unconverted CO after the shift reactors.
4. Conclusion
A one-dimensional model of a HMR has been developed, aiming to provide a generic model that can be adapted to different scenarios. The membrane reformer model is integrated in a power cycle with CO2 capture modelled in HYSYS to evaluate its potential. By comparing the multi-scale power cycle modelling to previous work where
they used modelling only at the process scale [6], it is obvious that the detailed membrane reformer model is vital to reveal the limits in methane conversion and the necessary additional process steps in potential integration schemes. The process scheme designed as part of this work has an overall process efficiency of 46.8%. For comparison purposes, an IRCC with an oxygen-blown reformer under a similar set of assumptions has an overall plant efficiency of 46%, while an NGCC with amine-based post-combustion capture has an efficiency of 49.5%. However, it must be noted that, with the multitude of parameters that could be manipulated in such an integration, a sensitivity analysis needs to be performed to identify the optimal process parameters and integration. This will be the subject of future work.

Table 2: Overall performance of HMR integrated power plant with CO2 capture
NG flow                                   t/h    62.52
Thermal energy of fuel - LHV basis [A]    MWth   868.85
Gas turbine output                        MWe    319.09
Steam turbine output                      MWe    150.87
Air expander                              MWe    3.59
Gross electric power output [B]           MWe    469.96
Total ancillary power consumption [C]     MWe    63.08
Net electric power output [D]             MWe    406.89
Gross electrical efficiency [B/A*100]     %      54.09
Net electrical efficiency [D/A*100]       %      46.8
Carbon capture ratio                      %      85.00
References
[1] J. Xu and G.F. Froment, 1989, “Methane Steam Reforming, Methanation and Water-Gas Shift: I. Intrinsic Kinetics”, AIChE J., 35, 1, 97-103
[2] M.H. Wesenberg, 2006, “Gas Heated Steam Reformer Modelling”, PhD thesis, The Norwegian University of Science and Technology.
[3] E. Johannessen and K. Jordal, 2005, “Study of a H2 separating membrane reactor for methane steam reforming at conditions relevant for power processes with CO2 capture”, Energy Conv. and Manag., 46, 1059-1071
[4] Ø. Wilhelmsen, 2010, “The state of minimum entropy production in reactor design”, MSc thesis, The Norwegian University of Science and Technology
[5] M. De Falco, L. Di Paola, L. Marrelli and P. Nardella, 2007, “Simulation of large-scale membrane reformers by a two-dimensional model”, Chem. Eng. J., 128, 115-125
[6] K. Jordal, R. Bredesen, H.M. Kvamsdal, O. Bolland, 2004, “Integration of H2-separating membrane technology in gas turbine processes for CO2 capture”, Energy, 29, 1269-1278
5. Acknowledgements This publication has been produced with support from the BIGCCS Centre, performed under the Norwegian research program Centres for Environment-friendly Energy Research (FME). The authors acknowledge the following partners for their contributions: Aker Solutions, ConocoPhilips, Det Norske Veritas AS, Gassco AS, Hydro Aluminium AS, Shell Technology Norway AS, Statkraft Development AS, Statoil Petroleum AS, TOTAL E&P Norge AS, GDF SUEZ E&P Norge AS and the Research Council of Norway (193816/S60).
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Modeling the liquid back mixing characteristics for a kinetically controlled reactive distillation process
Mayank Shah,a Edwin Zondervan,a Anton A. Kiss,b Andre B. de Haana
a Process Systems Engineering, Department of Chemical Engineering and Chemistry, Eindhoven University of Technology, 5600 MB, The Netherlands
b AkzoNobel – Research, Development & Innovation, Process Technology ECG, Velperweg 76, 6824 BM Arnhem, The Netherlands. E-mail:
[email protected]
Abstract
The state-of-the-art equilibrium model and rate-based models for reactive distillation (RD) are well known and have been used for a couple of decades. However, these models are not sufficient to represent a slow, kinetically controlled reaction process. This shortcoming is due to neglecting the effect of liquid back mixing on the whole process. This work starts by reviewing the modeling approaches for RD and then discusses the applicability of the various models. The main focus is on the extension of the dynamic rate-based model to take into account liquid back mixing. We also show how axial dispersion is introduced into the RD model without adopting the axial dispersion model. The results of the rate-based model were compared with and without axial dispersion. Remarkably, the extended model predicts the kinetically controlled process more accurately than the conventional rate-based model.
Keywords: Reactive distillation modeling, liquid back mixing, Aspen Custom Modeler
1. Introduction
Studying multi-component multistage separation processes such as distillation, gas absorption and reactive distillation by computer-aided design and simulation is an important aspect of modern chemical engineering. Such studies are currently based on either equilibrium (EQ) modeling or rate-based modeling. In equilibrium modeling, the vapor and liquid are assumed to be in equilibrium. However, this does not apply to actual operation, since a column rarely operates at equilibrium. The degree of separation depends on the mass and energy transfer between the phases being contacted on a tray, or within a packed section of the column. In practice, the theoretical number of stages obtained from the equilibrium model calculations is converted to the required number of real stages, either through the overall efficiency of a tray or by the height equivalent of a theoretical plate (HETP) for packed columns (a minimal numerical illustration follows this paragraph). This is a useful approach to simulate a binary system or an existing column. However, this approach is not reliable for simulating a multi-component system or an existing column with different operating conditions.1 Compared to equilibrium modeling, rate-based modeling offers accuracy in the design of a column as it accounts for: 1) vapor-liquid equilibrium only at the interface between the bulk liquid and vapor phases, 2) a transport-based approach to predict the flux of mass and energy across the interface, and 3) the real hydrodynamic situation of either a tray or a packed column. For these reasons, over-design and under-design are avoided, there is no need for efficiencies and HETPs, and the column is designed more realistically than with EQ modeling, thereby reducing the capital and operating costs. Although the state-of-the-art rate-based model predicts better than the EQ model, it is of limited reliability for predicting mass transfer limited processes.
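As a minimal illustration of the stage-to-hardware conversions mentioned in the paragraph above, with purely assumed numbers:

```python
# Converting theoretical stages to hardware: packed height via HETP,
# real trays via an overall tray efficiency (all values assumed).
n_theoretical = 20      # stages from an equilibrium-model calculation
HETP = 0.45             # height equivalent of a theoretical plate [m]
eta_overall = 0.70      # overall tray efficiency [-]

packed_height = n_theoretical * HETP                # packed column: 9.0 m of packing
n_real_trays = round(n_theoretical / eta_overall)   # tray column: 29 real trays
print(packed_height, n_real_trays)
```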
The rate-based model is not sufficient to represent a slow, kinetically controlled reaction process. This shortcoming is due to not taking into account the axial dispersion in the model, which results in neglecting the effect of liquid back mixing on the whole process. The liquid back mixing is very important to accurately predict the end product composition in a slow reaction process, as the end product composition strongly influences the physical and chemical properties of the product. Kinetically controlled processes are often encountered in the specialty chemicals sector, and they are often batch processes due to the small scale of production required. The best examples of kinetically controlled processes encountered in the specialty chemicals sector are fatty acids, fatty acid nitriles and polyesters synthesis. In order to apply reactive distillation (RD) technology in the specialty chemicals sector, the reactive distillation column must be designed in such a way that several products can be produced in the same column, while the formation of undesired product when switching from one product to another is avoided or minimized. The undesired product formation is minimized by reducing the back mixing in the system. This clearly suggests the necessity to incorporate the liquid back mixing in the model in order to investigate a multi-product RD process. In this work, the dynamic rate-based model is extended to account for the liquid back mixing. We also show how the axial dispersion is introduced into the RD model, without adopting the axial dispersion model. The extended model is simulated in steady state mode to predict the process characteristics, and in dynamic mode to predict the influence of back mixing on the product change-over. We also compare the results of the rate-based model with and without the axial dispersion.
2. Model development
The dynamic rate-based model is extended to account for the liquid back mixing by considering each stage as a stirred tank reactor. The extended model accounts for convection, mass transfer, reaction and axial dispersion. The liquid phase balance is discussed in detail in order to show explicitly how the axial dispersion is introduced into the RD model. The description of the vapor phase, the mass transfer between phases and the reactions in the liquid phase remains the same as in rate-based modeling. The model consists of n stages in series and each stage is considered as a stirred tank reactor, as shown in Figure 1. Note that n stirred tank reactors in series represent plug flow behavior; hence the liquid phase balance can be represented by a plug flow reactor (PFR) model composed of a partial differential equation (PDE):
Figure 1. Schematic view of a RD column and the corresponding stage balance
$$\frac{\partial C_i}{\partial t} = -v\,\frac{\partial C_i}{\partial z} + D_{ax}\,\frac{\partial^2 C_i}{\partial z^2} + R_i + \dot{M}_i \qquad (1)$$
where Ci is the concentration of component i (mol/kg), t is the time (s), v is the linear flow velocity (m/s), z is the position coordinate down the length of the column (m), Dax is the axial dispersion coefficient (m²/s), Ri is the reaction rate of component i (mol/kg/s) and Ṁi is the mass transfer flux (mol/kg/s). By discretization of the spatial derivatives of eq. (1), the liquid phase balance is represented as an ordinary differential equation (ODE):

$$\frac{dC_i}{dt} = -v\,\frac{C_{i,j} - C_{i,j-1}}{\Delta z} + D_{ax}\,\frac{C_{i,j+1} - 2C_{i,j} + C_{i,j-1}}{\Delta z^2} + R_{i,j} + \dot{M}_{i,j} \qquad (2)$$
where Ci,j-1, Ci,j, Ci,j+1 are the concentrations of component i at stages j-1, j and j+1, respectively, and Δz is the height of a stage. In order to compare eq. (2) to the traditional liquid phase balance of the rate-based model, eq. (2) is reformulated by substituting Ci = ni/M and Δz = h on the left and right sides, respectively:
$$\frac{dn^L_{i,j}}{dt} = L_{j-1}\,C_{i,j-1} - L_j\,C_{i,j} + D_{ax}\,M_j\,\frac{C_{i,j+1} - 2C_{i,j} + C_{i,j-1}}{h^2} + M_j\,R_{i,j} + N^L_{i,j} + F^L_j\,C_{fi,j} \qquad (3)$$
where nLi,j is the number of moles of component i on stage j, Lj-1, Lj are the liquid flow rates (kg/s) on stages j-1 and j, respectively, Mj is the holdup on stage j (kg), NLi,j is the mass transfer rate (mol/s), FL is the liquid feed flow rate (kg/hr) and Cfi,j is the liquid feed concentration (mol/kg). The Dirichlet and Neumann boundary conditions2 are applied to solve eq. (3) for the top stage (j=1) and for the bottom stage (j=J), respectively. Taylor et al.3 have introduced a side draw from each stage in the rate-based model for a coupled RD and side reactor process. We used this concept to introduce the concentration of component i, Ci,j+1, from stage j+1 to stage j by using the side draw (Sj+1) from a bottom stage (j+1) to the subsequent top stage (j). Since eq. (3) accounts for convection, dispersion, reaction and mass transfer, it represents the complete liquid phase component material balance for kinetically controlled processes. The total material balance for the liquid phase is given by:
$$\frac{dn^L_j}{dt} = L_{j-1} - L_j + S_{j+1} + M_j \sum_{i=1}^{n} R_{i,j} + \sum_{i=1}^{n} N^L_{i,j} + F^L_j \qquad (4)$$
The component and total material balances for vapor phase are described by eq. (5) and (6), respectively:
$$\frac{dn^V_{i,j}}{dt} = V_{j+1}\,y_{i,j+1} - V_j\,y_{i,j} - N^V_{i,j} + F^V_j\,y_{fi,j} \qquad (5)$$

$$\frac{dn^V_j}{dt} = V_{j+1} - V_j - \sum_{i=1}^{n} N^V_{i,j} + F^V_j \qquad (6)$$
The vapor-liquid equilibrium at the interface is represented by:
$$y_{i,j}\,P_j = x_{i,j}\,\gamma_{i,j}(x_{i,j}, T_j)\,p^{sat}_{i,j} \qquad (7)$$
where Pj is the total pressure (bar), γi,j is the activity coefficient of component i as a function of xi,j and Tj (K), and psati,j is the vapor pressure of pure component i. The mass transfer rates at the interface are represented by eqs. (8), (9) and (10):
$$N^V_{i,j} = N^L_{i,j} \qquad (8)$$

$$N^L_{i,j} = k_l\,a\,(C^*_{i,j} - C_{i,j}) \qquad (9)$$

$$N^V_{i,j} = k_g\,a\,(y_{i,j} - y^*_{i,j}) \qquad (10)$$
where kl and kg are the liquid- and vapor-side mass transfer coefficients (kg/s/m²), respectively, a is the interfacial area (m²), and C*i,j, y*i,j are the liquid- and vapor-side equilibrium concentrations of component i, respectively.
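To illustrate how the extended balance, eq. (3), behaves for a chain of stages, the sketch below integrates the liquid-phase component balance for a single reactant with constant liquid flow and holdup, a first-order reaction in place of the real kinetics, and the vapour and mass-transfer terms dropped; the Dirichlet (top) and Neumann (bottom) boundary conditions of the text are applied. All parameter values are assumptions for illustration only.

```python
# Sketch of eq. (3) for a chain of stages: convection between stages plus the
# axial-dispersion (back mixing) term Dax*(C[j+1] - 2*C[j] + C[j-1])/h^2.
import numpy as np
from scipy.integrate import solve_ivp

J, h = 20, 0.5          # number of stages and stage height [m] (assumed)
L, M = 1.0, 5.0         # liquid flow [kg/s] and stage holdup [kg] (assumed)
k = 0.03                # first-order rate constant [1/s] (placeholder kinetics)
Dax = 0.014             # axial dispersion coefficient [m2/s] (one of the values studied)
C_feed = 1.0            # reactant feed concentration [mol/kg] (assumed)

def rhs(t, C):
    Cup = np.concatenate(([C_feed], C[:-1]))   # stage j-1; Dirichlet condition at the top
    Cdn = np.concatenate((C[1:], [C[-1]]))     # stage j+1; Neumann condition at the bottom
    conv = L * (Cup - C) / M                   # (L_{j-1}*C_{i,j-1} - L_j*C_{i,j}) / M_j
    disp = Dax * (Cup - 2.0 * C + Cdn) / h**2  # back-mixing term of eq. (3), divided by M_j
    return conv + disp - k * C                 # first-order consumption, R = -k*C

sol = solve_ivp(rhs, (0.0, 5000.0), np.zeros(J), method="BDF")
print(f"bottom-stage conversion: {1.0 - sol.y[-1, -1] / C_feed:.2f}")
```

With Dax = 0 the same sketch reduces to the classical tanks-in-series model, so the effect of back mixing can be seen directly by varying Dax, mirroring the trend reported in Table 1 and Figure 2.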
3. Results and discussions
A reactive distillation model is developed to describe a kinetically controlled reactive distillation process. This model can also be used to simulate a multi-product reactive distillation column. In this work, Aspen Custom Modeler is used as a powerful CAPE tool to extend the traditional rate-based model so that the complete model accounts for the effect of axial dispersion on the process. The simulations of a kinetically controlled process show that there is a significant influence of the axial dispersion on the conversion (as illustrated by Figure 2). When the axial dispersion is neglected, a conversion of 96% is predicted. However, the conversion reduces significantly to 90% with increasing axial dispersion coefficient. This clearly shows that – due to the low conversion of the reactant in the highly back-mixed system – the end product composition is also significantly altered. In order to achieve a 96% conversion in the highly dispersed system, more stages are required compared to the low dispersed system, as shown in Table 1. This important difference shows that the axial dispersion should be included in the RD model in order to reliably analyze the kinetically controlled process. As the column internals significantly influence the axial dispersion, it is also necessary to investigate internals that have low axial dispersion in order to properly design a kinetically controlled process. Undesired products are often formed during the product change-over in a multi-product continuous RD column. The dynamic simulation of product change-over in a multi-product continuous RD column is shown in Figure 3, for various axial dispersions.
Table 1: Number of stages required in series to get a conversion of 96%

Dax [m2/s]   Stages [-]
0            20
0.0002       25
0.014        33
0.028        44
Figure 2. Influence of axial dispersion on conversion
Figure 3. Dynamic profile of product change-over

Due to the fact that different conversions are achieved with varying axial dispersion coefficients (as illustrated in Figure 2), the steady state compositions of product P2 are different. It is noticeable that during the product change-over, the product transition time from the steady state composition of product P1 to the steady state composition of product P2 is significantly higher in the highly dispersed system as compared to a low dispersed system. This results in more undesired product formation in the highly dispersed system. The dynamic simulations show that the formation of such undesired product during the product transitions in the highly dispersed system is at least 1.5 times higher as compared to a low dispersed system. This demonstrates that the RD model without the axial dispersion is not sufficient to represent the reality of the process. The extended rate-based model improves the predictive capability for the slow reaction process (kinetically controlled process) compared to conventional RD models. The extended model also provides a better platform to analyze the multi-product RD process.
4. Conclusions
The extended RD model proposed in this work represents a kinetically controlled RD process more accurately than the conventional rate-based model. The extended model can also be used to realistically describe a multi-product RD column. The simulation results demonstrate that the axial dispersion significantly influences the kinetically controlled process and must not be neglected. The extended model predicts that the conversion is reduced in a highly dispersed system as compared to a low dispersed system. At least 1.5 times more undesired product is formed in the highly dispersed system as compared to a low dispersed system.
Acknowledgment
The financial support of the Dutch Separation Technology Institute (project SC-00005) and the industrial partners (DSM, AkzoNobel) is gratefully acknowledged.
References
1. R. Bauer, R. Taylor, R. Krishna, 2000, Chemical Engineering Journal, 76, 33-47
2. H. Fogler, Elements of Chemical Reaction Engineering, 2nd ed., Prentice Hall (1992)
3. R. Taylor, R. Krishna, 2000, Chemical Engineering Science, 55, 5183-5229
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Application of computer-aided multi-scale modelling framework – Aerosol case study Martina Heitziga, Christopher Gregsonb, Gürkan Sina, Rafiqul Gania a
CAPEC, Department of Chemical & Biochemical Engineering, Technical University of Denmark, Søltofts Plads, Bld. 227, 2800 Kgs. Lyngby, Denmark b Firmenich Inc., 250 Plainsboro Road, Plainsboro, NJ, 08536, USA
Abstract A computer-aided modelling tool for efficient multi-scale modelling has been developed and is applied to solve a multi-scale modelling problem related to the design and evaluation of fragrance aerosol products. The developed modelling scenario spans four length scales and describes how droplets of different sizes are formed when a liquid fragrance product is sprayed from a pressurized can and how these droplets evaporate while they settle down due to sedimentation and convective mixing. Keywords: multi-scale modelling, modelling framework, aerosols
1. Introduction A computer-aided multi-scale modelling framework has been developed with the goal to increase the efficiency of the involved work-flows for model development and application [1]. This is achieved by designing the structure of the framework such that it can handle the work-flows and data-flows associated with different modelling tasks, combining state-of-the-art modelling techniques for the different work-flow steps as well as supporting model-documentation and model reuse. In this paper, a case study related to a multi-scale modelling problem (four different length scales) of an aerosol system is presented in order to highlight the main features of the modelling framework. The modelling scenario describes the spraying of a liquid fragrance product from a pressurized can which causes droplet formation. Furthermore, the fate of these droplets due to evaporation and transport has been considered. The work-flow to solve this and similar multi-scale problems has been developed and incorporated in the modelling framework. The developed models have been implemented in the model library of the framework so that they are available for application by other modelling projects. Section 2 introduces the developed work-flow for multi-scale model development while Section 3 describes a case study used to highlight the work-flow and the application of the modelling framework.
2. Work-flow in multi-scale modelling problems
The work-flow of the framework supporting the development of multi-scale models, such as the spraying of an aerosol, is summarized as: 1) Modelling objective and system description; 2) Identification of required models and model types (with respect to modelling objective); 3) Development, analysis, identification and validation of models; 4) Linking of models involved and solution strategy; 5) Evaluation of model performance and results. In step 1 the modelling objective is defined and available information on the system is collected. In step 2 the different elements of the system are identified and for each element it is investigated how it can be modelled. The work-flow
identifies the need for multiple time and/or length scales for models based on the model assumptions, considered phenomena and desired model outputs. Once the models have been developed (step 3) they are evaluated on how they should be linked and solved sequentially or simultaneously (step 4).
3. Case study 3.1. Modelling objective and system description The modelling objective is to describe the spraying process of a fragrance product (for example fine fragrance, air freshener) so that the product qualities can be evaluated. The system under investigation is depicted in Figure 1. A pressurized liquid mixture of active ingredients, solvents, additives and propellants is released from a can to the surrounding atmosphere. The compounds are limonene (fragrance) and ethanol (carrier). During the release process a part of the liquid evaporates while the remaining liquid forms droplets of different sizes which account for the fragrance delivery. The generated droplets move downwards due to sedimentation and convective mixing as fragrance chemicals evaporate. Consequently, the modelling objectives of the system can be divided into two parts (see Figure 1): Part I: Describe the spraying of the compressed liquid from a can to the atmosphere and predict the ratio of released vapour to liquid as well as the size distribution of the formed droplets and their temperature. Part II: Model the fate of the droplets as they settle down and evaporate.
Figure 1. Spraying process ($v_{sed}$: sedimentation velocity; $\dot{m}_i^{evap}(t,h)$: evaporation mass flow of compound i)
3.2. Identification of required models and model types Part I: The ratio of liquid to vapour released as well as the temperature and composition of the droplets are predicted by an adiabatic flash model. Based on these results the droplet size distribution is determined by an experimentally regressed correlation (alternatively, a normal distribution may also be assumed). Part II: In order to describe the fate of the droplets, models are needed to describe the transport process as well as the evaporation, together with appropriate constitutive models for different properties. 3.3. Development, analysis, identification and validation of models Because of page limits, we present only the modelling of Part II in this paper. 3.3.1. Transport of droplets: The transport model of the droplets in the atmosphere (W. Koch, SprayExpo Program Description, Toxikologie und Experimentelle Medizin, Fraunhofer Institut, Hannover, Germany) has been adopted here. It considers the
transport due to sedimentation as well as convective mixing by eddy diffusion. The sedimentation is modelled based on Stokes' friction law. Coalescence between droplets and transport in horizontal directions are neglected. The droplets are assumed to be spherical. The corresponding model equations are given below:

$A = \dfrac{\rho_{mix}\, g}{18\, \eta_{air}}$   (1)

$\dfrac{\partial N_{Dr}}{\partial t} = K_{eddy}\, \dfrac{\partial^2 N_{Dr}}{\partial h^2} - A\, D_d^2\, \dfrac{\partial N_{Dr}}{\partial h}$   (2)
3.3.2. Droplet evaporation: For the droplet evaporation the model from [2] has been modified by adding a dynamic energy balance. Important model assumptions are: the droplet is spherical, ideally mixed and consists only of a binary mixture; VLE is established at the droplet surface; convection and thermal diffusion are neglected; the gas phase is ideal; the temperature profile around the droplet is given by a zeroth order approximation; the temperature of the surrounding gas phase $T_{air}$ is constant; and the concentration of the droplet compounds $y_{i,\infty}$ far away from the droplet is zero. The corresponding model equations (Eqs. 3-14) determine: the droplet surface temperature $T_s$ and the surface mole fractions, $n_i = m_i / MW_i$ and $x_i = n_i / \sum_j n_j$, $i = 1, \dots, N_{comp}$ (Eqs. 3-5); the droplet mass, composition, volume and diameter, $m = \sum_{i=1}^{N_c} m_i$, $w_i = m_i / m$, $V_d = \sum_{i=1}^{N_c} m_i / \rho_i$, $D_d = \left(6 V_d / \pi\right)^{1/3}$, together with the mixture density $\rho_{mix} = 1 / \sum_{i=1}^{N_c} (w_i / \rho_i)$ (Eqs. 6-10); the surface partial pressures with a Kelvin correction, $P_i^s = P_i^{vap}(T_s)\, x_i\, \gamma_i\, R_{v,i}$ with $R_{v,i} = \exp\{4\, MW_i\, \sigma / (\rho_{mix}\, R\, T_s\, D_d)\}$ (Eqs. 11-12); the evaporation rate of each compound,

$\dfrac{dm_i}{dt} = \dfrac{2 \pi\, D_d\, MW_i\, D_i\, p}{R\, T_{air}}\; \ln\!\left\{ \dfrac{1 - y_{i,air}}{1 - y_{i,s}} \right\}$   (13)

and the dynamic energy balance over the droplet,

$\dfrac{d(m\, c_p\, T_s)}{dt} = 2 \pi\, D_d\, K_{air}\, (T_{air} - T_s) + \pi\, D_d^2\, \Gamma\, (T_{air}^4 - T_s^4) + \sum_{i=1}^{N_c} L_i\, \dfrac{dm_i}{dt}$   (14)

Here, $T_s$ is the droplet temperature, $D_d$ the droplet diameter and $m_i$ the mass of compound i inside the droplet. The model has been successfully validated using data by [3] for evaporating water droplets.
3.3.3. Constitutive models: Correlations for the pure component properties with respect to changing temperature are taken from the ICAS database, which is linked to the modelling framework. The required properties are: liquid heat capacity $c_{p,i}$, liquid density $\rho_i$, vapour diffusion coefficient in air $D_i$, surface tension $\sigma$, heat of vaporization $L_i$ and vapour pressure $P_i^{vap}$ for all system compounds, and the thermal conductivity of air $K_{air}$. For the liquid phase activity coefficients $\gamma_i$ the UNIQUAC model has been applied. The UNIQUAC parameters have been regressed through experimental data for the ethanol-limonene system by [4].
3.4. Linking of models involved and solution strategy: The developed models for the spraying process span four different length scales. The models of Part I, that is the adiabatic flash model and the droplet size distribution model, are at the macro scale. In order to describe the fate of the droplets (Part II), two size scales have been employed. On the meso scale (Eqs. 1-2) the transport of one droplet size fraction is considered, while the micro scale (Eqs. 3-14) describes the evaporation of a single droplet and the required properties are calculated on the nano scale with respect to temperature and composition. Figure 2 shows the linking scheme of the different models together with the data-flow and sketches of the modelled system on the different scales. The macro
scale models are solved sequentially. Due to the data-flow requirements between the size scales, the remaining lower scale models must be solved simultaneously. The lower scale models need to be solved for each discrete droplet size fraction j in the macro scale (Ndis times). The droplet temperature Ts, the compound masses in the droplet mi(j) and the number of droplets NDr(j) are communicated from the macro scale to the lower scale models, where they are used as initial values. In order to solve the meso scale model, the partial differential equation (Eq. 2) is discretized in the vertical direction h. This is done automatically by the modelling framework based on user specifications (method of lines, hmin = 0.4 m, hmax = 2.7 m, 184 discretization points). The height where the droplets are generated is 1.6 m. Results communicated back to the macro scale are the number of droplets at each discrete height, NDr(j)(h,t), the mass of the compounds i inside the droplet, mi(j)(t), and the mass flow evaporating from the droplet, mi,evap(t), all with respect to time. After solving the lower scale models for each discrete droplet size fraction j, the macro scale aggregates the results. Figure 2 also shows the output variables of each model in the linking scheme.
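To make the discretization step concrete, the sketch below applies the method of lines to the meso-scale transport equation (Eq. 2) on a uniform vertical grid; the grid, parameter values and variable names are illustrative assumptions of ours, not the framework's internal implementation (which uses a non-uniform grid).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumed values, not those of the case study)
K_eddy = 1e-2          # eddy diffusion coefficient [m2/s]
A = 3.0e7              # Stokes factor rho_mix*g/(18*eta_air) [1/(m s)]
D_d = 20e-6            # droplet diameter [m]
v_sed = A * D_d**2     # sedimentation velocity [m/s]

h = np.linspace(0.0, 2.7, 184)   # vertical coordinate [m]
dh = h[1] - h[0]

def rhs(t, N):
    """Method of lines for dN/dt = K_eddy*d2N/dh2 - v_sed*dN/dh."""
    dN = np.zeros_like(N)
    diff = K_eddy * (N[2:] - 2.0 * N[1:-1] + N[:-2]) / dh**2
    adv = v_sed * (N[1:-1] - N[:-2]) / dh    # first-order upwind difference
    dN[1:-1] = diff - adv
    return dN                                 # boundary nodes held fixed

N0 = np.exp(-((h - 1.6) / 0.05) ** 2)         # droplets released around 1.6 m
sol = solve_ivp(rhs, (0.0, 10.0), N0, method="BDF")
```

An implicit integrator (BDF) is used here because the diffusion term makes the semi-discrete system stiff on fine grids.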
Figure 2. Linking scheme for spraying and fate of aerosol
3.5. Evaluation of results Simulations have been conducted for a total number of 1.02x10^10 droplets having a droplet size distribution of 22 discrete diameters between 1.3 and 34 μm. Initially, all droplets had a composition of 5 vol% limonene and 95 vol% ethanol. The micro, meso and macro scale results are highlighted in Figures 3, 4 and 5.
4. Conclusions The computer-aided modelling framework provides important features for the solution of the presented case study and similar multi-scale problems. It is structured based on the work-flows the modeller needs to follow, not only for model development, but also for model analysis, identification, validation and application for simulation and optimization. At each step of the work-flows the modelling framework supports the modeller by providing expertise as well as the required computer-aided tools, library and database connections. In that way, the process of model development is
systematized and becomes more efficient. As regards the case study, the presented aerosol model allows the evaluation of product attributes such as how much vapour and of which composition is released at which height and at what time; how fast droplets settle down and when they disappear due to evaporation. This allows the product designer to simulate different scenarios (starting composition, spray-can pressure, release devices, etc.) and design the appropriate product with the corresponding delivery device. The developed models are also included in the modelling framework library and are ready for reuse for a different application context. If during application the available model needs to be further extended, the modelling framework provides strategies for this. The aerosol model can certainly be extended.
Figure 3. Micro scale results. Left: droplet composition during evaporation (34 μm droplet), Right: lifetimes of droplets for all 22 discrete diameters.
Figure 4. Meso scale results. Left: Location of droplets at droplet lifetime for different discrete diameters, Right: No. of 34 μm droplets in different height fractions vs. time.
Figure 5. Macro scale results. Left: Total mass of limonene released (by all droplets) vs. time, Right: Total mass flow of ethanol and limonene at different heights vs. time.
References [1] M. Heitzig, G. Sin, R. Gani, 2010, A Computer-Aided Framework for Modelling and Identification, submitted to Computers & Chemical Engineering [2] J. Kukkonen, T. Vesala, M. Kulmala, 1989, J. Aerosol Sci., 20, 7, 749-763 [3] W. E. Ranz, W. R. Marshall Jr., 1952, Chem. Eng. Prog., 48, 4, 173-180 [4] A. Cháfer, R. Muñoz, M.C. Burguet, A. Berna, 2004, Fluid Phase Equilib., 224, 251-256
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Sensitivity of shrinkage and collapse functions involved in pore formation during drying Seddik Khalloufia, Cristhian Almeida-Riveraa, Jo Jansena, Marcel Van-Der-Vaarta, and Peter Bongersa,b a
Unilever R&D Vlaardingen, Structured Materials and Process Science Department, 3130 AT Vlaardingen, The Netherlands. Tel. +31 10 460 8501, Fax +31 10 460 5025,
[email protected] b
Chemical Engineering and Chemistry, Eindhoven University of Technology, Eindhoven, The Netherlands
Abstract The pore formation during drying is controlled by two mechanisms, which are represented by two functions. These functions are expected to be universal, thus recurrent and applicable to relevant physical properties of the products during drying. This contribution aims at studying the sensitivity of the shrinkage and collapse functions in predicting the porosity as a function of moisture content. A set of experimental data from an independent research group on air-drying of carrot was used to evaluate the sensitivity of these two functions. The results of this analysis showed that (i) at high moisture content, the porosity is not sensitive to the shrinkage function whatever the value of the collapse function, (ii) in the case of air drying and at low moisture content, the porosity could be strongly affected by the shrinkage function, and (iii) the collapse function has a strong effect on porosity in products with a high volume of initial air. These findings are reported here for the first time, and the approach used in this contribution could be very relevant to assess other parameters involved in drying processes such as bulk density and shrinkage coefficient. Keywords: Porosity, drying, sensitivity, theoretical model, shrinkage, collapse.
1. Introduction Drying is one of the major food processing technologies used to preserve and to increase the shelf-life of food products. Indeed, dried products are characterized by a low water activity, which inhibits microbial growth and undesirable enzymatic reactions (Mayor et al., 2004). In addition, drying facilitates handling, storage and transport of products without involving expensive cooling systems. During drying, food products undergo deformations that can be characterized by changes in volume, shape, porosity, density, shrinkage and/or collapse phenomena. These modifications are of extreme importance in terms of product quality and
characterization of mass and heat transfer phenomena. Optimization of these phenomena, taking into account the quality of the output product and the cost of the processing, is a requirement for the development and long-term viability of drying technologies. Several mathematical expressions have been suggested to predict the porosity as a function of moisture content. Recently, a new theoretical model was suggested (Khalloufi et al. 2009). One of the advantages of this new model is its ability to capture, with high accuracy, the porosity profiles regardless of the products and/or the technology used. Furthermore, this model involves two physical phenomena, namely shrinkage and collapse, that can be used to understand the mechanisms behind the pore formation. However, so far there has been no sensitivity study of the shrinkage and/or collapse functions in predicting the porosity, and such a study is the main aim of this contribution.
2. Theoretical background of the model
The main steps which were taken to build this new theoretical model are illustrated in Table 1 (Khalloufi et al. 2009). We proposed a mechanism-driven description of the porosity changes during drying in terms of shrinkage and collapse phenomena. The first phenomenon is related to the amount of water removed and replaced by air, which is represented by the shrinkage function. The second phenomenon refers to the variation of the air initially existing within the product, which is represented by the collapse function.

Table 1: Main steps for building the new theoretical model (Khalloufi et al. 2009)

Porosity: $\varepsilon(X) = \dfrac{V_a(X)}{V_a(X) + V_w(X) + V_s}$

Volume of air replacing water removed: $V_{w/a}(X) = \phi(X)\, m_s\, \dfrac{X_0 - X}{\rho_w}$

Variation of the initial air: $V_{a/a}(X) = \delta(X)\, V_{a0}$

Volume of initial air: $V_{a0} = m_s\, \dfrac{\varepsilon_0}{1-\varepsilon_0}\left[\dfrac{X_0}{\rho_w} + \dfrac{1}{\rho_s}\right]$

Total volume of air: $V_a(X) = \phi(X)\, m_s\, \dfrac{X_0 - X}{\rho_w} + \delta(X)\, V_{a0}$

Volume of the remaining water: $V_w = m_s\, \dfrac{X}{\rho_w}$

Volume of the solid fraction: $V_s = m_s\, \dfrac{1}{\rho_s}$

Final expression of porosity: $\varepsilon(X) = \dfrac{\delta(X)\,\varepsilon_0\,[1+\beta X_0] + \phi(X)\,\beta\,[1-\varepsilon_0]\,[X_0 - X]}{\delta(X)\,\varepsilon_0\,[1+\beta X_0] + \phi(X)\,\beta\,[1-\varepsilon_0]\,[X_0 - X] + [1-\varepsilon_0]\,[1+\beta X]}$

Shrinkage function: $\phi(X) = r_1 + r_2 X + r_3 X^2$

Collapse function: $\delta(X) = 1 - 0.5\,\big[1 - \tanh[p(X - X_c)]\big]$
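As a numerical illustration of the final expression in Table 1, a minimal sketch is given below; the function names are ours, and the regression coefficients are placeholders, not the values fitted by Khalloufi et al. (2009).

```python
import numpy as np

def shrinkage(X, r1, r2, r3):
    # phi(X) = r1 + r2*X + r3*X**2
    return r1 + r2 * X + r3 * X**2

def collapse(X, p, Xc):
    # delta(X) = 1 - 0.5*(1 - tanh(p*(X - Xc)))
    return 1.0 - 0.5 * (1.0 - np.tanh(p * (X - Xc)))

def porosity(X, X0, eps0, beta, r, p, Xc):
    # Final expression of porosity from Table 1
    phi, delta = shrinkage(X, *r), collapse(X, p, Xc)
    num = delta * eps0 * (1 + beta * X0) + phi * beta * (1 - eps0) * (X0 - X)
    return num / (num + (1 - eps0) * (1 + beta * X))
```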
3. Sensitivity study approach
To perform the sensitivity study of the shrinkage and collapse functions in predicting the porosity, two maximum boundaries of ±10% or ±50% around both the nominal shrinkage and collapse functions were studied. In order to preserve the physical meaning of these functions, the following constraints were always respected:

If δ(X) < 0 then δ(X) = 0; if δ(X) > 1 then δ(X) = 1
If φ(X) < 0 then φ(X) = 0; if φ(X) > 1 then φ(X) = 1

At each moisture content, 20 (10 upper and 10 lower) random values of the nominal shrinkage and collapse functions were calculated within each boundary. The porosity was then obtained at each moisture content by a random combination of the values of the shrinkage and collapse functions. Thus, a cloud of points around the nominal value of the porosity was generated. A variation of ±10% from the nominal values of the porosity was used to discuss the sensitivity of the shrinkage and collapse functions. To explore the accuracy and level of prediction of the proposed model, we formulated it as a constrained optimization problem (Khalloufi et al. 2009). The model was implemented in Matlab (R2007b, The MathWorks Inc., USA) using the fmincon function of the Optimization Toolbox.
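A compact sketch of this sampling procedure is given below, reusing the hypothetical porosity helpers sketched after Table 1 (function and variable names are ours; the original study used Matlab rather than Python):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def perturbed(nominal, level, n=20):
    # n random values within +/- level of the nominal function value,
    # clipped to [0, 1] to preserve the physical meaning
    return np.clip(nominal * (1.0 + rng.uniform(-level, level, n)), 0.0, 1.0)

def porosity_cloud(X, X0, eps0, beta, phi_nom, delta_nom, level=0.10):
    phi = perturbed(phi_nom, level)
    delta = perturbed(delta_nom, level)
    rng.shuffle(delta)   # random combination of shrinkage and collapse values
    num = delta * eps0 * (1 + beta * X0) + phi * beta * (1 - eps0) * (X0 - X)
    return num / (num + (1 - eps0) * (1 + beta * X))
```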
4. Results and discussion A set of experimental data of air dried carrot reported by Lozano et al. (1980) was used to perform this sensitivity study. This choice aimed at covering a special porosity profile characterised by an inversion point. Figure 1 depicts the porosity as a function of moisture content and, as already demonstrated previously (Khalloufi et al 2009), this theoretical model showed very good agreement with the experimental data.
Figure 1. Porosity as a function of moisture content: ε(X). Empty circles are the experimental data published by Lozano et al. (1980). Lines are the results of the simulations obtained by the present model. Dashed lines represent the limits of ±10% of the porosity from the experimental data. Cloudy data (symbols) are the result of the variation of the shrinkage and/or collapse functions (left ±10%, right ±50% from the nominal values of the shrinkage and/or collapse functions).
Figures 2 and 3 show the shrinkage and collapse functions, respectively, within two different levels of variation from the nominal values. The variation of the shrinkage and collapse functions by ±10% or ±50% from their nominal values resulted in the clouds of points around the experimental porosity data (Figure 1).
Figure 2: Shrinkage as a function of moisture content: φ(X). Lines are the results of the nominal values. Cloudy data are the result of variation in the shrinkage function (left ±10%, right ±50% from the nominal values of the shrinkage function). Experimental data published by Lozano et al. (1980).
Figure 3: Collapse as a function of moisture content: δ(X). Lines are the results of the nominal values. Cloudy data are the result of variation in the collapse function (left ±10%, right ±50% from the nominal values of the collapse function). Experimental data published by Lozano et al. (1980).
The variation of ±10% of the shrinkage and/or collapse functions (Figures 2 and 3) results in predictions within ±10% of the experimental values of porosity (Figure 1). This result was confirmed with two other sets of experimental data reported by Krokida et al. (1997) for carrot dried with hot air or freeze drying (data not shown). However, at a high variation level (±50%) of the shrinkage and/or collapse functions, the errors in the porosity predictions were no longer within ±10% of the experimental values of porosity (Figure 1). The significant deviation from the experimental data at a high variation level (±50%) of the shrinkage and/or collapse functions could be explained by: (i) the high variation of the shrinkage function (between 2% and 18%) in the middle of the drying process (0.05 ≤ X/X0 ≤ 0.95) (Figure 2),
and (ii) the relatively high value of the initial porosity (ε0 ≈ 12%) and thus the significant effect of the collapse function, especially at the beginning of the drying process (Figure 3). According to Figure 1, at a high variation level (±50%) of the shrinkage and/or collapse functions, the porosity can only be underestimated at high moisture content (X/X0 ≥ 0.75). This interesting observation can be explained by: (i) the collapse function (Figure 3) cannot be higher than 1 according to the constraints given above, and (ii) at high moisture content the effect of the shrinkage function on the porosity tends towards zero because of the (X0 − X) term involved in the shrinkage contribution.
5. Conclusions
The results of this sensitivity study showed that (i) at high moisture content, the porosity is not sensitive to the shrinkage function, (ii) at low moisture content, the porosity could be strongly affected by the shrinkage function, although this effect depends on the initial porosity, and (iii) the collapse function has a strong effect on the porosity for products with a high volume of initial air. This is the first time that these findings are reported in the literature, and the approach used in this contribution could be very relevant to assess other parameters involved in drying processes such as bulk density, shrinkage coefficient and/or thermal conductivity.
Nomenclature
m_s: mass of solid (kg)
p: coefficient involved in collapse function
r1, r2, r3: coefficients involved in shrinkage function
X: water content (kg of water per kg of dry material)
X_0: initial water content (kg of water per kg of dry material)
X_c: coefficient involved in collapse function
δ(X): collapse function as a function of moisture content (dimensionless)
β: density ratio (ρs/ρw) (dimensionless)
ε_0: initial porosity (dimensionless)
ρ_s: solid density (kg/m3)
ρ_w: water density (kg/m3)
φ(X): shrinkage function as a function of moisture content (dimensionless)
References
J. Lozano, E. Rotstein, and M. Urbicain (1980). Total porosity and open-pore porosity in the drying of fruits. Journal of Food Science, 5 (45), 1403-1407
L. Mayor and A. Sereno (2004). Modelling shrinkage during convective drying of food materials: a review. Journal of Food Engineering, 3 (61), 373-386
M. Krokida and Z. Maroulis (1997). Effect of drying method on shrinkage and porosity. Drying Technology, 10 (15), 2441-2458
S. Khalloufi, C. Almeida-Rivera, and P. Bongers (2009). A theoretical model and its experimental validation to predict the porosity as a function of shrinkage and collapse phenomena during drying. Food Research International, (42), 1122-1130
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A REDUCED-ORDER APPROACH OF DISTRIBUTED PARAMETER MODELS USING PROPER ORTHOGONAL DECOMPOSITION M. Valbuena a, D. Sarabia a, C. de Prada a a
Department of Systems Engineering and Automatic Control, University of Valladolid, C/ Real de Burgos s/n, Valladolid
Abstract This paper presents a reduced model of a pipeline of a hydrogen network in a petrol refinery, obtained using the Proper Orthogonal Decomposition (POD) method. The original first-principles model is a distributed parameter model composed of several PDEs. The reduced model provides a good approximation of the main operations performed in the pipeline at a smaller computational cost, allowing its use for advanced controller design. Keywords: Proper orthogonal decomposition, model reduction, partial differential equations, hydrogen network.
1. Introduction Model reduction methods are used to obtain simplified models (in terms of number of equations, variables, etc.) of dynamic systems, while maintaining the main characteristics of the original complex model. These kinds of methods are used in processes described by partial differential equations (PDEs) because the spatial discretization greatly increases the complexity and the integration time required to solve them, making it impossible to use the model in real time, for example in model-based predictive control or in on-line optimization. A more detailed description of different numerical integration approaches is given in [Schilders]. The POD method (Proper Orthogonal Decomposition) has been widely used to compute efficient bases for dynamic systems and to derive low order models of dynamical systems. It was introduced in the context of the simulation of turbulence by Lumley and is also known as the Karhunen-Loève decomposition and principal component analysis [Kunisch].
2. Proper Orthogonal Decomposition
The POD method is based on patterns generated by simulation data or experiments. Suppose the values of a variable $\theta$ along the domain at every time step can be expressed as the linear combination of $K$ patterns:

$\theta(x,t) = a_1(t)\,\varphi_1(x) + a_2(t)\,\varphi_2(x) + \dots + a_K(t)\,\varphi_K(x)$   (1)
Where $\theta(x,t)$ is the vector of the variables over the whole spatial domain at time step $t$. This vector contains $K$ elements when the spatial domain is divided into $K$ grid cells. In mathematical terminology, the patterns are denoted by $\{\varphi_k(x)\}_{k=1}^{K}$ and are called the basis functions or the modes. The patterns or basis functions are independent of each other, i.e. they are mutually orthogonal. Suppose the number of patterns can be reduced to only $n$ patterns such that $\theta(x,t)$ can be expressed as a linear combination of $n$ patterns:
$\theta(x,t) \approx a_1(t)\,\varphi_1(x) + a_2(t)\,\varphi_2(x) + \dots + a_n(t)\,\varphi_n(x)$   (2)

Where $n$ is substantially smaller than $K$ in (1). If the process variables can be expressed as a linear combination of very few patterns, then an approximate model of the process can be derived by building a model for the first $n$ time-varying coefficients. The coefficients $a_k(t)$ in equation (2) are determined by

$a_k(t) = \left(\theta(x,t),\, \varphi_k(x)\right)$   (6)

with $a_k(t)^2$ being the amount of energy of $\theta(x,t)$ in the direction of $\varphi_k(x)$.
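In practice, the basis functions and their captured energy are commonly obtained from the singular value decomposition of a snapshot matrix; the sketch below illustrates this standard construction (not necessarily the authors' exact implementation), with snapshots stored column-wise.

```python
import numpy as np

def pod_basis(theta, n):
    """theta: (K, M) snapshot matrix, one column per time step."""
    # Left singular vectors are the POD modes phi_k(x); the squared
    # singular values measure the energy captured by each mode.
    U, s, _ = np.linalg.svd(theta, full_matrices=False)
    phi = U[:, :n]                 # first n basis functions
    a = phi.T @ theta              # coefficients a_k(t) by projection
    energy = np.cumsum(s**2) / np.sum(s**2)
    return phi, a, energy
```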
3. Example 3.1. Pipeline Hydrogen is one of the main products used in modern petrol refineries. It is distributed through a network comprising many pipelines between production centres and consumer plants. Figure 1 represents one of these pipelines. Overall control of flows in the network is a difficult problem because of the interactions between flows and pressures, so that a model based strategy is required for this purpose. Using a first principles approach, it is possible to obtain a realistic model involving the main variables in a pipeline [Valbuena].
Figure 1. Pipeline
The example considers the system of Figure 1. At the beginning of the pipeline there is a production unit and at its end there is a valve. The boundary conditions at the production unit are therefore the pressure, temperature and composition, while the boundary condition at the valve is the flow. To capture the dynamics of the variables as functions of both time and the longitudinal coordinate of the pipe, we use the global and individual mass balances, the momentum balance and the energy balance, based on a macroscopic description. In addition, it is assumed that there is no variation of density, flow, pressure and temperature in the radial direction. The equations of the distributed model of the pipeline are the following [Ames]. Equation (7) describes the global mass balance when transport is due to convection only:

$\dfrac{\partial m}{\partial t} + \dfrac{\partial (m v)}{\partial x} = 0$   (7)
where $m$, $v$, $x$ and $t$ are the mass, velocity, longitudinal coordinate and time, respectively. Equations (8) and (9) give the individual mass balances, $C_k$ being the composition of component $k$ (hydrogen and impurities):

$\dfrac{\partial (m C_k)}{\partial t} + \dfrac{\partial (m v C_k)}{\partial x} = 0$   (8)

$\sum_k C_k = 1$   (9)
Equation (10) gives the momentum balance, where $f$ and $d$ are the friction loss coefficient and the diameter, respectively:

$\dfrac{\partial (m v)}{\partial t} + \dfrac{\partial (m v^2)}{\partial x} = -\dfrac{f}{2 d}\, m\, v\, \|v\|$   (10)
Equation (11) shows the corresponding energy balance:

$\dfrac{\partial (m T)}{\partial t} + \dfrac{\partial (m v T)}{\partial x} = -U\,(T - T_{ext})$   (11)
where $T$ and $U$ are the temperature and the global heat transmission coefficient, respectively. Finally, the ideal gas equation can be used, because the operating pressure and temperature are not high:

$P V = \dfrac{m}{MW}\, R\, T$   (12)
where $V$ is the volume, $MW$ the molecular weight and $R$ the ideal gas constant. The spatial domain is discretized using finite differences on 100 nodes. Figures 2-4 show the steps performed on the original model (pressure, temperature and composition of hydrogen at the inlet of the pipe) and Figures 5-10 show the results obtained (only some of the 100 nodes dividing the spatial domain are shown).
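For illustration, a minimal upwind finite-difference sketch of the global mass balance (Eq. 7) is shown below; the node count matches the text, while the pipe length and the boundary treatment are assumptions of ours.

```python
import numpy as np

n, L_pipe = 100, 1000.0          # number of nodes, pipe length [m] (assumed)
dx = L_pipe / (n - 1)

def dmdt(m, v):
    """Semi-discrete form of dm/dt + d(m*v)/dx = 0."""
    flux = m * v                              # convective flux at the nodes
    dm = np.zeros(n)
    dm[1:] = -(flux[1:] - flux[:-1]) / dx     # backward (upwind) difference
    # dm[0] is left to the inlet boundary condition (production unit)
    return dm
```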
Figure 2. Step in the Composition of hydrogen
Figure 3. Step in the Pressure
Figure 4. Step in the Temperature
Figure 5. Evolution of the Pressure
Figure 6. Evolution of the Temperature
Figure 7. Evolution of the Composition of H2
Figure 8. Evolution of the density
Figure 9. Evolution of the velocity
Figure 10. Evolution of the mass flow
Figure 11 shows the energy captured by the first empirical eigenfunctions. The captured energy is calculated as follows, $\lambda_i$ being the elements of the diagonal of the correlation matrix:

$P_n = \dfrac{\sum_{i=1}^{n} \lambda_i}{\sum_{i=1}^{N} \lambda_i}\,; \quad n = 1, \dots, N$   (13)
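A simple selection rule based on Eq. (13) can look as follows (illustrative sketch; `lam` stands for the diagonal elements of the correlation matrix):

```python
import numpy as np

def modes_needed(lam, threshold=0.9999):
    # Smallest n such that P_n = sum(lam[:n]) / sum(lam) >= threshold
    P = np.cumsum(lam) / np.sum(lam)
    return int(np.searchsorted(P, threshold) + 1)
```

With a threshold close to one, as suggested by Figure 11, only a few coefficients are retained.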
Figure 11. Energy captured by the first empirical eigenfunctions
As shown in the figure above, we can consider that the first 7 nodes can represent the process variables. An approximate model can then be derived by building a model for the first 7 time-varying coefficients. Table 1 compares the number of nodes, equations and variables and the CPU time for the original and the reduced model. The CPU time of the reduced model is substantially smaller than that of the original model.

Table 1. Parameters of comparison

Model            Number of nodes   Number of equations   Number of variables   CPU time (s)
Original Model   100               2003                  2031                  24.041
Reduced Model    7                 143                   171                   0.768
The relative error is calculated as a percentage using the same set of simulation data (Figure 12 - Figure 15). The largest deviations are found at the moments of the steps. As shown in Figure 2 - Figure 4, the first step consists of a change in the composition, followed by a change in the pressure and finally a change in the temperature. The obtained errors are within the acceptable range.
Figure 12. Relative error % for the pressure (series P[1] to P[7] vs. time)
Figure 13. Relative error % for the temperature (series T[1] to T[7] vs. time)
Figure 14. Relative error % for the composition of H2 (series C[1,H2] to C[7,H2] vs. time)
Figure 15. Relative error % for the density
4. Conclusions
The POD methodology has been studied and applied to distributed parameter models, where the main features of the original model have been maintained. The POD method has been applied to a complex example, the hydrogen pipeline of a refinery. The response of the reduced models is similar to the response of the original models, as evidenced by the graphs of the relative errors. As can be seen in the table, using the POD method greatly reduces the number of variables, the number of equations and the CPU time of calculation.
References W. F. Ames, 1977. “Numerical Methods for Partial Differential Equations”. Academic Press, Inc, Second Edition. K. Kunisch, S. Volkwein, 1999. “Control of the Burgers Equation by a Reduced-Order Approach Using Proper Orthogonal Decomposition”. Journal of Optimization Theory and Applications: Vol 102, No. 2, pp. 345-371. H. A. Schilders et al., 2000. “Model Order Reduction. Theory, Research Aspects and Applications”. The European consortium for mathematics in industry. Springer. M. Valbuena et al., October 13-15, 2010. “Dynamical Simulation of a collector of H2 using methods of numerical integration and a graphical library in EcosimPro®” 22nd European Modeling & Simulation Symposium (Simulation in Industry), FES, Morocco.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A Process Unit Modeling Framework within a Heterogeneous Simulation Environment Ingo Thomasa
a Linde AG, Linde Engineering Division, Dr.-Carl-von-Linde-Str. 6–14, 82049 Pullach
Abstract Rigorous dynamic simulation is becoming increasingly important at Linde Engineering, and therefore more detailed process unit models are required. In fact, the refinement of existing process unit models as well as the development of new ones is often part of dynamic simulation projects, especially when it comes to innovative processes. In order to facilitate model development within the heterogeneous simulation environment of Linde Engineering, a new Process Units Modeling Framework has been developed. Models developed within this framework may be used in Linde's in-house simulator OPTISIM®, as well as in the commercial process simulator UNISIM®, which are both widely used at Linde Engineering. Keywords: Hierarchical Modeling, Multiscale Modeling
1. Introduction As a leading international engineering and contracting company, the Linde Engineering Division of The Linde Group designs and builds turnkey process plants for a wide variety of industrial users and applications: chemical industries, air separation, manufacturers of hydrogen and synthesis gases, natural gas treatment, and more. Linde's in-house process simulation and optimization tool OPTISIM® [ESLBK97] has been employed and enhanced for decades. As an efficient equation-oriented system, it is successfully applied and widely accepted by a large number of engineers as a steady-state process design tool. OPTISIM®'s dynamic simulation features have also been extended according to users' needs [KSBK01]. More recent applications of OPTISIM® cover rigorous dynamic equipment simulation. An important application is the systematic survey of heat exchanger temperature differences during startup or shutdown. Another application is the propagation of pressure waves through pipelines due to cavitation or valve shutdowns. Compared to a stand-alone simulation tool, the major advantage in studying these effects within a process simulation environment is that feedback of the process can be taken into account.
These applications require more detailed models of the equipment. For instance, the simulation of pressure waves in pipelines requires a momentum balance in addition to material and energy balance equations. For the practical development of this new generation of process models it is beneficial to combine a descriptive model definition as applied in modern simulation environments for rapid development of models (as discussed, for instance, in [KFGE97]) with the undisputed strengths of OPTISIM® regarding process simulation and optimization. Besides OPTISIM®, UNISIM® is also increasingly being used for dynamic simulations at Linde, which brings up similar requirements regarding model enhancements. Hence, it is obviously beneficial to have the opportunity to easily transfer specific proprietary unit models from one simulator to the other. Therefore, the modeling environment is implemented so as to allow a straightforward integration within OPTISIM® as a generic unit model framework as well as in UNISIM® as a unit extension.
2. A new lightweight modeling environment
The new declarative modeling environment is implemented as a lightweight C++ library which provides a small C/C++/FORTRAN API. The library basically consists of a virtual machine (VM) and a compiler. Mathematical expressions are compiled into "virtual machine code" or bytecode, similar to Java, Perl or Visual Basic. There are two ways of using the library: 1. The model is a part of a larger system. In that case, the virtual machine returns the residual of the expression, optionally together with its derivatives. This is how the library is used within the equation-oriented simulator OPTISIM®. 2. The model is solved in a standalone manner. The virtual machine has interfaces to a number of numerical algorithms (Newton solvers, optimization codes, and DAE integrators); these solvers may be used to obtain the numerical solutions. This is how the library is used within UNISIM®.
The virtual machine optionally computes derivatives of the model equations. The structure of the Jacobian is computed during compilation. During evaluation, the derivatives are computed similarly to the forward mode of automatic differentiation codes like ADOL-C [GJU96]. The virtual machine provides means for callbacks to other APIs, either via COM, DLL interfaces or by static linkage. For instance, UNISIM®'s physical property system as well as Linde's in-house physical property package GMPS (General Multi-Phase Property System) is made available by callback. The modeling environment provides means for modeling discrete-continuous processes. There are if-then-else structures, max/min statements and so on. These features are implemented along the lines of the "HSML" framework (see [Tay99]), i.e. each discontinuous function comes with a switching function whose sign changes indicate discrete state changes. Based on that, OPTISIM® provides sophisticated means for re-factoring Jacobian matrices, consistent reinitialization of higher-index DAEs and so on (see [KMG92, KSBK01]). New features compared to other dynamical integrators are the following:
• Functionals. A "functional" is an abstraction of a quantitative relationship between state variables. Functionals may be compared to function pointers in C++ or FORTRAN. An application is the specification of a heat flux density through a tube wall by a functional (describing insulation loss or heating). Another application is the description of chemical reactions by conversion rate functionals as (optional) right hand sides of the component balances.
• Tensors. Multi-dimensional arrays (which may have any number of dimensions) are treated as "tensors", that is, they come with multiplication and addition operations using differential-algebraic conventions. In particular, linear algebra notation, e.g. dot(x) + A x = b (dot(x) being the vectorial time derivative of x), is valid. This feature is welcomed in particular by feedback control experts.
3. Modeling Requirements in a Heterogeneous Simulation Environment: An Example
In general, customer or project requirements enforce the use of specific tools. In-house, dynamic machine simulations are often executed using OPTISIM®. However, on customer request, we transferred such a simulation to UNISIM®. We observed that the UNISIM® valve model does not conform to Linde-specific DIN standard requirements. Using the new modeling environment, a valve model which agrees with the DIN standard and matches the OPTISIM® requirements could easily be generated. Though it may be theoretically possible to do this via CAPE-OPEN interfaces (using gPROMS for the valve model and coupling it to UNISIM®), this approach is not practical due to risk and cost issues. Extending UNISIM® by a Unit Extension, on the other hand, requires too much time and effort for a single ongoing project.
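To illustrate how residuals and exact derivatives can be evaluated together, as the virtual machine does, the following sketch implements forward-mode automatic differentiation with dual numbers (a generic illustration in Python, not Linde's actual C++ implementation):

```python
class Dual:
    """Value together with its derivative, propagated forward."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

# Residual of a toy model equation f(x) = 3*x*x - 12
def residual(x):
    return 3 * x * x + (-12)

x = Dual(2.0, 1.0)     # seed derivative dx/dx = 1
r = residual(x)
print(r.val, r.der)    # residual 0.0 and Jacobian entry 12.0
```

The model expression is written once; the derivative emerges from overloaded arithmetic, which is why explicitly coded derivatives (and their bugs) disappear.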
4. Workflow and Strategical Benefits
The modeling environment provides a software-technical abstraction layer, which separates the thermodynamical or physical model from the software engineering details of the process simulation environment. The practical and strategical consequences are surveyed below.
• Process unit model development is no longer overloaded with software engineering details; the unit model developer does not need to bother with compilers, linkers, memory management and so on. The software engineering details to be taken care of in complex systems such as OPTISIM® or UNISIM® are not to be underestimated. Software engineering perils (e.g. memory management details) have been significant obstacles in a number of development projects.
• Process unit models may be developed independently of the process simulation environment in which the model is to be used. By now, models may be used in OPTISIM® and UNISIM®; later on, a CAPE-OPEN ESO implementation may be discussed. This increases the safety of investment of the expensive development of detailed models.
• Let's face it: model development practice is mostly debugging. A declarative modeling environment reduces debugging time in several ways:
– If derivatives have to be coded explicitly, they are a major source of subtle errors, which deteriorate convergence speed and model reliability. Using automatic differentiation eliminates this source of errors.
– Another common source of errors in discrete-continuous systems are bookkeeping errors regarding switching function states. This source of errors is eliminated as well.
5. Experiences
The evolving modeling environment has been part of OPTISIM® for some years now. Hence, there already are some experiences worth noting.
• Thanks to its new modeling capabilities, OPTISIM® is being used for in-depth modeling of heat exchangers as well as for machine simulations.
• The modeling platform simplifies the modeling of innovative processes, as "gaps" in terms of models for new apparatuses are closed more easily.
• The new capabilities opened up new vistas for studying feedback control strategies [See10]. The development of advanced control strategies is boosted by the simplicity of their implementation [Hel10].
• Especially conditional statements (if-then-else) simplify the implementation of flexible set-ups of process flowsheets. Hence, process models may be set up more generically, which boosts the development of standardized flowsheets.
6. Conclusion
We presented some aspects of a Linde in-house development of a declarative modeling environment. Though a number of comparable systems are on the market, there are technical and strategical drawbacks to integrating them into a process simulation environment. Hence, an in-house development turned out to be a viable alternative.
References
[ESLBK97] E. Eich-Söllner, P. Lory, P. Burr, and A. Kröner, Stationary and dynamic flowsheeting in the chemical engineering industry, Surveys on Mathematics for Industry 7 (1997), 1-28.
[GJU96] A. Griewank, D. Juedes, and J. Utke, Algorithm 755: ADOL-C: a package for the automatic differentiation of algorithms written in C/C++, ACM Trans. Math. Softw. 22 (1996), no. 2, 131-167.
[Hel10] S. Heldt, Dealing with structural constraints in self-optimizing control engineering, Journal of Process Control 20 (2010), no. 9, 1049-1058.
[KFGE97] L.U. Kreul, G. Fernholz, A. Gorak, and S. Engell, Erfahrungen mit den dynamischen Simulatoren DIVA, gPROMS und ABACUSS, Chemie Ingenieur Technik (CIT) 69 (1997), 650-653.
[KMG92] A. Kröner, W. Marquardt, and E.D. Gilles, Computing consistent initial conditions for differential-algebraic equations, Computers & Chemical Engineering 16 (1992), Supplement 1, S131-S138.
[KSBK01] Th. Kronseder, O. v. Stryk, R. Bulirsch, and A. Kröner, Towards Nonlinear Model-Based Predictive Optimal Control of Large-Scale Process Models with Application to Air Separation Plants, in: Online Optimization of Large Scale Systems (M. Grötschel, S.O. Krumke, and J. Rambau, eds.), Springer, Berlin, 2001, pp. 385-410.
[See10] Th. Seel, Modeling, Order Reduction and Multivariable Control Designs for Cryogenic Separation & Liquefaction Plants, Diplomarbeit, Otto-von-Guericke Universität Magdeburg, Germany, 2010.
[Tay99] J.H. Taylor, Rigorous hybrid systems simulation with continuous-time discontinuities and discrete-time agents, in: Software and Hardware Engineering for the 21st Century, ch. 60, pp. 383-388, World Scientific and Engineering Society Press, NY, 1999.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Mathematical description of mass transfer in supercritical-carbon-dioxide-drying processes Cristhian Almeida-Riveraa, Seddik Khalloufia, Jo Jansena and Peter Bongersa,b a
Unilever R&D Vlaardingen, Olivier van Noortlaan 120, 3130 AC, Vlaardingen, The Netherlands,
[email protected] b Hoogwerff chair in Product-Driven Process Engineering, Eindhoven University of Technology, PO Box 513, 5600 MB, Eindhoven, The Netherlands
Abstract For thermo-sensitive food products, the supercritical-carbon-dioxide (SC-CO2) drying process could be a promising technology. The process takes place in three steps: (i) removal of water from the food matrixes, (ii) adsorption of the removed water in the adsorber bed, and (iii) regeneration of the adsorber with hot air. In this investigation, a mathematical model is derived to describe the changes of water concentration in SC-CO2, in the solid food matrix and in the adsorber bed during the entire drying process. The mass balance equations of the model involve several parameters such as the geometry of the autoclave and the adsorber bed, mass transfer coefficients, diffusion coefficients, equilibrium constants between the solids and the fluids, the specific interfacial area of the solid matrixes, the porosities of the packed beds, the SC-CO2 flowrate and the particle size. Preliminary results obtained with the model suggest that each parameter may contribute differently to the drying kinetics. This finding allows the identification of the bottlenecks encountered in drying processes and offers leads and strategies to overcome them. The present model could eventually be used as a tool for optimizing the operating conditions and process scale-up in SC-CO2 drying. Keywords: mathematical simulation, supercritical-carbon-dioxide, drying, food, packed beds
1. Introduction Extending the shelf life of food products by reducing their water activity has been one of the challenges faced by the food sector globally. One such preservation approach involves the removal of water from the food matrixes by dedicated technologies. Among them, freeze drying is regarded as the gold standard technology due to the remarkable quality of the final product. On the other hand, freeze drying requires considerable operational and capital costs, which make this processing route unaffordable for low added value products [1]. Recently, the study of an alternative drying technology, assisted by supercritical carbon dioxide (Sc-CO2), was addressed [2, 3], highlighting the successful execution of extraction technology operated at such supercritical conditions. In the supercritical region, there is a continuous transition
between the liquid and the gas phases, with no possible distinction between these two phases (Fig. 1). Beyond this point, the special combination of gas-like and liquid-like properties makes the supercritical fluid an excellent solvent for the extraction industry [4]. Thus, fluids at supercritical conditions exhibit a solvent power close to that of liquids, and viscosity and diffusivity comparable to those of gases [5]. Among the fluids used at supercritical conditions in extraction applications, the most widely used in foods and medicine is currently carbon dioxide [6]. In this contribution, a dynamic model is
Figure 1. Schematic phase diagram of CO2 around its critical point
presented for the Sc-CO2-assisted dehydration of a solid matrix coupled with the dehumidification of the Sc-CO2 stream in a regeneration unit.
2. System description 2.1. Physical description of the system The ScCO2-assisted drying unit is composed of three key elements: (i) the drying chamber, where the material to be dried gets in contact with a continuous stream of ScCO2; (ii) a regeneration unit, usually containing zeolite, where the water in the Sc-CO2 stream gets adsorbed; and (iii) a recirculation pump to maintain the CO2 stream at supercritical conditions. In this configuration, water is extracted from the food material by a concentration gradient and carried by the SC-CO2 stream to the zeolite material, where it is adsorbed.
Figure 2. Schematic representation of a SC-CO2 drying unit.
3. Model Development and Implementation
In our previous publications a detailed model derivation of the drying chamber was presented [2, 3]. The governing expressions accounted for a realistic description of the Sc-CO2-assisted drying process, albeit some simplifying assumptions were introduced. In one such assumption we considered the incoming Sc-CO2 stream to be water-free, i.e. it was implied that the amount of zeolite was sufficient to adsorb instantaneously the water extracted from the food matrix. Although the degree of prediction of the model was remarkably accurate (see Fig. 3 in [3]), we acknowledge that in real practice such an assumption would imply an infinitely large (and thus non-realistic) zeolite reactor and a 100% efficient zeolite. In this contribution, we relax the water-free assumption for the incoming Sc-CO2 stream and provide a simplified modelling approach for the integrated description of the system.
3.1. Modelling of the drying chamber
As described in detail in [2, 3], the following set of governing expressions can be derived for the dehydration of a solid matrix (subscript s) by the flow of a Sc-CO2 stream (subscript f):

$\dfrac{\partial C_f}{\partial t} = D\, \dfrac{\partial^2 C_f}{\partial z^2} - U\, \dfrac{\partial C_f}{\partial z} - \left(\dfrac{1-\varepsilon}{\varepsilon}\, \dfrac{\rho_s}{\rho_f}\right) \dfrac{\partial C_s}{\partial t}$

$\dfrac{\partial C_s}{\partial t} = -K_a \left[ C_s - K_{eq}\, C_f \right]$
The following initial and boundary conditions are considered for the water concentration per unit mass C, where the flux entering a boundary must equal that passing through the boundary.
t = 0 and 0 ≤ z ≤ L:   $C_s(z) = C_{s0}$,   $C_f(z) = 0$
t > 0:   $\left.\dfrac{\partial C_f}{\partial z}\right|_{z=L} = 0$,   $\left.\dfrac{\partial C_f}{\partial z}\right|_{z=0} = \dfrac{U}{D}\left(C_f - C_{f,in}\right)$
3.2. Modelling of the zeolite regeneration chamber
The water adsorption in the zeolite regeneration chamber was modelled using first order kinetics. Under the assumption of incompressibility and negligible volumetric change due to water adsorption, the dynamic behaviour of the water concentration leaving the zeolite chamber is given by the expression

$C_{f,in} = \dfrac{1}{\eta_{zeo}}\; C_f(t, L) \left(1 - e^{-k t}\right)$
The kinetic constant k is the inverse of the residence time of the Sc-CO2 inside the zeolite regeneration chamber and is directly related to the size of the zeolite unit and the flowrate of the Sc-CO2 stream. An additional parameter (ηzeo) accounting for the zeolite efficiency has been introduced. The set of differential equations was discretised using the finite difference method, and the complete set of equations (differential and algebraic) was implemented and solved in Matlab/Simulink using an implicit Runge-Kutta solver.
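The sketch below mirrors that solution strategy in Python rather than Matlab/Simulink: the drying-chamber PDE is discretized by finite differences and integrated together with the solid-phase ODE and the algebraic zeolite relation by an implicit solver. All parameter values and the boundary treatment are placeholders of ours, not the study's.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters (illustrative values, not those of the study)
L, n = 0.2, 50                     # bed length [m], grid points
D, U_vel = 1e-6, 1e-3              # dispersion [m2/s], velocity [m/s]
Ka, Keq = 1e-3, 0.05               # kinetic constant [1/s], equilibrium [-]
eps, rho_s, rho_f = 0.4, 1000.0, 700.0
k_zeo, eta_zeo = 0.05, 0.9         # zeolite constant [1/s], efficiency [-]
z = np.linspace(0.0, L, n)
dz = z[1] - z[0]

def rhs(t, y):
    Cf, Cs = y[:n], y[n:]
    dCs = -Ka * (Cs - Keq * Cf)                            # solid phase
    Cf_in = Cf[-1] * (1.0 - np.exp(-k_zeo * t)) / eta_zeo  # zeolite outlet
    dCf = np.zeros(n)
    dCf[1:-1] = (D * (Cf[2:] - 2.0 * Cf[1:-1] + Cf[:-2]) / dz**2
                 - U_vel * (Cf[1:-1] - Cf[:-2]) / dz
                 - (1.0 - eps) / eps * rho_s / rho_f * dCs[1:-1])
    dCf[0] = U_vel * (Cf_in - Cf[0]) / dz   # simplified inlet flux condition
    dCf[-1] = dCf[-2]                       # zero-gradient outlet (approx.)
    return np.concatenate([dCf, dCs])

y0 = np.concatenate([np.zeros(n), np.full(n, 0.9)])  # dry CO2, wet solid
sol = solve_ivp(rhs, (0.0, 3600.0), y0, method="BDF")
```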
4. Results
Taking as reference the case in which the incoming Sc-CO2 stream is water-free (Cf,in = 0) and the zeolite is 100% efficient (i.e. ηzeo = 1, all water is absorbed instantaneously in the zeolite matrix), a parametric study has been performed. In this study the kinetic constant and the zeolite efficiency are varied within sensible ranges and the overall performance of the process is assessed. This performance indicator was estimated in terms of the moisture content in the solid matrix at the end of the drying processing time. An increase in k implies a smaller zeolite reactor (at a given flowrate) or an increased flowrate (at a given chamber size). A decrease in ηzeo implies a slower, partial absorption of water in the zeolite. As can be seen in Fig. 3-left, the kinetic constant k plays a key role in the system performance. As the kinetic constant increases (i.e. decreasing the chamber volume or increasing the Sc-CO2 volumetric flow), the drying time to achieve a moisture content target increases dramatically, reaching asymptotic behaviour. The effect of the zeolite efficiency is depicted in Fig. 3-right. As expected, the moisture profiles when the efficiency decreases are comparable to those obtained when the kinetic parameter increases. Hence, the desired level of moisture might not be attainable, as an increasing amount of water is being recirculated to the drying chamber.
Figure 3. Simulation results for the moisture content in the solid matrix at various absorption kinetics constants (left) and at various zeolite efficiency values for a given kinetic constant (right).
5. Conclusions and Future Work
In this contribution a mathematical model was derived for the description of the combined dehydration of solid matrices by Sc-CO2-assisted drying and the continuous dehumidification of the Sc-CO2 stream in a zeolite regeneration unit. Two parameters affecting the Sc-CO2 dehumidification step were considered, related to the size of the chamber and to the effectiveness of the zeolite. The simulated results showed that the performance of the drying process might be strongly diminished by an insufficient amount of zeolite in the chamber and by a poorly efficient zeolite. These results can be used as preliminary design principles for the optimization and scale-up of drying processes assisted by Sc-CO2 extraction.
References
[1] C. Ratti, Hot air and freeze-drying of high-value foods: A review, Journal of Food Engineering, 2001, 49 (4), 311–319.
[2] S. Khalloufi, C.P. Almeida-Rivera, P. Bongers, Supercritical-CO2 drying of foodstuffs in packed beds: Experimental validation of a mathematical model and sensitivity analysis, Journal of Food Engineering, 2010, 96, 141–150.
[3] C.P. Almeida-Rivera, S. Khalloufi, P. Bongers, Prediction of Supercritical Carbon Dioxide Drying of Food Products in Packed Beds, Drying Technology, 2010, 28, 1157–1163.
[4] S.P. Nalawade, F. Picchioni, L.P.B.M. Janssen, Supercritical carbon dioxide as a green solvent for processing polymer melts: Processing aspects and applications, Progress in Polymer Science, 2006, 31 (1), 19–43.
[5] G.V. Barbosa-Cánovas, M.S. Tapia, M.P. Cano, Novel Food Processing Technologies, CRC Press, Boca Raton, FL, 2005.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Three-moments conserving sectional techniques for the solution of coagulation and breakage population balances
Margaritis Kostoglou,a Michalis C. Georgiadisb
a Department of Chemistry, Aristotle University, Univ. Box 116, 54124 Thessaloniki, Greece, E-mail: [email protected]
b Department of Chemical Engineering, Aristotle University of Thessaloniki, 54124, Greece. E-mail: [email protected]
Abstract Sectional (zero order) methods constitute a very important class of methods for the solution of the population balance equation, offering distinct advantages compared to their competitors, namely higher order and moment methods. For the last ten years a particular sectional method, the so-called fixed pivot technique (Kumar et al., 1996), has been the one most extensively used in the scientific community for the solution of the coagulation and breakage equations, because it offers arbitrary grid choice and conservation of two moments of the particle size distribution. More recently, a new method (called the cell average technique; Kumar et al., 2006; Kostoglou, 2007) has been developed which gives more accurate results than the fixed pivot technique. In the present work, the extension of this new method in order to conserve three moments is attempted. A stable algorithm for the solution of the coagulation and breakage equation is developed. The new method allows improved computation of moments of practical interest. Keywords: Population balances, sectional methods, coagulation, breakage, moment conservation
1. Main Text Coagulation and breakage (alternatively called fragmentation) are of paramount importance in several processes of technological and/or fundamental scientific interest. These phenomena concern several scientific disciplines. For example, regarding polymer technology, the mechanism of polymer degradation can be considered to be breakage, whereas regarding catalytic processes breakage influences their efficiency through catalyst attrition. Regarding industrial aerosol processes, coagulation is an important step for nanoparticle production. In atmospheric sciences, coagulation and breakage are related to rain formation, and in astrophysics to the size distribution of asteroids and to planet formation. In biotechnology, the cell division process can be described as a spontaneous breakage process. Other processes where breakage is the essential mechanism are those related to size reduction of solids (e.g. crushing, milling, grinding), whereas coagulation is of paramount importance in crystallization, precipitation,
pelletization and granulation. The bubble size distribution in bubble columns is largely due to coagulation/breakage, which in turn determines the characteristics of the flow field in the column. Furthermore, coagulation and breakage are very important in emulsion technology, determining the droplet size distribution and the emulsion stability. The dynamics of a particle population undergoing coagulation and breakage is described by the coagulation-breakage equation, which belongs to the more general class of population balance equations. This equation is a non-linear partial integro-differential equation and its numerical solution is far from trivial. This is the primary reason for the development of so many methods for its solution, originating from various scientific disciplines.
2. Problem Formulation The coagulation-breakage population balance is the following non-linear partial integro-differential equation:

$$\frac{\partial f(x,t)}{\partial t} = \frac{1}{2}\int_0^x K(y,\,x-y)\,f(y,t)\,f(x-y,t)\,dy \;-\; f(x,t)\int_0^\infty K(x,y)\,f(y,t)\,dy \;+\; \int_x^\infty p(x,y)\,b(y)\,f(y,t)\,dy \;-\; b(x)\,f(x,t) \tag{1}$$
where t is the time, x is the particle volume, f(x,t) is the number concentration density function, K(x,y) is the coagulation frequency between two particles with sizes x and y respectively, b(x) the breakage frequency and p(x,y) the probability distribution of particles of volume x resulting from the breakup of a particle of volume y. The first term on the right hand side of (1) represents the rate of generation of particles of volume x by coagulation, the second the loss by coagulation, the third the gain by breakage and the last the loss by breakage. The above equation must be solved for the evolution of the particle size distribution (PSD), having as initial condition a given PSD f(x,0)=fo(x). There are several approaches to the solution of equation (1). At one limit, the so-called higher order methods ensure high accuracy but require a large computational effort; at the other limit, the moment methods require a small computational effort but sometimes lead to questionable accuracy. The so-called sectional methods constitute the best compromise between the two approaches, bridging their accuracy and computational requirements. The large range of particle sizes considered in practical problems suggests non-uniform discretization of equation (1) in the size domain. The main problem of the older sectional methods is that they can conserve only one moment of the PSD in case of a non-uniform discretization. This problem was overcome 15 years ago by the so-called Fixed Pivot Technique (FPT), which allowed the simultaneous conservation of two moments. Despite the conservation of the zeroth and first moments, FPT exhibits large errors in the computation of the second moment. A significant improvement with respect to this problem was achieved by the Cell Average Technique (CAT) five years ago. Here a new approach based on CAT but requiring conservation of three moments of the PSD is introduced with the name Extended Cell Average Technique (ECAT).
Figure 1. Handling of the incoming particles in the section i by the sectional techniques: (a) Fixed Pivot Technique (coagulation and breakage) (b,c) Cell Average Technique (coagulation and breakage) for average incoming particle size smaller (case b) or larger (case c) than the pivot size xi (d) Extended Cell Average Technique (breakage) (e) Extended Cell Average Technique (coagulation)
3. Solution techniques A typical discretization scheme transforms the coagulation-breakage equation into a system of ordinary differential equations having as independent variable the time and as dependent variables the numbers of particles contained in the sections i (i.e. particles of sizes between vi and vi+1). The symbols v1, v2, v3, … stand for the finite volume (sectional) discretization of the particle volume domain. In case of a uniform grid and a discrete initial PSD (all particles consisting of monomers having a specific size xm) the discretized equation (1) degenerates to the discrete coagulation-breakage equation by a direct substitution x = i·xm. The uniform grid has the property of moment conservation, i.e. the moments of the new particles resulting from a breakage event are the same as the moments of the parent particles (both new and parent particles must be assigned to some grid point). But whereas for some applications (e.g. bubble or droplet size distributions) the competition between coalescence and breakage, or the steep reduction of the breakage frequency with decreasing bubble size, leads to a narrow PSD which can be modeled using a uniform grid, in other cases (e.g. size reduction of solids, fundamental studies of breakage) the particle volume (independent variable) may extend over many orders of magnitude, rendering necessary the use of a non-uniform (usually geometric) grid. The fact that for a non-uniform grid the moment conservation property is not satisfied led to the development of several sectional techniques based on the requirement of moment conservation. All the particles of the PSD are assigned to specific particle sizes called pivots (x1, x2, x3, …) which define the grid of the discretization technique. The section boundaries vi are related to the pivots as vi = (xi-1 + xi)/2. The main issue concerning sectional techniques is how to distribute a fresh particle, produced by a coagulation or breakage event, among the classes. The straightforward discretization corresponds to assigning the new particle to the size class it belongs to. Although it seems to be a tautology, this approach leads to the conservation of only one moment. The main idea in FPT is the distribution of each new particle among two classes depending on the size of the new particle (classes i and i+1 for new particles larger than xi, and classes i-1 and i for new particles smaller than xi). The distribution of the new particle among the two classes is such as to ensure the conservation of two moments of the PSD. The difference in CAT is that not each new particle is distributed to the classes; instead, first an average particle entering the class i is constructed and this is distributed among the classes following the laws of FPT. This procedure still conserves two moments, but the numerical results for the other, non-conserved moments are better than those of FPT. The concept of the average particle is also employed by ECAT, but in this case it is distributed among three classes in order to conserve three moments of the PSD. The only combination of classes among which the average new particle entering the class i must be distributed that ensures the stability of the numerical algorithm is the classes i-1, i, i+1 for coagulation-produced particles and the classes i-2, i-1, i for breakage-produced particles. The approaches described above for handling the new particles entering the class i are presented graphically in Figure 1.
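The moment-conserving fractions themselves follow from small linear systems: for FPT, number and mass conservation over two pivots; for a three-moment scheme, a 3×3 Vandermonde system over three pivots. A minimal Python sketch follows, in which the pivot values and the choice of conserved orders (0, 1 and r = 2) are illustrative assumptions:

```python
# Sketch: fractions assigning a new particle of volume v to neighbouring pivots
# so that selected PSD moments are conserved (FPT: 2 moments over 2 pivots;
# ECAT-style: 3 moments over 3 pivots). Pivot values are illustrative.
import numpy as np

def fpt_fractions(v, xi, xi1):
    """Conserve number (order 0) and mass (order 1) over pivots xi <= v <= xi1."""
    a = (xi1 - v) / (xi1 - xi)      # fraction assigned to pivot xi
    return a, 1.0 - a               # fraction assigned to pivot xi1

def three_moment_fractions(v, pivots, orders=(0, 1, 2)):
    """Conserve three moments of a unit particle of volume v over three pivots."""
    A = np.array([[x ** r for x in pivots] for r in orders], dtype=float)
    b = np.array([v ** r for r in orders], dtype=float)
    return np.linalg.solve(A, b)    # fractions may be negative for some classes

print(fpt_fractions(3.0, 2.0, 4.0))                 # -> (0.5, 0.5)
print(three_moment_fractions(3.0, (2.0, 4.0, 8.0)))
```

Since the three-moment fractions can become negative for unfavourable class combinations, only the combinations named above yield a stable algorithm.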
4. Results Until now, only separate tests of the new method for the coagulation and the breakage equations have been performed. In the case of coagulation, ECAT leads to improved computation of the moments of the PSD, but its performance with respect to the entire PSD depends on the coagulation kernel. In the case of a uniform coagulation kernel ECAT leads to improved PSDs, but in the case of the sum kernel ECAT cannot improve the results of CAT, implying that there is no one-to-one correspondence between the computation of moments and of the entire PSD. This statement is confirmed by the results for the breakage equation. In all cases of breakage models tested, ECAT leads to better estimation of the moments of the PSD, but the entire PSD is better computed by the FPT method. In Figure 2 the ratio of approximate to exact PSD moments of order i is shown, under typical breakage conditions, as computed by several sectional techniques. Details on kernels and initial distribution can be found in Kostoglou and Karabelas (2009). The zeroth and first moments are conserved by FPT and CAT, and in addition the r-th moment by ECAT. The superiority of ECAT regarding moment computation is evident.
Figure 2: Ratio of approximate to exact moments of order i computed by several sectional techniques under typical breakage conditions.
References
S. Kumar, D. Ramkrishna, 1996, Chem. Engng Sci., 51, 1311.
J. Kumar, M. Peglow, G. Warneke, S. Heinrich, L. Morl, 2006, Chem. Engng Sci., 61, 3327.
M. Kostoglou, 2007, J. Colloid Interface Sci., 306, 72.
M. Kostoglou, A.J. Karabelas, 2009, Comp. Chem. Engng, 33, 112.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Modelling and Simulation of Forced Convection Drying of Electric Insulators
Cristea Vasile-Mircea, Goga Firuta, Mogos Liviu Mihai
Babes-Bolyai University, 11 Arany Janos Street, 400028 Cluj-Napoca, Romania, [email protected]
Abstract The aim of the present work is to develop a model, implemented in a drying simulator, for describing the heat and mass transfer processes taking place inside the electric insulator body and in the hot air surrounding it. Drying system optimization and control rely on the capability of modelling these phenomena, as they directly guide the way process manipulated variables have to be changed in time. The complex time evolution and spatial distribution of the moisture content and temperature of the drying product, associated with the temperature, velocity and humidity of the drying medium, are predicted by the proposed model. The 3D model has been developed for dynamic simulation conditions. Experimental data, together with literature records, have been used to fit the parameters of the developed model, which will be further used in the industrial unit for operation optimization and control purposes. Keywords: 3D model, CFD, drying, electric insulator
1. Introduction The traditional high-voltage electric insulator production requires a first batch drying step intended to reduce the moisture content of the drying product from 25-30% (dry basis) to about 2-5%. This is performed in special gas-heated drying chambers. The second drying step is carried out in high temperature ovens, in order to achieve the desired moisture content of the final product. In the first step of the drying process the air temperature is controlled according to a special program, mainly designed according to experimental tests. One of the most difficult problems to be solved during convective drying of porous materials consists in avoiding cracking phenomena. If the drying rate is not properly established, deformation and generation of material defects may result. Setting the appropriate drying rate reduces the drying time and leads to the desired quality of the dried products. This may be performed on the basis of a complex model involving simultaneous heat and mass transfer both inside the drying body and between the body and the surrounding heating air. The whole process consists of several periods described by different mechanisms of drying. The mathematical modelling of the drying process enables appropriate equipment design, optimization and efficient control. The wet clay-kaolin body devoted to high-voltage electric insulator production is subject to convective drying. The paper presents the development of a 3D CFD based dynamic simulator for drying the electric insulator body, having typical geometry, in a hot air stream.
2. Model description The theoretical background and main considerations for the model used in the paper are presented by Kowalski [1-3] and Kowalski and Strumillo [4]. Most of the literature reports divide the drying process into several steps: preheating, constant-rate and one or two falling-rate periods [5]. During the preheating period, which is usually very short, the clay body is heated from the ambient temperature to the wet-bulb temperature. Afterwards, during the constant-rate period, the liquid from the interior of the body migrates towards the liquid film situated at its surface. The liquid movement is driven by the capillary forces. The temperature at the surface remains constant at the wet-bulb temperature value, as equilibrium between the amount of liquid evaporated at the surface and the vapours diffused into the surrounding heating air is attained. Phase transitions inside the dried material are ignored and the whole evaporation of the moisture is assumed to take place on the boundary of the dried material. Shrinkage stresses caused by the non-uniform distribution of the moisture content begin to develop in this period. After the critical point, the liquid water starts to withdraw from the body surface towards its interior, opening the first falling-rate period. The liquid water moves from the body interior through the continuous liquid medium within the pores. But vapours may also be formed, and they also migrate to the body surface. The temperature of the dried body exceeds the wet-bulb temperature. During the second falling-rate period the liquid water predominantly evaporates inside the drying body. Non-continuous liquid and vapour regions inside the body are formed and water is transported by the evaporation and condensation mechanism. The temperature inside the drying body rises and reaches almost the surrounding hot air temperature, while the drying rate diminishes. The equation describing the moisture content in the dried body is developed on the basis of the mass balance for moisture, relating the moisture flux to the gradient of the moisture potential μ. It is associated with the heat transport equation. The rate equations consider the equations for: moisture transport (capillary, diffusion, and thermodiffusion), phase transition of liquid into vapours, heat transport including the heat convection by moving moisture, and heat exchange between body components. The model describes the drying process as a whole by including or excluding the individual heat and mass transfer mechanisms in the several stages of drying. The system of equations describing the electric insulator body moisture and temperature change in time and space is [1]:

$$\rho_s\,\dot{\theta} = \Lambda_l\,\nabla^2\bigl(c_T\,\vartheta - c_X\,\theta\bigr) - \chi\,\bigl(c_T\,\vartheta - c_X\,\theta\bigr)$$
$$\rho_s\,c_v\,\dot{\vartheta} = \Lambda_T\,\nabla^2\vartheta - l\,\chi\,\bigl(c_T\,\vartheta - c_X\,\theta\bigr) \tag{1}$$

The boundary conditions are given by:

$$\Lambda_l\,\nabla\bigl(c_T\,\vartheta - c_X\,\theta\bigr)\big|_{\partial B}\cdot n = \alpha_m\bigl(\mu|_{\partial B} - \mu_a\bigr)$$
$$\Lambda_T\,\nabla\vartheta\big|_{\partial B}\cdot n = \alpha_T\bigl(\vartheta_a - \vartheta|_{\partial B}\bigr) - l\,\alpha_m\bigl(\mu|_{\partial B} - \mu_a\bigr) \tag{2}$$

The moisture driving force potential is assumed to be [6]:

$$\mu|_{\partial B} - \mu_a = 0.462\,T_a\,\ln\!\bigl(x|_{\partial B}/x_a\bigr) + \bigl(7.36 - 0.462\,\ln x|_{\partial B}\bigr)\bigl(T|_{\partial B} - T_a\bigr) \tag{3}$$

The equation used for describing the flow of hot air around the electric insulator body is the weakly compressible Navier-Stokes momentum equation, which is associated with the continuity equation:

$$\rho\,\frac{\partial u}{\partial t} + \rho\,(u\cdot\nabla)\,u = -\nabla p + \nabla\cdot\Bigl[\eta\bigl(\nabla u + (\nabla u)^T\bigr) - \bigl(\tfrac{2}{3}\eta - \kappa_{dv}\bigr)(\nabla\cdot u)\,I\Bigr] + F \tag{4}$$
$$\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,u) = 0 \tag{5}$$

Heat transfer in the hot air was described using the following heat transfer equation:

$$\rho\,C_p\,\frac{\partial T}{\partial t} = \nabla\cdot(k\,\nabla T) + Q - \rho\,C_p\,u\cdot\nabla T \tag{6}$$

The drying simulator has been implemented in the COMSOL Multiphysics CFD software.
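For illustration, a minimal one-dimensional finite-difference version of the coupled system (1) can be written in a few lines of Python. All coefficient values below are hypothetical placeholders (they are not the fitted parameters of this work), and the boundary exchange (2) is omitted; only the structure of the coupled moisture/temperature update is meant to be conveyed.

```python
# Sketch (assumed coefficients): explicit 1-D integration of system (1);
# theta = moisture increment, v = temperature increment,
# phi = cT*v - cX*theta plays the role of the driving potential.
import numpy as np

nz, dz, dt, nt = 50, 1e-3, 0.05, 2000
rho_s, cv = 1800.0, 1500.0            # hypothetical solid density / heat capacity
Lam_l, Lam_T = 1e-8, 0.8              # hypothetical transport coefficients
cT, cX, chi, lheat = 1.0, 100.0, 1e-6, 2.3e6   # hypothetical model constants

theta = np.full(nz, 0.25)             # initial moisture increment
v = np.zeros(nz)                      # initial temperature increment

def lap(f):                           # second difference, zero-flux ends
    g = np.pad(f, 1, mode="edge")
    return (g[2:] - 2.0 * g[1:-1] + g[:-2]) / dz**2

for _ in range(nt):
    phi = cT * v - cX * theta
    theta = theta + dt * (Lam_l * lap(phi) - chi * phi) / rho_s
    v = v + dt * (Lam_T * lap(v) - lheat * chi * phi) / (rho_s * cv)
```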
3. Simulation results In the first step, the simulator has been built on the basis of parameters taken from literature data. In order to validate the simulation, a set of drying experiments has been performed on a cylindrical clay-kaolin body with a diameter and height of 60 mm. Experimental drying has been performed for forced convection drying of the clay-kaolin body, with hot air of constant temperature (T = 330 K) and moisture content (relative humidity 10%). Measurements of the overall mass change in time, and of temperature and moisture at different locations in the drying body (points on the central axis and on the middle radius, at different heights, together with different points on the body surface), have been used for model validation. The comparative results obtained with the drying simulator and the drying experiments for the cylindrical clay-kaolin body are presented in figures 1 and 2. Figure 1 presents the overall mass change in time of the clay-kaolin body and figure 2 shows the temperature change in the centre of the cylinder.
Fig. 1: Comparative results of the overall mass change in time of the clay-kaolin body
Fig. 2: Comparative results of the temperature change in time, in the centre of the clay-kaolin cylindrical body
Simulation results show good agreement between the simulation and the experimental data, both for the moisture content and for the temperature inside the dried body. The drying periods are very well revealed by the simulator. In the second step, the already tuned drying simulator has been used to describe the forced convection drying of the electric insulator clay-kaolin body in the flow of hot air. The simulation describes the complex flow of hot air around the electric insulator body, associated with the heat transfer between the hot air and the drying body. Model parameters for the mass and heat transfer inside the electric insulator have been fitted based on the model developed in the first step and the experimental measurements. Results of the simulation are presented in figures 3 to 8. Figures 3 and 4 present the streamlines and the velocity field (in a median cross section) of the hot air entering along the x direction and flowing over the insulator body.
Fig. 3: Streamlines of hot air flowing over electric insulator body
Fig. 4: Velocity field distribution in the median section of the hot air stream
Results presented in figures 5 to 8 reveal the moisture content and temperature on the surface of the dried insulator, together with a detailed moisture content and temperature distribution in the median cross section of the body (along the hot air flow direction).
Fig. 5: Moisture content distribution on the surface of the insulator body
Fig. 6: Moisture content distribution in the median cross section of the insulator body
Fig. 7: Temperature field on the surface of the insulator body
Fig. 8: Temperature field in the median cross section of the insulator body
The time instant considered for these representations is t = 5000 s from the beginning of the drying process. As may be noticed from the simulation results, the temperature and moisture fields inside the drying body are influenced by the flow and temperature patterns of the hot air and by the geometry of the solid.
4. Conclusions The developed simulator is a versatile tool for revealing the temporal and spatial moisture content and temperature distribution inside the clay-kaolin insulator body during the forced convection drying process. The complex geometry of the drying body has a direct effect on the flow of hot air, affecting the heat and mass transfer inside the drying insulator. Gradient assessment of the moisture content and temperature in the electric insulator body may reveal practical solutions for efficient operation while reducing cracking phenomena. Based on new experimental data, the simulator can be used, with appropriate adjustments, for other geometries and drying materials. Predictions offered by the simulator bring improvements for drying process control and optimization, as a short drying time, minimum energy consumption, and prevention of material destruction may be attained.
5. Acknowledgements The authors gratefully acknowledge financial support from the national research project PN Mod. III 407.
Nomenclature
c_T = coefficient of thermodiffusion, J/kg K
c_v = specific heat of dried body, J/kg K
c_X = coefficient of diffusion, J/kg
C_p = specific heat of air, J/kg K
F = body force, N/m³
k = thermal conductivity, W/m K
l = latent heat of evaporation, J/kg
m = mass of dried sample, kg
p = pressure, Pa
Q = heat sources/sinks, W/m³
t = time, s
T = temperature, K
u = velocity vector, m/s
x = mole fraction of vapour in air, 1
X = moisture content (dry basis), 1
α_m = vapour transfer coefficient, kg s/m⁴
α_T = heat transfer coefficient, W/m² K
η = viscosity, Pa s
θ = X − X_r, increment of moisture content, 1
ϑ = T − T_r, increment of temperature, °C
κ_dv = dilatational viscosity, Pa s
μ = chemical potential, J/kg
ρ = mass density, kg/m³
χ = rate of phase transition, kg s/m⁵
Λ_l = moisture transport coefficient, kg s/m³
Λ_T = thermal conductivity, W/m K
Sub- and superscripts: ∂B = boundary; a = air; s = solid
References
[1] S.J. Kowalski, 2010, Control of mechanical processes in drying. Theory and experiment, Chemical Engineering Science, 62 (2), 890–899.
[2] J. Banaszak, S.J. Kowalski, 2005, Theoretical and experimental analysis of stresses and fractures in clay like materials during drying, Chemical Engineering and Processing, 44, 497–503.
[3] S.J. Kowalski, 2003, Thermomechanics of drying processes, Springer-Verlag, 365.
[4] S.J. Kowalski, Cz. Strumillo, 1997, Moisture transport in dried materials: boundary conditions, Chemical Engineering Science, 52 (7), 1141–1150.
[5] A.S. Mujumdar (Ed.), 2007, Handbook of Industrial Drying, Taylor and Francis Group, New York.
[6] S.J. Kowalski, A. Rybicki, 2007, The vapour-liquid interface and stresses in dried bodies, Transport in Porous Media, 66 (1-2), 43–48.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Comprehensive Mathematical Modeling of Controlled Radical Copolymerization in Tubular Reactors
Mariano Asteasuain,a Daniel Covan,a Claudia Sarmoria,a Adriana Brandolin,a Carolina Leite de Araujo,b José Carlos Pintob
a PLAPIQUI (CONICET-UNS), Camino La Carrindanga km 7, Bahía Blanca 8000, Argentina
b Programa de Engenharia Química da COPPE/UFRJ, Universidade Federal do Rio de Janeiro, Cidade Universitária, CP: 68502, Rio de Janeiro, RJ 21945-970, Brazil
Abstract In this work a comprehensive mathematical model of the nitroxide mediated copolymerization in tubular reactors is developed. The model is able to predict average molecular properties, such as the average molecular weights and the copolymer composition. Besides, a detailed calculation of the copolymer microstructure is included. The model is able to predict the bidimensional molecular weight distribution of the copolymer, the sequence length distribution, the global molecular weight distribution and the copolymer composition distribution. In particular, styrene–α-methyl styrene and styrene–methyl methacrylate copolymerizations are studied. Model outputs are consistent with known features of the system. The detailed information on the copolymer molecular structure provided by the model makes it a valuable tool for process design. Keywords: tubular reactor, copolymerization, nitroxide mediated polymerization.
1. Introduction Controlled/living radical polymerization (CRP) has experienced an exponential growth since the 1990s as an attractive route for synthesizing polymers with controlled structure. This process shows great potential for industrial applications because it combines advantages of conventional free radical and living polymerizations, as discussed in the review by Destarac [2010]. CRP in tubular reactors offers the possibility of preparing copolymers with a tailor-made molecular structure in a continuous process. For instance, lateral feeds allow applying different comonomer feeding policies. Besides, manipulation of the partial conversion at the lateral feeding points makes it possible to control the copolymer block lengths in each reactor section. Previously, we presented a steady state model of the nitroxide mediated copolymerization in tubular reactors [Asteasuain et al., 2009a, b]. It predicted average molecular properties and the sequence length distribution (SLD). In particular, the styrene (S)–α-methyl styrene (AMS) and S–methyl methacrylate (MMA) systems were studied. In this work we present the fitting of that model to experimental data. Besides, we extend the model by incorporating the prediction of the bivariate copolymer MWD and the copolymer composition distribution (CCD).
2. Mathematical Model 2.1. Kinetic Mechanism The kinetic mechanism adopted to describe the system is summarized in Table 1.

Table 1. Kinetic mechanism.

Step | Equation
Initiation | $I \xrightarrow{k_d} 2R_i$ ; $R_i + M_j \xrightarrow{efic,\,k_i} R^{j}_{2-j,\,j-1}$
Styrene thermal initiation | $3M_1 \xrightarrow{k_{th}} R^{1}_{1,0} + R^{1}_{2,0}$
Capping and uncapping | $X + R^{j}_{n,m} \overset{k_{capj}}{\underset{k_{uncapj}}{\rightleftarrows}} D^{j}_{n,m}$
Propagation and depropagation | $R^{i}_{n,m} + M_j \overset{k_{pij}}{\underset{k_{dpj}}{\rightleftarrows}} R^{j}_{n+2-j,\,m+j-1}$
Transfer to monomer | $R^{i}_{n,m} + M_j \xrightarrow{k_{trnij}} R^{j}_{2-j,\,j-1} + P_{n,m}$
Termination by combination | $R^{i}_{n,m} + R^{j}_{r,q} \xrightarrow{k_{tcij}} P_{n+r,\,m+q}$
Termination by disproportionation | $R^{i}_{n,m} + R^{j}_{r,q} \xrightarrow{k_{tdij}} P_{n,m} + P_{r,q}$
In this table, I is the initiator (BPO), $R_i$ is the initiation radical, X is the capping agent (TEMPO), $M_1$ is S, $M_2$ may be either MMA or AMS, and $R^{i}_{n,m}$ and $D^{i}_{n,m}$ are, respectively, a live and a dormant copolymer chain with n units of $M_1$ and m units of $M_2$, with an $M_i$ final unit. Finally, $P_{n,m}$ is a dead copolymer chain with n units of $M_1$ and m units of $M_2$. 2.2. Average Molecular Weights and Sequence Length Distribution Population balance equations for a plug flow, steady-state tubular reactor were drawn from the above kinetic mechanism for the polymer species $R^{i}_{n,m}$, $P_{n,m}$ and $D^{i}_{n,m}$. The well-known method of moments was applied to these population balances for predicting average molecular properties (molecular weights and composition). The fitting of the kinetic parameters for the S–AMS system was carried out using data on conversion, copolymer composition and average molecular weights obtained in our labs. Experiments were performed in a tubular reactor of 6.3 m length and 6.4·10⁻³ m internal diameter. The experimental device included an insulating jacket surrounding the reactor, whose temperature was regulated by a PID controller monitoring the reactor temperature along the axial dimension. Data on twelve experimental syntheses performed with different feed concentrations of monomers, initiator and TEMPO were available for the parameter estimation. Four syntheses were S homopolymerizations (runs 1-2, 3 and 12; numbers joined by dashes indicate replications) and the remaining eight were copolymerizations (runs 4-7 and 8-11). In all cases the reaction temperature was uniform at 135 °C and the residence time was approximately 1 h. Very good agreement with the experimental data was achieved. As an example, Figs. 1 and 2 show the results of the model fitting for the weight average molecular weight (Mw) and the copolymer composition. Work is being carried out to validate the remaining kinetic parameters for the S–MMA system. The fitted model parameters are used to obtain the subsequent model results shown for the S–AMS system, while literature values are used for the S–MMA system. In order to model the SLD, we formulated a parallel kinetic mechanism considering monomer sequences as the reacting species. The kinetic mechanism adopted for these reacting species is shown in Table 2.
Figure 1. Comparison between lab data and the fitted model output for Mw.
Figure 2. Comparison between lab data and the fitted model output for copolymer composition.
Table 2. Kinetic mechanism for the SLD prediction.

Step | Equation
Initiation | $I \xrightarrow{k_d} 2R_i$ ; $R_i + M_j \xrightarrow{efic,\,k_i} S^{\bullet j}_{1}$
Styrene thermal initiation | $3M_1 \xrightarrow{k_{th}} S^{\bullet 1}_{1} + S^{\bullet 1}_{2}$
Capping and uncapping | $X + S^{\bullet j}_{n} \overset{k_{capj}}{\underset{k_{uncapj}}{\rightleftarrows}} S^{Ij}_{n}$
Homo-propagation and depropagation | $S^{\bullet j}_{n} + M_j \overset{k_{pjj}}{\underset{k_{dpj}}{\rightleftarrows}} S^{\bullet j}_{n+1}$
Cross-propagation | $S^{\bullet j}_{n} + M_i \xrightarrow{k_{pji}} S^{dj}_{n} + S^{\bullet i}_{1}$
Transfer to monomer | $S^{\bullet j}_{n} + M_i \xrightarrow{k_{trmij}} S^{dj}_{n} + S^{\bullet i}_{1}$
Homo-termination by combination | $S^{\bullet j}_{n} + S^{\bullet j}_{m} \xrightarrow{k_{tcjj}} S^{dj}_{n+m}$
Cross-termination by combination | $S^{\bullet j}_{n} + S^{\bullet i}_{m} \xrightarrow{k_{tcij}} S^{dj}_{n} + S^{di}_{m}$
Termination by disproportionation | $S^{\bullet j}_{n} + S^{\bullet i}_{m} \xrightarrow{k_{tdij}} S^{dj}_{n} + S^{di}_{m}$
Here S indicates a monomer sequence forming part of a copolymer chain. The subscript represents the number of monomers in the sequence, and the superscript the type of monomer (1 or 2) and the state of the sequence: live (•), dead (d) or dormant (I). Population balances are deduced for the monomer sequences. It is straightforward to solve the resulting equations due to the short length of the monomer blocks. Figure 3 shows examples of the SLD prediction for the two systems studied in this work. It can be observed that the SLDs of both monomers are almost identical in the S–MMA system. This may be a consequence of their similar reactivities (r1 = kp11/kp12 = 0.57 and r2 = kp22/kp21 = 0.41). In the S–AMS system, the [-S-] sequence distribution is similar to that in the S–MMA copolymerization, which is consistent with the value of r1 = 0.55. On the contrary, only single [-AMS-] sequences are present. This is because depropagation prevents the linking of more than one AMS unit in the copolymer chains at the reaction temperature used in the calculations [Barner-Kowollik and Davis, 2001].
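To a first approximation (the standard terminal-model, first-order Markov picture, not the full population-balance SLD model of Table 2), the SLD of a monomer is geometric in the probability of adding another unit of the same type. The Python sketch below illustrates this; r1 for S–MMA is taken from the text, while the monomer mole fractions f1 and f2 are assumed for illustration:

```python
# Sketch: terminal-model approximation of the sequence length distribution
# of monomer 1. r1 = 0.57 (S-MMA, from the text); f1, f2 are assumed.
import numpy as np

r1, f1, f2 = 0.57, 0.5, 0.5
p11 = r1 * f1 / (r1 * f1 + f2)        # probability of adding another M1 unit

n = np.arange(1, 11)
sld = (1.0 - p11) * p11 ** (n - 1)    # geometric SLD (number fraction)
print("mean sequence length:", 1.0 / (1.0 - p11))
```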
Figure 3. SLD prediction in the S–MMA and S–AMS systems.
3. Modeling of the Bivariate MWD The system of (bivariate) population balance equations of the polymer species is coupled and infinitely sized, because chain lengths (in each dimension) may theoretically take
values between 0 and ∞. Although the maximum chain lengths can, under certain conditions, be set at finite values, the number of equations is still intractable because those values are typically between 10³ and 10⁵. Hence, a modeling strategy is needed in order to calculate the bivariate MWD. In this work, the bidimensional MWD of the copolymer is modeled using 2-D probability generating functions (pgf) [Asteasuain and Brandolin, 2010]. The method is based on the transformation of the set of infinite population mass balances into the pgf domain, obtaining a system of equations describing the pgf transform of the distribution. For instance, balance equations of the form
$$\frac{\partial Q_{n,m}}{\partial \tau} = r_{Q_{n,m}}, \qquad n = 0, \ldots, \infty;\ m = 0, \ldots, \infty \tag{1}$$

where $Q_{n,m}$ is any of the polymer species, are transformed into the pgf domain to obtain

$$\frac{\partial \psi(z_1, z_2)}{\partial \tau} = r_{\psi(z_1, z_2)}, \qquad z_1, z_2 \in [0,1] \tag{2}$$

where ψ(z1,z2) is the pgf transform of the bivariate distribution Qn,m, and z1 and z2 are the dummy variables of the pgf. An inversion method represented by the set of algebraic equations

$$Q_{n_i, m_j} = h\bigl(\psi(z_{1k}, z_{2l})\bigr), \qquad i = 1,\ldots,I;\ j = 1,\ldots,J;\ k = 1,\ldots,K;\ l = 1,\ldots,L \tag{3}$$

allows recovering the distribution Qn,m from its pgf transform for a set of arbitrary values (ni and mj) in the domains of n and m. The inversion method requires pgf evaluations at a finite set z1k and z2l in order to obtain Qn,m. The mathematical model is represented by Eq. (2), parameterized for the required values of z1 and z2, and Eq. (3). The number of required values of z1 and z2 is such that the resulting number of equations is reasonable. The model was solved using gPROMS. As an example of its outputs, Fig. 4 shows the bivariate MWD corresponding to the system S–MMA at the operating conditions indicated in Fig. 3. The global CCD (gCCD(y)) and MWD (gMWD(n)) of the copolymer were obtained from the bivariate MWD data according to Eqs. (4) and (5):

$$gCCD(y) = \sum_{l=1}^{\infty} w_{yl,\,(1-y)l}, \qquad y = 0, \ldots, 1 \tag{4}$$

$$gMWD(n) = \sum_{l=1}^{n-1} w_{l,\,n-l}, \qquad n = 1, \ldots, \infty \tag{5}$$
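Once the bivariate weight-fraction matrix w has been recovered on a grid, Eqs. (4)-(5) amount to sums over constant-composition rays and constant-chain-length anti-diagonals of that matrix. The Python sketch below illustrates this with a synthetic Gaussian stand-in for w (the stand-in and the grid size are assumptions, not model output); the composition sum is approximated by a weighted histogram over the composition of each (x, y) cell.

```python
# Sketch: global MWD and CCD from a bivariate weight-fraction matrix w[x, y]
# (x units of M1, y units of M2), per Eqs. (4)-(5). w here is synthetic; in
# the model it comes from pgf inversion plus 2-D interpolation.
import numpy as np

N = 400
x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
w = np.exp(-((x - 120) ** 2 + (y - 80) ** 2) / 800.0)   # synthetic bivariate MWD
w /= w.sum()

# Eq. (5): gMWD(n) = sum_l w[l, n-l]  -> sums over anti-diagonals of w
gMWD = np.array([np.trace(w[::-1], offset=n - (N - 1)) for n in range(2 * N - 1)])

# Eq. (4): gCCD as weighted histogram of the composition x/(x+y) of each cell
comp = np.where(x + y > 0, x / np.maximum(x + y, 1), 0.0)
gCCD, edges = np.histogram(comp.ravel(), bins=50, range=(0.0, 1.0),
                           weights=w.ravel())
```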
Figure 4. Contour plot of the bivariate copolymer MWD at the reactor exit for the S–MMA system.
Figure 5. Global CCD at the reactor exit for the S–MMA system.
where y is the copolymer composition in M1 (number basis), n is the total chain length of the copolymer, and wx,y is the weight fraction of the copolymer with x and y monomer units. The required values of wx,y are obtained by a two-dimensional interpolation of the MWD coarse grid. Figure 5 shows the global CCD calculated in this way, for the same operating conditions as in Fig. 4.
4. Conclusions A mathematical model of nitroxide mediated copolymerizations in a tubular reactor was developed and applied to S–AMS and S–MMA systems. The model is able to predict the bivariate copolymer MWD, global MWD, sequence length distribution, copolymer composition distribution, average compositions, average molecular weights and monomer conversion as a function of reactor length. The model provides detailed information which is extremely useful for a deep process analysis and for efficient process operation.
References
M. Asteasuain, M. Soares, A. Brandolin, C. Sarmoria, J.C. Pinto, 2009a, Mathematical Model of Nitroxide-Mediated Living Radical Copolymerization in Tubular Reactors. First Part, CD of the V Argentine-Chilean Polymer Symposium (ARCHIPOL'09).
M. Asteasuain, M. Soares, A. Brandolin, C. Sarmoria, J.C. Pinto, 2009b, Mathematical Model of Nitroxide-Mediated Living Radical Copolymerization in Tubular Reactors. Second Part, CD of the V Argentine-Chilean Polymer Symposium (ARCHIPOL'09).
M. Asteasuain, A. Brandolin, 2010, Mathematical Modeling of Bivariate Polymer Property Distributions Using 2d Probability Generating Functions, 1 - Numerical Inversion Methods, Macromol. Theory Simul., 19, 6, 342-359.
C. Barner-Kowollik, T.P. Davis, 2001, Using Kinetics and Thermodynamics in the Controlled Synthesis of Low Molecular Weight Polymers in Free-Radical Polymerization, Macromol. Theory Simul., 10, 4, 255-261.
M. Destarac, 2010, Controlled Radical Polymerization: Industrial Stakes, Obstacles and Achievements, Macromol. React. Eng., 4, 165-179.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
An Efficient High Resolution FEM for PDE Systems
Duc Hoang Minh,a Harvey Arellano-Garcia,a Lorenz T. Bieglerb
a Chair of Process Dynamics and Operation, Berlin Institute of Technology, KWT-9, Str. des 17. Juni 135, D-10623 Berlin, Germany
b Dept. of Chemical Engineering, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213-3890, USA
Abstract Non-physical oscillations due to numerical errors or non-uniqueness of the solution in process simulation are still a fundamental issue in fluid dynamics. This is especially true when treating systems of hyperbolic Partial Differential Equations (PDEs) or systems of parabolic PDEs with very large Peclet numbers. This work aims to find efficient Finite Element Method (FEM) approaches within a general formulation for solving both parabolic and hyperbolic PDEs, which yield non-oscillatory solutions without loss of accuracy (i.e. with mass conservation) while keeping sharp fronts. For this purpose, two case studies are presented. In the first case study, the Streamline Upwind/Petrov-Galerkin (SUPG) method is applied to a Simulated Moving Bed (SMB) process described by an equilibrium model consisting of a parabolic PDE. The second example is a pressure swing adsorption (PSA) process, which is described by a system of hyperbolic PDEs. Applying the Galerkin FEM with Flux Corrected Transport (FCT) shows superior results. The non-oscillating sharp front still holds in case of shock phenomena and strong nonlinearities, whereas oscillations can also be reduced with the SUPG approach.
Keywords: finite element method, moving fronts, partial differential equation, hyperbolic, flux corrected transport
1. Introduction Simulation of dynamic systems with distributed domains covers a variety of operations in process engineering, especially in the fields of air separation, e.g. production of synthesis gas and carbon dioxide removal, chromatographic separation of biochemical components and chemical intermediates, or in recent membrane reactor processes. The feature of those processes is their characteristic moving fronts. We understand this as a moving wave of process variables, e.g. concentration, pressure and temperature, resulting from a step or an impulse at the inlet of the operation unit. Moreover, it is well known that finite difference methods cause strong oscillations with diffusion-advection equations and total instability in case of nonlinear advection equations, especially when sharp fronts with steep gradients occur. Leveque showed [9] that a mass conservation law with an integral form, like the Galerkin FEM or the Finite Volume Method (FVM), should be the basis for solving problems with moving fronts, since their formulations allow discontinuities in the solution. Leveque presented flux limiters for the FVM as a new class of high resolution schemes [10]. Biegler et al. developed a smoothed Van Leer flux limiter for optimization of a hyperbolic PSA system [6]. Studies with the Galerkin FEM and moving element strategies were investigated in [11] and [5]. Although the above-mentioned FVM and the moving mesh strategy are superior in their special domains and widely accepted by the fluid dynamics community, they still lack some generality in the authors' opinion. The FVM gives superior results for nonlinear hyperbolic systems [6, 1], but up to now there are only few implementation techniques for parabolic systems, e.g. the mixed FVM [2], which lacks a general mesh formulation. The crux of an adaptive mesh strategy is to refine the balance between the convective and diffusive length scales [3]. Nevertheless, hyperbolic
systems are devoid of any diffusion, so such a scheme is expected to be unstable regardless of mesh resolution. For instance, Jiang and Carey observed numerical instability with nonlinear hyperbolic systems when shocks occur [7]. With the purpose of generality in mind, the Galerkin FEM is suggested as the basic scheme and the mesh is kept fixed for PDE systems in general. For the linear diffusion-advection equation the SUPG method is presented in the SMB case study. On the other hand, the focus on generality will be presented by using the Galerkin FEM combined with the FCT methodology in a PSA process with a nonlinear hyperbolic equation system. The two case studies aim to show the general Galerkin/SUPG formulation and the possibility of applying the Galerkin-FCT on any other basic scheme.
2. Theory In a typical process engineering system a parabolic PDE for a conserved variable like the concentration c can be written as follows:

$$\frac{\partial c}{\partial t} = -u\,\frac{\partial c}{\partial x} + D\,\frac{\partial^2 c}{\partial x^2} \tag{1}$$

This equation becomes hyperbolic if the diffusion term is neglected:

$$\frac{\partial c}{\partial t} = -u\,\frac{\partial c}{\partial x} \tag{2}$$

The PDE is not solved directly in the FEM framework but in its weak formulation:

$$\int_\Omega w\,\frac{\partial c}{\partial t}\,dx = \int_\Omega w\left(-u\,\frac{\partial c}{\partial x} + D\,\frac{\partial^2 c}{\partial x^2}\right) dx \tag{3}$$

In FEM the solution is approximated as a sum over products of the unknown element nodal values $c_j$ and their corresponding basis trial functions $\varphi_j(x)$. Inserting in (3) yields the following scheme for one element, with N as the order of the trial function:

$$\sum_{j=1}^{N}\left(\int_{\Omega_e} \varphi_i\,\varphi_j\,dx\right)\frac{dc_j}{dt} = \sum_{j=1}^{N}\left(\int_{\Omega_e}\left(-u\,\varphi_i\,\frac{d\varphi_j}{dx} - D\,\frac{d\varphi_i}{dx}\,\frac{d\varphi_j}{dx}\right)dx\right)c_j \tag{4}$$

Assembling over all elements, with resulting constant-coefficient mass matrix M, transport matrix T and source matrix S, we consider the case of the linear diffusion-advection equation as:

$$M\,\dot{c} = T\,c + S \tag{5}$$
2.1. Streamline Upwind Petrov-Galerkin (SUPG) FEM Hughes pointed out that the oscillations of the standard scheme are primarily due to the central grid stencil, and he introduced an upwind perturbation term into the weighting function to give more weight to the upwind node [4]. Moreover, this perturbation function is chosen to be discontinuous and acting only in the flow direction. Here we use the one-dimensional perturbation function

$$w_i = \varphi_i + p_i, \qquad p_i = \frac{h_e}{\sqrt{15}}\,\frac{u}{|u|}\,\frac{d\varphi_i}{dx} \tag{6}$$

for the SMB case study, which was shown by Raymond and Garder [RayGar76] to be optimal for the one-dimensional case. Inserting (6) in (3) does not change the matrix-equation form of (5); thus any solution scheme remains unaffected. 2.2. Flux Corrected Transport on the Galerkin FEM scheme The crux of the generalized FCT methodology is to switch between a high- and a low-order scheme in an adaptive way. Consider again the high order scheme (5), but this time for the numerically more difficult hyperbolic system (3), in which the element solutions are interpolated as the group variable (uc) instead of just c, as proposed by Fletcher [Fle83]:
$$M\,\dot{c} = T(u)\,c \tag{7}$$
The general form of eq. (5) still remains after discretization, with the difference that the transport matrix is now a function of the velocity u. The following correction steps are done after the discretization in (7); thus the methodology works on every discretization approach. First the discrete upwinding diffusion operator D [8] is applied on (7), creating an unconditionally stable low order scheme:
$$M_L\,\dot{c} = L\,c, \qquad L = T(u) + D \tag{8}$$
The velocity is assumed to be known from outer iteration loops, so that L is then a constant matrix. The difference between (7) and (8) is exactly the removed raw antidiffusion, which can formally be written as

$$f = (M_L - M)\,\dot{c} - D\,c \tag{9}$$
Due to the idea of the FCT methodology it is desirable to add back as much of the removed raw antidiffusion as possible to the low order scheme without generating new extremes or accentuating already existing ones. For this purpose the raw antidiffusive flux is multiplied by a flux limiter α, defined as follows:

$$\bar{f}_i = \sum_j \alpha_{ij}\,f_{ij}, \qquad 0 \le \alpha_{ij} \le 1 \tag{10}$$
In the FCT methodology we choose α as Zalesak's solution-dependent limiter [Zal79], resulting in $\bar{f}(\bar{c})$. A fully discrete scheme can now be written as follows:

$$M_L\,\frac{c^{n+1} - c^{n}}{\Delta t} = L\,c^{n+1} + \bar{f}(\bar{c}) \tag{11}$$
The Predictor-Multi-Corrector (PMC) method (described in [4]) is used to solve (11) in case of nonlinear velocity fields as in the PSA example described below.
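The effect of this limiting is easiest to see in one dimension. The Python sketch below applies the same transported-diffused/limited-antidiffusion idea to linear advection on a periodic grid, with an upwind low-order flux, a Lax-Wendroff high-order flux and a simplified Zalesak-type limiter; it illustrates the FCT principle only and is not the Galerkin-FEM scheme of Eqs. (7)-(11).

```python
# Sketch: one FCT step for c_t + u c_x = 0, u > 0, periodic grid.
import numpy as np

def fct_step(c, nu):                                  # nu = u*dt/dx, 0 < nu <= 1
    cl = np.roll(c, 1)
    f_low = nu * cl                                   # upwind flux at left faces
    f_high = f_low + 0.5 * nu * (1.0 - nu) * (c - cl) # Lax-Wendroff flux
    a = f_high - f_low                                # raw antidiffusion per face
    c_td = c - (np.roll(f_low, -1) - f_low)           # low-order (transported-diffused)

    c_max = np.maximum(np.roll(c_td, 1), np.maximum(c_td, np.roll(c_td, -1)))
    c_min = np.minimum(np.roll(c_td, 1), np.minimum(c_td, np.roll(c_td, -1)))
    p_in = np.maximum(a, 0.0) - np.minimum(np.roll(a, -1), 0.0)   # inflow to cell i
    p_out = np.maximum(np.roll(a, -1), 0.0) - np.minimum(a, 0.0)  # outflow from cell i
    tiny = 1e-30
    r_plus = np.minimum(1.0, (c_max - c_td) / (p_in + tiny))
    r_minus = np.minimum(1.0, (c_td - c_min) / (p_out + tiny))
    alpha = np.where(a >= 0.0,
                     np.minimum(r_plus, np.roll(r_minus, 1)),
                     np.minimum(np.roll(r_plus, 1), r_minus))
    a = alpha * a
    return c_td - (np.roll(a, -1) - a)                # corrected update

# advect a step profile: the front stays sharp and free of oscillations
c = np.where(np.arange(200) < 100, 1.0, 0.0)
for _ in range(150):
    c = fct_step(c, 0.5)
```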
3. Case studies 3.1. SMB process The equation system for a SMB process described by an equilibrium model is:

$$\frac{\partial c_i}{\partial t} + \frac{1-\epsilon}{\epsilon}\,\frac{\partial q_i}{\partial t} = -u\,\frac{\partial c_i}{\partial x} + D_{ax}\,\frac{\partial^2 c_i}{\partial x^2}, \qquad \frac{\partial q_i}{\partial t} = k_i\,\bigl(q_i^* - q_i\bigr) \tag{12}$$

$$q_i^* = \frac{H_i\,c_i}{1 + \sum_j b_j\,c_j} \tag{13}$$

with "Neumann" (Danckwerts) boundary conditions:

$$\left.\frac{\partial c_i}{\partial x}\right|_{x=0} = \frac{u}{D_{ax}}\,\bigl(c_i - c_{i,in}\bigr) \qquad \text{and} \qquad \left.\frac{\partial c_i}{\partial x}\right|_{x=L} = 0$$
Mass balances for the fluid and adsorbent phase are given in (12). The nonlinear Langmuir isotherm (13) is used to describe the adsorption equilibrium.
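A method-of-lines reading of (12)-(13) is sketched below in Python for a single component, with simple finite differences standing in for the FEM discretization; all parameter values are assumed for illustration.

```python
# Sketch (illustrative parameters): right-hand side of the SMB column model
# (12)-(13), single component, first-order upwind plus central dispersion.
import numpy as np

nz, L = 100, 1.0
dz = L / nz
u, Dax, eps = 1.0e-3, 1.0e-6, 0.4      # velocity, dispersion, porosity (assumed)
H, b, k_ldf = 2.0, 0.1, 0.05           # isotherm and rate parameters (assumed)
F = (1.0 - eps) / eps

def rhs(c, q, c_in):
    q_star = H * c / (1.0 + b * c)                   # Langmuir isotherm (13)
    dq = k_ldf * (q_star - q)                        # solid-phase balance
    g_in = c[0] - dz * (u / Dax) * (c[0] - c_in)     # Danckwerts inlet ghost node
    ce = np.concatenate(([g_in], c, [c[-1]]))        # zero-gradient outlet
    adv = -u * (ce[1:-1] - ce[:-2]) / dz
    disp = Dax * (ce[2:] - 2.0 * ce[1:-1] + ce[:-2]) / dz**2
    return adv + disp - F * dq, dq

c, q = np.zeros(nz), np.zeros(nz)
dc, dq = rhs(c, q, c_in=1.0)           # one right-hand-side evaluation
```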
Fig. 1: SMB adsorption: Galerkin vs SUPG with 50 elements (2nd order). The SUPG approach can reduce the oscillations of the standard scheme but shows smearing effects due to the additional diffusion.
3.2. PSA process The equation system for the PSA process is taken from [1]:
$$\frac{\partial c_i}{\partial t} = -\frac{\partial (u\,c_i)}{\partial x} - \frac{1-\epsilon_b}{\epsilon_b}\,\rho_s\,\frac{\partial q_i}{\partial t} \tag{14}$$

$$q_i^* = \frac{q_{m1,i}\,b_{1,i}\,p_i}{1 + \sum_j b_{1,j}\,p_j} + \frac{q_{m2,i}\,b_{2,i}\,p_i}{1 + \sum_j b_{2,j}\,p_j}, \qquad \frac{\partial q_i}{\partial t} = k_i\,\bigl(q_i^* - q_i\bigr) \tag{15}$$

$$\Bigl[\epsilon_b\,\rho_g\,C_{p,g} + (1-\epsilon_b)\,\rho_s\,C_{p,s}\Bigr]\frac{\partial T}{\partial t} = -\rho_g\,C_{p,g}\,u\,\frac{\partial T}{\partial x} + (1-\epsilon_b)\,\rho_s\sum_i\bigl(-\Delta H_i\bigr)\frac{\partial q_i}{\partial t} \tag{16}$$

$$c = \sum_i c_i = \frac{p}{R\,T}, \qquad y_i = \frac{c_i}{c} \tag{17}$$

$$p_i = y_i\,p \tag{18}$$

$$-\frac{\partial p}{\partial x} = \frac{150\,\mu\,(1-\epsilon_b)^2}{\epsilon_b^3\,d_p^2}\,u + \frac{1.75\,(1-\epsilon_b)\,\rho_g}{\epsilon_b^3\,d_p}\,u\,|u| \tag{19}$$

with Dirichlet boundary conditions for the adsorption step:

$$c_i(x=0,t) = c_{i,feed}, \qquad T(x=0,t) = T_{feed} \qquad \text{and} \qquad p(x=0,t) = p_{ads}$$
The conservation laws for mass (fluid and adsorbed phase) and energy are given in (14) and (16). The nonlinear dual-site Langmuir isotherm (15) is used to describe the adsorption equilibrium. Strong nonlinearities are also included through the storage terms in (17) and through the momentum balance approximation by the Ergun equation (19). The figures below represent part of the adsorption step of a H2/CO2 separation.
Fig. 2: Concentration and mole fraction profiles, FCT-FEM with 20 elements (1st order)
Fig. 3: Temperature and pressure profiles
In both Figures 2 and 3 the fronts are free of oscillations. The sharpening effect of the FCT approach can clearly be seen in the comparison between the low order and the high resolution scheme (Figure 2).
4. Conclusion and Outlook PDEs of parabolic and hyperbolic types were simulated; in both cases the Galerkin FEM on a fixed mesh was used as the basic spatial discretization approach. The SUPG method was implemented on the standard scheme of the linear parabolic equation system of an SMB process. For the nonlinear hyperbolic equation system the Galerkin-FCT was applied. This work shows that the FCT method totally removes oscillations despite shocks and nonlinearities, whereas the SUPG method is suitable for parabolic PDEs because of its general and consistent formulation. Since the FCT methodology should be used only on the advection term, the combination of both approaches represents a promising alternative.
References
[1] L.T. Biegler, A. Anshul, "Superstructure-Based Optimal Synthesis of PSA-Cycles for Precombustion CO2 Capture", Ind. Eng. Chem. Res. 49, 2010, 5066–5079
[2] R. Eymard, T. Gallouet, R. Herbin, "Finite Volume Methods", update of the preprint in ZAMM-Handbook of num. Anal. 7, 1997, 713-1020
[3] B.A. Finlayson, "Nonlinear Analysis in Chemical Engineering", McGraw-Hill, 1980
[4] T.J. Hughes, A.N. Brooks, "SUPG Formulations for Convection Dominated Flows", Computer Methods in App. Mechanics and Eng. 32, 1982, 199-259
[5] B.M. Hrymak, A.W. Westerberg, "An Implementation of a Moving Finite Element Method", J. Computational Physics 63, 1986, 168-190
[6] L.T. Biegler, L. Jiang, "Recent Advances in Simulation and Optimal Design of PSA Systems", Separation and Purification Methods 33, 2004, 1-39
[7] B.N. Jiang, G.F. Carey, "A Stable Least-squares Finite Element Method for Non-linear Hyperbolic Problems", Int. J. Num. Meth. Fluids 8, 1988, 933-942
[8] D. Kuzmin, R. Löhner, S. Turek, "Flux-Corrected Transport", Springer, 2005
[9] R.J. Leveque, "Num. Methods for Conservation Laws", Birkhäuser, 1992
[10] R.J. Leveque, "Finite-Volume Methods for Hyperbolic Problems", Cambridge Texts in Applied Mathematics, 2004
[11] K. Miller, R.N. Miller, "Moving Finite Element Methods Part I/II", SIAM Journal on Numerical Analysis 18, 1019-1057
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Simulation of Reactive Absorption: Model Validation for CO2-MEA system
Chinmay Kale,a Inga Tönnies,b Hans Hasse,b Andrzej Góraka
a Laboratory of Fluid Separations, Department of Biochemical and Chemical Engineering, TU Dortmund University, Emil Figge Strasse 70, Dortmund, D-44227, Germany
b Laboratory of Engineering Thermodynamics, Department of Mechanical and Process Engineering, University of Kaiserslautern, P.O. Box 3049, Kaiserslautern, Germany
Abstract Post combustion CO2 capture is an important method to reduce CO2 emissions. Reactive absorption of CO2 using aqueous amines provides an attractive option to remove CO2 from flue gases at relatively mild operating conditions. To commercialise this technology for large industrial plants, a well-established scale-up procedure is essential. A reliable, flexible and completely accessible model of reactive absorption is a very important tool in the scale-up study. In this article, a user-defined rigorous rate-based model for a packed bed absorption column is presented. The model is completely open to the user and hence offers high flexibility to implement and/or modify the governing equations of mass transfer and reaction kinetics. The validity of the model is demonstrated by predicting the concentration and temperature profiles of experiments performed in a pilot plant. Additionally, the simulation results were compared with the results obtained from the commercially available Aspen RateSep™ (ARS) model. Keywords: Reactive absorption, scale-up, CO2–MEA, model validation, rate-based model
1. Introduction Reactive absorption (RA) is a rate controlled process in which absorption of gaseous species is combined with chemical reaction. Chemical reaction, which takes place in the liquid phase, enhances the solubility of CO2 and accelerates the mass transfer [1]. This article presents the newest version of a rigorous model for RA, which takes into account reaction kinetics, presence of ionic species, thermodynamic non-idealities and coupling of chemical reaction with multicomponent heat and mass transfer. Furthermore, the hydrodynamics of column internals such as liquid hold-up and pressure drop are also considered [1, 2]. Over the years, many modelling concepts have been used to describe RA processes. In general, equilibrium-stage models and non-equilibrium stage (rate-based) models are used. The equilibrium stage model assumes that the streams leaving each packing segment are in thermodynamic equilibrium. Additionally, chemical reactions are considered either by reaction equilibrium or by integrating reaction kinetics in the mass
62
C. Kale et al.
and heat balances. For fast reactions, the equilibrium-stage model can be extended by using reaction equilibrium and tray efficiency. For slower reactions, where chemical kinetics is the dominant factor, reaction kinetics must be integrated into the mass and energy balances. In RA, thermodynamic equilibrium can seldom be reached. Hence, equilibrium stage models give reliable results only in a few cases [1]. Another way of describing the reactions in RA models is by considering enhancement factors, which take into account the acceleration of mass transfer due to chemical reactions in the liquid film. They depend mainly on reaction characteristics and reaction order. Enhancement factors are either derived by fitting experimental data or calculated using several simplifying assumptions. It is generally difficult to find enhancement factors for complex, reversible, multicomponent reaction systems from binary experiments based on simple assumptions [1, 2]. Therefore, this approach is questionable for complex, multicomponent reactive absorption processes.
2. Rate-based model In the current work, a physically more consistent rate-based model is used to describe the RA process. In this approach, the packed bed column is axially discretised into smaller packing segments and the liquid film is discretised into small segments. In each segment, multicomponent mass and heat transfer as well as chemical reactions are taken into account directly, along with the hydrodynamics [3]. The model is extended from already developed models within the group [2]. All the equations used in this model have been implemented in the equation oriented, commercially available process simulator Aspen Custom Modeler® (ACM). 2.1 Mass transfer, reaction coupling and balance equations In our model, mass transfer across the gas-liquid interface is described by the two-film theory. Multi-component diffusion in the film is calculated using the Maxwell-Stefan equations [3]. It is assumed that phase equilibrium exists only at the interface. Mass transfer coefficients are calculated from empirical correlations, which are in turn dependent on physical properties, packing type and hydrodynamics [2]. The liquid film can be further discretised in the radial direction into several segments to study fast reactions. The mole and heat balance equations for the multicomponent system are solved in each segment. Because chemical reaction takes place only in the liquid phase, the balance for the liquid side includes a contribution by chemical reaction. Chemical equilibrium constants are used to calculate the equilibrium composition. Bulk phase balances are completed by summation equations for the liquid and gas side. To determine axial temperature profiles, differential energy balance equations are solved. Molar fluxes in the mole balances are determined by mass transport in the film region. The mass transfer equations in the film include a differential component balance for each component with the reaction term [1, 3].
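For a pseudo-first-order reaction, the discretised film balances reduce to a tridiagonal linear system. The Python sketch below illustrates such a film discretisation for CO2 with Fick diffusion and assumed, order-of-magnitude parameter values; it is a strong simplification of the Maxwell-Stefan film model used here, intended only to show how the reaction in the film enhances the interfacial flux relative to purely physical absorption.

```python
# Sketch (assumed values): reaction-diffusion of CO2 across a discretised liquid
# film with pseudo-first-order reaction: D c'' = k1 c, c(0)=c_int, c(delta)=c_bulk.
import numpy as np

n, delta = 30, 1e-5                  # film segments, film thickness [m]
D = 1.5e-9                           # liquid-phase CO2 diffusivity [m2/s], typical order
k1 = 5000.0                          # pseudo-first-order rate constant [1/s], assumed
c_int, c_bulk = 30.0, 0.0            # interface / bulk CO2 concentration [mol/m3]

h = delta / (n + 1)
main, off = -2.0 * D / h**2 - k1, D / h**2
A = (np.diag(np.full(n, main)) + np.diag(np.full(n - 1, off), 1)
     + np.diag(np.full(n - 1, off), -1))
b = np.zeros(n)
b[0] -= off * c_int                  # Dirichlet boundary values moved to the RHS
b[-1] -= off * c_bulk
c = np.linalg.solve(A, b)

flux = -D * (c[0] - c_int) / h                 # reactive flux at the interface
flux_phys = D * (c_int - c_bulk) / delta       # purely physical absorption
print("enhancement factor ~", flux / flux_phys)
```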
3. Process simulator A wide variety of commercially available and in-house developed process simulators have been used to study reactive absorption processes. Aspen Plus® (RateSep), Aspen HYSYS®, ProTreat™, ProMax and gPROMS are commercially available simulators, whereas Chemasim and CO2SIM are examples of in-house process simulators [4]. The main advantages of these simulators are the availability of different chemical kinetics, different types of columns with a variety of column internals, thermodynamic models and empirical mass transfer correlations as built-in functions.
A major drawback of many of these tools, however, is that the user has no access to the underlying model equations. The user cannot examine the influence of the model equations on the simulation results. Furthermore, modifications and the application of newer correlations become difficult. Some simulators work only in a specific operating window and fail to predict experimental results for wider operating windows. This is an especially important issue in scale-up studies, where changing column dimensions can cause a completely different process behaviour, both in terms of hydrodynamics and chemical kinetics. Here it is necessary that the user has complete access to the model equations for a better understanding of the model results. The model presented here (see Section 2) allows the user to formulate all the equations and correlations. This helps in analyzing each equation and parameter of the model. The model is implemented in the simulation environment Aspen Custom Modeler®, which uses an interface with Aspen Properties® to calculate the thermodynamic and physical properties. The Electrolyte NRTL model was chosen to describe the non-idealities in the liquid phase. The gas phase non-idealities are described with the Redlich-Kwong equation of state. Other important property models are summarised in Table 1.
4. Case study: Absorption of CO2 by MEA
To validate the model described above, pilot plant experiments of CO2 absorption in monoethanolamine (MEA) were used. The pilot absorption column has an inner diameter of 0.125 m and a packing height of 4.2 m, and is equipped with Mellapak 250 Y packing [5]. The reaction system includes two kinetically controlled reactions and four equilibrium reactions [2]. The kinetic rate constants and the equilibrium constants are taken from the literature [2]. CO2 takes part in both kinetically controlled reactions, but its reaction with MEA to form carbamate is the dominant one.
Fig. 1: Model structure (a packing segment of the absorption column with gas bulk, gas film, interface, liquid film and liquid bulk; film discretisation; supporting sub-models for heat and mass transfer, hydrodynamics and reaction).
5. Results
Steady-state simulations were carried out using inputs from the experimental results (Table 2). The height of an individual packing segment was varied in order to obtain stable concentration and temperature profiles (Figs. 2 and 3, for a liquid flow rate of 2.43 mol/s). It was found that 25 stages (a packing segment height of 0.168 m) were sufficient to predict the experimental results. For the reaction between CO2 and MEA, the kinetics of Hikita et al. [6], Danckwerts [7] and Kucka et al. [2] were implemented and compared. The mass transfer correlations of Rocha et al. [8, 9] were used along with the correlation of Tsai et al. [10] for the calculation of the interfacial area. It was found that the kinetics of Hikita et al. together with the above-mentioned mass transfer correlations were the optimal combination to predict the experimental results correctly.
Table 2: Experimental inputs and simulation results (CO2 absorbed in mol/s).
Gas flow (mol/s) | Liq. flow (mol/s) | CO2 in (mol/mol) | Exp. | ACM Hikita | ACM Kucka | ACM Danckwerts | ARS Hikita
0.9802 | 1.2057 | 0.0548 | 0.0402 | 0.0407 | 0.0422 | 0.0431 | 0.0360
0.9816 | 1.8495 | 0.0542 | 0.0403 | 0.0413 | 0.0433 | 0.0444 | 0.0363
0.6816 | 2.3601 | 0.1322 | 0.0393 | 0.0437 | 0.0440 | 0.0444 | 0.0409
0.9824 | 2.4371 | 0.0548 | 0.0406 | 0.0394 | 0.0404 | 0.0454 | 0.0361
0.9859 | 3.3984 | 0.0537 | 0.0400 | 0.0457 | 0.0495 | 0.0500 | 0.0360
0.9804 | 4.2788 | 0.0547 | 0.0406 | 0.0445 | 0.0497 | 0.0502 | 0.0351

Fig. 2: Concentration profile (column height vs. x_CO2 in mol/mol; experimental points vs. ACM prediction).
Fig. 3: Temperature profile (column height vs. T in °C; experimental points vs. ACM prediction).
Fig. 4: Parity chart for the liquid concentration (ACM vs. experiments, ±5% bounds).
Fig. 5: Parity chart for the liquid temperature (ACM vs. experiments, ±5% bounds).
The model predicts concentrations and temperatures within ±5% accuracy (Figs. 4 and 5). The overall CO2 removal shows acceptable agreement with the experimental results and with the Aspen RateSep™ (ARS) model results (Table 2).
6. Conclusions and future work
A user-defined rate-based model of RA implemented in Aspen Custom Modeler® (ACM) was validated using experimental results. It was also shown that the results of our model are in reasonable agreement with the results from the commercially available Aspen RateSep™ (ARS). The model can thus also be used for optimization and scale-up studies. The model has separate sub-models for the calculation of reaction kinetics, hydrodynamics and mass transfer parameters, which offers high flexibility. Unlike other commercially available simulation tools, the model gives the user complete access to the governing equations and correlations, which can be modified for a variety of systems. This is particularly advantageous for studying the scale-up of the process with different column internals and different absorption solvents. The model uses an interface to Aspen Properties to calculate thermodynamic properties and physical parameters, which allows changing chemical systems and implementing new experimental data easily.
7. References
[1] E. Kenig, A. Górak, 2005, "Reactive Absorption", Ch. 9 in: K. Sundmacher, A. Kienle, A. Seidel-Morgenstern (Eds.), Integrated Chemical Processes, Wiley, Weinheim.
[2] L. Kucka, 2003, Modellierung und Simulation der reaktiven Absorption von Sauergasen mit Alkanolaminlösungen, PhD Thesis, TU Dortmund.
[3] C. Noeres, E. Kenig, A. Górak, 2003, Modelling of reactive separation processes: reactive absorption and reactive distillation, Chem. Eng. and Proc., 42, 157-178.
[4] X. Luo, J. Knudson, D. de Montigny, T. Sanpasertparnich, R. Idem, D. Gelowitz, R. Notz, S. Hoch, H. Hasse, E. Lemaire, P. Alix, F. Tobiesen, O. Juliussen, M. Köpcke, H. Svendsen, 2009, Comparison and validation of simulation codes against sixteen sets of data from four different pilot plants, En. Proc., 1, 1249-1256.
[5] H. Mangalapally, R. Notz, S. Hoch, N. Asprion, G. Sieder, H. Garcia, H. Hasse, 2009, Pilot plant experimental studies of post combustion CO2 capture by reactive absorption with MEA and new solvents, En. Proc., 1, 963-970.
[6] H. Hikita, H. Ishikawa, T. Murakami, T. Ishii, 1977, The kinetics of reactions of carbon dioxide with monoisopropanolamine, diglycolamine and ethylenediamine by a rapid mixing method, Chem. Eng. J., 14, 27-30.
[7] P. Danckwerts, 1979, The reaction of CO2 with ethanolamines, Chem. Eng. Sci., 34, 443-446.
[8] J. Rocha, J. Bravo, J. Fair, 1993, Distillation columns containing structured packings: A comprehensive model for their performance 1. Hydraulic models, Ind. Eng. Chem. Res., 32, 641-651.
[9] J. Rocha, J. Bravo, J. Fair, 1996, Distillation columns containing structured packings: A comprehensive model for their performance 2. Mass transfer model, Ind. Eng. Chem. Res., 35, 1660-1667.
[10] R. Tsai, A. Seibert, R. Eldridge, G. Rochelle, 2009, Influence of viscosity and surface tension on the effective mass transfer area of structured packing, En. Proc., 1197-1204.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A CFD-Population Balance Model for the Simulation of Kühni Extraction Column
Mark W. Hlawitschka a,b, Moutasem Jaradat a,b, Fang Chen a,b, Menwer M. Attarakih c, Jörg Kuhnert b,d, Hans-Jörg Bart a,b
a TU Kaiserslautern, Kaiserslautern, Germany
b Al-Balqa Applied University, Amman, Jordan
c Centre of Mathematical and Computational Modelling, TU Kaiserslautern, Germany
d Fraunhofer ITWM, 67663 Kaiserslautern, Germany
Abstract
In this work, computational fluid dynamics (CFD) calculations coupled with a droplet population balance model (DPBM) are compared to LLECMOD (Liquid-Liquid Extraction Column MODule) simulations and to Laser Induced Fluorescence (LIF) measurements of the phase fraction, using an iso-optical system of calcium chloride/water and butyl acetate. The results show good agreement between the simulations and the experimental data. The CFD requires a high computational load compared to LLECMOD, but gives local information about the droplet size and the phase fraction and is independent of geometrical constraints.
Keywords: CFD, PBM, Extraction, Kühni.
1. Introduction
Liquid-liquid extraction is an important separation process used in hydrometallurgy, waste water treatment, the oil industry as well as the pharmaceutical industry and biotechnology [1]. Despite the continuous development and the variety of applications, the design process of extraction columns still demands improvement. To date, the design is based on the engineer's knowledge, pilot plant tests and empirical correlations. The empirical correlations are based on hydrodynamic (energy dissipation), geometrical (e.g. height and width of a compartment) and operating parameters (throughput, stirrer speed) and do not account for the complex interactions of the dispersed phase. The droplets interact with each other, coalesce and break up due to the energy input, and this leads to a continuous change of the droplet distribution along the column. A better prediction of the dispersed phase is reached by a combination of hydrodynamic models with population balance modelling, accounting for the breakage and coalescence events. A one-dimensional tool based on a multivariate non-equilibrium population balance model is LLECMOD, which was specially developed for liquid-liquid extraction columns. It is a fast tool for the simulation of pulsed, packed and agitated columns. The necessary parameters are available for the miniplant Kühni column and already give good results based on the 1-D axial model [2]. A highly resolved modelling of extraction columns can be performed using 2-D axisymmetric (CFD) models combined with PBM. First promising results were obtained for an RDC column by Drumm et al. [3]. The simulation of a Kühni column transfers the simulation from a 2-D axisymmetric calculation to full 3-D. A high computational load is needed, but in return local information about the velocity field, the droplet size and the phase fraction can be obtained. In comparison to LLECMOD, no geometrical correlations are needed for
droplet rising velocities and back-mixing effects. For validation of the simulation, especially with respect to the phase fraction, there is a lack of data in the literature. Integral values are given in [4-6] using, amongst others, conductive, intrusive and pressure-sensitive measurement techniques. A non-intrusive measurement technique is presented by Liu [7], who used a LIF system to investigate phase inversion in stirred vessels. In the present work, the LIF system makes it possible to obtain local phase fraction information from a single laser-induced observation plane without disturbing the flow field. The result of the CFD calculation is compared to the LLECMOD simulation, whereas the phase fraction of both simulations is validated against the LIF measurement.
2. LIF Measurement
Laser induced fluorescence measurements were performed to obtain the local dispersed and continuous phase fractions. As shown in Fig. 1, the Kühni column is surrounded by the LIF system, consisting mainly of a laser, a collimator that generates a laser plane, and a CCD camera with an optical filter. The camera focuses on the fourth compartment from the bottom. A personal computer is connected to the camera for data processing. Rhodamine 6G is added to the dispersed phase, resulting in highlighted droplets in the pictures, whereas the surrounding fluid stays black (see Fig. 1b). The pictures were transformed to black and white. In post-processing, a value of 1 was assigned to the droplets (white), representing 100% dispersed phase, and a value of 0 was defined for the continuous phase (black). Over 500 single pictures were averaged to obtain the local averaged phase fraction inside the compartment.
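In essence, this post-processing is a per-pixel thresholding of each frame followed by averaging of the binary images. A minimal sketch of such a procedure is given below; the file names, the grey-value threshold and the use of the imageio/NumPy packages are illustrative assumptions, not details from the paper.

```python
import numpy as np
import imageio.v3 as iio   # assumed image reader; any library would do

frames = [f"lif_{k:04d}.png" for k in range(500)]  # hypothetical file names
threshold = 60     # grey value separating droplets from background (assumed)

acc = None
for name in frames:
    img = iio.imread(name)
    if img.ndim == 3:                         # collapse RGB to grey if needed
        img = img.mean(axis=2)
    binary = (img > threshold).astype(float)  # 1 = dispersed, 0 = continuous
    acc = binary if acc is None else acc + binary

phase_fraction = acc / len(frames)   # local time-averaged dispersed fraction
print("mean phase fraction in the observation plane:", phase_fraction.mean())
```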
Figure 1: a) Setup for the LIF measurement; b) LIF measurement (top) and black and white picture (bottom). 1) PC, 2) laser, 3) camera, 4) dispersed phase inlet, 5) continuous phase outlet, 6) dispersed phase outlet, 7) stirrer, 8) column, 9) motor, 10) continuous phase inlet, 11) optical planar box
2.1. Mathematical background
In this work, two simulation tools were compared: the commercial CFD code FLUENT 12.0.3 for the 3-D simulations, and the 1-D code LLECMOD. As a consequence, LLECMOD converges within minutes, whereas the FLUENT simulations require a high computational load and days to reach a converged solution.
2.2. FLUENT Simulation
An Euler-Euler model is used for the two-phase simulation, in which the two liquids are treated as interpenetrating continua. A one-group model, presented by Drumm et al. [3], is coupled to FLUENT via user-defined functions to account for breakage and coalescence. As breakage kernel, the model of Martínez-Bazán et al. [8] is used, whereas the coalescence kernel is derived from Prince and Blanch [9]. The kernels were adjusted according to preliminary tests to fit the system used. The miniplant Kühni column (diameter 32 mm, active height 196 mm) was rebuilt with the pre-processor Gambit and is shown in Fig. 2. The stators, in the form of rings, separate the compartments from each other. Three baffles are oriented at an angle of 120° to each other. The initial mesh consisted of 512 213 cells and is split into a moving part around the stirrers and a stationary part including the baffles and stators for the multiple reference frame (MRF) approach. After a converged solution, the mesh was refined at the column wall. The bottom boundary condition was set to a velocity inlet, whereas the top of the column is defined as a pressure outlet. The column walls, the stirrer, shafts and baffles were given no-slip boundary conditions. Standard wall functions were used to model the near-wall region. The first-order implicit solver was applied for the discretization in time, and first-order upwind schemes were used for the discretization in space. The SIMPLE algorithm was used for pressure-velocity coupling and standard relaxation factors were applied for the calculation.
Fig. 2: Miniplant Kühni column with 7 compartments and a single compartment with the used mesh
2.3. LLECMOD
LLECMOD is a Windows-based program for the simulation of liquid-liquid extraction columns based on a 1-D axial dispersion model. The influence of the geometry on the hydrodynamics is accounted for by correlations. Among these are correlations for the dispersion coefficient, the droplet velocity and swarm velocity, the breakup and coalescence frequency, the number of daughter droplets, as well as correlations for the
energy input. A multitude of experiments, ranging from retention time measurements to single-droplet and swarm experiments at different stirrer speeds and throughputs, has to be performed to determine all these correlations, typically within the time frame of an experimental PhD thesis. A non-equilibrium multivariate population balance model is solved using an efficient numerical technique developed by Attarakih et al. [10]. The simulations and measurements were performed under the same conditions. The throughput of the dispersed phase (butyl acetate) as well as of the continuous phase (water/calcium chloride) was 10 m³/m²h and the stirrer speed was set to 300 rpm. The inlet Sauter diameter was set to 3.6 mm.

Fig. 3: Measured phase fraction (left), simulated phase fraction (middle) and simulated droplet size (right)
3. Results and discussion
Fig. 3 shows the comparison between the measured phase fraction and the CFD simulation of the 3-D miniplant Kühni column. In addition, the droplet size distribution in a single compartment is visualized. The phase fraction of the 1-D axial LLECMOD simulation is depicted in Fig. 4 for the 7 compartments. The LIF measurement, which was done for the fourth compartment, and the CFD simulation reveal the local phase fraction distribution. The droplets circulate through the compartment, and their axial path through the compartment can be predicted with both techniques. The dispersed phase enters the compartment at the bottom, rises to the stirrer, is pushed outwards and rises to the next stirrer, where a high phase fraction (accumulation) can be observed. The averaged phase fraction is about 7.4% for the simulation and 8% for the measurement, and an averaged droplet size of d32 = 1.23 mm was predicted by the simulation. The largest droplet size of 1.4 mm is seen in the area of the stators. The smallest droplet size, 1.1 mm, is predicted at the stirrer tip. The LLECMOD simulation shows a decrease of the phase fraction from the first to the last compartment. For the reference compartment, a value of 8.3% is calculated, which is within the uncertainty range of the measurement. The average droplet size from LLECMOD is d32 = 1.24 mm.
Fig. 4: Phase fraction calculated for the column used; 7 compartments were simulated.
4. Conclusion
The 3-D simulations with FLUENT show a slightly underpredicted phase fraction compared to the phase fraction calculated by LLECMOD and to the phase fraction measured with LIF, whereas the average droplet sizes of the two simulations agree with each other. LLECMOD, as a 1-D axial dispersion model, gives good results with regard to droplet size and phase fraction within minutes, but requires several years of miniplant and pilot plant experiments to obtain the needed correlations. In comparison, FLUENT requires a high computational load, but locally resolves, e.g., the phase fraction and diameter of the dispersed phase, the velocity of both phases and the turbulent energy dissipation. In addition, the 3-D simulation is independent of geometrical constraints and does not need time-consuming experiments for obtaining correlations, and therefore allows changes of geometry to any dimension (scale-up ability).
References
[1] T. Steinmetz, S. A. Schmidt, H.-J. Bart, 2005, Modellierung gerührter Extraktionskolonnen mit dem Tropfenpopulationsbilanzmodell, Chem. Ing. Tech., 77, No. 6, pp. 723-734
[2] T. Steinmetz, S. A. Schmidt, H.-J. Bart, 2005, Modellierung gerührter Extraktionskolonnen mit dem Tropfenpopulationsbilanzmodell, Chem. Ing. Tech., 77, No. 6, pp. 723-734
[3] C. Drumm, M. Attarakih, M. W. Hlawitschka, H.-J. Bart, 2010, One-Group Reduced Population Balance Model for CFD Simulation of a Pilot-Plant Extraction Column, Ind. Eng. Chem. Res., 49 (7), pp. 3442-3451
[4] T. Steinmetz, 2007, Tropfenpopulationsbilanzgestütztes Auslegungsverfahren zur Skalierung einer gerührten Miniplant Extraktionskolonne, PhD Thesis, TU Kaiserslautern
[5] P. Kolb, Hydrodynamik und Stoffaustausch in einem gerührten Miniplantextraktor der Bauart Kühni, PhD Thesis, TU Kaiserslautern
[6] L. N. Gomes, M. L. Guimarães, J. Stichlmair, J. J. Cruz-Pinto, 2009, Effects of Mass Transfer on the Steady State and Dynamic Performance of a Kühni Column - Experimental Observations, Ind. Eng. Chem. Res., 48, pp. 3580-3588
[7] L. Liu, O. K. Matar, E. S. Perez de Ortiz, G. F. Hewitt, 2005, Experimental investigation of phase inversion in a stirred vessel using LIF, Chem. Eng. Sci., 60, pp. 85-94
[8] C. Martínez-Bazán, J. L. Montañés, J. C. Lasheras, 1999, On the breakup of an air bubble injected into a fully developed turbulent flow. Part 1. Breakup frequency, J. Fluid Mech., 401, pp. 157-182
[9] M. J. Prince, H. W. Blanch, 1990, Bubble coalescence and break-up in air-sparged bubble columns, AIChE J., 36, pp. 1485-1499
[10] M. Attarakih, M. Jaradat, C. Drumm, H.-J. Bart, S. Tiwari, V. K. Sharma, J. Kuhnert, A. Klar, 2009, A Multivariate Population Balance Model for Liquid Extraction Columns, in: Computer-Aided Chemical Engineering, 26, 1339, J. Jezowski and J. Thullie (Eds), Elsevier
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
CFD Study on the Application of Rotary Kiln in Pyrolysis Ka-Leung Lam, Adetoyese O. Oyedun, Chi-Wai Hui Department of Chemical and Biomolecular Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong
Abstract
The pyrolysis of bulk feed requires alternative pyrolysis reactors other than the conventional fluidized bed reactors used in the fast pyrolysis of biomass. An indirect-fired rotary kiln is suggested as a suitable choice, subject to the need for better thermal efficiency. An approach that utilizes Computational Fluid Dynamics (CFD) simulation and pyrolysis kinetics for the design of pyrolysis rotary kilns with better thermal efficiency is proposed. A case study of the internal configuration of the kiln, with a qualitative discussion, is used to demonstrate how the approach can support the kiln design process.
Keywords: CFD modelling; Rotary kiln; Pyrolysis.
1. Introduction
Pyrolysis, which yields carbonaceous residues, liquid hydrocarbons and combustible gases, is the thermal decomposition of organic materials at elevated temperatures in the absence of oxygen. It has been studied extensively, and some pyrolysis plants are already in pilot or commercial operation. Among these plants, fluidized bed reactors are the most commonly used [1]. They provide a very high heating rate to the pyrolysis material and a short vapour residence time. This type of pyrolysis, namely fast pyrolysis, is performed on feedstocks in fine particle form, usually less than 6 mm [2]. Biomass such as wood sawdust and agricultural waste is the common pyrolysis material, and pyrolysis oil is generally the target product. Shredding material into fine particle form is energy intensive. Therefore, in order to pyrolyse bulk feeds such as municipal solid waste, sewage sludge, waste tyres and large wood waste, other types of reactors that can handle feed on the centimetre scale have to be used. The rotary kiln, which has been used extensively in cement production, can be a solution. A typical rotary kiln is direct-fired, in which the combustion phase is in direct contact with the bed material inside the kiln [3]. In pyrolysis applications, on the other hand, an indirect-fired rotary kiln has to be employed to avoid mixing of the heating gas and the product gas. Examples of indirect-fired rotary kilns in pyrolysis application are one in Burgau for the pyrolysis of municipal solid waste to produce pyrolysis gas for electricity generation [4], and one developed by Mitsubishi Heavy Industries for the production of char from sewage sludge pyrolysis in Japan [5]. Despite these examples, the use of indirect-fired kilns for pyrolysis is rather scarce. To enhance their applicability for pyrolysis, the thermal efficiency is a key factor and is the subject of this work.
2. Problem Statement
Due to their inherent nature, indirect-fired rotary kilns have a low thermal efficiency and are therefore typically small and limited to niche applications [3]. In order to utilize indirect-fired rotary kilns for pyrolysis applications on a larger scale, their thermal efficiency has to be improved. This work proposes the use of Computational Fluid Dynamics (CFD) simulation together with known kinetics as an approach to design an indirect-fired rotary kiln with better thermal efficiency for pyrolysis applications. An investigation of the effect of adding internal walls was performed as an example to demonstrate the approach.
3. Methodology
To simulate the reaction, phase change, heat transfer and mass transfer of the pyrolysis process in a rotary kiln, CFD simulation is used as the modelling platform. The proposed approach integrates the pyrolysis kinetics and the known properties of the feedstock into the CFD model. The developed model is then used to study the effect of different kiln designs on the pyrolysis performance and the thermal efficiency. The commercial CFD package ANSYS FLUENT 12.0 is used for the simulation. In the case study, a simple CFD model was set up to simulate the pyrolysis of wood waste in an indirect-fired rotary kiln with the conditions in Table 1. Wood waste is fed into the kiln at a rate of 20 kg/s at room temperature. The Arrhenius parameters in Table 1 represent the pyrolysis of the wood feed by a one-step global kinetics [6]:

Wood → v Volatiles + (1 − v) Char

Table 1. Conditions used in the simulation
Parameter | Value | Ref.
Kiln diameter | 2 m | Based on [7]
Kiln length | 25 m | Based on [7]
Kiln rotation speed | 1.2 rpm | [7]
Kiln inclination angle | 2° | [7]
Heat of pyrolysis | 4.3 × 10⁵ J/kg | [6]
Pre-exponential factor | 4.38 × 10⁹ s⁻¹ | [8]
Activation energy | 141.2 kJ/mol | [8]
Volatiles fraction, v | 0.8 | [6]
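Before coupling the kinetics to the flow field, it is useful to see what the Table 1 parameters imply for an isothermal wood particle. The sketch below integrates the first-order conversion law dX/dt = k(T)(1 − X) with the listed Arrhenius parameters; the assumption of a fixed bed temperature is ours (the CFD model instead couples the rate to the local temperature field), and the chosen temperatures are illustrative.

```python
import numpy as np

A = 4.38e9       # 1/s, pre-exponential factor (Table 1)
Ea = 141.2e3     # J/mol, activation energy (Table 1)
R = 8.314        # J/(mol K)
v = 0.8          # volatiles mass fraction (Table 1)

def pyrolyse(T, t_end=600.0, dt=0.01):
    """Conversion X(t) for dX/dt = k(T)*(1 - X) at a fixed temperature T (K)."""
    k = A * np.exp(-Ea / (R * T))
    t = np.arange(0.0, t_end, dt)
    X = 1.0 - np.exp(-k * t)          # analytical solution of the first-order ODE
    return t, X

for T in (773.15, 873.15):            # 500 C and the assumed 600 C wall temperature
    t, X = pyrolyse(T)
    t90 = t[np.searchsorted(X, 0.9)]  # time to 90 % conversion
    print(f"T = {T:.0f} K: k = {A*np.exp(-Ea/(R*T)):.3e} 1/s, "
          f"t(90% conversion) = {t90:.2f} s, volatiles yield = {0.9*v:.2f}")
```

The kinetic time scale at the wall temperature is well below a second; this suggests that heat transfer into the bed, rather than the intrinsic kinetics, governs the required residence time, which is precisely the aspect the CFD model is meant to resolve.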
Apart from the conditions in Table 1, the following assumptions were made:
- The solid bed movement is simplified as a viscous fluid flow, as in some other works [9-10]. In Chen et al.'s work, a mathematical model was developed considering the solid flow as a continuous flow of viscous fluid [9].
- The density of the solid bed is constant, such that buoyancy-driven natural convection does not occur. Therefore, conduction is the major heat transfer phenomenon within the pseudo-fluid solid bed.
- The kiln wall is maintained at a constant temperature of 873.15 K.
With the assumption of viscous flow for the solid bed, the Volume of Fluid (VOF) model was used as the multi-phase framework to model the flow of all the different
immiscible phases. The finite-rate/eddy-dissipation species model was employed to model the wood pyrolysis kinetics; it accounts for the effect of possible turbulence fluctuations and enables the definition of Arrhenius chemical kinetics. The standard k-ε model describes the flow of the gas phase and the bed phase. Heat transfer to the feed is mainly by convection between the kiln surface and the pseudo-fluid solid bed. The first-order upwind scheme was used to discretize the flow equation, the kinetic energy, the turbulent dissipation rate and the reaction-mixture fraction equations.
4. Results and Discussion
4.1. Case study for the addition of internal walls
4.1.1. Scenarios
Four cases with different kiln internal configurations were simulated; they are shown in Figure 1.
Case 1: Typical kiln without any internal walls
Case 2: Kiln with 2 baffles perpendicular to flow
Case 3: Kiln with 7 baffles perpendicular to flow
Case 4: Kiln with 2 annular walls
Figure 1: The kiln internal configurations for the different simulation cases
4.1.2. Simulation result
The simulation was performed with the conditions specified in Table 1 and the results at a particular instant were compared across the cases. Table 2 summarizes the simulation results for the different cases at the 150th second. Figure 2 shows the contours for case 3 at the 150th second.

Table 2. Summary of the simulation results at the 150th second
Case | 1 | 2 | 3 | 4
Heat transfer rate (kJ/s) | 1240 | 1269 | 1365 | 1309
Mass fraction of volatiles | 0.070 | 0.082 | 0.144 | 0.131
Position of bed front (m) | 14.35 | 14.15 | 12.50 | 12.30
Pressure drop along the kiln (kPa) | 2.787 | 2.955 | 3.046 | 3.603

From Table 2, the feed flows are faster in cases 1 and 2 than in the other cases, which results in a shorter residence time. It is clear that the higher the heat transfer rate, the greater the amount of volatiles produced. In addition, when the
number of baffles increases, the pressure drop along the kiln also increases. Figure 2(a) shows the increase of the static temperature along the kiln, while Figure 2(b) shows that the baffles help to divide the feed flow.

Figure 2: Simulation result (150 s) for case 3, contours of (a) static temperature; (b) kiln bed
4.1.3. Impacts of the internal configuration
The simulation results show that the addition of interior walls has a great impact on the performance of the kiln. The effect is mainly attributed to the increase in the feed residence time and in the heat transfer rate. This is in agreement with Li et al.'s experimental work, in which the addition of internal structures (circular ribs) increased the mean residence time inside the rotary kiln [11]. The increase in feed residence time does not by itself help the heat transfer, as the pile-up of material in a specific zone of the kiln does not favour heat transfer. In fact, the introduction of baffles increases the heat transfer rate by providing additional heat transfer surfaces and by inducing mixing of the bed material. In addition, the increased feed residence time provides a longer heating time for the completion of pyrolysis. On the other hand, an increase in the number of baffles raises the pressure drop along the kiln and limits the gas flow. This, in turn, hinders the removal of the pyrolysis products from the hot kiln zone. This would be undesirable if the target product is pyrolysis oil, because a long vapour residence time in the hot zone favours secondary cracking reactions, which reduce the yield of pyrolysis oil [12]. Also, in practical application, the addition of interior structures within an indirect-fired kiln would raise maintenance issues similar to those of a chain section in a dry-process kiln, which is a high-maintenance item [7]. The results show that the addition of baffles improves the thermal efficiency of the kiln, but the orientation of the baffles must be carefully considered for the inclusion to be effective.
4.2. Utilization of CFD simulation and kinetics data for the design of pyrolysis kilns
The case study demonstrates that the approach can assist in the design of a pyrolysis rotary kiln, especially when experimental data are limited [13]. Despite using only a simplified bed movement model and a one-step wood pyrolysis kinetics, the approach provides a first insight into the effect of the kiln internal configuration on the pyrolysis performance, comparable to the experimental work by Li et al. [11]. In order to exploit the approach further, the development of a detailed bed model for solid movement, the thermal analysis of the feedstock, the characterization of the feedstock and the use of higher-order numerical computation are required and are under development. With
the proposed approach, in addition to the internal design of the kiln, other aspects of the kiln can be studied for the purpose of designing indirect-fired pyrolysis rotary kilns; some are listed in Table 3.

Table 3. Considerations for the design of pyrolysis rotary kilns
Design aspect:
- External hot flue gas movement
- Fast removal of pyrolysis volatiles
- Operating conditions such as rotational speed, kiln dimensions and operating pressure
- Targeting of different products
- Multiple temperature zones

For example, a unique feature of an indirect-fired kiln is its capability of having compartments with different temperature zones [3]. This provides better control of the pyrolysis and vapour recovery when the pyrolysis behaviour of the feed, such as the enthalpy change and the pyrolysis profile, is known. The use of the proposed approach with the above considerations would help improve the design of a pyrolysis kiln.
5. Conclusions
An approach utilizing CFD simulation and pyrolysis kinetics for the design of pyrolysis kilns was suggested in this work. A case study on the effect of the internal configuration on kiln performance was then used to demonstrate how the approach can assist the design. According to the case study, the addition of baffles improves the thermal efficiency of an indirect-fired kiln, but the consideration of vapour removal should not be neglected. Additional design aspects such as the kiln bed movement and the external hot flue gas movement have to be considered for a more thorough design of an indirect-fired rotary kiln for pyrolysis applications.
6. Acknowledgement The authors gratefully acknowledge the support from Hong Kong RGC research grant (No. 613808, 614307).
References
[1] S.N. Naik, V.V. Goud, P.K. Rout, A.K. Dalai, Renewable and Sustainable Energy Reviews, 14 (2010) 578.
[2] A.V. Bridgwater, D. Meier, D. Radlein, Organic Geochemistry, 30 (1999) 1479.
[3] A.A. Boateng, Rotary Kilns: Transport Phenomena and Transport Processes, Butterworth-Heinemann, 2008.
[4] T. Malkow, Waste Management, 24 (2004) 53.
[5] A. Takeshi, T. Mizuhiko, K. Yoichi, O. Satoshi, T. Akira, Mitsubishi Heavy Industries Technical Review, 44 (2007).
[6] A. Galgano, C. Di Blasi, Industrial & Engineering Chemistry Research, 42 (2003) 2101.
[7] K.E. Peray, The Rotary Cement Kiln, Chemical Publishing Co., Inc., 1986.
[8] C. Di Blasi, C. Branca, Industrial & Engineering Chemistry Research, 40 (2001) 5547.
[9] J. Chen, T. Akiyama, H. Nogami, J.-i. Yagi, H. Takahashi, ISIJ International, 33 (1993).
[10] H. Nogami, J.I. Yagi, ISIJ International, 44 (2004).
[11] S.Q. Li, J.H. Yan, R.D. Li, Y. Chi, K.F. Cen, Powder Technology, 126 (2002).
[12] C. Di Blasi, Progress in Energy and Combustion Science, 34 (2008).
[13] X.Y. Liu, E. Specht, Chemical Engineering and Processing: Process Intensification, 49 (2010).
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Towards a Generic Simulation Environment for Multiscale Modelling based on Tool Integration
Yang Zhao, Cheng Jiang, Aidong Yang
Chemical and Process Engineering, Faculty of Engineering and Physical Sciences, University of Surrey, Guildford GU2 7XH, UK
E-mail addresses: {y.zhao, c.jiang, a.yang}@surrey.ac.uk
Abstract Computer-aided multiscale modelling (CAMM) may be implemented in three successive stages, namely conceptual modelling, model realization, and model execution. Following earlier research on a conceptual modelling tool, prototypical tools for realizing conceptual models and for the execution of simulation are developed in this work, with the assumption that a multiscale simulation is to be carried out by means of integrating existing single-scale models. The design and implementation of these tools are presented. A preliminary case study on heterogeneous chemical reactor simulation is reported to prove and demonstrate the concepts. Keywords: multiscale modeling, tool integration, computer-aided modelling
1. Introduction
Multiscale modelling, as an emerging modelling paradigm, is widely regarded as a promising and powerful tool in various disciplines. Through combining the models of different resolution scales of a complex system, multiscale modelling is able to offer a high-quality characterization or improved computational efficiency (Charpentier, 2002; Braatz et al., 2004; Vlachos, 2005). However, a multiscale model is usually much more difficult to develop than a single-scale model due to a range of conceptual, numerical and software implementation challenges (Yang and Zhao, 2009). A number of efforts have been made to address the challenges of multiscale modelling. This includes conceptual developments such as the classification of multiscale modelling approaches (Pantelides, 2001; Ingram et al., 2004; Vlachos, 2006) and the support of integration between specific types of modelling tools (Bezzo et al., 2004; Morales-Rodríguez & Gani, 2009). Based on the tool integration environment CHEOPS (Schopfer et al., 2004), Kulikov et al. (2005) investigated the coupling of CFD and population balance modelling. In an ongoing effort which explores the feasibility of developing generic tools for Computer-Aided Multiscale Modelling (CAMM), a methodology has been proposed (Yang and Zhao, 2009) which bases CAMM tools on a hierarchical conceptualization of multiscale systems and which embraces a three-stage approach by which computer-based support of various degrees of "automation" is offered to conceptual modelling, model realization, and model execution. Following this methodology, a conceptual modelling tool has been prototyped (Zhao et al., 2010), which makes use of the ontology developed by Yang and Marquardt (2009) as the theory for generic multiscale systems. Termed a conceptual model, the output of the tool is a specification of what scales are involved, how each scale should be modeled (in terms of components, phenomena, properties, and laws), and how different scales are connected.
A conceptual model may be used as a systematic documentation of a multiscale model. Moreover, the three-stage methodology for CAMM suggests that a conceptual model can be taken as the input to the subsequent modelling stages, eventually leading to the successful execution of the intended multiscale simulation. In this paper, Section 2 presents a tool that supports model realisation. A model execution system (MES) centred around a simulation coordinator is presented in Section 3 to address the stage of model execution. A case study demonstrating the functions of the MES is described in Section 4.
2. Model Realization
Starting with a conceptual model, it is envisaged that a multiscale model may be realized by automatic code generation, following an approach similar to the one proposed by Yang et al. (2004). The essence of this approach is that a mathematical model is composed by selecting and customizing elements from a library of basic building blocks according to the conceptual model. This approach would result in a "single" set of equations that can be solved collectively to realize a multiscale simulation. However, the applicability of this approach is limited given that, in contrast to building a model completely from scratch, a more realistic mode of multiscale simulation is the integration of existing modelling tools, each simulating one particular scale of a multiscale system. To support the latter mode of model realization, a simulation script generator (SSG) is developed in this work, which assists a modeller in constructing a multiscale simulation from a combination of several existing single-scale modelling tools, such as gPROMS, Fluent and Aspen Plus. The SSG is built upon the same ontology of general multiscale systems as the conceptual modelling tool developed earlier (Zhao et al., 2010). The objective of the SSG is to take a conceptual model as input and produce a simulation script that specifies which modelling tool is used for simulating each scale defined in the conceptual model, which software component should be executed to realize each pre-defined inter-scale link, and in what order these tools and components are run (a hypothetical example of such a script is sketched after Figure 1). In particular, the need for dynamic simulation is considered; the basic requirement includes the specification of the overall period of time to be simulated as well as the size of the simulation step each scale runs before interactions take place between connected scales. A prototype of the SSG is developed in Java with the popular ontology-processing package Jena (http://jena.sourceforge.net/ontology/). The SSG analyzes the input conceptual model and presents its scale structure to the modeller. Based on this scale structure, the modeller can select appropriate modelling tools and the software components implementing inter-scale links. S/he can also define the global simulation time and simulation step length (cf. Figure 1). According to the user's input, an executable simulation script is generated.
Figure 1. Screenshot of using the simulation script generator.
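The paper does not give the script format, but a generated script must at minimum record the tool assigned to each scale, the components realising the inter-scale links, the execution order, and the timing parameters. A hypothetical example of such a specification, with entirely illustrative field names and values, could look like:

```python
# Hypothetical simulation script content (the actual SSG format is not
# specified in the paper); all field names and values are assumptions.
simulation_script = {
    "scales": [
        {"name": "scale_two", "tool": "gPROMS", "model": "bulk_continuum"},
        {"name": "scale_one", "tool": "in_house_MC", "model": "lattice_sites"},
    ],
    "inter_scale_links": [
        {"type": "aggregator",    "from": "scale_one", "to": "scale_two"},
        {"type": "disaggregator", "from": "scale_two", "to": "scale_one"},
    ],
    "execution_order": ["scale_one", "aggregator", "scale_two", "disaggregator"],
    "total_simulation_time": 1.0,   # overall period to be simulated
    "step_length": 1.0e-3,          # step each scale runs before interaction
}
```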
3. Model Execution
In general, model execution is responsible for solving a multiscale model based on the output of the model realization step. To deal with cases where a multiscale model is realised by integrating existing tools, a model execution system (MES) centred around a simulation coordinator is proposed. As illustrated in Figure 2, the MES makes use of a simulation coordinator to connect the individual model realization components (single-scale tools and inter-scale components) by means of predefined interfaces. To execute a specific simulation run, the simulation coordinator reads the simulation script produced by the aforementioned SSG, calls the corresponding components in the predefined order, and carries out the exchange of data between the components until the simulation task is completed. For the integration of existing modelling tools, a component-based philosophy similar to that of CAPE-OPEN (Braunschweig et al., 2000) and CHEOPS (Schopfer et al., 2004) is followed: a set of standard software interfaces is provided so that the tools to be integrated either support the interfaces "natively" or are linked to the MES by wrappers that implement the interfaces.
Figure 2. Design of the model execution system (MES): the simulation coordinator, driven by the simulation script, connects scale components (e.g. a CFD tool, a molecular simulation tool, a bulk phase simulation tool and other tools) through the scale component interface, and inter-scale components (aggregator, disaggregator and others) through the inter-scale component interface.
Like the other CAMM tools developed in this project, the MES is implemented in Java. CORBA (http://www.corba.org/), a mature technology for developing component-based systems, has been adopted as the middleware between the simulation coordinator and the individual modelling tools. Figure 3 shows the interfaces for scale components (i.e. those simulating individual scales) and inter-scale components (including aggregator and disaggregator; the interface of the latter is omitted due to space limitations), expressed in the CORBA interface definition language (IDL).

module SolverComponent {
    struct Data {
        sequence<string> name;
        sequence<double> value;
    };
    interface ScaleSolver {
        void resolveInitialData(in Data data);
        void setComputationTime(in double time);
        double getComputationTime();
        void initialize();
        void compute();
        Data getResult();
        Data getUnknownData();
        void setUnknownData(in Data data);
    };
    interface Aggregator {
        void getUpperScaleParameter(in Data data);
        void getLowerScaleParameter(in Data data);
        void setComponentNumberOfLowerScale(in double number);
        void aggregate();
        Data getResult();
    };
};
Figure 3. Scale/Inter-scale component interface (described by CORBA IDL).
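Abstracting from the CORBA plumbing, the coordinator's control flow implied by these interfaces can be sketched as the following loop. The ScaleSolver and Aggregator method names follow the IDL above; the disaggregator interface is omitted in the paper, so its `disaggregate`/`getResult` methods here are assumptions, as is the exact step semantics of `setComputationTime`.

```python
def run_coordination(scales, aggregator, disaggregator, t_end, dt):
    """Sketch of the simulation coordinator loop.
    `scales` maps scale names to remote ScaleSolver proxies; the aggregator
    follows the IDL above, while the disaggregator is assumed to mirror it."""
    for solver in scales.values():
        solver.initialize()
    t = 0.0
    while t < t_end:
        for solver in scales.values():
            solver.setComputationTime(dt)   # step length before interaction
            solver.compute()
        # upward link: lattice-site results -> surface-level data
        aggregator.getLowerScaleParameter(scales["scale_one"].getResult())
        aggregator.aggregate()
        scales["scale_two"].setUnknownData(aggregator.getResult())
        # downward link: bulk-scale results -> per-site values (assumed API)
        disaggregator.disaggregate(scales["scale_two"].getResult())
        scales["scale_one"].setUnknownData(disaggregator.getResult())
        t += dt
```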
4. Case Study
In this section, a homogeneous-heterogeneous chemical reactor model (Vlachos, 1997) is utilized to demonstrate the features of the proposed MES. This reactor model couples two
different parts: the homogeneous bulk fluid phase and the heterogeneous solid catalyst surface. For the bulk fluid phase, a continuum transport model is adopted to describe the diffusion of reactants towards the surface, whilst a mass conservation model incorporating the kinetic rates of the surface catalytic reaction is applied to characterize the solid surface. Additionally, the solid surface is further represented by a molecular lattice comprising a number of sites, on each of which a set of adsorption, desorption or reaction phenomena may occur. The Monte-Carlo (MC) method is used to construct the non-continuum model for these three micro-processes. As schematically shown in Figure 4, this reactor model contains two scales. Scale two involves the continuum models for both the fluid phase and the surface as well as the coupling between them, whilst scale one comprises the non-continuum model for the sites of the molecular lattice. The connection between these two scales is represented by data aggregation and disaggregation. In the continuum surface model of scale two, a specific vector property q, required for calculating the kinetic rate related to the lateral interactions of species at the surface, is determined by the site-occupation function δi and the local coordination lci for each site i of the molecular lattice. Thus, scale two is linked to scale one by means of aggregation. On the other hand, for each site i of the lattice, two properties need to be characterized, namely the temperature at the site Ti and the concentration of reactant A at the fluid-phase boundary layer adjacent to the surface, CA0,i. These site properties are determined from the corresponding properties of the entire solid surface, namely T and CA0, by means of disaggregation relations. In this proof-of-concept case study, a continuum model for scale two and a Monte-Carlo model for scale one were implemented separately, each providing the software interface "ScaleSolver". Furthermore, an aggregator and a disaggregator were implemented offering the corresponding interfaces. The result of the simulation conducted using the MES was found to be identical to that of an implementation of the entire multiscale model in a single C++ code. In Figure 5, the left plot presents the logarithm of the concentration of reactant A at the fluid-phase boundary layer adjacent to the surface versus the logarithm of simulation time; the right plot shows the coverage of A at the surface versus the logarithm of simulation time. No difference between the result from the MES and that of the single-trunk model is visible. In terms of computational time, the two simulations were also comparable in this particular case. Generally speaking, tool-integration-based simulation runs can be more time consuming than those realised within a single modelling tool, due to the computational overhead associated with the integration. However, integration of multiple tools should be viewed as a means of realising a multiscale model that would otherwise be impractical to develop; in that case the computational burden becomes a secondary concern.
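The inter-scale relations of this case study are simple enough to sketch directly: the aggregator condenses the site-level quantities δi and lci into the surface property vector q, while the disaggregator copies the surface values T and CA0 to every site. The exact functional form of q is not given in the paper, so a site-averaged pair is used below purely as a placeholder.

```python
import numpy as np

def aggregate(delta, lc):
    """Condense per-site occupation delta_i and local coordination lc_i
    into a surface-level property vector q (placeholder definition)."""
    return np.array([delta.mean(), (delta * lc).mean()])

def disaggregate(T, C_A0, n_sites):
    """Broadcast surface temperature and boundary-layer concentration
    to every lattice site: T_i = T, C_A0,i = C_A0."""
    return np.full(n_sites, T), np.full(n_sites, C_A0)

# toy data for a 100 x 100 lattice
rng = np.random.default_rng(0)
delta = rng.integers(0, 2, size=10_000)   # site occupied (1) or empty (0)
lc = rng.integers(0, 5, size=10_000)      # occupied nearest neighbours
q = aggregate(delta, lc)
T_i, C_i = disaggregate(500.0, 0.02, delta.size)
print("q =", q)
```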
5. Conclusion
Following a previously proposed three-stage methodology for CAMM, this work developed proof-of-concept tools for model realisation and model execution, to deal with cases where a multiscale simulation is carried out by integrating existing single-scale simulators. The experience gained in this study gives a preliminary indication that it is feasible to develop generic CAMM tools without assuming case-specific details. This is enabled fundamentally by building these tools on a common conceptualisation of multiscale systems (particularly in the form of a generic ontology) as the basis for implementing the "general logic" of multiscale modelling. This approach has the potential to allow for the separation of the general logic from case-specific information and knowledge; the latter can be treated as the case-specific input to
CAMM tools. This has been the case for a domain ontology, defined according to the common ontology for multiscale systems and used as the input to the generic conceptual modelling tool (Yang et al., 2010). Similarly, as reported in this paper, a simulation script, generated using the Simulation Script Generator for the specific realisation of a multiscale model, is taken as the input to the generic Model Execution System in order to perform a specific simulation run. Future work will further evaluate the methodology, extend the prototypical tools, and carry out more sophisticated case studies.

Figure 4. Structure of the heterogeneous reactor model. (Scale two: continuum fluid-phase and surface models, coupled at the boundary CA|x=0 = CA0 with rA,in = rd and rA,out = ra; scale one: non-continuum model of the lattice sites; an aggregator passes q, built from δi and lci, upwards, while a disaggregator passes Ti and CA0,i downwards.)
Figure 5. Simulation results of the multiscale reactor model.
Acknowledgement This work is supported by the EPSRC (UK) under Grant No EP/G008361/1.
References
Bezzo, F., Macchietto, S., Pantelides, C.C. (2004). Comput. Chem. Engng 28, 501-511.
Braatz, R.D., Alkire, R.C., Rusli, E., Drews, T.O. (2004). Chem. Eng. Sci. 59, 5623-5628.
Braunschweig, B.L., Pantelides, C.C., Britt, H.I., Sama, S. (2000). Chemical Engineering Progress 96, 65.
Charpentier, J.-C. (2002). Chemical Engineering Science 57, 4667-4690.
Ingram, G.D., Cameron, I.T., Hangos, K.M. (2004). Chem. Eng. Sci. 59, 2171-2187.
Kulikov, V., Briesen, H., Marquardt, W. (2005). Chem. Eng. Res. Des. 83 (A6), 706.
Marquardt, W., von Wedel, L., Bayer, B. (2000). In: M.F. Malone, J.A. Trainham and B. Carnahan (Eds.), Foundations of Computer-Aided Process Design, 96, 192-214.
Morales-Rodríguez, R., Gani, R. (2009). Computer Aided Chemical Engineering 26, 495.
Pantelides, C.C. (2001). Computer Aided Chemical Engineering 9, 15-26.
Schopfer, G., Yang, A., von Wedel, L., Marquardt, W. (2004). International J. on Software Tools for Technology Transfer 6, 186-202.
Vlachos, D.G. (1997). AIChE Journal 43, 3031-3041.
Vlachos, D.G. (2005). Adv. Chem. Eng. 30, 1.
Yang, A., Marquardt, W. (2009). Comput. Chem. Engng 33, 822-837.
Yang, A., Zhao, Y. (2009). PSE'2009, August 2009, Brazil.
Zhao, Y., Jiang, C., Yang, A. (2010). 20th European Symposium on Computer Aided Process Engineering, June 2010, Italy.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Integral Formulation of the Population Balance Equation using the Cumulative QMOM
Menwer Attarakih a,b,c, M. Jaradat b,c, M. Hlawitschka b,c, H.-J. Bart b,c, J. Kuhnert d
a Al-Balqa Applied University, Faculty of Eng. Tech., POB 15008, 11134 Amman, Jordan
b TU Kaiserslautern, Lehrstuhl für Thermische Verfahrenstechnik, POB 3049, 67653 Kaiserslautern, Germany
c Centre of Mathematical and Computational Modelling, TU Kaiserslautern, Germany
d Fraunhofer Institut Techno- und Wirtschaftsmathematik, Kaiserslautern, Germany
Abstract
The integral formulation of the population balance equation using the CQMOM presents a novel, hierarchical method to couple the QMOM and the physically evolving particle size distribution. Here, not only the cumulative number density function is reconstructed, but also its low-order moments. The numerical analysis of the method shows two desirable properties. First, it can be considered a mesh-free method, since the solution of each integral equation at the current grid point does not depend on the other ones. Second, the accuracy of the targeted low-order cumulative moments depends only on the nodes and weights of the cumulative Gauss-Christoffel quadrature, and not on the sampling of the continuous low-order cumulative moments. The CQMOM is thus a general integral formulation of the population balance equation and an effective numerical scheme in which the QMOM is embedded as a limiting case.
Keywords: Integral Population Balance, CQMOM, Numerical Solution.
1. Introduction
The population balance equation is used in numerous scientific and engineering applications, including multiphase flows and turbulence modelling, aerosol science, kinetic theory, and biological and biomedical engineering [1-7]. Fluid phases which are discrete either at the molecular or at the particle level can be described by a statistical Boltzmann-type equation, called the population balance equation (PBE) [4, 5]. The PBE is a hyperbolic integro-partial differential equation characterized by a nonlinear integral source term. This source term accounts for the various interactions by which particles of a specific state can either form or disappear from the system [6]. These interactions can describe the system behaviour up to any degree of detail. Thus, PBEs are very suitable for understanding and investigating many single processing units such as crystallizers, turbulent flame reactors, polymerization reactors, bubble phase reactors, and extraction columns [5, 7, 9]. However, due to their complexity and the lack of a general analytical solution, these population balance models are hardly used in flowsheeting programs to simulate whole plants made up of many interacting units, or in CFD simulations using complex equipment geometry [3, 4, 6-8]. Therefore, there is a need for a simple reduced model for discrete phases that does not lose the detailed description of single phenomena inherently embedded in the PBE. One popular model formulation is the transformation of the PBE into a set of self-contained integral equations that describe the moments' evolution of the particle size distribution. In general, these methods belong to the moment transformation of the PBE at the expense of destroying the distribution itself [2-4]. In contrast to the classical
method of moments, where the closure problem is overcome by an a priori assumed distribution or by simplified kernel functions [10], the Quadrature Method Of Moments (QMOM) provides not only an efficient closure to the moment problem, but also an extremely efficient Gauss-like integration quadrature [2-4]. The QMOM is based on the global low-order moments of the weight function and requires only a few nodes (two or three) to converge. However, the major drawback of moment methods in general, and of the QMOM in particular, is their inability to reconstruct the particle size distribution (PSD). The PSD plays a decisive role in the determination of the physicochemical and mechanical properties of products made of particulate systems [6, 9, 10]. Recent advances and developments in online measurement and control provide real-time access to system parameters that are estimated based on the whole size distribution [1, 6]. John et al. [9] rigorously investigated the PSD reconstruction from its low-order moments, addressing the theoretical and practical difficulties in solving the ill-posed inverse moment problem. The ill-posedness and the uniqueness of the PSD are still open problems and have recently been addressed by Mnatsakanov and Hakobyan [11]. This work presents a method which overcomes the above shortcomings. The present method uses a novel hierarchical formulation of the PBE, in which the rth cumulative distribution (with respect to the particle property space) is reconstructed at given, arbitrary grid points. Here, the rth cumulative moment is a function of the particle internal property and evolves in time and physical space. Since the rth cumulative moment at any grid point along the particle property space is not expected to affect the global moments, the number and structure of the grid points do not affect the accuracy of the cumulative integration quadrature. The present method provides a continuous Gauss-Christoffel quadrature, for which a two-node quadrature with equal and unequal weights is derived analytically. For the general Nq-node quadrature, the standard Product Difference Algorithm (PDA) is used [3]. Due to the combination of the cumulative (rather than global) moments with the QMOM, the method is given the name: Cumulative QMOM, or shortly CQMOM.
2. The Cumulative Quadrature Method Of Moments (CQMOM)
Let $w(x,t)$ be a nonnegative integrable function on the interval $[A, B]$ whose cumulative moments all exist. This function can be expanded at any given point $x$ as a sum of Dirac delta functions with weights $w_j(x)$ and nodes $x_j(x)$, $j = 1, 2, \dots, N_q$. Now, let the cumulative moments of $w(x)$ be defined as:

$$ \mu_r(x) = \int_A^x \xi^r \, w(\xi) \, d\xi, \qquad r = 0, 1, \dots, 2N_q - 1 \tag{1} $$

and substitute $w(\xi,t) = \sum_{j=1}^{N_q} w_j(x,t)\, \delta\!\left(\xi - x_j(x,t)\right)$ in Eq. (1) to get:

$$ \mu_r(x) = \sum_{j=1}^{N_q} w_j(x) \left[ x_j(x) \right]^r \tag{2} $$
The above continuous cumulative integration quadrature is exact for integrands that are polynomials of degree at most 2Nq-1. Note that in contrast to the QMOM, the CQMOM is formulated using nodes and weights that are functions of the particle property space. This quadrature has continuous nodes and weights as long as the cumulative moments from which they are constructed are continuous. The global Gauss-Christoffel
quadrature can be viewed as a special case when the limits of integration in Eq. (1) extend from zero to infinity. Note that μ0(x) = W(x), the cumulative distribution of w(x), and that the other cumulative moments (for r = 1, 2, …, 2Nq−1) are all monotone and at least right-continuous functions. These functions satisfy two important properties: they tend to zero as the particle property x tends to zero, and their change with respect to x tends to zero as x goes to infinity. These properties help to eliminate the so-called finite-domain error during the solution of the PBE and force the numerical schemes to be conservative (with respect to x). Eq. (2) can be inverted analytically for a two-node quadrature; for the general Nq-node quadrature, the PDA is used.
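To make the inversion step concrete, the following sketch recovers a two-node quadrature from the first four (cumulative) moments via the standard Jacobi-matrix (Golub-Welsch) construction, which is equivalent in exact arithmetic to the analytical two-node formulas and to the PDA. The function name and the test data are illustrative.

```python
import numpy as np

def two_node_quadrature(m):
    """Recover weights w_j and nodes x_j of a two-node Gauss quadrature
    from the moments m = [m0, m1, m2, m3] (here: cumulative moments
    evaluated at a given point x)."""
    m0, m1, m2, m3 = m
    a0 = m1 / m0                              # recurrence coefficient alpha_0
    p1p1 = m2 - 2*a0*m1 + a0**2 * m0          # <p1, p1> for p1(x) = x - a0
    b1 = p1p1 / m0                            # recurrence coefficient beta_1
    a1 = (m3 - 2*a0*m2 + a0**2 * m1) / p1p1   # alpha_1 = <x p1, p1>/<p1, p1>
    J = np.array([[a0, np.sqrt(b1)],
                  [np.sqrt(b1), a1]])         # symmetric Jacobi matrix
    nodes, vecs = np.linalg.eigh(J)
    weights = m0 * vecs[0, :]**2              # Golub-Welsch weights
    return weights, nodes

# check against a known quadrature: moments of w = 0.3*delta(x-1) + 0.7*delta(x-2)
m = [1.0, 0.3*1 + 0.7*2, 0.3*1 + 0.7*4, 0.3*1 + 0.7*8]
print(two_node_quadrature(m))   # -> weights ~ [0.3, 0.7], nodes ~ [1.0, 2.0]
```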
3. The Integral Population Balance Equation (IPBE)
The CQMOM described in Section 2, when applied to the PBE, provides not only the cumulative number distribution but also the required higher-order cumulative moments. In this contribution, only the IPBE for particle breakage is given:

$$ \frac{\partial \mu_r(x,t)}{\partial t} = -\int_0^x \xi^r\, \Gamma(\xi)\, w(\xi,t)\, d\xi + \int_0^\infty \Gamma(\zeta)\, w(\zeta,t) \left( \int_0^{\min(x,\zeta)} \xi^r\, \beta(\xi|\zeta)\, d\xi \right) d\zeta \tag{3} $$

where Γ and β are the particle breakage frequency and the daughter particle distribution, respectively. The first term on the right-hand side represents the loss of particles of sizes less than or equal to x, and the second term accounts for the formation of particles of sizes up to min(x, ζ) when a mother particle of size ζ breaks. By formal substitution of the w(x) expansion given in Section 2 and use of the Dirac delta function properties, one gets the following semi-discrete IPBE:

$$ \frac{\partial \mu_r(x,t)}{\partial t} = -\sum_{j=1}^{N_q} \left[ x_j(x,t) \right]^r \Gamma\!\left(x_j(x,t)\right) w_j(x,t) + \sum_{j=1}^{N_q} \Gamma\!\left(x_j(x_\infty,t)\right) w_j(x_\infty,t) \int_0^{\min\left(x,\, x_j(x_\infty,t)\right)} \xi^r\, \beta\!\left(\xi|x_j(x_\infty,t)\right) d\xi \tag{4} $$

In the above equation x∞ is the particle size corresponding to infinity (∂μr/∂x → 0 as x → x∞). The source term in Eq. (4) is closed, since the nodes and weights are found by inverting Eq. (2). Note that the closure does not depend on particular values of x, so the cumulative moments can be reconstructed by sampling the particle size x arbitrarily. This makes the CQMOM a mesh-free method, and the mesh can easily be refined in regions where the cumulative moments are sharp.
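For the breakage kernels used in the batch test case of Section 4 ($\Gamma(\zeta) = \zeta^3$, $\beta(\xi|\zeta) = 3\xi^2/\zeta^3$), the inner integral of Eq. (4) has the closed form $\int_0^{\min(x,\zeta)} \xi^r \beta(\xi|\zeta)\, d\xi = 3\,\min(x,\zeta)^{r+3} / \left[(r+3)\,\zeta^3\right]$, so the right-hand side of Eq. (4) can be evaluated directly from the two sets of nodes and weights. A sketch:

```python
import numpy as np

def dmu_dt(r, x, xj_x, wj_x, xj_inf, wj_inf):
    """RHS of Eq. (4) for breakage with Gamma(z) = z**3 and
    beta(xi | z) = 3*xi**2 / z**3 (the batch test case of Section 4).
    xj_x, wj_x     : nodes/weights from the cumulative moments at x;
    xj_inf, wj_inf : nodes/weights at x_inf (the global QMOM quadrature)."""
    loss = np.sum(xj_x**r * xj_x**3 * wj_x)        # breakage of sizes <= x
    inner = 3.0 * np.minimum(x, xj_inf)**(r + 3) / ((r + 3) * xj_inf**3)
    birth = np.sum(xj_inf**3 * wj_inf * inner)     # daughters of sizes <= x
    return birth - loss
```

The nodes and weights at x come from inverting the local cumulative moments as in the previous sketch; those at x∞ recover the global QMOM quadrature.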
4. Numerical Results and Discussion In this section numerical results are presented and analyzed to illustrate the implementation of the CQMOM for particle breakage. This is best demonstrated by choosing a spatially homogeneous PBE, to isolate the PBE numerical solver from its interaction with complex flow-field numerical algorithms. Cases for particle aggregation, simultaneous breakage and aggregation, and simultaneous particle growth and aggregation will be presented separately. First, the sensitivity of the equal- and unequal-weight quadratures to the input cumulative moments is analyzed for the case of particle breakage in a batch vessel. Table 1 shows the relative errors in the quadrature weights and abscissas using analytical and numerical cumulative moments. It is obvious that the two-equal-weight quadrature is less sensitive to numerical errors in the input moment
vector. This is because Gaussian quadratures with uniform weights are subject to smaller round-off errors when applied to ill-posed problems [2, 13].
Table 1: Sensitivity of the quadrature weights and abscissas to the input cumulative moments.

Quadrature type | % relative error in weights | % relative error in abscissas
Two-equal weight | 2.4054 | 1.4483
Two-unequal weight | 22.0202 | 8.1121
Three-unequal weight | 19.5600 | 8.5316
Figure (1): Comparison between analytical (solid lines) and numerical (open circles) solutions for particle breakage in a batch vessel. The upper panels show the first three cumulative moments (μ0, μ1, μ2 versus d) at t = 10, and the lower panels show the variation of the global moments as a function of time.
Fig.(1) compares the analytical solution [14] for particle breakage in a homogeneous batch vessel (Γ = d³, β = 3d²/d′³ and w0 = exp(−d³)) with the numerical solution using the CQMOM. As can be seen, the first three cumulative moments are well reproduced along the particle size using only 30 grid points. The accuracy of the solution does not depend on these grid points, but only on the number of quadrature points (here Nq = 3). The global moments as predicted by the CQMOM are also compared with the analytical ones. These are obtained by setting the particle size to a large enough value (x∞), for which, in this special case, the QMOM is recovered. Fig.(2) shows a comparison between the SQMOM [2] and the CQMOM using realistic breakage functions (breakage frequency and daughter particle distribution) taken from the work of Alopaeus et al. [12]. These functions describe droplet breakage in a turbulent liquid-liquid dispersion, taking into account the effect of the dispersion physical properties and the energy dissipation. The CQMOM reproduced both the zero cumulative (left panel) and global (right panel) moments when compared with the SQMOM. Unlike for the SQMOM, an increase in the number of grid points does not have any effect on the
accuracy of either the cumulative or global moments (when x → ∞). Since a large increase in the number of grid points is decoupled from the prediction of integral quantities, a small number of arbitrary grid points is sufficient to infer the shape of the distribution.
Figure (2): Comparison between the SQMOM (solid line) and the CQMOM (open circles) for droplet breakage in a continuous stirred tank. The left panel shows the cumulative zero moment and the right panel the global zero moment as a function of time.
5. Summary and Conclusions The successful derivation of the cumulative Gauss-Christoffel quadrature as a continuous closure rule motivates the integral formulation of the PBE. This integral formulation is guaranteed to reproduce exactly any desired finite number of low-order cumulative moments. In contrast to the QMOM, which destroys the number density function, the present CQMOM reproduces not only the cumulative number density function but also its low-order cumulative moments. The CQMOM can be considered a mesh-free method, since the grid structure is independent of the integration quadrature. The CQMOM reduces to the standard QMOM when only one grid point is placed at infinity. Compared with analytical solutions and other numerical methods (the SQMOM), the CQMOM solves the IPBE with very high accuracy.
References
[1] B.N. Raikar, S.R. Bhatia, M.F. Malone & M.A. Henson (2006), Chem. Eng. Sci. 61, 7421.
[2] M.M. Attarakih, C. Drumm & H.-J. Bart (2009), Chem. Eng. Sci. 64, 742.
[3] R. McGraw (1997), J. Aeros. Sci. Tech. 27, 255.
[4] R.O. Fox (2008), J. Comp. Phys. 227, 6313.
[5] Z. Zhu, C.A. Dorao & H.A. Jakobsen (2010), Ind. Eng. Chem. Res. 49, 6204.
[6] I.T. Cameron, F.Y. Wang, C.D. Immanuel & F. Stepanek (2005), Chem. Eng. Sci. 60, 3723.
[7] M.A. Hjortso (2004), Population Balances in Biomedical Engineering, McGraw-Hill Co., New York.
[8] C. Drumm, M. Attarakih, M.W. Hlawitschka & H.-J. Bart (2010), Ind. Eng. Chem. Res. 49, 3442.
[9] V. John, I. Angelov, A.A. Öncül & D. Thévenin (2007), Chem. Eng. Sci. 62, 2890.
[10] R.B. Diemer & J.H. Olson (2002), Chem. Eng. Sci. 57, 2211.
[11] R.M. Mnatsakanov & A.S. Hakobyan (2009), IMS Lect. Notes–Monograph Series 57, 252.
[12] V. Alopaeus, J. Koskinen, K.I. Keskinen & J. Majander (2002), Chem. Eng. Sci. 57, 1815.
[13] E. Isaacson & H.B. Keller (1994), Analysis of Numerical Methods, Dover Publications Inc., New York.
[14] R.M. Ziff & E.D. McGrady (1985), J. Phys. A: Math. Gen. 18, 3027.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Integration of Generic Multi-dimensional Model and Operational Policies for Batch Cooling Crystallization Noor Asma Fazli Abdul Samad, Ravendra Singh, Gürkan Sin, Krist V. Gernaey, Rafiqul Gani Department of Chemical and Biochemical Engineering, Technical University of Denmark, Søltofts Plads, Building 229, DK-2800 Lyngby, Denmark
Abstract A generic multi-dimensional modeling framework for studying batch cooling crystallization processes under generated operational policies is presented. The generic nature of the modeling allows the study of a wide range of chemical systems under different operational scenarios, thereby enabling the analysis of various crystallization operations and conditions. Furthermore, a systematic procedure for generating operational policies through available analytical crystal size distribution (CSD) estimators has been developed and verified for consistently achieving a targeted CSD. The application of the model-based framework is highlighted for batch cooling crystallization of potassium dihydrogen phosphate (KDP) in two dimensions, while the use of the analytical estimator is demonstrated on a potassium dichromate case study to achieve a target CSD. Keywords: Crystallization, generic modeling framework, multi-dimensional model, analytical estimator, crystal size distribution (CSD).
1. Introduction Crystallization processes form an important class of separation techniques involving solid-liquid phases. The specifications of the crystal product are usually given in terms of crystal size, shape and purity [1]. In order to predict the desired crystal morphology by means of model-based approaches, appropriate models covering the effects of the various operational parameters on the behavior of the crystals are necessary. Often, one-dimensional models have been used to study the crystallization process. However, one-dimensional models consider only one internal variable (a characteristic length) in the population balance equations as a measure of crystal size, thus limiting the description of crystal shape to spherical or cubic crystals. To fully characterize the crystal particles, higher-dimensional models are necessary; that is, a multi-dimensional population balance modeling approach is needed, where two or even three characteristic lengths of a crystal may be considered. Once the model is available, it can subsequently be used in many applications, notably to obtain the required product qualities in terms of CSD and shape. Usually the main difficulty in batch cooling crystallization is to accomplish a uniform and reproducible CSD. Supersaturation control has usually been applied to drive the process within the metastable zone to enhance the control of the CSD. Although this approach has been shown to produce high-quality crystals, the set-point operating profiles for the supersaturation controller are usually chosen arbitrarily or by trial and error.
The objective of this work is to integrate generic multi-dimensional models with operational policies based on analytical estimators within a computer-aided framework for the study of batch cooling crystallization processes. The paper also highlights the application of the framework to study different multi-dimensional aspects of crystallization processes for a wide range of chemical systems. In order to generate an operational policy, an analytical CSD estimator has been introduced and integrated with the generic multi-dimensional model in the framework. The estimator is based on the assumptions of constant supersaturation and an operation dominated by size-dependent growth [1,2]. The generated operational policy provides the supersaturation set point and, by maintaining the operation at this point, a target CSD is achieved. Compared to earlier works [1,2], additional information regarding the total crystal mass is also targeted here. The application of the multi-dimensional model-based framework is highlighted using a two-dimensional potassium dihydrogen phosphate (KDP) batch cooling crystallization process as a case study. The use of the analytical estimator for prediction of the CSD is illustrated on a potassium dichromate case study.
2. Generic multi-dimensional model-based framework A generic multi-dimensional model-based framework (Fig. 1) has been developed to create the specific models describing various crystallization processes, based on the generic batch cooling crystallization model [3]. There are four main steps through which the problem-specific model is created on the basis of the generic multi-dimensional model-based framework. The first step is the problem definition for the crystallization process under study: the overall objective of the study is defined, and a process definition of the specific crystallization process being investigated is provided. Step 2, problem specification, involves the selection of the chemical system that needs to be studied and the collection of the relevant information about the process and the product.
Figure 1. Generic multi-dimensional model-based framework [3]
The third step is concerned with the listing of the necessary balance and constitutive equations needed to model the crystallization process. The balance equations consist of population, overall mass and energy balances for the defined crystallization volume
supplemented with energy balance equations for the cooling jacket. The constitutive equations contain a set of models describing nucleation, crystal growth rate, supersaturation, saturation and metastable concentration, as well as physical properties corresponding to the different types of chemical systems found in crystallization processes. Subsequently, a problem-specific model is created, which is verified through process (operation) analysis (step 4). If the specific model is satisfactory with respect to the desired performance, it is included in the model library. In this way, the generic model is adapted to reflect a specific case study and thereby allows the analysis of various crystallization operations and conditions; a minimal sketch of such a balance/constitutive model is given below.
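As an illustration of step 3, the fragment below assembles a deliberately simplified, one-dimensional instance of such a model: moment equations for the population balance coupled to a solute mass balance, with power-law nucleation and growth as constitutive equations. The kinetic parameter values are placeholders and the solubility polynomial is an assumed correlation, not the data used in the paper; the paper's actual model is two-dimensional and solved in ICAS-MoT.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical kinetic/physical parameters (placeholders, not the paper's):
kb, b = 1e6, 2.0        # nucleation rate constant / order
kg, g = 1e-6, 1.5       # growth rate constant / order
rho_c, kv = 2338.0, 1.0 # crystal density (kg/m3), volume shape factor

def c_sat(T):
    """Assumed KDP solubility polynomial (g solute / g solvent)."""
    return 0.2087 - 9.7629e-5 * T + 9.3027e-5 * T**2

def rhs(t, y):
    """One-dimensional moment reduction of the population balance:
    y = [mu0, mu1, mu2, mu3, C], with a linear 34 -> 28 C cooling ramp
    over the 2 h batch, as in Section 3.4."""
    mu, C = y[:4], y[4]
    T = 34.0 - 6.0 * t / 7200.0
    S = max(C / c_sat(T) - 1.0, 0.0)   # relative supersaturation
    B, G = kb * S**b, kg * S**g        # constitutive equations
    dmu = [B] + [j * G * mu[j - 1] for j in (1, 2, 3)]
    dC = -3.0 * rho_c * kv * G * mu[2] # solute consumed by crystal growth
    return dmu + [dC]

sol = solve_ivp(rhs, (0.0, 7200.0), [0.0, 0.0, 0.0, 0.0, 0.31],
                method="LSODA")
```

The generic framework stores many such constitutive alternatives; the specific model is assembled by selecting among them in steps 1-3.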
3. Potassium dihydrogen phosphate (KDP) crystallization process – a two-dimensional modeling case study
In this section, the application of the framework to the potassium dihydrogen phosphate (KDP) crystallization process is demonstrated using a two-dimensional modeling approach (adopted from [4, 5]).
3.1. Problem definition (step 1)
The overall objective of this problem is to observe the crystallization scenario based on generated concentration and temperature profiles, and to study the properties of the crystal particles, especially the total crystal mass and the average length and width of the crystals.
3.2. Problem specification (step 2)
The chemical system to be investigated is potassium dihydrogen phosphate (KDP) dissolved in water. Thus the two chemicals involved are KDP (solute) and water (solvent). The process equipment involved is a jacketed batch crystallizer.
3.3. Model development and solution (step 3)
Similar conditions and assumptions have been applied in this study as reported in the literature [4, 5], to allow validation of the generic modeling framework. The set of equations needed to represent the models for the specific operation phases was extracted from the generic model.
3.4. Process (operation) analysis (step 4)
The problem-specific model is first analyzed and then simulated in the ICAS-MoT modeling toolbox. Different simulation strategies are employed depending on the specific phase of the crystallization operation. The batch time for this case study is 2 hours, and the temperature is decreased linearly from 34 to 28°C until the end of the crystallization operation, as shown in Fig. 2 (left). Crystallization starts with the initial cooling operation (phase 1), where the solution was cooled from 34°C until it reached the saturation line after 1000 seconds (Fig. 2, right).
Figure 2. Temperature and concentration profiles
Once the supersaturation condition was achieved, 1.5 g of seed crystals was introduced into the solution to prevent excessive nucleation at the beginning (phase 2). The average length and width of the crystal seed was 100 μm. The seed then grows, based on the crystal growth phenomena, until the end of the operation (phase 3). The crystal population growth in phase 4 starts at the same time as phase 2 and lasts until the end of the operation. In this phase the physical properties of the crystal particles, such as the total crystal mass obtained, were determined. Fig. 3 (left) shows that a total crystal mass of 28 g has been produced, where the initial seed mass was 1.5 g. Meanwhile, Fig. 3 (right) shows that the average length and width of the crystals, initially at 100 μm, increase towards the end of the process, e.g. at t = 7200 seconds. The average length of the crystals grown from seeds is around 540 μm and the average width is approximately 280 μm, indicating that the crystals elongate because two different growth kinetic parameters are applied in the same crystal growth model. The significance of this result is that the volume of more complicated shapes can be determined accurately based on information about the average length and width.
Figure 3. Total crystal mass and average characteristic length profiles
4. Operational policy for achieving a target crystal size distribution (CSD) In this section, the systematic procedure to generate and employ an operational policy is presented. By employing an analytical CSD estimator, the policy to obtain a target CSD is generated. Eq. (1) represents the general analytical CSD estimator (f_n) and Eq. (2) the characteristic length (L). The target CSD is obtained by specifying the initial characteristic length (L0) and the initial seed CSD (f_n0). The kinetic growth parameters are obtained for the chemical system that is to be investigated. The application of the systematic procedure is illustrated for the potassium dichromate case study.
$$f_n = f_{n0}(L_0)\left(1 + k_g S^g t \gamma (1-p)\,(1+\gamma L_0)^{-(1-p)}\right)^{\frac{p}{p-1}} \tag{1}$$

$$L = \frac{1}{\gamma}\left(\left((1+\gamma L_0)^{1-p} + k_g S^g t \gamma (1-p)\right)^{\frac{1}{1-p}} - 1\right) \tag{2}$$
4.1. Generation of target CSD The initial seed CSD has been generated as a normal distribution using a mean of 156.89 μm and a standard deviation of 43.75 μm. The final characteristic length and target CSD are then calculated using the potassium dichromate growth parameters, a supersaturation (S) set point of 0.029, and an assumed total crystallization time (t) of 110 minutes.
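A direct transcription of Eqs. (1)-(2) is straightforward; the sketch below propagates the seed CSD of this case study under the stated set point and batch time. The size-dependent growth parameters (kg, g, γ, p) are hypothetical placeholders, since the potassium dichromate values are not reproduced here.

```python
import numpy as np

# Hypothetical size-dependent growth parameters (placeholders):
kg, g, gam, p = 1.0e-7, 1.0, 5000.0, 0.4   # units consistent with L in m

def target_csd(L0, fn0, S, t):
    """Analytical CSD estimator, Eqs.(1)-(2): propagate an initial seed
    distribution fn0(L0) under constant supersaturation S for time t,
    assuming size-dependent growth G = kg * S**g * (1 + gam*L)**p."""
    shift = kg * S**g * t * gam * (1.0 - p)
    L = ((1.0 + gam * L0)**(1.0 - p) + shift)**(1.0 / (1.0 - p))
    L = (L - 1.0) / gam                                           # Eq.(2)
    fn = fn0 * (1.0 + shift * (1.0 + gam * L0)**(p - 1.0))**(p / (p - 1.0))  # Eq.(1)
    return L, fn

# Seed CSD as in Section 4.1: normal, mean 156.89 um, std 43.75 um
L0 = np.linspace(1e-6, 400e-6, 400)
fn0 = np.exp(-0.5 * ((L0 - 156.89e-6) / 43.75e-6)**2)
Lf, fnf = target_csd(L0, fn0, S=0.029, t=110 * 60.0)
```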
4.2. Validation with closed-loop control A complete mathematical model for potassium dichromate has been generated by the generic multi-dimensional model-based framework. This model is now used to verify the operational policy. A PI controller has been used in order to maintain the operation at the generated operational policy (see Section 4.1). Fig. 4 shows that the concentration is well maintained at the set-point trajectory until the end of the operation.
Figure 4. Closed-loop control
4.3. Performance comparison and analysis The performance of the simulated operation with the generated policy to achieve the target CSD is highlighted in Fig. 5. Fig. 5 (left) shows the comparison of the target and simulated CSDs, while Fig. 5 (right) shows the corresponding total mass crystallized (approximately 18.6 g was achieved from 1.2 g of initial seed mass).
Figure 5. Target CSD predictions comparison and total crystal mass
5. Conclusions A generic multi-dimensional model-based framework has been integrated with an analytical estimator for the operation and study of batch cooling crystallization operations. The modeling feature of the framework, generating specific crystallization process operational models for different production scenarios, has been highlighted for the KDP crystallization process. The generation of an operational policy involving an analytical CSD estimator has been illustrated using a potassium dichromate case study, and the generated policy has been found capable of achieving the target CSD accurately.
6. Acknowledgement The PhD project of Noor Asma Fazli Abdul Samad is financed by a PhD scholarship from the Ministry of Higher Education of Malaysia and Universiti Malaysia Pahang.
References
[1] E. Aamir, Z.K. Nagy, C.D. Rielly, Chem. Eng. Sci., 65 (2010), 3602-3614.
[2] E. Aamir, Z.K. Nagy, C.D. Rielly, T. Kleinert, B. Judat, Ind. Eng. Chem. Res., 48 (2009), 8575-8584.
[3] N.A.F.A. Samad, R. Singh, G. Sin, K.V. Gernaey, R. Gani, Comput. Chem. Eng. (2010), doi:10.1016/j.compchemeng.2011.01.029.
[4] D.L. Ma, D.K. Tafti, R.D. Braatz, Ind. Eng. Chem. Res., 41 (2002), 6217-6223.
[5] R. Gunawan, D.L. Ma, M. Fujiwara, R.D. Braatz, Int. J. Mod. Phys., 16 (2002), 367-374.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A Multi-scale Systems Approach to Granulation Process Design Rohit Ramachandran a
a Rutgers University, 98 Brett Road, Piscataway, NJ, USA
Abstract The overarching goal of this study is to quantitatively understand the interactions between material properties, process parameters, equipment design and environmental conditions in order to predict the product performance of granules, as product performance is critical to the value of granulated products. In this study, multi-scale predictive models are presented for granulation processes, combining key material properties and process parameters with transport phenomena. The results obtained from the study enable a more quantitative and predictive understanding of granulation. Furthermore, the improved multi-scale model formulations can be used to alleviate the labor- and capital-intensive experimentation that currently plagues industrial processes. Keywords: Granulation, Multi-scale, First-principles
1. Introduction and Objectives In granulation, which is a particle design process, fine powdery solids are formed into larger granules with the aim of achieving targeted product attribute(s). Due to a lack of a systems-based understanding, target attributes are often not met in a timely and cost-efficient manner, with many industrial granulation operations suffering from high recycle and batch rejection rates. It has been recognized that the granulation process is a multi-scale operation in which final product quality is influenced by particle interactions, granule mechanisms, and flows, velocities and stresses. However, there is still a significant disconnect in terms of incorporating fundamental physics and chemistry into model forms at the different scales, and of addressing and integrating the multi-scale nature of the granulation process [1]. Therefore, the proposed objectives at the different scales to be studied are: 1) Micro- (single granule) scale: study liquid surface coverage of particles, particle-particle collision frequencies, collision velocity distributions and particle motion. 2) Meso- (population of granules) scale: study and formulate mechanistic descriptions of the granulation rate processes. 3) Macro- (vessel) scale: formulate and solve the overall distributed population balance model, incorporating heat and mass transfer, and perform model validation studies to ensure the overall model is predictive. 4) Multi-scale: integrate the partial models at each scale and facilitate information exchange between the scales for the purpose of simulating and validating the evolutions/distributions of key granule properties.
2. Multi-scale Model Development Based on Figure 1, it is evident that granulation modeling is a multi-scale operation which requires the implementation of several modeling strategies (CFD, DEM, PBM, microstructure formation models) with adequate information exchange between them in
the form of distributions of granule properties and liquid content, collision velocities and frequencies, accessible liquid fraction and thickness, and accurate granulation kernels for nucleation, aggregation and breakage (discrete phenomena).
Figure 1: Schematic of the proposed multi-scale configuration. VoF is the volume of fluid method, which is a sub-class of CFD [2].
2.1 Micro-scale
At the micro-scale, the focus will be on specific objective 1: liquid surface coverage for distribution nucleation (which occurs in fluid bed granulation) for single and multiple hydrophilic/hydrophobic powders. We have demonstrated a methodology for computer simulation of granule structure formation that is able to adequately quantify the liquid surface coverage as a function of key material properties (e.g. contact angle, liquid surface tension, surface roughness and non-sphericity) [2]. The morphology of the primary particles was modeled using so-called Gaussian blobs (i.e., spheres whose radius is modulated by a Gaussian-correlated random surface). A class of such objects is fully specified by a set of three numerical descriptors: the mean radius of gyration, the surface roughness amplitude and the surface roughness correlation length, L. A population of distinct Gaussian particles with these mean parameters is generated by repeating the procedure for different random initializations of the un-correlated field X. The parameters can be conveniently measured on real particle samples using digital image analysis. Figure 2 depicts the spreading of a liquid droplet on a primary particle entity; a sketch of the blob construction is given after the flow-profile discussion below.
Figure 2: Spreading of a liquid droplet on a primary particle entity at different times.
At the micro-scale, flow profiles of the granulation process are also investigated. Figure 3a depicts the energy distribution of particles and Figure 3b the mean velocity and mean square fluctuation velocity obtained at the side wall of the drum. Collision velocity distributions and frequencies (i.e., collision-scale behavior) can be computed from the granular temperature. Similar flow profiles have been computed for fluid-bed and high-shear granulation (figures not shown).
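A two-dimensional analogue of the Gaussian-blob construction described above can be sketched in a few lines: white noise on the circle is correlated by convolution with a Gaussian kernel and used to modulate the radius. This is only an illustration of the idea; the actual implementation in [2] works with three-dimensional surfaces and fits the descriptors to image-analysis data.

```python
import numpy as np

def gaussian_blob_2d(R0, amplitude, corr_len, n=512, seed=None):
    """2-D analogue of a 'Gaussian blob': a circle of mean radius R0 whose
    radius is modulated by a Gaussian-correlated random field.  R0,
    amplitude and corr_len play the roles of the three descriptors in the
    text (mean radius, roughness amplitude, roughness correlation length L)."""
    rng = np.random.default_rng(seed)
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    X = rng.standard_normal(n)                  # un-correlated field X
    # Correlate by periodic convolution with a Gaussian kernel of width corr_len
    lag = np.minimum(np.arange(n), n - np.arange(n)) * (2 * np.pi / n) * R0
    kernel = np.exp(-0.5 * (lag / corr_len)**2)
    field = np.real(np.fft.ifft(np.fft.fft(X) * np.fft.fft(kernel)))
    field = (field - field.mean()) / field.std()
    r = R0 + amplitude * field                  # modulated radius r(theta)
    return r * np.cos(theta), r * np.sin(theta)

x, y = gaussian_blob_2d(R0=1.0, amplitude=0.05, corr_len=0.2, seed=1)
```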
2.2 Meso-scale
At the meso-scale the focus will be on specific objective 2: studying and formulating mechanistic representations of the granulation rate processes, namely aggregation, nucleation and breakage. We propose to use the micro-scale models (discussed in the previous section) to provide key inputs to the formulation of the mechanistic models. For nucleation, we have formulated a mechanistic kernel for immersion nucleation that, in combination with the other rate processes, is able to predict the evolutions and distributions of key granule properties. The current kernel is derived from collisional/transition state theory and accounts for important nucleation kinetics and thermodynamics such as liquid spray rate, temperature, contact angle and surface tensions.

Figure 3: Particle dynamic simulation of a drum granulator. (a) Snapshot of the side view, where particles have been colored by total energy (kinetic plus potential energy). (b) Mean velocity vectors and mean square fluctuational velocity field obtained at the side wall.

Aggregation of particles typically occurs as a consequence of the distribution nucleation mechanism. The key to determining a fundamental and predictive aggregation model/kernel is to accurately determine the accessible liquid fraction: the liquid surface coverage and the accessible liquid fraction need not be the same, since, due to steric hindrance and particle roughness, the liquid surface coverage actually available (the accessible liquid fraction) may be less than the theoretical liquid surface coverage tabulated. The accessible liquid fraction is determined via the shooting method, in which a large number of primary particles are sequentially introduced towards the existing granule using the ballistic deposition algorithm [2]; instead of being integrated into the existing granule structure, the number of incoming particles that would end up in a liquid-covered region is counted. Via numerical simulations, we have determined the accessible liquid fraction as a function of the liquid-to-solids ratio (Figures 4a and 4b). It was observed that the accessible liquid fraction does not depend linearly on the liquid-to-solids ratio but rather sigmoidally. With more data points, a sigmoidal function can be fitted to obtain the accessible liquid fraction for any liquid-to-solids ratio. The final quantity of interest is the thickness of the liquid layer that is present on the granule surface and can contribute to kinetic energy dissipation during collisions. This quantity was also evaluated by the shooting method, but instead of counting the impacted particles, the total volume of liquid displaced during each collision by incoming particles was noted (Figure 4c). The ratio of the total displaced liquid volume over the total volume of the incoming particles was used as a measure of the average liquid present on the granule surface. Based on these computations, a predictive aggregation model was formulated based on the Stokes criteria [2]. For an accurate and predictive description, the breakage rate process must also be determined mechanistically. For a three-dimensional single-component system we have formulated a predictive kernel that
is based on key process parameters and material properties of both the solid and the liquid, formulated as the quotient of the external stress applied over the intrinsic strength of a granule. This is analogous to the Stokes deformation criterion and is a realistic characterization of granule breakage. The kernel shapes observed are in agreement with the expected phenomenological behavior [3].
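Both kernels above invoke Stokes-type criteria. For orientation, the sketch below shows one common (Ennis-type) form of the viscous Stokes criterion for aggregation: a collision leads to coalescence when the viscous Stokes number falls below a critical value set by the binder-layer and asperity heights. The actual kernel of [2] additionally weights the collision outcome by the accessible liquid fraction discussed above; the numbers in the example are arbitrary.

```python
import numpy as np

def aggregation_success(radius, rel_velocity, mu_liquid, rho_granule,
                        e_restitution, h_liquid, h_asperity):
    """Ennis-type Stokes criterion underlying aggregation kernels:
    two colliding granules stick if the viscous Stokes number St_v is
    below the critical value St* set by the liquid-layer thickness.
    A sketch of one common form, not the paper's full kernel."""
    st_v = 8.0 * rho_granule * rel_velocity * radius / (9.0 * mu_liquid)
    st_crit = (1.0 + 1.0 / e_restitution) * np.log(h_liquid / h_asperity)
    return st_v < st_crit

# Example: 100 um granules colliding at 0.1 m/s in a viscous binder
print(aggregation_success(radius=50e-6, rel_velocity=0.1, mu_liquid=0.1,
                          rho_granule=1500.0, e_restitution=0.2,
                          h_liquid=5e-6, h_asperity=1e-6))   # -> True
```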
Figure 4: Liquid surface coverage as a function of increasing liquid-to-solids ratio: (a) 0.01, (b) 0.05; (c) schematic of the shooting method.
Figure 5: Shape of the mechanistic kernel with respect to two dimensions, keeping the third constant: (a) volume of gas constant and (b) volume of liquid constant.
2.3 Macro-scale
At the macro-scale the focus will be on specific objective 3: 1) formulation and efficient numerical solution of the population balance model (PBM) and 2) parameter estimation and model validation studies.
We have developed a four-dimensional population balance model (Equation 1):

$$\frac{\partial F}{\partial t} + \frac{\partial}{\partial s_1}\!\left(\frac{ds_1}{dt}F\right) + \frac{\partial}{\partial s_2}\!\left(\frac{ds_2}{dt}F\right) + \frac{\partial}{\partial l}\!\left(\frac{dl}{dt}F\right) + \frac{\partial}{\partial g}\!\left(\frac{dg}{dt}F\right) = \Re_{nuc} + \Re_{agg} + \Re_{break} \tag{1}$$

where F(s1, s2, l, g, t) represents the population density function such that F(s1, s2, l, g, t) ds1 ds2 dl dg is the moles of granules with solid volume between s1,2 and s1,2 + ds1,2, liquid volume between l and l + dl, and gas volume between g and g + dg. The subscripts
1 and 2 of s denote the solid volume of the first and second component, respectively. (Note: many pharmaceutical compounds necessitate the characterization of multiple solid components.) The partial derivative term with respect to s1,2 accounts for the layering of fines onto the granule surfaces; the partial derivative term with respect to l accounts for the drying of the liquid and the re-wetting of granules; the partial derivative with respect to g accounts for consolidation which, due to compaction of the granules, results in an increase of pore saturation and a decrease in porosity. We use a finite volume discretization method in combination with a novel implicit method for the numerical integration to solve the full PBM. The particle population is first discretized into sub-populations and the population balance is formulated for each of these semi-lumped sub-populations. This is obtained by integrating the population balance equation (PBE) over the domain of the sub-populations and re-casting the population into finite volumes. The key to speeding up the computation is the off-line calculation of the complex multi-dimensional integrals present in the aggregation and breakage terms. The complex triple integrals are thereby cast into simpler algebraic equations, major portions of which are computed once, a priori to the start of the simulation; the remainder, being much less computationally intensive, is updated at every time step. The simulation is further expedited by implementing a message passing interface (MPI) parallel programming framework, where the population balance equation is split amongst the different processors. The bulk of the computational time results from solving the integrals present; in a single-processor job, these integrals are solved entirely by one processor. We have successfully achieved a speedup of 6 using 10 processors. We use a multi-zonal approach whereby the hydrodynamics and collision-scale behavior are solved on a fine scale of CFD/DEM grid cells, while the PBM is solved on a coarser set of compartments or zones. The VoF simulations are solved a priori, so that these simulations and the resulting empirical functions are obtained and ready to use within the PBM, using information obtained from the CFD/DEM. For the solution of this hybrid model, information exchange takes place at the coarse grid points.
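The off-line precomputation idea can be illustrated on a one-dimensional aggregation term: the bin receiving each pairwise aggregate is tabulated once, so that the run-time birth/death evaluation reduces to algebraic gathers. This sketch uses the simplest binning (no cell-average weighting, so mass is only approximately conserved) and a constant kernel; it stands in for, rather than reproduces, the four-dimensional scheme described above.

```python
import numpy as np

def precompute_agg_table(v_edges):
    """Off-line step: on a fixed volume grid, tabulate once which cell each
    pairwise aggregate v_i + v_j falls into, so that the aggregation birth
    term becomes a simple algebraic gather at run time."""
    v = 0.5 * (v_edges[:-1] + v_edges[1:])          # cell-centre volumes
    target = np.searchsorted(v_edges, np.add.outer(v, v)) - 1
    target[target >= len(v)] = -1                   # aggregates leaving the grid
    return v, target

def agg_rhs(N, target, beta=1.0):
    """Aggregation birth/death for the cell number concentrations N
    (constant kernel beta for brevity):
    dN_k/dt = 0.5*sum_{i,j: v_i+v_j in k} beta*N_i*N_j - beta*N_k*sum_j N_j"""
    dN = -beta * N * N.sum()                        # death by aggregation
    for i in range(len(N)):
        for j in range(len(N)):
            k = target[i, j]
            if k >= 0:
                dN[k] += 0.5 * beta * N[i] * N[j]   # birth into cell k
    return dN

edges = np.geomspace(1e-3, 10.0, 31)
v, table = precompute_agg_table(edges)
dN = agg_rhs(np.exp(-v), table)
```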
3. Conclusions The results obtained from the proposed work enable a more quantitative and predictive understanding of granulation. The results also contribute three specific advances to the field of particle technology, namely 1) theoretical understanding of the formation and interactions of multiple solid components with liquid droplets, 2) a multi-scale model coupling fundamental physics/chemistry with systems-level process parameters and equipment-level flows, stresses and velocities, and 3) novel numerical techniques for the efficient solution and application of the multi-scale model to granulation process design. The multi-scale simulation will also be combined with model-based experiments to demonstrate the global validity of the model.
References
1. G.D. Ingram and I.T. Cameron, 2004, Challenges in Multi-scale Modeling and its Application to Granulation Systems, Dev. Chem. Eng. Mineral Processes, 12, 293-308.
2. F. Stepanek, P. Rajniak, R. Chern, C. Mancinelli, R. Ramachandran, Distribution and accessibility of binder in wet granules, Powder Technology, 2009, 60, 4019-4029.
3. R. Ramachandran, C.D. Immanuel, J.D. Litster, F.J. Doyle III and F. Stepanek, A mechanistic model for granule breakage in population balances of granulation, Chemical Engineering Research and Design, 2009, 87, 598-613.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Multi-scale modeling of activated sludge floc structure formation in wastewater bioreactors
Irina D. Ofiţeru,a,b Micol Bellucci,b Vasile Lavric,a Cristian Picioreanu,c Thomas P. Curtisb
a University Politehnica of Bucharest, Chemical Engineering Department, Polizu 1-7, Bucharest 011061, Romania
b Newcastle University, School of Civil Engineering and Geosciences, Cassie Building, Newcastle upon Tyne NE1 7RU, United Kingdom
c Delft University of Technology, Department of Biotechnology, Julianalaan 67, 2628 BC Delft, The Netherlands
Abstract A multi-scale computational model was created for the formation of activated sludge floc structure. The model couples mass balances for substrates and biomass at reactor scale with an individual-based approach for the floc morphology, shape and microcolony development. Among the novel model processes included are the group attachment of micro-flocs to the core structure and the clustering of nitrifiers. Simulation results qualitatively describe the formation of globular colonies of ammonia and nitrite oxidizers in the extracellular polymeric substance produced by heterotrophic microorganisms, as also observed in fluorescence in situ hybridization images. These results are the first step towards a multi-scale model of the activated sludge wastewater treatment systems, which could also be extended to other engineered biological systems. Keywords: activated sludge floc, nitrification, individual-based model, colony size
1. Introduction All open engineered biological communities comprise biologically and physically complex environments in which the macro-level performance is a function of the emergent properties of micro-level changes in composition and activity. The single most important such system is arguably activated sludge. Good mathematical models are important for improved design and operation, starting with the bioreactor and the separation unit, and including the floc/biofilm, the main processing units for nutrient removal. The processes involved span different scales, all reflected in the modeling approaches. At the macro-scale (reactor), the floc is most often seen as a simple pseudo-homogeneous sphere, with no clusters of species. When modeled as an entity at the micro-scale, the floc is a structured unit in which relations develop among different microbial species. Despite the increasing number of experimental studies on the microbial diversity and ecology of nitrifying bacteria (Maixner et al., 2006), there is no unifying approach to predict floc characteristics based on their environment. Modeling how the heterogeneity of floc structure, distribution and dynamics contributes to the system performance is challenging. Current models do not aim at linking micro-scale floc formation with the main contributors to the biological processes and bioreactor performance. The goals of this modeling study are: (i) to reproduce the observed floc-like structures; (ii) to integrate the micro-scale model for the floc with the bioreactor operation. The bottom-up approach gives the advantage of retaining important biological information about the floc, seen as an aggregate of microorganisms and abiotic particles.
2. Methods 2.1. Experimental The microbial community structure within activated sludge flocs was analyzed in samples from a municipal wastewater treatment plant (Spennymoor, County Durham, UK) by the combined use of fluorescence in situ hybridisation (FISH) and confocal laser scanning microscopy (Maixner et al., 2006). The images obtained showed flocs of different shapes and dimensions, all having common characteristics, which were further abstracted into the model features. In all images obtained, ammonia oxidizing bacteria (AOB) and nitrite oxidizing bacteria (NOB) formed compact globular micro-colonies. A typical example is presented in Figure 1. Figure 1. Fluorescence in situ hybridization of an activated sludge floc, observed by confocal laser scanning microscopy. Green – heterotrophic bacteria; blue – ammonia oxidizing bacteria (AOB); yellow – nitrite oxidizing bacteria (NOB). 2.2. Model description To describe the observed floc morphology, the system was represented at two scales. First, at the micro-scale we developed an individual-based model for microbial growth and spreading, considering the main guilds of microorganisms involved (heterotrophs – consuming the carbon source and oxygen; ammonia oxidizing bacteria – consuming ammonia and oxygen and producing nitrite; nitrite oxidizing bacteria – consuming nitrite and oxygen), based on Martins et al. (2004). The steps considered in the floc evolution are: microbial growth and spreading, attachment of individual cells and attachment of groups of cells. Only the heterotrophs produce extracellular polymeric substances (EPS), which surround and separate the cells. In contrast, the AOB and NOB do not produce EPS and therefore grow in distinct clusters. Together with the three types of microorganisms, EPS is also represented in this model by particulate entities. Microbial processes (see Table 1) are driven by the local concentrations of substrates (carbon source CS, ammonium CNH4, nitrite CNO2, and oxygen CO2). The two-dimensional substrate fields are found by solving diffusion-reaction mass balances for the substrates in the floc and its surroundings. A constant-thickness (10 μm) mass transfer boundary layer follows the floc margins. Second, at the reactor scale, the mass balances for the substrates are constructed by considering that the developed floc is representative of the whole biomass growing inside a continuous reactor with recycle and purge. The biomass balance over the reactor-separator system includes formation in the flocs and elimination via the purge. 2.3. Parameters Values of yields and kinetic parameters were chosen according to established activated sludge and biofilm models (Wanner et al., 2006), namely: YHET = 0.61 gCODX/gCODS; YAOB = 0.33 gCODX/gN; YNOB = 0.08 gCODX/gN; YEPS = 0.18 gCOD/gCODS; μm,HET = 3 d⁻¹; μm,AOB = 0.76 d⁻¹; μm,NOB = 1.1 d⁻¹. The influent flowrate was 3.43 m³/d, with concentra-
tions Cin,S = 0.04 kgCOD/m³, Cin,NH4 = 0.04 kgN/m³, Cin,NO2 = 0.001 kgN/m³, and Cin,O2 = 0.005 kg/m³. A recycle/influent ratio of 0.2 and a purge fraction of 0.01 were considered, which, for a reactor volume of 1 m³, result in HRT = 0.3 d and SRT = 5.1 d. Oxygen was supplied by aeration with a specific flow of 12 kg d⁻¹ m⁻³ and the oxygen saturation concentration was 0.009 kg/m³. Diffusion coefficients in the floc were assigned values equal to those in bulk water, namely: DO2 = 1.73·10⁻⁴ m²/d; DNH4 = 1.21·10⁻⁴ m²/d; DNO2 = 1.03·10⁻⁴ m²/d; DS = 4.32·10⁻⁵ m²/d. The total biomass of the inoculum was 0.06 kg, resulting in an initial number of flocs of 10¹⁴.
Table 1. Stoichiometric matrix and process rates for the growth of heterotrophs, ammonia-oxidizing bacteria (AOB) and nitrite-oxidizing bacteria (NOB). COD means "chemical oxygen demand".
Process | XHET (kgCOD/m³) | XAOB (kgCOD/m³) | XNOB (kgCOD/m³) | XEPS (kgCOD/m³) | CS (kgCOD/m³) | CO2 (kg/m³) | CNH4 (kgN/m³) | CNO2 (kgN/m³)
Growth heterotrophs | 1 | – | – | YEPS/YHET | −1/YHET | −(1−YHET−YEPS)/YHET | – | –
Growth AOB | – | 1 | – | – | – | −(3.42−YAOB)/YAOB | −1/YAOB | 1/YAOB
Growth NOB | – | – | 1 | – | – | −(1.15−YNOB)/YNOB | – | −1/YNOB

Process rates (kgCOD/(m³·d)):
Growth heterotrophs: μm,HET · CS/(KS,HET + CS) · CO2/(KO2,HET + CO2) · XHET
Growth AOB: μm,AOB · CNH4/(KNH4,AOB + CNH4) · CO2/(KO2,AOB + CO2) · XAOB
Growth NOB: μm,NOB · CNO2/(KNO2,NOB + CNO2) · CO2/(KO2,NOB + CO2) · XNOB
2.4. Solution method The model was implemented in a combination of MATLAB code (ver. 2008b, MathWorks, Natick, MA) as the main algorithm driver, COMSOL Multiphysics (ver. 3.5a, Comsol Inc., Burlington, MA) finite element methods for solving the diffusion-reaction equations, and our own Java code for the individual-based floc model. The model solution involves a sequence of steps performed in a time loop (time step Δt = 0.002 days). At any time t the following are successively solved: (a) the mass balances for substrates at steady state, to get the 2-d concentration fields (with COMSOL finite element methods) given the
2-d biomass distribution and the given concentrations in the reactor liquid (which are the boundary conditions); (b) biomass growth, division and spreading according to the local substrate concentrations (MATLAB and Java); (c) attachment of individual cells and micro-flocs (from a pool of structures previously created under the same conditions); (d) the time evolution of the reactor concentrations, by coupling the reactor-scale balance with the fluxes produced by all the flocs. With the floc geometry and biomass distribution so obtained, a new time step starts. A new model feature is that spreading takes place in two steps: within the micro-colonies of nitrifiers, and between these colonies and the heterotrophs plus EPS.
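Step (d) can be illustrated in isolation: the sketch below integrates a reactor-scale substrate balance, with the summed floc fluxes represented by a placeholder function standing in for the micro-scale solution of step (a), and one plausible form of the aeration term built from the parameters of Section 2.3.

```python
import numpy as np
from scipy.integrate import solve_ivp

Q, V = 3.43, 1.0                              # influent flow (m3/d), volume (m3)
C_in = np.array([0.04, 0.04, 0.001, 0.005])   # S, NH4, NO2, O2 (kg/m3)
aer_flux, O2_sat = 12.0, 0.009                # kg/(d m3), kg/m3

def floc_fluxes(C):
    """Placeholder for the summed substrate consumption fluxes of all
    flocs (kg/d); in the model these come from the micro-scale solution
    of step (a).  A negative entry means net production (e.g. nitrite)."""
    return np.array([0.1, 0.08, -0.02, 0.12]) * C / (1e-3 + C)

def rhs(t, C):
    dC = Q / V * (C_in - C) - floc_fluxes(C) / V
    dC[3] += aer_flux * (1.0 - C[3] / O2_sat)   # assumed aeration-term form
    return dC

sol = solve_ivp(rhs, (0.0, 4.0), C_in, method="LSODA")   # 4 days, as in Fig. 3
```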
3. Results and discussions In Figure 2 a typical structure obtained after 3.6 days of simulation time is presented. The simulation describes well the microscopy image from Figure 1, with compact and distinct AOB and NOB micro-colonies kept within the HET and EPS matrix. Micro-floc attachment results in an irregular floc shape during the whole development (Figure 2, right). While only small COD, O2 and ammonium gradients developed in the floc, the local nitrite concentrations reflect the presence of the corresponding bacterial species, being higher around the AOB micro-colonies (arrow a), which produce it, and lower around the NOB micro-colonies (arrow b), which consume it.
Figure 2. Example of simulated floc development. Left: microbial colonies in the floc (AOB - blue, NOB - red, HET - green, EPS - grey) at 3.6 days and the corresponding nitrite concentration distribution. Arrows point to: (a) nitrite accumulation due to the high density of AOB colonies in that region; (b) nitrite consumption by NOB; (c) concentration boundary layer. Right: different stages of floc development.
Figure 3. Substrate concentrations in the reactor reaching a quasi-steady state after 3.5 days.
The solution of the mass balances for the substrates in the reactor (presented in Figure 3) gives the evolution of their concentrations in time, corresponding to the biomass growth. In the initial stage, when there is an abundance of ammonium but few cells and small flocs, there is an accumulation of nitrite, the intermediate product of nitrification. All substrates approach a pseudo-steady state after 3.5 days, characterized by much lower concentrations than in the influent. The heterotrophs, consuming their substrate both for growth and for EPS production, lead to a faster decrease of COD than of ammonium and nitrite (the substrates used by the nitrifiers). Consequently, while the heterotrophs almost stop growing, the nitrifiers continue to grow, divide and increase their colony size within the EPS matrix formed by the heterotrophs.
Conclusions An individual-based model was developed for an activated sludge floc inside a continuous bioreactor. The different scales of mass transport and biomass growth were considered. The main features of the floc as captured by FISH, e.g. globular colonies of nitrifiers surrounded by heterotrophs and EPS within irregularly-shaped flocs, were qualitatively reproduced by the simulation. These results are the first step towards a multi-scale model of activated sludge wastewater treatment systems.
Acknowledgements IDO acknowledges that this work was supported by CNCSIS-UEFISCSU, project number PN II-RU 29/09.08.2010, and by the European Reintegration Grant FLOMAS.
References
F. Maixner, D.R. Noguera, B. Anneser, K. Stoecker, G. Wegl, M. Wagner, H. Daims, 2006, Nitrite concentration influences the population structure of Nitrospira-like bacteria, Environmental Microbiology, 8(8), 1487-1495.
A.M.P. Martins, C. Picioreanu, J.J. Heijnen, M.C.M. van Loosdrecht, 2004, Three-dimensional dual-morphotype species modeling of activated sludge flocs, Environmental Science and Technology, 38(21), 5632-5641.
O. Wanner, H. Eberl, E. Morgenroth, D. Noguera, C. Picioreanu, B. Rittmann, M.C.M. van Loosdrecht, 2006, Mathematical Modeling of Biofilms, London, UK: IWA Publishing.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A Multi-layered Ontology for Physical-Chemical-Biological Processes
Heinz A Preisig∗
Department of Chemical Engineering, NTNU, Trondheim, Norway
∗ [email protected]
Abstract Our models are the result of a step-by-step procedure. This procedure leads to an abstraction: a super-structure model that is an ontology for a very broad class of processing systems. The steps are reflected in strata of definitions. The mathematical result, which for space limitations is not included, is a set of lists and equations that encompass the defined domain. One of the main features making it so general is the outsourcing of the opaque components, namely the kinetics and the material descriptions. Keywords: Computer-aided modelling and engineering, mathematical modelling
1. The Stage Engineering is increasingly model-based, thereby lifting modelling to the level of an enabling technology without which engineering is no longer competitive or even feasible. With the operations associated with generating coded models being the main bottleneck, modelling, as a research and development domain, moves into a more dominant position. There are many challenges to be faced, not least the formulation of the model itself and its efficient handling all the way to generating target code. Currently the base models are hand-written, meaning that published or developed model equations are mapped into a target code and then made available as a module in a library. Interactive tools are then used to assemble plant models by combining these library modules of unit operations or the like. Constructing models using a software tool must provide an advantage over the currently used technology, namely the ability to check the structure of the model as it is being built. Our software tools build models based on an underlying theory, an ontology, which mostly captures the physics, augmented with physical chemistry, kinetics and geometry. The ontology can be seen as a type of super-model which describes all the processes in the target domain, namely Physical-Chemical-Biological (PCB) systems. With this ontology being used in connection with the systematic construction of process models, it seems obvious that the process of generating the model is strongly reflected in the structure of the ontology, which is precisely what will be discussed in this paper, though without going into the mathematical representation.
2. The Process of Establishing a Model The description of a process behaviour is a function of time and space. The first consideration is how the space is subdivided into "principal components" and where they interact. Thus two types of primitives are defined, namely capacities and communications between capacities, for both of which one has models. The models of the capacities
provide the time behaviour in the form of conservation principles, which implicitly also define the state of the particular capacity. The models of the communications between capacities, which for physical systems is the transfer of extensive quantities, are the result of making time-scale assumptions about the physical systems that actually "perform" the transfer. On a smaller scale, the stochastic behaviour also needs to be considered and, on the scale transition, the proper averaging of the microscopic population balances. Stepping back and looking at this process, what one defines is a structured containment, which represents the space the physical process occupies, embedded in the part of the world being considered, or perhaps better, having to be considered. The latter enforcement reflects having to draw an outer boundary beyond which one assumes the process of interest to be independent. Each of these process definitions thus defines a type of "universe", within which everything happens that is of relevance to the process of interest on the time-scale of interest. Similar considerations apply to the nature of the capacities, namely whether they can be seen as macroscopic or microscopic, distributed in 1-3 D, lumped, or of event-dynamic nature. These considerations enter the definition of the model very early, and they have a profound impact on the ability of the model to mimic the modelled process. In a next step, the contents of the containment is defined, on a first level more implicitly than explicitly: namely, which parts are associated with which conserved quantities, these being mainly mass, energy and momentum. At this point it is not necessary to define precisely what type of mass, or what form of energy. Only in the next stage is this information given, with the type of mass being application-dependent. The transfer may now also be considered as semi-permeable, in the sense that an individual connection has the ability to transfer, for example, a set of species whilst others are inhibited (a simple model for a membrane, for example), or certain forms of energy. This knowledge is required to establish the balance equations; the transfer laws and the kinetics are introduced, which in turn introduce additional variables, which again have to be derived from the basic state, namely the component mass, the energy and the momentum. This is then usually the time when control is superimposed, adding another network of signal-processing nodes and signal arcs from and to the physical topology, representing measurements and control actions.
3. A Layered Ontology The ontology is a template from which the PCB system models are constructed. It is a super-model that captures all aspects of the theory associated with physical-chemical-biological systems. Thus, what is not included in the ontology cannot be modelled with the tool utilising the ontology. The ontology described here is, on a gross scale, layered and largely reflects the process, or sequence of operations, of constructing a model for a PCB system. The structure is organised in layers and strata of layers, each adding another level of detail to the description. Each layer or stratum can be seen as an optical template adding another colour to the overall framework. The ontology captures the behaviour of PCB systems in time and space for multi-component/species systems. Clearly it has its limits, but it is constructed to cover a very wide spectrum of processes and it can be extended as necessary. In contrast to ? and ?, the ontology is limited to the physical core, with the opaque pieces like the material model and kinetics being outsourced. The base layer is a graph, which represents the topology of the model:
Layer 1: The Graph The graph consists of a set of nodes connected by directed arcs. Thus the model describes the interaction of subsystems, which are connected over communication paths. At this point, neither the nature of the subsystems nor the nature of the communication is being defined. This follows in the second stratum of layers: Layer 2: Filling in the Graph The graph needs to be populated in order to represent PCB systems. What it is populated with is assigned to certain parts of the overall graph. Thus first the nature of the containers is defined: Layer 2a: Basic Nature of System: Two types of containers are required, namely • physical containers to capture the physical containment of the plant, which we term the physical topology; • a signal-processing containment, which is used to describe the units added to run the physical containment and its contents. Layer 2b: Basic Tokens – Tokens are defined which characterise the contents: mass, energy and momentum. Assigning different conductivities to arcs defines basic types of arcs. Layer 2c: Token Morphology: Tokens are detailed further by adding a morphology attribute, such as species for mass, yielding component mass, and internal, kinetic and potential energy extending the token energy in more detail. Similarly, in arcs, energy is augmented with morphology attributes like conductive heat, radiation etc., and species information with phase. Layer 2d: Token Rules – In order to compute the sub-networks, rules are introduced. These rules define how tokens@form propagate in the network and are generated through interaction. The mass@species@phase form a stratum of interacting layers. Mass also interacts with energy, as mass induces energy. Layer 2e: State – The tokens in a node have a state, given by the conserved quantities: mass, species mass, energy and momentum. The incidence matrices of the coloured sub-graphs enter directly into the formulation of the conservation laws. Layer 3: Equations Equations and list definitions are used to map the topology into a mathematical representation of the model. For the representation of the physical topology, three different types of capacities are introduced, namely: a reservoir, being an infinitely large capacity with given intensive properties; finite capacities, which may be 3-D, 2-D, 1-D or 0-D distributed, the last representing lumped systems; and, finally, infinitely small capacities, again in general distributed. These infinitely small capacities are usually the result of making an event-dynamic assumption about a physical object. A very common one is the surface representing a phase boundary. There are no specific rules for the signal operations, except that they must represent realisable systems and be representable in the form of equations.
Equations are constructed as expressions, where the equivalence sign is an operator. Expressions are recursive objects, forming a bipartite tree with variables as leaf nodes. The two sets of nodes are expressions and variables in one set and operators in the other. This representation is known as an abstract syntax tree. Each term in a sum must have the same physical units, and the arguments of unitary functions must be without physical units. By implementing physical variables that can be indexed and carry physical units, and by defining the mathematical operations with the physical units attached and checked whenever applicable, the expressions are guaranteed to be consistent in terms of physical units. The equations for the representation of the physical topology split into four distinct types, namely the equations describing a) the capacities (storage of extensive quantity), b) the transfer laws (transfer of extensive quantity), c) the transposition of extensive quantity and d) the state variable transformations, the latter mainly providing the necessary link between the conserved extensive quantities and the material behaviour and the spatial characteristics.
a: Storage of conserved extensive quantity is described by the conservation equations. They implicitly define the fundamental state space in which the model is being defined. The quantity being accumulated is the (fundamental) state variable. The definition of the conservation equations utilizes the incidence matrices of the coloured topologies. The base tokens provide the direct link to the fundamental state variables. In physical systems mass induces energy and momentum. The above-defined rules reflect the existence of abstract surfaces (thermodynamic walls) that are semi-permeable with respect to the tokens and some of their forms:

$$\dot{x} = \sum_{\mathrm{node.form}} \dot{x}_{\mathrm{node.form}}, \qquad \dot{x}_{\mathrm{node.form}} = \sum_{\mathrm{arc.token.form}} F_{\mathrm{arc.token.form}}\, \hat{x}_{\mathrm{arc.token.form}} + N\, \tilde{x}_{\mathrm{node.token.form}}$$
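As an illustration of how such incidence matrices enter the balances, the following minimal Python sketch (ours, not part of the ontology implementation; all names and numbers are hypothetical) assembles the mass balance of a three-node network from the incidence matrix of its mass-coloured graph:

```python
import numpy as np

# Directed arcs of a mass-coloured toy graph: (source, target).
nodes = ["reservoir", "tank1", "tank2"]
arcs = [("reservoir", "tank1"), ("tank1", "tank2")]

# Incidence matrix F: -1 where an arc leaves a node, +1 where it enters.
F = np.zeros((len(nodes), len(arcs)))
for j, (src, dst) in enumerate(arcs):
    F[nodes.index(src), j] = -1.0
    F[nodes.index(dst), j] = +1.0

xhat = np.array([2.0, 1.5])          # transfer rates on the arcs
xtilde = np.array([0.0, 0.3, 0.0])   # production by transposition (e.g. reaction)
N = np.eye(len(nodes))               # stoichiometric map; identity for one species

# Conservation law of the coloured graph: xdot = F @ xhat + N @ xtilde.
xdot = F @ xhat + N @ xtilde
print(dict(zip(nodes, xdot)))        # {'reservoir': -2.0, 'tank1': 0.8, 'tank2': 1.5}
```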
The matrices F_{arc.token.form} are the incidence matrices of the coloured physical topologies. They may be block matrices, which are 2-level indexed. The index broadcasting can be elegantly rewritten using the Khatri-Rao product. b: Transport of extensive quantity is described by functions of the states of the two connected systems, namely the conjugates to the potentials that drive the transport. Thus the Gibbs view on transport is implemented. c: Transposition of extensive quantity mainly describes the reactions occurring inside a system, which with the above system definition includes the boundaries, and the phase transitions, which again occur in the boundary systems. In both cases the description is again a function of variables that derive from the state. The latter two types of equations induce the definition of two classes of variables, namely secondary states, which are all those variables introduced in these descriptions that are not fixed constants, and parameters, which are the fixed values. In addition we found it useful to also introduce the term conditions, which we use for the secondary states of a neighbouring system, a concept that comes in particularly handy when the connected system is a reservoir. d: State variable transformations link the primary states with the secondary states. In our experience, this set is the most complex one and its definition plays a central role in establishing the model. Besides some simple definitions like the intensive properties, it also includes the description of the material behaviour and the geometrical relations. The material description is central, and we found it advantageous to utilize a two-level approach. On the level of the ontology and its use, we define all relations based on the energy functions. So all the properties appearing in the various relations are given as
either canonical variables of the energy functions or partial derivatives of the energy functions with respect to the canonical variables. The energy functions are generated on the second layer, outside the ontology. They are implemented as a module: given a specific material model as an algebraic object in the form of two appropriate equations of state, for example p(T,V,n) and μ(T,p,n), the corresponding energy function can be defined, and consequently all others become available through the Legendre transformations. Further, implementing symbolic differentiation into the package provides the means to obtain the desired derivatives. This approach keeps the empirical definition of the material behaviour outside the ontology and thereby keeps the ontology clean. A similar approach is taken for the reaction kinetics and phase transitions.
3.1 Implementation
The ontology described here has been mapped into a complete mathematical structure consisting of lists and equations. For the definition of the ontology a special editor has been generated which, using a wizard-type approach, defines the various parts step by step, building on the graph representation. An important feature of the definition is that some of the information is declared as part of the model definition. These objects are marked accordingly, and on use of the ontology this information must be provided either from a model file containing it or through direct interaction with the person using the ontology indirectly. For lack of space, the publication of the mathematical structure is deferred.
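To make the two-level material description concrete, the following sketch (our illustration using the sympy package; the ideal-gas Helmholtz energy and the constant c are stand-ins for a real material model) shows how a single energy function yields the other properties through partial derivatives and a Legendre transformation:

```python
import sympy as sp

T, V, n, R = sp.symbols("T V n R", positive=True)
c = sp.Symbol("c", positive=True)   # reference-state constant (placeholder)

# Helmholtz energy A(T, V, n) of an ideal gas, up to reference terms.
A = -n * R * T * sp.log(c * V / n)

p = -sp.diff(A, V)                  # pressure: canonical derivative -dA/dV
mu = sp.diff(A, n)                  # chemical potential: dA/dn
G = sp.simplify(A + p * V)          # Legendre transform w.r.t. V gives Gibbs energy

print(sp.simplify(p))               # -> R*T*n/V, the ideal-gas equation of state
```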
4. Conclusions
We provide a framework for an ontology representing physical-chemical-biological systems that is generic and captures a very wide range of processes. It builds heavily on basic physics, which is used to characterise the various model elements in the underlying graphical representation. The multi-layered type declaration integrates seamlessly with the mathematical formulation of the ontology. In particular, the coloured graphs, being typed sub-graphs of the graphical description of the physical containment, enter the balance equations explicitly. The colouring is systematically constructed using a Petri-net-like approach, which formalises this part of the operation neatly and introduces the central object, the state, in a natural way. Separating out the empirical parts, explicitly the material description and the kinetics, and using the material description only on the conceptual level in the ontology, provides an important abstraction, making the ontology more universal in its application.
References
Kokossis, A. C., Yang, A., 2010. On the use of systems technologies and a systematic approach for the synthesis and the design of future biorefineries. Computers & Chemical Engineering 34, 1397–1405.
Preisig, H. A., 2010. Constructing and maintaining proper process models. Computers & Chemical Engineering 34 (9), 1543–1555.
Yang, A., Marquardt, W., 2009. An ontological conceptualization of multiscale models. Computers & Chemical Engineering 33, 822–837.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Modeling and simulation of a gas cleaning section in a Cu/Ni metallurgical plant
Mirnes Alic (a), Tor Anders Hauge (b,c), Bernt Lie* (a)
(a) Telemark University College, Porsgrunn N-3918, Norway; (b) Xstrata Nikkelverk, Kristiansand N-4606, Norway; (c) Agder University, Grimstad N-4898, Norway
Abstract
After roasting of copper and nickel ores in roasters, a significant amount of sulphur dioxide is released along with particulate matter. This paper deals with the gas cleaning process and the preparation of the released sulphur-dioxide-rich gas stream that comes out of the roasters. The cleaning section consists of the following sequentially connected components: hot gas fans, venturis, washing tower, cooling tower, wet electrostatic precipitators, drying tower, and blower. The system is modeled from first principles, setting up the conservation laws for mass, momentum and energy. To complete the model, least-squares parameter estimation is performed. Important variables of the process, such as pressure and temperature, are extracted from the model and compared with real measurements in the gas cleaning section of Xstrata Nikkelverk Kristiansand. The model is intended for use in an operator training simulator and for control purposes.
Keywords: off-gas cleaning, dynamic modeling, first principles, metallurgy.
1. Introduction
Adverse effects of aerosols and SO2 gas emissions on people, vegetation, and climate are well known today. Over the last three decades, environmental regulations have become more and more stringent. Off-gas cleaning sections in different configurations have become a part of every production system where harmful gases are generated, whether by combustion, roasting, burning, smelting or some other process. In the copper/nickel ore roasting process, the gas cleaning section plays a decisive role in reducing the amount of fine particles and pretreating the off-gas before the acid production plant. Several authors have been interested in the problem of modeling the process of nickel smelting and off-gas treatment from the electric furnaces [1, 2]. Significant work has been done on modeling the pressure behavior of an industrial roaster off-gas system with the aim of providing insights into pressure control and gas leakage minimization [3]. Similar work has been done to investigate the effects of pressure and temperature on SO2 concentrations in a smelter furnace and converter off-gas system [4]. In order to improve the process and reduce gas emissions, optimization and control of the gas cleaning section as an important factor of an industrial off-gas system is investigated in [5] and [6]. Pressure control is especially important since the systems are mostly governed by a downstream underpressure fan to avoid gas leaking into the environment. This paper focuses on the modeling of the gas cleaning configuration of Xstrata Nikkelverk Kristiansand, where the harmful gases and metal fumes are produced in the process of roasting metal ores in fluidized bed roasters. *
* Corresponding author: [email protected]
The dynamic behavior of the system is described by mechanistic models of the units based on the conservation laws of mass, momentum and energy. The developed model is intended for use in an operator training simulator and for control purposes.
2. Process description
The process of copper/nickel and sulphuric acid production begins with the smelting of metal ores in fluidized bed roasters. In a series of reactions inside the roaster, metal sulphides are converted to metal oxides, accompanied by a substantial release of gases. Upon removal of the coarse particles in cyclones and electrostatic precipitators, the gas enters the second stage of cleaning, whose purpose is to remove the remaining particles and water vapor from the gas stream. The second stage starts with the quencher (Q) (see Fig. 1). Here, the gas is contacted with atomized liquid droplets of weak sulphuric acid and cooled down to the adiabatic temperature. The pressure drop over the quencher can be influenced by the amount of liquid introduced. At this stage there is also a partial removal of particles. The cooled metal fumes enter the washing tower (W) with the gas stream. Inside the washing tower, the gas stream flows counter-currently to the weak acid sprayed from the top of the tower, staying inside the tower for ca. 3 seconds. This retention provides enough residence time for particles to precipitate and be removed in the following stages, the venturi (V) and the cooling tower. The venturi is an orifice with a variable throat area which can be used to control the pressure drop in the system. The working principle of the venturi is similar to that of the quencher: a weak sulphuric acid is injected into the gas stream and agglomerated particles are removed. To ensure further removal of fine agglomerates, the gas enters the cooling tower.
Figure 1. Gas cleaning section
The cooling tower is composed of two sections: the spray section (S), with a purpose similar to the washing tower, where the gas is stripped of the remaining solids by a weak acid spray, and the cooling trays section (C), where the gas is cooled down to condense the water vapor contained in the stream. The washing tower, the spray section and the cooling section are so-called "weak sulphuric acid circuits", due to the fact that the percentage of acid in the liquid is about 10-30%. The requirements which the gas stream needs to comply with in order to enter the acid plant are very strict, and in the series of electrostatic precipitators (E1 to E3 (EE), and E4 (E)), the gas is further stripped of submicron particles and aerosols by means of high-voltage charging of the particles. The final stage in the gas treatment before the acid plant is the drying tower (D), where the
remaining water is removed via reaction with a strong acid. The reaction between the strong acid and water is exothermic and rather violent, and produces droplets which are captured by the demister (M). The system is run at a negative pressure supplied by the blower (B) and regulated by the valve (VL). At the final stage, the gas is ideally composed of SO2, O2, N2 and traces of CO2.
3. Model development
To develop a nonlinear dynamic model of the system, the first principles of mass, momentum and energy conservation are used. Developing a mechanistic model has some advantages over other types of models, an important one being that its variables and parameters retain physical meaning. In order to simplify the modeling process, the following assumptions are introduced: potential energy is neglected, as we consider a gas medium with small height differences; neither shock effects in the pipes nor the kinetic energy of the gas are considered; the pipe cross-sections are considered constant; and the units are treated as spatially lumped. The properties of the gas are approximated by the ideal gas law. From the energy balance of a basic unit, one can deduce the following expression for the temperature change:
$$\frac{p_i V_i}{T_i}\left(\frac{c_{p,i}}{R_m}-1\right)\frac{dT_i}{dt} = \dot{m}_{i-1} c_{p,i}\,(T_{i-1}-T_i) + R_m T_i\left(\dot{m}_{i-1} + n_d M_{H_2O} - \dot{m}_i\right) - n_d H_{vap} - \dot{Q}_i \qquad (1)$$
where p is pressure, T is temperature, V is volume, $R_m$ is the specific gas constant, $c_p$ is the specific heat capacity, $n_d$ is the mole diffusion term (to be omitted for EE, E, D, M), $H_{vap}$ is the heat of vaporization, $\dot{m}$ is the mass flow, M is the molar mass, and $\dot{Q}_i$ is the heat exchanged between gas and liquid. Equation (1) is valid for the units W, S, C, EE, E, D and M, and the index i refers to the sequence of units presented in Fig. 1 from left to right. For Q and V, the temperature is given by a static mixing model as in Eq. (2):

$$T_k = T_{ref} + \frac{\dot{m}_{k-1} c_{p,k-1}\,(T_{k-1}-T_{ref}) + \dot{m}_l c_{p,l}\,(T_l-T_{ref}) - \varepsilon_v n_d H_{vap}}{\dot{m}_{k-1} c_{p,k-1} + \dot{m}_l c_{p,l}} \qquad (2)$$
where $\varepsilon_v$ is the vaporization coefficient and $\dot{m}_l$ is the acid mass flow. The pressure of the units with relatively large volume (W, S, C, EE, E, D, and M) is given by Eq. (3), and for the small-volume units (Q and V) by Eq. (4):

$$V_i\left(\frac{c_{p,i}}{R_m}-1\right)\frac{dp_i}{dt} = \dot{m}_{i-1} c_{p,i} T_{i-1} - c_{p,i} T_i\left(\dot{m}_i - n_d M_{H_2O}\right) - n_d H_{vap} - \dot{Q}_i \qquad (3)$$

$$p_k = p_{k+1} + \frac{k_{k+1}}{a_{k+1}}\,\dot{m}_k^2 \qquad (4)$$
The mass flow for each unit is deduced from the momentum balance of the unit and the downstream flow resistance, and is given by Eq. (5):

$$L_{j+1}\,\frac{d\dot{m}_j}{dt} = (p_j - p_{j+1})\,a_{j+1} - k_{j+1}\,\dot{m}_j^{\,n} \qquad (5)$$

where k is the coefficient of frictional resistance, a is the cross-sectional area, L is the pipe length, and the exponent n ∈ {1, 2}.
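To illustrate the structure of the resulting model, the sketch below (our simplification, not the authors' code: temperature is frozen, the diffusion and heat-exchange terms are dropped, the static Q/V units are omitted, and all numerical values are placeholders) couples the pressure states of Eq. (3) for a chain of large-volume units to the pipe mass-flow states of Eq. (5):

```python
import numpy as np
from scipy.integrate import solve_ivp

n_units = 4
cp, Rm, T = 1000.0, 257.0, 350.0          # J/kg/K, J/kg/K, K (illustrative)
V = np.full(n_units, 30.0)                # unit volumes, m^3
a = np.full(n_units + 1, 1.0)             # pipe cross sections, m^2
L = np.full(n_units + 1, 1.0)             # pipe lengths, m
k = np.full(n_units + 1, 15.0)            # frictional resistances
p_in, p_out = 101325.0, 95000.0           # boundary pressures, Pa (blower suction)

def rhs(t, y):
    p, mdot = y[:n_units], y[n_units:]
    pb = np.concatenate(([p_in], p, [p_out]))     # boundary-padded pressures
    # Eq. (5), n = 2 branch, m*|m| keeps the sign of reversed flow:
    dmdot = ((pb[:-1] - pb[1:]) * a - k * mdot * np.abs(mdot)) / L
    # Eq. (3), isothermal skeleton: V (cp/Rm - 1) dp/dt = cp T (m_in - m_out)
    dp = cp * T * (mdot[:-1] - mdot[1:]) / (V * (cp / Rm - 1.0))
    return np.concatenate([dp, dmdot])

y0 = np.concatenate([np.full(n_units, 98000.0), np.zeros(n_units + 1)])
sol = solve_ivp(rhs, (0.0, 60.0), y0, method="LSODA")
print(sol.y[:n_units, -1])   # steady pressures fall monotonically towards p_out
```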
$$k_k = \frac{K_k\,\dot{m}_{k,l}^2}{\rho_g\, a} \qquad (6)$$

For the units where acid addition is present, Eq. (6) is used to define the frictional resistance. Mass and heat transfer coefficients for the spray tower are defined by a counter-current gas-droplet contact, and appropriate expressions can be found in the literature [7]. Mass flows throughout the system are defined based on the steady-state mass balance. For the estimation of the friction coefficients over the pipes and the local resistances of Q and V, the least-squares estimation method is used. The determined parameters and the model constants are given in Table 1. The values given do not necessarily represent the real system of Xstrata Nikkelverk, but should rather be considered as an example.

Table 1. Model parameters for the gas cleaning section
Estimated parameters: kvl = 37 [N/(kg/s)²]; kee = 15; km = 126; ks = 15; kd = 270; kc = 60; ke = 10; kw = 15; Kv = 160 [s/m²/kg]; Kq = 15; hA = 720 [1/s]
Model constants: a(w,q,c,s,ee,e,d,m,vl) = 1 [m²]; av = 0.5; L(q,w,v,s,c,ee,e,d,m,vl) = 1 [m]; Vw = 70; Vs = 25; Vc = 30; Vd = 35; Ve = 10; Vee = 3Ve; Vm = 5 [m³]; Rm = 257 [J/kg/K]; Hvap = 44650 [J/mol]; kg = 0.16 [mol/m³/s]; cp = cp(T) [J/kg/K]; ρg = ρg(T) [kg/m³]; Tl = 300 [K]; Tref = 298 [K]
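The least-squares step mentioned above can be illustrated with a short sketch (synthetic data below; the real identification would use logged plant pressure drops and flows): at steady state, Eq. (5) reduces to (p_j − p_{j+1}) a = k ṁ², which is linear in the unknown resistance k:

```python
import numpy as np

rng = np.random.default_rng(1)
a = 1.0                                   # cross section, m^2
mdot = rng.uniform(5.0, 15.0, size=50)    # measured mass flows, kg/s
k_true = 15.0
dp_meas = k_true * mdot**2 / a + rng.normal(0.0, 50.0, size=50)  # noisy Pa

# Linear least squares: dp = (m^2 / a) k  ->  k = argmin ||X k - y||^2
X = (mdot**2 / a).reshape(-1, 1)
k_hat, *_ = np.linalg.lstsq(X, dp_meas, rcond=None)
print(f"estimated k = {k_hat[0]:.2f}")    # recovers a value close to 15
```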
4. Simulation and validation of the model
To confirm the validity of the model, the steady-state overall system pressure distribution is presented in Fig. 2 (left). Pressure is regulated at the blower by the valve, and any change in pressure propagates from the blower towards the quencher.
Figure 2. Pressure distribution over the system (left); pressure validation over M and E4 (right).
When the pressure comparison is performed for individual units, as illustrated in Fig. 2 (right), one can observe a good fit between simulated and measured data. The data in Fig. 2 have been scaled for confidentiality reasons. Due to insufficient data about the weak acid circuits, it is not possible to reliably establish a comparison upstream of the cooling tower. However, what can be done is to investigate the effect of the weak acid input into the venturi on the downstream temperature and the upstream pressure, illustrated in Fig. 3 (left).
The venturi is used as the unit of investigation because of the possibility to control the pressure drop in the real process. When a step in the acid inflow into the venturi is made, the pressure drop over the venturi decreases if the acid inflow is reduced, and increases if the inflow is increased. This resembles the real physical behavior of a venturi unit.
Figure 3. Pressure and temperature change upon 10% reduction in weak acid inflow (left); pressure and temperature change upon 10% reduction of the throat area of V (right).
With the decrease of the venturi throat area, the pressure drop increases, and vice versa; see Fig. 3 (right). While the change of throat area does not have much effect on the temperature, a reduction in acid inflow increases the temperature in the spray tower.
5. Conclusions
In this paper a nonlinear dynamic model of a roaster gas cleaning section is presented. The model has been developed based on the first principles of mass, momentum and energy conservation. To validate and analyze the model, four cases have been studied: the first to confirm the overall pressure distribution across the system; the second to validate the dynamic behavior of the pressure over the units; the third to investigate the changes of pressure and temperature subject to a change in acid input; and the fourth to observe the changes in pressure and temperature when the geometry of the units varies. It has been found that the model closely follows the pressure of the real system and gives reasonable responses to the inputs made. The model will be further studied and improved for training simulator and control purposes.
6. Acknowledgment
The authors gratefully acknowledge the financial support of Xstrata Nikkelverk and appreciate the professional support from Ole Morten Dotterud and Finn Stålesen from Xstrata Nikkelverk.
References
[1] J.G. Bekker et al., ISIJ Int., No. 39, (1999), 23
[2] M. Kirschen et al., Energy, No. 31, (2006), 2590
[3] H. Shang et al., Ind. Eng. Chem. Res., No. 46, (2007), 5371
[4] H. Shang et al., Am. J. Environ. Sci., No. 4, (2008), 22
[5] A.C.B. de Araujo et al., Journal of Cleaner Production, No. 17, (2009), 1512
[6] J.G. Bekker et al., Control Eng. Prac., No. 8, (2000), 445
[7] F.P. Incropera et al., Fundamentals of Heat and Mass Transfer, 6th ed., (2007)
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A novel approach to the biomass pyrolysis step and product lumping
Daniele Bernocco, Paolo Greppi, Elisabetta Arato
PERT - Process Engineering Research Team, Department of Construction, Environment and Land Engineering, UNIGE - Università degli Studi di Genova, Via Opera Pia 15, 16145 Genova, Italy
Abstract
The initial pyrolysis/de-volatilization step is common to all high-temperature gas-phase conversion processes (combustion, gasification and pyrolysis) of carbonaceous solids, because condensed phases do not react directly with the gas phase. The aim of the present work is to introduce a systematic, generic stoichiometric approach to represent a biomass starting material and the repartition of its pyrolysis products. The approach transforms the conventional representation of the starting material into a set of well-defined chemical species while respecting the atom balances. The starting points are the elemental composition of the dry and ash-free matter of the biomass and the list of possible chemical species deriving from devolatilization. The algorithm is flexible and can accommodate a different selection of possible products. In the current implementation three product groups with 13 species are taken into account. Since there are infinitely many ways of splitting the starting material over the 13 components, at least 3 repartition coefficients and 8 atom relevance weights must be specified. The algorithm has been implemented using LIBPF (LIBrary for Process Flowsheeting), a process flow-sheeting and modeling tool arranged as a C++ library.
Keywords: stoichiometric, biomass, pyrolysis model, lumping.
1. Introduction
High-temperature processes for the conversion of biomass to energy are widely discussed in the chemical engineering literature (T. Abbasi et al. 2010, A. Evans et al. 2010, L. Zhang et al. 2010, M. Balat et al. 2009 part 1, M. Balat et al. 2009 part 2, H.B. Goyal et al. 2008). This interest is due to the fact that biomass is a good renewable energy source and is quite abundant. Since biomass is not composed of well-defined chemical species, standard stoichiometric approaches are not applicable. Moreover, biomass characterization is usually expressed only as the proximate (moisture, ash, fixed carbon and volatile matter) and ultimate (C, H, N, O, ...) analysis of the dry and ash-free matter. Sometimes the cellulose, hemicellulose and lignin mass fractions are available. All high-temperature processes of biomass, like pyrolysis, gasification and combustion, generate hundreds of chemical species, and the stoichiometric relation between the products and the biomass composition is complex to represent. Moreover, the solid does not react directly with the gas phase, except for solid carbon (char or coke). The solid biomass always undergoes a first step of pyrolysis or devolatilization.
2. Biomass pyrolysis models
The complexity of the devolatilization behavior is due to the wide number of possible pyrolysis products. From a general point of view, devolatilization generates gaseous (non-condensable) products (CO, CO2, CH4, H2, etc.), light, partly condensable hydrocarbons (C2H4, C2H6, C3H6, methanol, etc.), inorganic components (NH3, H2O, HCN, etc.), and condensable heavy hydrocarbons. A few hundred chemicals have been detected in the products of high-temperature conversion processes of biomass (Milne et al., 1998). There are three major classes of devolatilization models (M. L. Souza-Santos, 2004): isolated kinetic models (global or single step, and combinations of series and parallel reactions), distributed activation models (series and parallel reactions related to a distribution of activation energies) and structural models (solid structure and detailed composition define the proper devolatilization path). No information about the stoichiometric release, type and composition of the products is given by the first two model classes. The latter, instead, are quite complex and computationally expensive and require many experimental parameters. For these reasons, the present work aims to present a simple model that can evaluate the product composition. The FG models (one class of structural models) introduce an important concept, the potential tar-forming fraction of the fuel. It represents the most important starting point for the present lumping approach and is defined as the maximum yield of a tar component coming out of a particular fuel in a particular process. To evaluate it, all formation of that component is taken into account without considering its eventual consumption.
3. Stoichiometric approach
A generic biomass contains many atom types, most of them in very small amounts. In order of decreasing common residue mass fractions, biomass contains carbon, oxygen, hydrogen, nitrogen, sulfur and chlorine (and/or other halogens). In this work, only the first four atoms (C, H, N and O) have been considered. Since nitrogen is present in a very small amount, it is assumed to produce N2, or NH3 if there is enough hydrogen left at the end of the whole biomass lumping. The biomass can therefore be treated as a CHO component. The generic decomposition (1) re-maps a biomass molar composition into $\nu_c$ moles of a desired chemical species and $\nu_r$ moles of a residue, and the "reaction" can be represented on a CHO ternary diagram (see Figure 1):

$$C_{a}H_{b}O_{c} \rightarrow \nu_c\, C_{a_c}H_{b_c}O_{c_c} + \nu_r\, C_{a_r}H_{b_r}O_{c_r} \qquad (1)$$

Figure 1: CHO ternary diagram of a sample biomass decomposition into a cloud of chemical species, with its barycenter.

The generic chemical product can be either a well-defined chemical species or a group of chemicals. A group of chemical species can be represented as one point by considering a weighted barycenter. The weights are the first important input parameter that the user should define. They refer to the importance or predominance of atoms in the real products deriving from biomass processing. For example, these weights can be expressed as ratios between oxygenated and non-oxygenated chemicals or
between hydrogen-rich and hydrogen-poor species. However, in the present work uniform weights have been assumed for the C, H, and O atoms. Using these basic concepts, some C++ routines were written to calculate the amounts of the potential chemical-forming fractions.
4. Products
Research (D. K. Seo, 2010; Y. Zhang, 2010) suggests that the pyrolysis of biomass is strongly dependent on the process conditions rather than on the biomass composition. Moreover, a single group of chemical species is not sufficient to describe the devolatilization phenomena. In the current implementation three product groups are taken into account, with 13 species.

Figure 2: Primary product repartition on a CHO ternary diagram (vertices C, H2 and O2; internal points CH4, H2O and COx delimit triangles 1-4).

4.1. Primary products
The first group is referred to as primary products. The chemical species and the nature of this group are directly related to the small molecules usually present in the pyrogas or syngas, also under thermodynamic equilibrium conditions. The primary products considered are: H2, O2, N2, CO, CO2, H2O, CH4, NH3, and C. The algorithm that re-maps the biomass into primary products is different from that used for the secondary and tertiary ones, because this calculation must exhaust all the atoms present in the original matter. Referring to Figure 2, depending on the triangle in which the biomass is localized, the biomass is re-mapped into:
Triangle 1: only C, CO, CO2, CH4;
Triangle 2: only H2, CH4, H2O;
Triangle 3: only O2, H2O, CO, CO2;
Triangle 4: only CH4, H2O, CO, CO2.

Table 1: Reference biomass composition (Y. Zhang et al., 2010)
Proximate analysis (wt% dry): ash 0.2; volatile matter 89.3; fixed carbon 10.5.
Ultimate analysis (wt% dry and ash-free): C 49.20; H 6.49; N 0.10; O 44.20.

4.2. Secondary products
The second group is referred to as secondary products. It is representative of light tars, with a condensing temperature well below that of water. The chemical species included in the secondary products are ethylene (C2H4) and methanol (CH3OH).
4.3. Tertiary products
The third group is referred to as tertiary products. It is representative of heavy tars, with a condensing temperature higher than that of water. The chemical species included in the tertiary products are phenol (C6H5OH) and naphthalene (C10H8).
4.4. Model parameters
The choice of components in the latter two groups is arbitrary and represents a lumping procedure, which must be based on a physical understanding of the process under study.
To describe the whole devolatilization behavior and the repartition of the biomass into primary, secondary and tertiary products, two desired repartition coefficients, one for each of the latter two groups, were introduced. The qualification "desired" is due to the stoichiometric approach: the algorithm tries to accomplish the desired repartition coefficients, but in general a residue always exists. This residue, coming from the secondary and tertiary product re-mapping of the biomass, is added to the primary products to respect the amount of atoms present in the biomass. This artificial assumption introduces two actual repartition coefficients into the calculation. These repartition coefficients are defined as the mass of secondary or tertiary products over the total biomass mass, both on a dry and ash-free basis. The primary products need another important parameter, the carbon oxide ratio (CO/CO2) as a function of temperature (J. R. Arthur, 1951). Expression (2) summarizes the biomass lumping calculation of the pyrolysis model described in this paper.
Table 2: Results of the stoichiometric model of the pyrolysis step (PY: pyrolysis, SG: steam gasification, PO: partial oxidation)

Cases: PY 600°C | PY 800°C | SG 800°C | PO 800°C
Desired repartition coefficients (kg of biomass converted / kg of biomass):
  Secondary: 0.055 | 0.042 | 0.039 | 0.032
  Tertiary: 0.080 | 0.055 | 0.060 | 0.020
Actual repartition coefficients (kg of biomass converted / kg of biomass):
  Secondary: 0.107 | 0.082 | 0.076 | 0.062
  Tertiary: 0.146 | 0.101 | 0.110 | 0.038
Mass of product (kg of obtained component / kg of daf volatile matter of biomass):
  Primary products: C 0.000 | 0.011 | 0.008 | 0.033; CH4 0.220 | 0.237 | 0.237 | 0.251; CO 0.154 | 0.158 | 0.158 | 0.160; CO2 0.483 | 0.496 | 0.497 | 0.503; H2 0.000 (all cases); H2O 0.007 | 0.000 | 0.000 | 0.000; N2 0.001 (all cases); NH3 0.000 (all cases); O2 0.000 (all cases)
  Secondary products: C2H4 0.025 | 0.020 | 0.018 | 0.015; CH3OH 0.029 | 0.022 | 0.021 | 0.017
  Tertiary products: C10H8 0.039 | 0.027 | 0.030 | 0.010; C6H5OH 0.040 | 0.028 | 0.030 | 0.010
$$C_a H_b N_c O_d \rightarrow \nu_{H_2} H_2 + \nu_{O_2} O_2 + \nu_{N_2} N_2 + \nu_{CO} CO + \nu_{CO_2} CO_2 + \nu_{H_2O} H_2O + \nu_{CH_4} CH_4 + \nu_{NH_3} NH_3 + \nu_C C + \nu_{C_2H_4} C_2H_4 + \nu_{CH_3OH} CH_3OH + \nu_{C_6H_5OH} C_6H_5OH + \nu_{C_{10}H_8} C_{10}H_8 \qquad (2)$$
The algorithm proceeds sequentially for each group and calculates one by one the maximum amount of a stoichiometrically defined product that can be obtained respecting the atom balance based on the composition of the residual biomass, until one or two elements are exhausted.
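A minimal sketch of this sequential re-mapping is given below (our reading of the algorithm, not the authors' LIBPF implementation; the product order, the caps standing in for the desired repartition coefficients, and the omission of the CO/CO2 split and of the atom relevance weights are all simplifications):

```python
FORMULAS = {                  # moles of C, H, O per mole of product
    "C10H8": (10, 8, 0), "C6H5OH": (6, 6, 1),      # tertiary lump
    "C2H4": (2, 4, 0), "CH3OH": (1, 4, 1),         # secondary lump
    "CH4": (1, 4, 0), "CO2": (1, 0, 2), "CO": (1, 0, 1), "H2O": (0, 2, 1),
}

def remap(pool, products, caps=None):
    """pool: [mol C, mol H, mol O]; caps: optional per-product mole limits."""
    pool, out = list(pool), {}
    for name in products:
        need = FORMULAS[name]
        # Largest stoichiometric amount the residual atom pool can supply:
        nu = min(pool[i] / need[i] for i in range(3) if need[i] > 0)
        if caps and name in caps:
            nu = min(nu, caps[name])      # honour a desired repartition
        out[name] = nu
        for i in range(3):
            pool[i] -= nu * need[i]       # shrink the residual biomass
    return out, pool

# 100 g of daf biomass with the Table 1 ultimate analysis, in moles:
pool0 = [49.20 / 12.011, 6.49 / 1.008, 44.20 / 15.999]
prods, residue = remap(pool0, ["C10H8", "C6H5OH", "CH4", "CO2", "CO", "H2O"],
                       caps={"C10H8": 0.03, "C6H5OH": 0.04})
print(prods, residue)   # leftover carbon in the residue plays the role of char
```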
5. Application
There are four cases, representative of a wide scenario, to which the stoichiometric pyrolysis step algorithm could be applied. These cases are pyrolysis (at two different temperatures), steam gasification and partial oxidation. The reference work from which the experimental data are taken is that of Y. Zhang et al., 2010.
The composition and amount of tars obtained by Y. Zhang et al., 2010, are converted into the parameters required by the model. These are obtained by summing the amounts of chemicals according to a boiling temperature criterion. The application of the model to the four cases generated the results reported in Table 2.
6. Conclusion and further work
The approach proposed in the present work can be the starting point for the interpretation of experimental data or for kinetic studies on a molar basis. In particular, kinetic studies based on the lumping scheme presented could be a milestone in the modeling of gasifiers, combustors and pyrolyzers. The biomass lumping (2) does not alter the biomass atomic fractions even in the case of partial conversion of the reactant, because the biomass amount in the system is reduced but its composition does not change. This makes it simple to calculate the energy balances and other properties, because only the initial biomass characteristics (density, enthalpy) are required in a process computation. In a real process, on the other hand, in the case of a partial reaction it is more realistic to assume that partial pyrolysis will proceed with the lighter components first, leaving a biomass residue enriched in carbon. But this causes difficulties because of the lack of correlations to calculate the density and enthalpy as a function of the biomass atomic fractions. These correlations should be the focus of further work.
References
T. Abbasi, S.A. Abbasi, 2010, Biomass energy and the environmental impacts associated with its production and utilization, Renewable and Sustainable Energy Reviews, v. 14, p. 919–937
J. R. Arthur, 1951, Reactions between carbon and oxygen, Trans. Faraday Soc., v. 47, p. 167–178
M. Balat, M. Balat, E. Kırtay, H. Balat, 2009, Main routes for the thermo-conversion of biomass into fuels and chemicals. Part 1 and Part 2, Energy Conversion and Management, v. 50, p. 3147–3168
A. Evans, V. Strezov, T. J. Evans, 2010, Sustainability considerations for electricity generation from biomass, Renewable and Sustainable Energy Reviews, v. 14, p. 1419–1427
H.B. Goyal, D. Seal, R.C. Saxena, 2008, Bio-fuels from thermochemical conversion of renewable resources: A review, Renewable and Sustainable Energy Reviews, v. 12, p. 504–517
P. Greppi, 2006, LIBPF: a library for process flowsheeting in C++, Proceedings of the International Mediterranean Modelling Multiconference, pp. 435-440
T. A. Milne, R. J. Evans, N. Abatzoglou, 1998, Biomass gasifier "tars": their nature, formation and conversion, Golden, Colorado: National Renewable Energy Laboratory
D. K. Seo, S. S. Park, J. Hwang, T. Yu, 2010, Study of the pyrolysis of biomass using thermogravimetric analysis (TGA) and concentration measurements of the evolved species, Journal of Analytical and Applied Pyrolysis (article in press)
M. L. Souza-Santos, 2004, Solid fuels combustion and gasification, New York, USA: Marcel Dekker, Inc.
Y. Zhang, S. Kajitani, M. Ashizawa, Y. Oki, 2010, Tar destruction and coke formation during rapid pyrolysis and gasification of biomass in a drop-tube furnace, Fuel, v. 89, p. 302–309
L. Zhang, C. Xu, P. Champagne, 2010, Overview of recent advances in thermo-chemical conversion of biomass, Energy Conversion and Management, v. 51, p. 969–982
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Towards a rigorous model of electrodialysis processes
Matthias Johannink (a), Adel Mhamdi (a), Wolfgang Marquardt (a)
(a) Aachener Verfahrenstechnik, RWTH Aachen University, 52065 Aachen, Germany
Abstract
In electrodialysis (ED), ion-exchange membranes are used to separate ions of opposite charges in an electric field. Only a few mechanistic models for ED have been proposed in the literature, and these are based on a quasi-stationary, one-dimensional formulation. We present a generalization of the frequently used set of equations to the multi-dimensional dynamic case. The resulting set of partial differential-algebraic equations (PDAE) is thoroughly analyzed by means of an extended index analysis. It is shown that the introduction of the electroneutrality condition, which is frequently used in models arising in electrochemistry, leads to a high index in the time and spatial coordinates. In this context, a reformulation to an index-one PDAE system is proposed and additional "hidden" constraints for a consistent set of initial and boundary conditions are derived.
Keywords: electromembrane processes, electrodialysis, index analysis, electroneutrality
1. Introduction
ED is an electromembrane separation process used in numerous industrial applications, e.g. for the desalination of organic liquids or the production of table salt from aqueous solutions [1]. The underlying coupled physico-chemical phenomena characterizing the mass transport in the membrane-electrolyte system are not completely understood, although the process has been established in industry for several decades. Fig. 1a illustrates the schematic setup of an ED stack in plate-and-frame configuration. Anion and cation exchange membranes are arranged in an alternating manner. Charged functional groups incorporated in the membranes allow a nearly complete exclusion of either anions or cations. The membranes are separated by spacers made of loosely woven polymer filaments, through which the electrolyte solution passes. Next to the stack, electrodes apply an electric field orthogonal to the flow directions of the electrolyte solution. Thus, ionic species selectively migrate through the membranes, resulting in either a concentration or a dilution in neighboring flow channels. Few approaches for the mechanistic modeling of ED processes have been reported in the literature. In these, either the Nernst-Planck or the Maxwell-Stefan formulation is used for the description of ionic mass transfer [2, 3]. However, the models are frequently based on numerous simplifying assumptions: (i) The flow channels are segmented into a bulk phase with two polarization films adjacent to the membranes (cf. Fig. 1b). In the polarization films the velocity is assumed to be zero; for the bulk segment plug flow is assumed. (ii) Transport processes in the flow directions of the electrolyte solution are neglected; hence, a one-dimensional model is obtained. (iii) Holdup terms are neglected
in the balances of the membrane and polarization films, to result in a quasi-stationary formulation. However, the numerical solution of even these simplified models has been shown to be quite complex in state-of-the-art simulation environments [2, 3].
Fig. 1: a) Functional principle of an electrodialytic separation of an electrolyte. b) Segmentation of the electrolyte channels into bulk phase and polarization films (PF) adjacent to the membranes.
This paper is organized as follows. First, we present the model equations describing ionic mass transfer for multi-component systems in a stagnant homogeneous phase, which are frequently used for the description of ion-exchange membranes or polarization films. Moreover, we extend the frequently used one-dimensional, quasi-stationary formulation to a dynamic multi-dimensional one. The resulting set of PDAE is thoroughly analyzed with respect to its structural indices in the time and spatial coordinates. This way, (i) reformulations of the set of PDAE can be worked out to ensure that all the indices are less than or equal to one, and (ii) "hidden" consistency conditions can be determined, to facilitate the identification of a consistent set of initial and boundary conditions. In this context, the scope of this contribution is to elucidate the structural properties of the models emerging from the introduction of the electroneutrality constraint. Furthermore, a reformulation of the model into a set of PDAE is presented which can be reliably solved by a semi-discretization and state-of-the-art DAE solvers.
2. Modeling of ionic mass transfer in a homogeneous phase
We introduce a model of ionic mass transfer in a stagnant homogeneous phase in a two-dimensional spatial domain with coordinates $x_l \in [x_l^L, x_l^U]$, $l = 1,2$, where $x_l^L$ and $x_l^U$ are fixed lower and upper bounds. Furthermore, we choose $x_1$ to be the direction parallel and $x_2$ the direction orthogonal to the electrodes. A set $S = \{1,\dots,n\}$ of charged or uncharged species is considered here, in which, without loss of generality, species n represents a neutral solvent. This way, the model is represented by the PDAE system

$$\frac{\partial c_k(x,t)}{\partial t} = -\nabla \cdot J_k(x,t), \quad k = 1,\dots,n, \qquad (1)$$

$$J_k(x,t) = -D_k \nabla c_k(x,t) - \frac{D_k F}{RT}\, z_k c_k(x,t)\, E(x,t), \quad k = 1,\dots,n-1, \qquad (2)$$

$$0 = \sum_{k=1}^{n-1} z_k c_k(x,t), \qquad (3)$$

$$0 = \sum_{k=1}^{n} J_k(x,t), \qquad (4)$$

$$E(x,t) = -\nabla \varphi(x,t). \qquad (5)$$
Eqs. (1) represent the molar material balances, Eqs. (2) the Nernst-Planck equations [4] for the description of the diffusive fluxes $J_k(x,t)$, Eq. (3) the electroneutrality constraint accounting for the strong Coulombic interactions of the solute ions, Eq. (4) the linear dependency relation of the diffusive fluxes, and Eq. (5) the definition of the electric field E(x,t). In these equations, $D_k$ is the diffusion coefficient, $z_k$ the charge number and $c_k(x,t)$ the molar volumetric concentration of ion k; $\varphi(x,t)$ is the electric potential, R the molar gas constant, T the temperature and F the Faraday constant. The set of PDAE has to be completed by a consistent set of initial and boundary conditions.
3. Extended index analysis in time and spatial coordinates
The determination of a consistent set of initial and boundary conditions requires a structural analysis of the set of nonlinear PDAE (1)-(5). During this analysis the structural indices in the time and spatial coordinates are determined first. To this end, we follow a definition of the index for PDAE systems recently proposed by Martinson and Barton [5]. The key idea is the reformulation of the PDAE system into a pseudo-DAE system in the independent variable of interest. Then, a standard algorithm for the structural analysis of DAE systems [6] can be applied to determine its index. In analogy to the consistent initialization of DAE systems, the index analysis gives rise to consistency conditions that have to be satisfied by the initial and boundary conditions [7]. The index with respect to time is two, as the electroneutrality constraint expressed in Eq. (3) forms a minimal structurally singular subset (MSSS), i.e. the number of new variables produced by differentiation of the subset is less than the number of equations in the subset. Here, a "new variable" is meant in the context of a consistent initialization, where a variable and its derivative with respect to time are considered as distinct quantities [5]. For a reduction of the index to one, Eq. (3) is differentiated first. The resulting equation is then substituted into Eq. (1) to result in

$$-\frac{1}{z_1}\sum_{k=2}^{n-1} z_k \frac{\partial c_k(x,t)}{\partial t} = -\nabla \cdot J_1(x,t), \qquad (6)$$

$$\frac{\partial c_k(x,t)}{\partial t} = -\nabla \cdot J_k(x,t), \quad k = 2,\dots,n. \qquad (7)$$
Note that this reformulation is equivalent to the introduction of a "dummy derivative" [8] for $c_1(x,t)$, which is subsequently removed from the system of equations by substitution. The reformulated set of Eqs. (2)-(7) represents a system of PDAE of index one with respect to time. However, its index with respect to the spatial coordinates is two. An MSSS is formed by Eqs. (3) and (4). Similar to the reduction of the index in the time coordinate, the index with respect to the spatial coordinates can be reduced by differentiation of Eqs. (3) and (4) and their introduction into Eqs. (6) and (2):
Towards rigorous modeling of electrodialysis processes z k wck x, t wt 2 1
n 1
¦z k
n
¦ J k x, t ,
(8)
k 2
zk D ck x, t z1c1 x, t F 1 Ex, t , RT k 2 z1 Dk Dk ck x, t z k ck x, t F Ex, t , k RT n 1
J 1 x, t D1 ¦ J k x, t
119
(9) 2,..., n 1.
(10)
The resulting set of modeling equations, formed by Eqs. (3)-(5) and (7)-(10), represents a system of PDAE of index one with respect to the spatial coordinates. Note that $c_1(x,t)$ represents an algebraic variable in this formulation. Thus, $n-1$ independent initial conditions have to be specified. These are constrained by Eqs. (3)-(5), (9), (10) and the "hidden" constraint

$$0 = \nabla \cdot \left( J_n(x,t) + \sum_{k=2}^{n-1} J_k(x,t) \left(1 - \frac{z_k}{z_1}\right) \right), \qquad (11)$$

which results from substituting Eqs. (7) into Eq. (8). Furthermore, it has to be taken into account that the set of equations describing the state of the system at $t = t_0$ itself forms a system of PDAE in the spatial coordinates. Accordingly, a consistent set of initial conditions is only appropriate if the index with respect to the spatial coordinates of the system of PDAE at $t = t_0$ is one. A set of initial conditions that satisfies both requirements is

$$J_k(x, t = t_0) = 0, \quad k = 2,\dots,n-1, \qquad (12)$$

$$c_n(x, t = t_0) = c_{n,0}(x), \qquad (13)$$
where $c_{n,0}(x)$ is an arbitrary distribution of the solvent concentration in the spatial coordinates. In addition, a total of $2n-1$ boundary conditions have to be specified for each spatial coordinate. Similar to the consistent initial conditions, these have to satisfy Eqs. (3) and (4) and additional constraints that are revealed by the introduction of Eqs. (7) into Eq. (8) and of Eqs. (10) into Eq. (9):

$$0 = \frac{\partial}{\partial t}\left( c_n(x,t) + \sum_{k=2}^{n-1} c_k(x,t)\left(1 - \frac{z_k}{z_1}\right) \right), \qquad (14)$$

$$J_1(x,t) = -z_1 c_1(x,t)\, F\, \frac{D_1}{RT}\, E(x,t) - D_1 \sum_{k=2}^{n-1} \frac{z_k}{z_1}\left( \frac{F}{RT}\, z_k c_k(x,t)\, E(x,t) + \frac{J_k(x,t)}{D_k} \right). \qquad (15)$$
A feasible set of boundary conditions satisfying these constraints is

$$\left. n_l \cdot J_k(t,x)\right|_{x_l = x_l^L} - J_{k,l}^L(t, x_j) = 0, \quad k = 2,\dots,n, \quad l,j = 1,2, \; j \neq l, \qquad (16)$$

$$\left. n_l \cdot J_k(t,x)\right|_{x_l = x_l^U} - J_{k,l}^U(t, x_j) = 0, \quad k = 2,\dots,n-1, \quad l,j = 1,2, \; j \neq l, \qquad (17)$$

$$\left. \varphi(t,x)\right|_{x_l = x_l^U} = \varphi^U(t, x_j), \quad l,j = 1,2, \; j \neq l, \qquad (18)$$
where $n_l$ represents a normal vector in the direction of $x_l$ pointing into the homogeneous phase. $J_{k,l}^L$ and $J_{k,l}^U$ are known fluxes, e.g. from neighboring phases.
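As a hedged illustration of why the index-one reformulation pays off numerically, consider the special case of a binary electrolyte (cation 1, anion 2, solvent n): electroneutrality makes $c_1$ algebraic, and eliminating E under zero net current collapses Eqs. (1)-(5) to a single diffusion equation with the classical ambipolar coefficient, which a semi-discretization plus a standard stiff integrator handles without difficulty. The sketch below is ours, not the authors' implementation; grid, geometry and diffusivities are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

z1, z2 = 1, -1
D1, D2 = 1.33e-9, 2.03e-9                 # m^2/s, Na+ / Cl- as an example
D_amb = (z1 - z2) * D1 * D2 / (z1 * D1 - z2 * D2)   # ambipolar coefficient

nx, Lx = 100, 1e-3                        # grid points and film thickness, m
dx = Lx / nx
c0 = np.linspace(100.0, 10.0, nx)         # initial salt profile, mol/m^3

def rhs(t, c):
    dcdt = np.empty_like(c)
    dcdt[1:-1] = D_amb * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    dcdt[0] = D_amb * (c[1] - c[0]) / dx**2      # zero-flux boundaries
    dcdt[-1] = D_amb * (c[-2] - c[-1]) / dx**2
    return dcdt

sol = solve_ivp(rhs, (0.0, 200.0), c0, method="BDF")
c = sol.y[:, -1]                          # relaxes towards a flat profile
# The eliminated diffusion potential is recovered for this 1:-1 salt as
#   E = -(RT/F) * (D1 - D2) / (z1*D1 - z2*D2) * grad(c)/c.
print(c.mean(), c.max() - c.min())
```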
4. ED process model and concluding remarks
The reformulation of the set of modeling equations into an index-one PDAE system and the identification of a consistent set of initial and boundary conditions allow a reliable and efficient numerical solution with semi-discretization methods and subsequent integration of the resulting DAE system with state-of-the-art DAE solvers. Hence, the results form the basis for the development of rigorous ED process models in established simulation environments. In addition, the findings shed light on the origin of the numerical difficulties reported previously in this context [2, 3]. The reformulated set of modeling equations is used for the description of the transport processes in the ion-exchange membranes and polarization films in a detailed dynamic process model of an ED process for the desalination of multi-component electrolytic solutions developed by the authors. The model is based on a hierarchical model structure, in which submodels for the ion-exchange membranes, polarization films, bulk phases and electrodes form the elementary building elements. These are aggregated to a model for the description of a membrane stack as depicted in Fig. 1. Instances of the stack model are connected to models describing tanks and pumps. This way, the description of arbitrary plant configurations with multiple stacks in batch or continuous mode becomes possible. The model gives insight into the underlying coupled physico-chemical phenomena at a high level of granularity. In particular, the dynamic model formulation in two spatial coordinates is, to the knowledge of the authors, not encountered in previous works. Future tasks focus on the experimental identification of the model parameters and the validation of the model, as well as on its use within model-based design applications.

Acknowledgements
Financial support by the Max-Buchner-Forschungsstiftung and the Cluster of Excellence "Tailor-Made Fuels from Biomass", which is funded by the Excellence Initiative of the German federal and state governments to promote science and research at German universities, is gratefully acknowledged.

References
[1] H. Strathmann, Ion-Exchange Membrane Separation Processes, Elsevier, Amsterdam, 2004
[2] C.R. Visser, Electrodialytic Recovery of Acids and Bases, PhD Thesis, Rijksuniversiteit Groningen, 2001
[3] W. Neubrand, Modellbildung und Simulation von Elektromembranverfahren, PhD Thesis, Universität Stuttgart, 1999
[4] J. Newman, Electrochemical Systems, John Wiley & Sons, Hoboken, 2004
[5] W.S. Martinson and P.I. Barton, SIAM J. Sci. Comput., 21 (2000) 2295
[6] J. Unger, A. Kröner and W. Marquardt, Comput. Chem. Eng., 19 (1995) 867
[7] J. Neumann and C.C. Pantelides, SIAM J. Sci. Comput., 30 (2008) 916
[8] S.E. Mattsson and G. Söderlind, SIAM J. Sci. Comput., 14 (1993) 677
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Stochastic Monte Carlo Simulations as an Efficient Multi-Scale Modeling Tool for the Prediction of Multi-Variate Distributions
Dimitrios Meimaroglou (a,c), Costas Kiparissides (a,b)
(a) Chemical Process Engineering Research Institute, Centre for Research and Technology Hellas, Thessaloniki, Greece 570 01
(b) Department of Chemical Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece, 60361
(c) Current address: Laboratory of Reactions and Process Engineering, Nancy University, LRGP-ENSIC-INPL, 1 rue Grandville, BP 20451, 54000 Nancy, France
Abstract In the present work, the role of stochastic Monte Carlo simulations as an efficient computational tool for the simulation of multi-variate physico-chemical systems, within the general framework of population balances, is presented. The applicability of the Monte Carlo method to different length scales of a process is clearly illustrated in two representative examples, namely the prediction of the bi-variate particle size distribution of particulate processes and the calculation of distributed molecular properties of highly-branched polymers on the basis of their exact topological architecture. Keywords: stochastic simulations, Monte Carlo, population balance equation, particle size distribution, highly-branched polymers.
1. Introduction
Mathematical modeling and in silico analysis of chemical processes have been widely used for more than 40 years and have contributed to great advances in numerous research areas. Depending on the complexity of the process under study, the mathematical model implemented for its description may contain systems of algebraic, differential or integro-differential equations of different orders. Accordingly, a number of advanced numerical methods have been developed to deal with the complex systems of equations a mathematical model may contain. These methods are broadly classified into two general categories, namely deterministic and stochastic numerical methods. The deterministic approach is based on the principle that events are bound by causality, i.e., any state of the system under study is largely determined by prior states. This approach is commonly characterized by low computational requirements (for simple systems) and high accuracy and repeatability in the calculated results. On the other hand, its application to multi-dimensional process models is limited by an increased level of complexity. An alternative to the commonly employed deterministic numerical methods is the use of probabilistic tools (i.e., Monte Carlo simulations). The stochastic approach is based on the principle that a system's subsequent state is mainly determined by a random element. Stochastic methods have attracted significant attention over the last
years, as they are easier to implement for the simulation of complex, multi-variate systems, while their main disadvantage, associated with their high computational requirements, has been gradually eliminated by the dramatic increase in computer power. Another great benefit of stochastic approaches is their general applicability, since the majority of physicochemical systems are inherently discrete and stochastic. As a result, significant advances have been reported, via the use of probabilistic approaches, in the study of complex, multi-variate systems (e.g., polymerization systems, crystallization systems, aerosols, biological systems, etc.). Besides the extended applicability of stochastic simulations to the study of multi-variate particulate systems at the micro-scale level, another area of great importance, where the use of probabilistic tools has proven extremely significant, is the nano-scale study of the topological architecture of molecules. Typical examples include the study of the exact architectural microstructure of highly branched polymers as well as the study of the folding pathways of protein molecules. In the present work, the importance and applicability of the stochastic Monte Carlo (MC) method for the study of complex multi-variate systems, as well as its use as an efficient tool for multi-scale modelling, are illustrated through two representative examples from the micro- and nano-length scales of physicochemical processes. More specifically, the stochastic prediction of the bi-variate particle size distribution (PSD) of a particulate process, under the combined action of particle aggregation and growth mechanisms, is presented to illustrate the applicability of the MC method for the prediction of key micro-scale distributed properties of important processes, such as polymerization processes, crystallization processes, catalytic processes, colloidal systems, aerosols, etc. Furthermore, a nano-scale application of the MC method, for the accurate prediction of the exact topological architecture of highly-branched polymers, is subsequently presented via an industrial high-pressure polymerization paradigm.
2. Stochastic Prediction of the PSD in Particulate Processes
Following the original developments of Hulburt and Katz (1964) and Ramkrishna (1985), the generalized bi-variate population balance equation (PBE) for a batch particulate system, under the combined action of particle aggregation and growth mechanisms, is expressed as:

$$\frac{\partial n(V,x,t)}{\partial t} + \frac{\partial [G_V(V,x)\, n(V,x,t)]}{\partial V} + \frac{\partial [G_x(V,x)\, n(V,x,t)]}{\partial x} = -\int_{V_{min}}^{V_{max}} \int_{x_{min}}^{x_{max}} \beta(V,U,x,z)\, n(V,x,t)\, n(U,z,t)\, dU\, dz + \int_{V_{min}}^{V/2} \int_{x_{min}}^{x/2} \beta(V-U,U,x-z,z)\, n(V-U,x-z,t)\, n(U,z,t)\, dU\, dz \qquad (1)$$
The term n(V, x, t) dV dx denotes the number of particles per unit volume with size in the range [V, V+dV] and property x in the range [x, x+dx]. Furthermore, G(V,x) and β(V,U,x,z) denote the particle growth rate and the particle aggregation rate kernel between particles of volumes V and U and internal properties x and z, respectively. The stochastic MC method can be employed to solve the two-dimensional PBE (Eq. (1)) and infer the dynamic evolution of the bi-variate PSD, simply by tracking the corresponding changes or events (i.e., growth, aggregation) occurring in a constant sample volume of the system (i.e., in a small number of sample particles, e.g., 10^6) (Meimaroglou et al., 2006). Considering an initial number density function, n(V,x,0),
Stochastic Monte Carlo Simulations as an Efficient Multi-Scale Modeling Tool for the Prediction of Multi-Variate Distributions 123 that follows an exponential dependence with respect to the particle volume, detailed numerical simulations were carried out for several batch particulate processes undergoing particle aggregation and growth, using an initial sample population of 2.7×107 particles. The calculated PSDs were compared with respective analytical solutions of the bi-variate PBE (Gelbard and Seinfeld, 1978). Note that, for the presentation of the results, the particle growth and aggregation dimensionless time constants were utilized (i.e., IJg and IJa, respectively) (Alexopoulos et al., 2004). Furthermore, the discretization of both domains and the characteristic values of particle volume and internal property (i.e., V and x) were considered to be identical.
Figure 1. Bi-variate PSD as calculated by the stochastic MC method (a) and by the analytical expression (b), for the case of pure constant particle aggregation (τ_a = 1).
In Figures 1a and 1b, the MC-calculated bi-variate PSD is compared with the distribution calculated by the analytical expression (Gelbard and Seinfeld, 1978), for the case of constant particle aggregation (β(V,U,x,z) = β_0; τ_a = 1). It can be seen that the MC-calculated distribution displays very good agreement with the analytical solution. As expected, despite the low value of τ_a (i.e., the limited extent of particle aggregation), some oscillations are observed in the lower region of the MC-calculated distribution due to insufficient sampling of these areas, a phenomenon that is commonly encountered in stochastic simulations and is enhanced at higher aggregation times. The occurrence of such oscillations can be overcome via the implementation of a "constant-number MC simulation" (Smith and Matsoukas, 1998) or via the use of various "sample refreshing" techniques (Meimaroglou et al., 2006). A similar behavior is observed in the case where both mechanisms of particle growth and aggregation are considered. This can be seen in Figures 2a and 2b, where the MC-calculated PSD is compared with the respective analytical distribution for the case of combined constant particle aggregation (β(V,U,x,z) = β_0, τ_a = 1) and linear particle growth (G_V(V,x) = G_0 V, G_x(V,x) = G_0 x, τ_g = 1).
Figure 2. Bi-variate PSD as calculated by the MC method (a) and by the analytical expression (b), for a case of combined constant particle aggregation (τ_a = 1) and linear particle growth (τ_g = 1).
3. Stochastic Prediction of the Molecular Properties of Polymers
In a non-linear polymerization system, the PBE can be expressed at a lower length-scale in terms of the concentrations of the "live" and "dead" polymer chains of the system, using
as internal coordinates the total degree of polymerization of the polymer chains and their total number of long-chain branches, in order to infer the dynamic evolution of a series of distributed molecular properties of the polymer chains. The resulting PBEs can be numerically solved, following either a deterministic or a stochastic approach, to calculate a series of polymer molecular properties associated with the two internal coordinates of the PBE (Kiparissides et al., 2010). On the other hand, in an attempt to model the rheological behavior of polymers, a need has recently arisen for the accurate prediction of properties associated with the exact architectural characteristics of highly-branched polymers (Das et al., 2006). The nano-scale modeling of the microstructure of highly-branched polymers dictates the manipulation of a huge amount of information in order to accurately describe their exact architecture and/or spatial configuration. Thus, a deterministic approach to such a study becomes prohibitive, and a multi-variate stochastic MC approach provides the only feasible modeling tool for such a complex system. In Figures 3a and 3b, the bi-variate long-chain branching – molecular weight (LCB-MW) and branching order – branch molecular weight (BO-BMW) distributions are depicted for a specific industrial grade (Meimaroglou et al., 2011) of low-density polyethylene (LDPE) produced in a high-pressure tubular reactor. The first distribution reveals the existence of high-molecular-weight PE chains with high contents of long-chain branches, while the second distribution provides detailed information on the molecular weight of the PE long-chain branches as well as on their relative position with respect to the linear "backbone" carbon sequence of the polymer chains.
Figure 3. Joint (a) LCB-MW and (b) BO-BMW distributions as calculated by the stochastic kinetic/topology MC method for an industrial LDPE grade.
On the basis of the topological information provided by the stochastic kinetic/topology MC algorithm, typical random-walk simulations can be carried out to provide a series of random 3-D spatial configurations of the polymer chains and, thus, to calculate properties directly associated with the volume occupied by the polymer chains in the melt. Such properties include the radius of gyration, Rg, and the hydrodynamic radius, Rh, of the polymer chains, as well as their branching factor, g. In Figures 4a and 4b, the radius of gyration, Rg, and the branching factor, g, are depicted for a population of 118.5×10⁴ PE polymer chains, as calculated for the same industrial LDPE grade (Meimaroglou et al., 2011). The dots in Figures 4a and 4b represent the respective Rg and g values, averaged over ten simulation runs for each individual polymer chain of the sample population, while the continuous lines represent the respective average Rg and g curves. A minimal sketch of such a random-walk estimate is given below.
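As an illustration of this type of random-walk calculation, the following sketch estimates Rg (in bond-length units) for a freely-jointed linear chain; the actual simulations use the branched topologies produced by the kinetic/topology MC algorithm, and the names and defaults here are assumptions of this illustration:

```python
import numpy as np

def radius_of_gyration(n_bonds, n_runs=10, rng=None):
    """Average radius of gyration of a freely-jointed linear chain,
    estimated from random-walk 3-D configurations (bond-length units)."""
    rng = np.random.default_rng() if rng is None else rng
    rg2 = []
    for _ in range(n_runs):
        steps = rng.normal(size=(n_bonds, 3))
        steps /= np.linalg.norm(steps, axis=1, keepdims=True)  # unit bond vectors
        pos = np.vstack([np.zeros(3), np.cumsum(steps, axis=0)])
        centered = pos - pos.mean(axis=0)
        rg2.append(np.mean(np.sum(centered**2, axis=1)))       # Rg^2 of one run
    return np.sqrt(np.mean(rg2))

# Branching factor at equal chain length: g = <Rg^2>_branched / <Rg^2>_linear
```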
[Figure 4 appears here: panel (a) plots Rg/bond length versus chain length for linear and branched chains; panel (b) plots the branching factor, g, versus chain length.]
Figure 4. (a) Radius of gyration and (b) branching factor distributions as calculated for a sample of 118.5×10⁴ PE polymer chains.
4. Conclusions In the present work, the role of stochastic MC simulations as an efficient computational tool for the simulation of multi-variate physico-chemical systems, within the general framework of population balances, was presented. The applicability of the MC method to different length scales of a process as a multi-scale simulation tool was clearly illustrated in two representative examples, namely the prediction of the bi-variate PSD of particulate processes under the action of particle growth and aggregation mechanisms and the calculation of distributed molecular properties of highly-branched PE polymer chains on the basis of their exact topological architecture.
References
A.H. Alexopoulos, A.I. Roussos, C. Kiparissides, 2004, Part I: Dynamic evolution of the particle size distribution in particulate processes undergoing combined particle growth and aggregation, Chemical Engineering Science, 59, 5751–5769.
C. Das, N.J. Inkson, D.J. Read, M.A. Kelmanson, T.C.B. McLeish, 2006, Computational linear rheology of general branch-on-branch polymers, Journal of Rheology, 50, 207–234.
C. Kiparissides, A. Krallis, D. Meimaroglou, P. Pladis, A. Baltsas, 2010, From molecular to plant-scale modeling of polymerization processes: A digital high-pressure low-density polyethylene production paradigm, Chemical Engineering Technology, 33, 1754–1766.
D. Meimaroglou, A.I. Roussos, C. Kiparissides, 2006, Part IV: Dynamic evolution of the particle size distribution in particulate processes. A comparative study between Monte Carlo and the generalized method of moments, Chemical Engineering Science, 61, 5620–5635.
D. Meimaroglou, P. Pladis, A. Baltsas, C. Kiparissides, 2011, Prediction of the molecular and polymer solution properties of LDPE in a high-pressure tubular reactor using a novel Monte Carlo approach, Chemical Engineering Science, In Press (doi:10.1016/j.ces.2011.01.003).
D. Ramkrishna, 1985, The status of population balances, Reviews in Chemical Engineering, 3, 49–95.
F. Gelbard, J.H. Seinfeld, 1978, Numerical solution of the dynamical equation for particulate systems, Journal of Computational Physics, 28, 357–375.
H.M. Hulburt, S. Katz, 1964, Some problems in particle technology. A statistical mechanical formulation, Chemical Engineering Science, 19, 555–574.
M. Smith, T. Matsoukas, 1998, Constant-number Monte Carlo simulation of population balances, Chemical Engineering Science, 53(9), 1777–1786.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Modeling of a batch emulsion copolymerization reactor in the presence of a chain transfer agent: estimability analysis, parameter identification and experimental validation
B. Benyahia,a,b M. A. Latifi,a C. Fonteix,a F. Pla a
a Laboratoire Réactions et Génie des Procédés, CNRS-ENSIC, 1 rue Grandville, BP 20451, 54001 Nancy Cedex, France. E-mail address: [email protected]
b Process Systems Engineering Laboratory, Department of Chemical Engineering, MIT, 77 Massachusetts Avenue, Cambridge MA 02139, USA
Abstract
This paper deals with the estimability analysis, parameter identification and experimental validation of a model developed for a batch reactor where the emulsion copolymerization of styrene and butyl acrylate in the presence of n-dodecyl mercaptan as a chain transfer agent takes place. Accurate estimation of the model parameters is required to obtain reliable predictions of the product end-use properties. However, due to the mathematical model structure and to a possible lack of measurements, the estimation of some parameters may be impossible. The main limitations to parameter estimability are the weak effect of some parameters on the measured outputs and the correlation between these effects. The objective of the method developed in this paper is to determine the subset of the most influential parameters that can be estimated from the available experimental data when the complete set of model parameters cannot be estimated. In the case study, it was shown that only 21 parameters out of the 49 involved in the model were estimable. The values of the remaining 28 non-estimable parameters were taken either from previous studies or from the literature. Moreover, a new method has been developed to determine the confidence domain for the set of estimable and identified parameters.
Keywords: Emulsion copolymerization model, estimability analysis, parameter identification, confidence domain.
1. Introduction
Process systems with large and complex mathematical models are commonly encountered in the polymerization, biological and water treatment fields. The common feature exhibited by this kind of model is the large number of parameters to be estimated. An accurate estimation of all the parameters is usually impossible, and their number often has to be reduced due to the insufficient information contained in the available experimental data. A first step in the development of a reliable mathematical model, prior to the parameter identification problem, is to evaluate the estimability of the parameters and to determine the subset of potentially estimable parameters. Due to the model structure and a possible lack of measurements, estimation of some parameters may be impossible regardless of
the amount of data available. The main limitations to parameter estimability are the weak influence of some parameters on the measured outputs and the correlation between the parameters' effects. The estimation of these parameters can lead to a significant degradation of the predictive capability of the model. Different methods based on sensitivity analysis have been proposed in the literature to rank the parameters (Weijers and Vanrolleghem, 1997; Li et al., 2004). Moreover, orthogonalization-based methods are more and more attractive (Yao et al., 2003; Lund and Foss, 2008; Chu et al., 2009). These methods are similar and make it possible to distinguish the least correlated parameters. The method developed by Yao et al. (2003) is particularly efficient and has recently been applied to several systems (Jayasankar et al., 2009; Quiniou, 2009; Ngo, 2009; Surisetty et al., 2010). This method can be used prior to the experimental campaign to identify the best parameter candidates to be estimated, based on the model predictions and initial nominal values of the parameters (generally obtained from the literature). When a parameter is not estimable under the experimental design adopted, the latter may be changed so that the targeted parameter becomes estimable under the new conditions. In this work, we applied the sequential orthogonalization method developed by Yao et al. (2003) to the mathematical model of a fed-batch emulsion copolymerization of styrene and butyl acrylate in the presence of n-dodecyl mercaptan as a chain transfer agent (CTA). This model consists of a system of differential algebraic equations (DAEs) derived from mass, population and moment equations (Benyahia et al., 2010). This system involves 49 unknown kinetic and thermodynamic parameters, many of which cannot be accurately estimated due to the lack of the needed experimental data. Moreover, a new approach has been developed to assess the confidence domain of the identified parameters.
2. Estimability analysis The development of an effective solution to the parameter selection problem requires the quantification of the influence of each parameter on the measured outputs (Yao et al., 2003). The first step of the estimability analysis method is the evaluation of the normalized sensitivity coefficients as follows,
$$ s_{ij}\Big|_{t=t_k} = \frac{\bar{p}_j}{y_i(t_k)} \left.\frac{\partial \hat{y}_i}{\partial p_j}\right|_{t=t_k} \qquad (1) $$

where $\bar{p}_j$ is the nominal value of the parameter $p_j$, and $y_i(t_k)$ is the corresponding value of the predicted output $\hat{y}_i$ at time $t_k$.
The matrix of sensitivity coefficients, Z, is then obtained for all outputs and measurements. In this matrix, each column represents the effect of a given parameter on all the outputs at the different measurement times, whereas each row represents the effect of all the parameters on a given output at a fixed measurement time.
$$ Z = \begin{bmatrix} s_{11}\big|_{t=t_1} & \cdots & s_{1 n_p}\big|_{t=t_1} \\ \vdots & & \vdots \\ s_{n_y 1}\big|_{t=t_1} & \cdots & s_{n_y n_p}\big|_{t=t_1} \\ s_{11}\big|_{t=t_2} & \cdots & s_{1 n_p}\big|_{t=t_2} \\ \vdots & & \vdots \\ s_{n_y 1}\big|_{t=t_{n_m}} & \cdots & s_{n_y n_p}\big|_{t=t_{n_m}} \end{bmatrix} \qquad (2) $$
2.1. Algorithm
The algorithm used is summarized as follows:
1. Compute the magnitude of each column (the sum of squares of the elements) of Z.
2. Select the parameter whose column in Z has the largest magnitude as the first estimable parameter.
3. Mark the corresponding column as $X_L$ (L = 1 for the first iteration).
4. Compute $Z_L$, the prediction of the full sensitivity matrix Z, using the subset of columns $X_L$: $Z_L = X_L (X_L^T X_L)^{-1} X_L^T Z$.
5. Compute the residual matrix $R_L = Z - Z_L$.
6. Compute the sum of squares of the residuals in each column of $R_L$. The column with the largest magnitude corresponds to the next estimable parameter.
7. Select the corresponding column in Z and augment the matrix $X_L$ by including the new column. Denote the augmented matrix as $X_{L+1}$.
8. Advance the iteration counter by one and repeat steps 4 to 7 until the column of largest magnitude in the residual matrix is smaller than a prescribed cut-off value.
A NumPy sketch of these steps is given below.
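A minimal NumPy sketch of steps 1-8, assuming the sensitivity matrix Z of Eq. (2) is available (the function name and cut-off value are illustrative):

```python
import numpy as np

def rank_estimable_parameters(Z, cutoff=1e-4):
    """Sequential orthogonalization (steps 1-8 above): rank the parameter
    columns of the sensitivity matrix Z by influence and independence."""
    mags = np.sum(Z**2, axis=0)              # step 1: column magnitudes
    selected = [int(np.argmax(mags))]        # step 2: most influential column
    while True:
        X = Z[:, selected]                   # steps 3/7: selected columns X_L
        # step 4: Z_L = X_L (X_L^T X_L)^(-1) X_L^T Z, the projection of Z
        ZL = X @ np.linalg.solve(X.T @ X, X.T @ Z)
        R = Z - ZL                           # step 5: residual matrix R_L
        res = np.sum(R**2, axis=0)           # step 6: residual magnitudes
        res[selected] = 0.0                  # ignore already-selected columns
        j = int(np.argmax(res))
        if res[j] < cutoff:                  # step 8: cut-off reached
            return selected
        selected.append(j)                   # step 7: augment X_L
```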
The algorithm was applied to the 49 parameters of the process model. As a result, only 21 parameters were distinguished. It is noteworthy that the mathematical model was aimed to predict satisfactorily and simultaneously the global conversion (Xove), the fraction of residual styrene (Fr2), the number- and weight-average molecular weights (Mn, Mw) and the average particle diameter (dp). The 21 parameters were determined through the minimization of the following maximum likelihood criterion:

$$ \min_{p} J(p) = \sum_{i=1}^{n_y} n_{m_i} \ln\left( \sum_{j=1}^{n_{m_i}} \big( y_{ij} - \hat{y}_i(x,p,t_{ij}) \big)^2 \right) $$
$$ \text{s.t.} \quad \dot{x} = f(x,u,p,t), \quad x(t_0) = x_0, \quad p_l \le p \le p_u \qquad (3) $$

where $n_y$ is the number of measured variables or outputs (Xove, Fr2, Mn, Mw, dp), $y_{ij}$ is the measurement of the ith output at the jth time $t_{ij}$, and $n_{m_i}$ is the total number of measurements for a given output i. Finally, the set of parameters obtained has been used for the model validation in both batch and fed-batch conditions. The results obtained showed a good agreement between model predictions and experimental measurements (Benyahia, 2009).
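For illustration, the criterion of Eq. (3) might be evaluated as in the following sketch (`predict`, `times` and `y_meas` are placeholder names, and the DAE constraint is assumed to be integrated inside the model prediction):

```python
import numpy as np

def max_likelihood_criterion(p, times, y_meas, predict):
    """Criterion J(p) of Eq. (3): y_meas is a list of measurement arrays,
    one per output i; predict(p, t, i) returns the model prediction."""
    J = 0.0
    for i, y_i in enumerate(y_meas):
        resid = y_i - np.array([predict(p, t, i) for t in times[i]])
        J += len(y_i) * np.log(np.sum(resid**2))   # n_mi * ln(sum of squares)
    return J
```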
3. Confidence domains
The accuracy of the parameter estimation is reflected in the confidence intervals of each parameter. Moreover, these intervals are good indicators of correlations between parameters. However, the determination of these intervals without approximations is quite a challenging task, particularly in the case of nonlinear systems. In this work, a new approach is implemented to assess the confidence domain. It consists in generating a subset of the parameter vectors that lie within the confidence domain, instead of the common approach based on the determination of the contours of the confidence domain. The errors between the model predictions and the experimental measurements are supposed to be Gaussian random variables. The Fisher-Snedecor test is used to obtain the following expression, which is given without proof for the sake of brevity:
$$ J(p) - J(p^*) \le \frac{n_m (n_p + n_y + 1)}{n_m - n_p - n_y - 1}\, F_\beta(n_p + n_y + 1,\; n_m - n_p - n_y - 1) \qquad (4) $$
where J(p*) is the value of the maximum likelihood criterion for the optimal parameters, nm is the total number of measurements, and Fβ is the Fisher-Snedecor distribution. Equation (4) is finally used as a constraint in a new optimization problem, stated as follows:
$$ \min_{p} J(p) = \sum_{i=1}^{n_y} n_{m_i} \ln\left( \sum_{j=1}^{n_{m_i}} \big( y_{ij} - \hat{y}_i(x,p,t_{ij}) \big)^2 \right) $$
$$ \text{s.t.} \quad \dot{x} = f(x,u,p,t), \quad x(t_0) = x_0 $$
$$ J(p) - J(p^*) - \frac{n_m (n_p + n_y + 1)}{n_m - n_p - n_y - 1}\, F_\beta(n_p + n_y + 1,\; n_m - n_p - n_y - 1) \le 0 \qquad (5) $$
$$ p_l \le p \le p_u $$
This problem has been solved using a genetic algorithm (Benyahia, 2009). The confidence domains obtained for the 21 parameters showed a good agreement with the estimability analysis approach, particularly for the outranked parameters (Figure 1). A rejection-sampling sketch of the same idea is given below.
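For illustration only, the confidence-domain idea of Eqs. (4)-(5) can be mimicked by plain rejection sampling; the authors solve a constrained optimization with a genetic algorithm instead, the bound below follows Eq. (4) as reconstructed above, and all names are placeholders:

```python
import numpy as np
from scipy.stats import f as fisher_f

def sample_confidence_domain(J, p_opt, p_lo, p_hi, n_m, n_p, n_y,
                             beta=0.95, n_samples=100000, rng=None):
    """Generate random parameter vectors within the bounds and keep those
    satisfying the Fisher-Snedecor constraint of Eq. (4)."""
    rng = np.random.default_rng() if rng is None else rng
    dof1 = n_p + n_y + 1
    dof2 = n_m - n_p - n_y - 1
    bound = n_m * dof1 / dof2 * fisher_f.ppf(beta, dof1, dof2)
    J_opt = J(p_opt)
    accepted = []
    for _ in range(n_samples):
        p = rng.uniform(p_lo, p_hi)
        if J(p) - J_opt <= bound:        # constraint of Eq. (4)
            accepted.append(p)
    return np.array(accepted)
```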
Figure 1: Confidence domains of a sample of estimable and identified parameters: A. f0 (a parameter related to the initiator efficiency) vs. s (swelling parameter of the particles). B. f0 vs. kd0 (kinetic coefficient of the initiator decomposition).
4. Conclusions
An estimability analysis based on a sequential orthogonalization procedure has been carried out to distinguish the parameters potentially estimable from the experimental measurements. Thanks to this approach, the 21 most influential and least correlated parameters have been selected out of the 49 parameters of the model. These parameters were identified using the available experimental data. The model was then validated, and a good agreement between the model predictions and the measurements was achieved. A new method has been developed to assess the confidence domain using an optimization scheme. The results obtained showed a good agreement with the estimability approach, particularly for the outranked parameters, for which correlations are very weak and confidence domains are very narrow.
References
B. Benyahia, 2009, Modélisation, expérimentation et optimisation multicritère d'un procédé de copolymérisation en émulsion en présence d'un agent de transfert de chaîne, PhD Thesis, INPL, Nancy, France.
B. Benyahia, M. A. Latifi, C. Fonteix, F. Pla, S. Nacef, 2010, Emulsion copolymerization of styrene and butyl acrylate in the presence of a chain transfer agent. Part 1: Modeling and experimentation of batch and fed-batch processes, Chemical Engineering Science, 65, 850-869.
Y. Chu, Z. Huang, J. Hahn, 2009, Improving prediction capabilities of complex dynamic models via parameter selection and estimation, Chemical Engineering Science, 64, 4178-4185.
B. R. Jayasankar, A. Ben-Zvi, B. Huang, 2009, Identifiability and estimability study for a dynamic solid oxide fuel cell model, Computers & Chemical Engineering, 33, 484-492.
R. Li, M. A. Henson, M. J. Kurtz, 2004, Selection of model parameters for off-line parameter estimation, IEEE Transactions on Control Systems Technology, 12 (3), 402-412.
B. F. Lund, B. A. Foss, 2008, Parameter ranking by orthogonalization - Applied to nonlinear mechanistic models, Automatica, 44, 278-281.
V.V. Ngo, 2009, Modélisation du transport de l'eau et des hydrocarbures aromatiques polycycliques (HAP) dans les sols de friches industrielles, PhD Thesis, INPL, Nancy, France.
S. Quiniou, 2009, Modélisation, simulation et analyse expérimentale du transport de matière et de chaleur dans les textiles, PhD Thesis, INPL, Nancy, France.
K. Surisetty, H. D. H. Siegler, W. C. McCaffrey, A. Ben-Zvi, 2010, Model re-parameterization and output prediction for a bioreactor system, Chemical Engineering Science, 65, 4535-4547.
S. R. Weijers, P. A. Vanrolleghem, 1997, Procedure for selecting best identifiable parameters in calibrating activated sludge model no. 1 to full-scale plant data, Water Science and Technology, 36 (5), 69-79.
K. Z. Yao, B. M. Shaw, B. Kou, K. B. McAuley, D. W. Bacon, 2003, Modelling ethylene/butene copolymerization with multi-site catalysts: parameter estimability and experimental design, Polymer Reaction Engineering, 11, 3, 563-588.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Multiscale modeling of chemical vapor deposition of silicon
Nikolaos Cheimarios,a Sokratis Garnelis,a George Kokkoris,b Andreas G. Boudouvis a
a School of Chemical Engineering, National Technical University of Athens, Athens 15780, Greece
b Institute of Microelectronics, National Center for Scientific Research “Demokritos”, Athens 15310, Greece
Abstract A multiscale computational study of chemical vapor deposition (CVD) of silicon (Si) from silane (SiH4) is presented. The macro-scale of the bulk of a CVD reactor is coupled with the micro-scale of a predefined topography consisting of trenches on the wafer surface through a multiscale modeling framework. The coupling is implemented by the correction of the boundary condition for the consumption of species on the wafer which takes into account the existence of the micro-topography. The effect of the wafer temperature on the Si film uniformity inside the trenches as well as the effect of the micro-topography on the species consumption on the wafer are studied. Keywords: chemical vapor deposition, multiscale modeling, transport phenomena, profile evolution, chemistry model.
1. Introduction
The deposition of silicon (Si) films is one of the key steps in the fabrication of semiconductor and micro-sensor devices and of micro- and nano-electromechanical systems (MEMS, NEMS) [1]. These films are usually produced with Chemical Vapor Deposition (CVD). Due to the wide range of their applications, the properties (film thickness, uniformity, surface morphology) of Si films produced by CVD on flat wafer surfaces have been studied both experimentally [2] and theoretically [3,4]. Nowadays, the specifications of the films refer to properties at the micro- or nano-scale, and single-scale conventional CVD modeling methods are not adequate. More advanced multiscale modeling methods ought to be implemented for studying the physical/chemical phenomena in the co-existing (multiple) scales, e.g. in the filling of a micro-trench or the growth of nano-roughness of a coating. A review of multiscale CVD modeling approaches can be found in [5], and particularly for Si deposition in [6,7]. In the present study, multiscale modeling is performed by coupling the macro-scale of the bulk of a CVD reactor [Fig. 1(a)] with the micro-scale of a predefined topography [Fig. 1(b)] on the wafer surface. A Reactor Scale Model (RSM), which describes the transport and chemical phenomena in the bulk of the CVD reactor, is coupled with a model describing the evolution of the film growth on a micro-topography on the wafer, termed the Feature (e.g. trench, hole) Scale Model (FSM). The coupling between the RSM and the FSM is implemented [8] by a correction of the boundary condition for the consumption of species on the wafer, i.e. by imposing an effective consumption on the wafer which takes into account the existence of the micro-topography.
Compared to previous works [6], the contribution of this work rests on coupling the multiple length scales with two-way interaction between the scales through a simple, yet effective, mass conservation-type condition at the effective macro-/micro-scale interface. The coupling of the co-existing scales is performed directly; no meso-scale model, as in the work of Gobbert et al. [9], is used. Moreover, the coupling of a commercial CFD code with a home-made code provides a robust and flexible macro-scale computational environment. Finally, the time consuming computations in the micro-scale are efficiently treated with a master-worker parallel technique in a high performance computational environment [10]. In this work, the CVD of Si from silane (SiH4) inside long rectangular trenches (micro-scale) is manipulated in terms of macro-variables. Namely, the effect of the wafer temperature on the uniformity (conformality) of deposition inside the trenches is studied. The influence of the micro-topography on the consumption rate of the species participating in the surface reactions is also investigated.
2. Chemistry model for Si CVD
Three reactions are considered for the deposition of Si; one in the bulk phase of the reactor, SiH4(g) ⇌ SiH2(g) + H2(g), leading to the decomposition of SiH4 to silylene (SiH2) and hydrogen (H2), and two at the surface of the wafer, namely SiH4(g) → Si(s) + 2H2(g) and SiH2(g) → Si(s) + H2(g), leading to the deposition of Si. The reaction set is a simplified one, accurately reproducing the growth rate along the wafer as well as the Arrhenius plot of the full set originating from the work of Kleijn [4].
3. RSM, FSM and the coupling methodology
Regarding the RSM, the conservation equations, namely the momentum, continuity, energy and species equations, with the appropriate boundary conditions [8], are solved numerically at steady state for the velocity, pressure, temperature, and species mass fractions. The computations are performed with the commercial software Ansys/Fluent [11], which incorporates the finite volume method [8, 12]. In the micro-scale, the FSM [13] is an integrated framework developed in C/C++. It consists of a ballistic model [14], suitable for treating species transport at high Kn number conditions, for the calculation of the local fluxes of each reactant inside the trenches, a surface chemistry model for the calculation of the deposition rate, and a profile evolution algorithm based on the level set method [13] for “growing” the film inside the trenches. A schematic representation of the coupling methodology is shown in Fig. 1. The coupling of the RSM with the FSM is accomplished by the effective reactivity factor [15], ε, a parameter introduced to correct the boundary condition for the species equation. The kth surface reaction rate in the macro-scale is multiplied by εk, and the boundary condition for the species i participating in the surface reaction(s) reads
$$ \rho D_i\, \mathbf{n} \cdot \nabla \omega_i = M_i \sum_{k=1}^{m} \gamma_{i,k}\, \varepsilon_k\, r^{s}_{macro,k} \qquad (1) $$

where ρ is the density (kg/m³), ωi the mass fraction, Mi the molecular weight (kg/kmol), γi,k the stoichiometric coefficient of species i in the kth surface reaction, Di the diffusion coefficient (m²/s), m the number of reactions that species i participates in, rs_macro,k the rate of the kth surface reaction calculated by Fluent, and n the unit normal
Figure 1. The schematic of the coupling methodology. (a) Macro-scale: CVD reactor. (b) A boundary cell at the top of a cluster of features. A is the total surface of the features in the cluster; the surface of the boundary cell through which information [i.e. mass fractions, density, and temperature (ωi, ρ, T)] is transferred from the macro- to the micro-scale and vice versa is also indicated.
vector to the surface of the wafer. In the course of the simulation, εk is corrected on the face of every boundary cell j adjacent to the wafer surface, so that the RSM takes into account the increased consumption of species i due to the existence of the micro-topography on the wafer; εk,j is the effective reactivity factor on cell j. The correction is performed through the fixed point iteration scheme
$$ \varepsilon_{k,j}^{(n+1)} = \varepsilon_{k,j}^{(n)}\, \frac{r^{s}_{micro,k,j}}{\varepsilon_{k,j}^{(n)}\, r^{s}_{macro,k,j}} \qquad (2) $$
where (n+1) and (n) correspond to two successive iterations. The term rs_micro,k,j is the surface reaction rate computed by the FSM in the cluster of trenches which corresponds to the boundary cell j. The multiscale computations start with εk = 1, which corresponds to the case without micro-topography. The iterative procedure continues until convergence on all εk,j. Upon convergence, the local deposition (growth) velocities are fed to the level set method and deposition occurs for Δt. Details on the RSM, the FSM and the coupling methodology can be found in [8]. The computations in the micro-scale can be performed independently for each boundary cell [Fig. 1(b)]. To exploit this characteristic of the FSM, a synchronous master-worker parallel technique [10] is implemented by using the Message Passing Interface (MPI). The speedup achieved is almost proportional to the number of processors [10].
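A compact sketch of this fixed-point correction, assuming Eq. (2) as reconstructed above (`rsm_solve` and `fsm_solve` are placeholder callables standing for the Fluent-based RSM and the C/C++ FSM, not their actual interfaces):

```python
import numpy as np

def correct_reactivity_factors(rsm_solve, fsm_solve, eps0, tol=1e-6, max_iter=50):
    """Iterate the effective reactivity factors until the macro-scale
    consumption matches the micro-scale (FSM) rates on each boundary cell."""
    eps = np.asarray(eps0, dtype=float).copy()   # start with eps = 1 (flat wafer)
    for _ in range(max_iter):
        r_macro = rsm_solve(eps)   # macro-scale surface rates, one per boundary cell
        r_micro = fsm_solve(eps)   # feature-scale rates of the trench clusters
        eps_new = eps * r_micro / (eps * r_macro)   # update of Eq. (2)
        if np.max(np.abs(eps_new - eps)) < tol:     # convergence on all eps_k,j
            return eps_new
        eps = eps_new
    return eps
```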
4. Results and discussion
The model reactor used for the computations is a vertical, single wafer, cold wall CVD reactor [Fig. 1(a)]. The dimensions of the reactor were taken from [16] and the operating conditions from [4] (base case), i.e. a total flow rate of 1000 sccm, a mole fraction of SiH4 of 0.1 in nitrogen (N2) carrier gas and an operating pressure of 133 Pa. The predefined micro-topography on the wafer consists of trenches of 1 μm width and 1.5 μm depth with uniform density, 8 trenches per 32 microns, along the wafer. We use a macro-parameter, namely the wafer temperature, Tw, to manipulate the deposition profile inside the trenches; low Tw yields uniform deposition inside the trenches and high Tw yields the formation of a void. A decrease of the mole fraction of SiH4 in the inlet has an effect on the film uniformity similar to that of a wafer temperature increase (not shown in this work).
Figure 2. Trench profile evolution at the cluster of trenches extending from 0.045 to 0.05 m from the center of the wafer for (a) Tw = 900 K, (b) Tw = 1200 K, and (c) Tw = 1400 K. The profiles are at equidistant time intervals of 600 s, 20 s, and 30 s, respectively. (d) Average consumption rates of SiH4 and SiH2 on the wafer vs. deposition time (Tw = 900 K).
In Figs. 2(a) to (c) the trench profile evolution is shown for the three cases of Tw (900, 1200, and 1400 K). For the 900 K case, the trench fills uniformly. This is due to the low value of the effective sticking coefficient of SiH4. A low effective sticking coefficient means that the number of reemissions – and consequently the redistribution of flux inside the trench – is high. As a result, the flux is almost the same for every elementary surface of the trench and the deposition is isotropic. The influence of the sticking coefficient of SiH2 on the profile evolution, even though it has a high value [4], is insignificant, since the deposition rate of Si due to SiH2 is small (~2x10-9 kg m-2s-1) compared to SiH4 (~10-7 kg m-2s-1). For Tw = 1200 K, the effective sticking coefficient of SiH4 increases (~10-2) and a void starts to form as the trench closes [Fig. 2(b)]. The redistribution of flux is not so effective in this case, where the sticking coefficient is higher. The deposition rate of Si by SiH2 (see section 2), albeit its increment (~9x10-9 kg m-2s-1), is still not significant. A further increment of Tw causes the void to increase, as can be seen from Fig. 2(c). In this case, the effective sticking coefficient of SiH4 remains the same as in the case of 1200 K, but the contribution of SiH2, with its high sticking coefficient, becomes important, since the deposition rates from SiH4 and SiH2 are of equal magnitude (~2x10-5 kg m-2s-1); the void is attributed to this high value. The void formation with increasing wafer temperature has also been reported in the experimental works of Kinoshita et al. [6] for Si CVD, Shell et al. [17] for tungsten silicide (WSix) CVD and Kim et al. [18] for tungsten (W) CVD. A comparison of the results coming from the multiscale framework with those coming from a fully macroscopic model which “ignores” the micro-scale features on the wafer is performed in Fig. 2d. In particular, the average consumption rates along the wafer (reduced to the surface of a wafer without micro-topography [8]) of SiH4 and of SiH2 are shown for the case of Tw = 900 K. The consumption rate of SiH4 is greater compared to the case without micro-topography (flat wafer) at t = 0 s: the existence of the micro-topography increases the surface where Si can be deposited (at t = 0 s the trenches are empty), and for that reason the consumption of SiH4 increases. As Si deposition proceeds, trench filling reduces the surface where Si can be deposited and consequently the consumption rate decreases. The consumption rate of SiH2 is lower compared to the case without micro-topography at t = 0 s, due to the increased consumption of SiH4 at t = 0 s; SiH2 is produced from SiH4 by a volumetric reaction (see section 2). As Si deposition proceeds, the consumption of SiH4 on the wafer decreases and consequently the production of SiH2 increases. After trench filling, both
consumption rates tend to the values without micro-topography [Fig. 2(d)], as the micro-topography tends to a flat surface.
5. Conclusions
A coupling methodology for multiscale modeling of Si CVD is presented. The coupling of the co-existing scales in the CVD process of Si is achieved by introducing a correction (the effective reactivity factor) in the boundary condition for the species equation. The deposition of Si inside the trenches is manipulated through the wafer temperature (Tw); for high Tw a void is formed inside the trenches, in contrast to the case of low Tw where the trenches fill uniformly. Moreover, the existence of the micro-topography increases the average consumption rate of SiH4 and decreases the average consumption rate of SiH2. The latter is attributed to the volumetric reaction in the bulk phase of the reactor.
Acknowledgments. This work was partially supported by the National Technical University of Athens through the Basic Research Program ΠΕΒΕ – 2007 and the State Scholarships Foundation, through a fellowship to N.C.
References
[1] J. D. Plummer, M. D. Deal, 2000, Silicon VLSI technology: fundamentals, practice, and modeling, Prentice Hall, 509-605.
[2] C.H.J. Van Den Brekel and L.J.M. Bollen, J. Cryst. Growth, 54 (1980) 310.
[3] M.E. Coltrin, R.J. Kee and J.A. Miller, J. Electrochem. Soc., 133 (1986) 1213.
[4] C.R. Kleijn, J. Electrochem. Soc., 138 (1991) 2190.
[5] C.R. Kleijn, R. Dorsman, K.J. Kuijlaars, M. Okkerse and H. van Santen, J. Cryst. Growth, 303 (2007) 362.
[6] S. Kinoshita, S. Takagi, T. Kai, J. Shiozawa and K. Maki, Jpn. J. Appl. Phys., 44 (2005) 7855.
[7] A. Barbato, A. Fiorucci, M. Rondanini and C. Cavalloti, Surf. Coat. Tech., 201 (2007) 8884.
[8] N. Cheimarios, G. Kokkoris and A.G. Boudouvis, Chem. Eng. Sci., 65 (2010) 5018.
[9] M.K. Gobbert, C.A. Ringhofer and T.S. Cale, J. Electrochem. Soc., 143 (1996) 2624.
[10] N. Cheimarios, G. Kokkoris and A.G. Boudouvis, An efficient parallel fixed point iteration method for multiscale modeling of chemical vapor deposition processes, preprint (2010).
[11] Ansys v12.1sp1, ANSYS Inc., 2010. http://www.ansys.com/
[12] T. C. Xenidou, N. Prud'homme, C. Vahlas, N. C. Markatos and A. G. Boudouvis, J. Electrochem. Soc., 157 (2010) D633.
[13] G. Kokkoris, A. Tserepi, A.G. Boudouvis and E. Gogolides, J. Vac. Sci. Technol. A, 22 (2004) 1896.
[14] G. Kokkoris, A.G. Boudouvis and E. Gogolides, J. Vac. Sci. Technol. A, 24 (2006) 2020.
[15] S.T. Rodgers, K.F. Jensen, J. Appl. Phys., 83 (1998) 524.
[16] H. van Santen, C.R. Kleijn, H.E.A. van der Akker, Int. J. Heat Mass Tran., 44 (2001) 659.
[17] B. Shell, A. Sänger, G. Schulze-Icking, K. Pomplun, W. Krautschneider, Thin Solid Films, 443 (2003) 97.
[18] B. Kim, Y. Akiyama, N. Imaishi and H.C. Park, Jpn. J. Appl. Phys., 38 (1999) 2881.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
3D Cellular automata for modeling of spray freeze drying process
S. Ivanov,a A. Troyankin,a P. Gurikov,a A. Kolnoochenko,a N. Menshutina a
a D. Mendeleev University of Chemical Technology of Russia (MUCTR), CAPE Department, Miusskaya sq. 9, Moscow, 125047, Russia
Abstract
This paper presents cellular automata modeling of the atmospheric freeze drying process with active hydrodynamics. The presented approach makes it possible to calculate drying kinetics taking into account the internal structure of a particle, as well as heat and mass transfer and phase changes. nVidia CUDA technology and a high-performance computer with parallel computing were used for the model calculations.
Keywords: freeze drying, cellular automata, modeling, parallel computing
1. Introduction
Models of complex physical and chemical systems with phase changes in non-regular porous media are needed to describe various objects and phenomena: from frozen soils to pharmaceutical drug release systems. Such systems are rather hard to model in the traditional way, by means of differential equations; that is why there is a need to look for other instruments that, on the one hand, are capable of accurately depicting the physical nature of the examined phenomenon and, on the other hand, are relatively easy to calculate. Nowadays, cellular automata are becoming more and more popular for describing diffusion, adsorption, coagulation, aggregation, etc. They are systems consisting of a huge number of cells, with discretely defined time and space, that can be described by a finite number of states. Common physical properties and characteristics can be used to describe these cells' states, e.g. concentration, temperature, pressure, etc. The state of each cell is changed at a predefined time interval in accordance with a special transition rule that defines the new state of a cell taking into account its previous state as well as the states of its neighbors; a toy example of such a rule is sketched below. This paper presents cellular automata modeling of the atmospheric freeze drying process of a single spherical particle.
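As a toy illustration of a neighbor-based transition rule (not the rules of this work, which are given in Section 3), a synchronous update of a 3D automaton might look as follows:

```python
import numpy as np

def ca_step(state, alpha=0.15):
    """One synchronous update of a toy 3D cellular automaton: each cell's new
    state depends on its previous state and the states of its 6 face
    neighbors (periodic boundaries are used only for brevity)."""
    neighbors = sum(np.roll(state, s, axis) for axis in range(3) for s in (1, -1))
    return state + alpha * (neighbors - 6.0 * state)   # discrete Laplacian rule
```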
2. Atmospheric spray freeze drying process with active hydrodynamics
Atmospheric freeze drying with active hydrodynamics is one of the most promising methods for the production of powders and microparticles for pharmaceutical applications [H. Leuenberger]. This process allows the production of fine spherical particles with a narrow size distribution and a rather large internal surface area. It is also suitable for the drying of heat-sensitive substances. The atmospheric freeze drying process is performed in two separate stages: ultra-rapid freezing and drying. In the first stage a prepared solution is sprayed via a Sonotek ultrasonic nozzle into liquid nitrogen, where it becomes rapidly frozen. This freezing technique avoids the growth of big ice crystals, which helps to obtain a highly porous internal structure of the final particle [Zerkaev et al]. In the second stage the frozen solution is freeze dried in a separate chamber in a spouted bed. Freeze drying is performed at a temperature close to the eutectic temperature of the frozen solution, which ensures the maximum rate of the process.
The produced particles are characterized by a spherical form, narrow size distribution and high porosity. SEM photos of a freeze dried dextran particle that was used as a model substance are presented in Figure 1.
Figure 1. Dextran particle produced by atmospheric freeze drying with active hydrodynamics (closer view of the particle surface on the right side)
To estimate the internal surface area and porosity of a particle, the nitrogen adsorption method was used. The surface area was calculated by the BET model, and the pore volume and radius were obtained from BJH. The characteristics of a particle of freeze dried 10% water dextran solution are presented in Table 1.

Table 1. Characteristics of a particle of freeze dried 10% water dextran solution

Average particle diameter: 40 μm
Internal surface area: ≈ 180 m2/g
Total pore volume for pores with less than 100 nm diameter (nanopores): ≈ 1.980·10–1 cm3/g
Total pore volume: ≈ 4.3·10–1 cm3/g
3. Cellular automata modeling
3.1. Approach description
Due to the active hydrodynamic regime inside the chamber during the freeze drying process, all particles are moving in a spouted bed. This bed has a rather low density, so that for the modeling a separate particle with its closest vicinity can be selected and considered, neglecting collisions with other particles in the bed. The velocity of the air passing nearby (not farther than a boundary level of 100 nm) and through the particle is negligibly low in comparison with the air velocity in the whole chamber. This allows us to assume that heat and mass transferred from the particle would leave the modeling volume (the particle itself and its closest vicinity) and would not influence the particle behavior. In the modeled system there are two crystalline phases – ice and material – and one gaseous phase – air with some water content. It is obvious that mass transfer by diffusion takes place only in the gaseous phase; in other words, areas with air only should interact with areas with water vapor (air with some water content). Heat transfer takes place between any phase and area in the system. A 3D cellular automata model of the considered system is presented as an aggregation of equivalent cubes (or cells), each with a linear size of 10 nm. There are several types of cubes depending on what type of substance each consists of: ice, material, air. Each cube
except those at the boundaries has 6 neighbors, and each cube is assigned a pair of numbers. The first number defines the water content and the type of substance: 0.0 stands for absolutely dried air, 0.999 for maximally saturated air, 1.0 for ice, and 2.0 for the material of which the particle is made. The second number defines the heat quantity inside the cube and is calculated by multiplying the cube temperature by its heat capacity.
3.2. Structure generation
Before modeling the drying process, the internal structure of the particle should be modeled or generated. For such structure generation several techniques can be used, for example overlapping spheres, diffusion limited aggregation (DLA) and its modifications [Gurikov et al]. The influence of different structure generation techniques on the atmospheric freeze drying process is to be investigated further; in this work a modified DLA method with multiple crystallization centers is used, trying to provide correspondence between the particle characteristics from the experiment and the characteristics in the model (a sketch is given after Figure 2). The modeled internal structure of a particle with a density of 0.1 g/cm3 and a surface area of 180 m2/g is shown in Fig. 2. To obtain a spherical particle, the structure is truncated in the needed way. Empty places are filled with ice, simulating a frozen particle.
Figure 2. Internal structure of the particle (test image of a cube consisting of 32×32×32 cubes)
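A minimal sketch of DLA-type structure generation with multiple crystallization centers; the grid size, seed count and particle count are illustrative, not the values used in this work:

```python
import numpy as np

def dla_structure(n=32, n_seeds=8, n_particles=800, rng=None):
    """Grow an aggregate on an n^3 grid: random walkers stick when they
    touch the structure, which starts from several crystallization centers."""
    rng = np.random.default_rng() if rng is None else rng
    solid = np.zeros((n, n, n), dtype=bool)
    for _ in range(n_seeds):                         # multiple seeds
        solid[tuple(rng.integers(0, n, 3))] = True
    moves = np.array([[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]])
    for _ in range(n_particles):
        pos = rng.integers(0, n, 3)                  # launch a new walker
        while True:
            pos = (pos + moves[rng.integers(6)]) % n     # periodic random walk
            # stick if any of the 6 face neighbors is already solid
            if any(solid[tuple((pos + m) % n)] for m in moves):
                solid[tuple(pos)] = True
                break
    return solid
```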
3.3. Freeze drying process modeling: heat, mass transfer and sublimation
Once the structure is modeled, the next step is the modeling of the drying process. At first, heat transfer inside the particle and in the surrounding air should be taken into account. The heat quantity of cube a (with coordinates i, j, k) at the current iteration is denoted $Q_a^t$, and on the next iteration it will be $Q_a^{t+\Delta t}$. The duration of an iteration is Δt seconds. Each cube is processed during the heat transfer procedure of the model; the new heat quantity value of the cube is calculated taking into account the heat quantities of its neighbors. If Γ is the set of neighbors of the current cube a, then:

$$ Q_a^{t+\Delta t} = Q_a^t + \Delta t \sum_{b \in \Gamma} k_{ab}\, L\, (T_b^t - T_a^t) \qquad (1) $$

where $k_{ab}$ are the heat transfer coefficients between neighboring cubes and L is the length of a cube edge. The above rules are applied to all cubes in the system except those at the boundaries, because they are assumed to have constant temperature. Sublimation heat generation is also taken into account: cubes filled with ice which have at least one neighbor cube filled with air participate in the sublimation process in
accordance with the Hertz-Knudsen equation, which defines the velocity of sublimation depending on the temperature and water content of the interacting cubes:

$$ v = A\, v_{max} = A\, (p_{sat} - p_v) \sqrt{\frac{M}{2\pi R T}} \qquad (2) $$

where $v_{max}$ is the maximum sublimation velocity; 0 ≤ A ≤ 1 is the sublimation coefficient; $p_{sat}$ is the partial (saturation) pressure; $p_v$ is the pressure of vapors in the gaseous phase; M is the molar mass of water, R the gas constant and T the temperature. By multiplying v by the iteration duration, the mass of ice Δm that has been sublimated is calculated. The heat quantity consumed during sublimation is defined as ΔQ = λ Δm, where λ is the latent heat of sublimation. It is assumed that this heat quantity is distributed equally between the air and ice cubes. For ice cubes that have more than one neighboring cube with air, the velocity of sublimation is calculated with average values of the temperature and water content.
are changed to moisture contents (concentrations) are changed to diffusion coefficients
;
. (3)
(4) It is also important to take into account heat exchange caused by mass transfer. Thus total heat transfer during one iteration can be defined as:
The functioning of the described cellular automata model is performed by the consecutive application of the sublimation rules to the cubes with ice bordered by air. Then all cubes with air participate in mass transfer. Finally, all cubes of the system participate in heat transfer. The system is monitored by online estimation of the heat quantity, water content and ice mass in the particle. A condensed sketch of one such iteration is given below.
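A condensed sketch of one iteration, covering only the sublimation and heat-conduction rules (Eqs. 1-2); the mass-transfer step of Eq. (3) and the conversion of fully sublimated ice cubes to air are omitted, and `subl_rate` is a placeholder for the Hertz-Knudsen expression of Eq. (2):

```python
import numpy as np

ICE, MATERIAL = 1.0, 2.0        # type coding from Section 3.1 (air: 0.0-0.999)

def step(kind, heat, ice_mass, dt, k_heat, latent_heat, subl_rate):
    """One simplified iteration: sublimation at ice/air interfaces,
    then heat conduction between all cubes (6-connectivity; periodic
    boundaries are used here only for brevity)."""
    is_ice = kind == ICE
    is_air = kind < ICE
    # count the air neighbors of each cube
    air_nb = sum(np.roll(is_air, s, a) for a in range(3) for s in (1, -1))
    # sublimation: ice cubes bordering air lose mass and absorb latent heat
    front = is_ice & (air_nb > 0)
    dm = dt * subl_rate(heat[front])        # Hertz-Knudsen rate, Eq. (2)
    ice_mass[front] -= dm
    heat[front] -= latent_heat * dm
    # heat conduction, Eq. (1): exchange with the 6 neighbors
    lap = sum(np.roll(heat, s, a) for a in range(3) for s in (1, -1)) - 6 * heat
    heat += dt * k_heat * lap
    return kind, heat, ice_mass
```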
4. Software implementation
The described cellular automata model was programmed using the C/C++ language and nVidia CUDA technology, which was developed especially for massively parallel computations. The whole task is divided into parallel subtasks that are processed on different GPUs (graphics processing units); this allows much higher performance in comparison with standard CPU computations. Calculations were run on a PC equipped with 4 nVidia GTS 250 GPUs, each with 1 GB DDR5 memory and 128 processing units. The developed software also allows visualization of the processed data in the following ways: 3D visualization of the generated structure;
and freeze drying process visualization, represented as a cut of the overall model cube in a 2D layout. There is also the possibility to export all needed data on the drying kinetics for further processing and representation.
5. Conclusions
At the present moment most of the work is still in progress, and the abovementioned model was tested and adjusted not for a real size particle (about 40 μm) but for a much smaller test particle. The conditions and first results (2D representation) of the software testing and debugging (see Fig. 3) are as follows:
- total field size of the whole system (particle and its vicinity): 128 x 128 cubes;
- linear size of the particle: 100x100 cubes, which equals a 1000 nm particle diameter;
- temperature of the inlet air: -20 °C;
- total number of iterations until the particle is fully dried: 200 000;
- the drying kinetics from the model is comparable with the experimental one.
Figure 3. 2D representation of the freeze drying process of a single particle at different numbers of iterations: (a) iteration 0, (b) 25 000, (c) 50 000, (d) 200 000. Black cells are those with ice, red cells are those with material, dark blue to bright blue are cells with maximally to minimally humid air, and white indicates cells with absolutely dried air.
The developed model allows the simulation of the freeze drying process of a separate particle, taking into account the influence of its internal structure as well as phase change and heat and mass transfer throughout the modeled system. At the present moment the work is still in progress, and in this paper a rather simplified model is introduced: the particle and model volume are much smaller than the real ones. However, it can be seen that even the simplified model shows good correspondence with the data obtained from experiments. The next steps of the work are the modeling of a real size particle and the shift from one separate particle to the entire bed.
References
H. Leuenberger, 2002, Spray freeze-drying – the process of choice for low water soluble drugs, Journal of Nanoparticle Research, 4, 111-119.
P. Gurikov, A. Kolnoochenko, N. Menshutina, 2009, 3D reversible cellular automata for simulation of the drug release from aerogel-drug formulations, Computer Aided Process Engineering, 26, 943–947.
A. Zerkaev, P. Strashnov, A. Troyankin, N. Menshutina, 2010, Molecular dynamics modeling of protein behavior during freezing, 20th European Symposium on Computer Aided Process Engineering – ESCAPE20, 1611–1614.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Spatially 3D simulation of a catalytic monolith by coupling of 1D channel model with CFD
Jan Štěpánek,1 Petr Kočí,1 Milan Kubíček,2 František Plát,1 Miloš Marek 1
1 Dept. of Chemical Engineering, 2 Dept. of Mathematics, Institute of Chemical Technology, Prague. E-mail: [email protected], [email protected]
http://www.vscht.cz/monolith
Abstract
Modern combustion engines are required to produce a high power output while maintaining low specific fuel consumption and a low emission level. Cars then need to be equipped with increasingly complex systems of catalytic converters, enabling the oxidation of carbon monoxide and unburned hydrocarbons, the filtration and oxidation of soot, and the reduction of nitrogen oxides. The design of these complex systems includes the optimization of pressure and heat losses. With the increasing availability of processing power and effective 1D models, it is possible to simulate the dynamic behavior of a catalytic converter system on common desktop computers. In this work, a standard 1D model of a monolith converter with an optimized numerical solver is coupled to a commercial 3D CFD processing software via a newly developed interface. Simulation results obtained by the 1D channel model alone and by the coupled 3D CFD with representative 1D channels are compared.
Keywords: catalytic monolith, dynamic simulation, exhaust gas aftertreatment, CFD
1. Introduction
In recent years, spatially 1D computer simulation of monolith converters has been commonly used in the automotive industry [Güthenke et al., 2007]. Spatially 1D models provide usable results within a short computation time, although they have some disadvantages. A single channel represents the whole monolith converter, thus the radial temperature and flow distributions cannot be considered. When all hydraulic and heat transfer effects in all spatial dimensions are described, a large system of partial differential equations (i.e. mass and momentum conservation equations, heat transfer in fluids and solids) needs to be solved. Unfortunately, this increases the computational power requirements significantly, and the simulation takes hours to days instead of minutes to hours on a common desktop computer [Šnita et al., 1997; Kumar and Mazumder, 2010].
2. Methodology
The modeling of monolith converters described in this paper is based upon the coupling of a commercial CFD software (StarCD) with an effective 1D model of the monolith channel with complex, realistic, non-linear reaction kinetics, developed at our
department [Kočí et al., 2009]. Obviously, this modeling approach is a compromise between classical spatially 1D modeling and a fully 3D simulation, where everything is solved by a CFD software. The core point of the presented methodology is the development of a general interface that enables linking the desired 1D monolith channel model with the CFD software. All the processes inside the monolith channel are then simulated by the 1D model, while the CFD software calculates the flow in the inlet and outlet pipes and manages the calls of the 1D model for the individual representative channels. Such a coupling of the 1D channel model to CFD brings certain benefits: It enables the simulation of the complete exhaust gas aftertreatment system including pipes, without neglecting important factors such as the pipe geometry and the monolith housing shape. In contrast to a fully 3D simulation, the processing power demands are lower. The external 1D channel model enables not only the use of tailored numerical solution methods leading to faster computation times, but also offers the freedom to define virtually arbitrary reaction kinetics (including realistic, non-linear rate laws with a fully transient solution of adsorbed species) – a feature that is often missing in CFD tools. The parallelization methods available in the CFD software can be easily employed to further decrease the computation time. Last but not least, this approach provides absolute compatibility between the 1D and 3D simulations (i.e., one can be really sure that the representative channel model in the 3D simulation is exactly the same as in the common 1D simulation).
2.1. CFD model
The mass and momentum equations implemented in the CFD software describe general fluid flow. The CFD tool solves these equations along with heat transfer between the solid and the flowing gas [Warsi, 1981; CD-Adapco, 2009]. In Cartesian tensor notation, they are written as:
డఘ௨ డ௧
డ
൫ߩݑ ൯ ൌ ݏ
డ௫ೕ
డ డ௫ೕ
൫ߩݑ ݑ െ ߬ ൯ ൌ െ
(1) డ డ௫
ݏ
(2)
The total enthalpy of the fluid is defined as follows: డఘு డ௧
డ డ௫ೕ
൫ߩݑ ܪ ܨǡ െ ݑ ߬ ൯ ൌ
డ డ௧
ݏ ݑ ݏ
ଵ
ܪൌ ݑ ݑ ݄ ଶ
(3) (4)
For solids, the energy balance is written in the following form: డሺఘሻ డ௧
ൌ
డிǡೕ డ௫ೕ
ݏ
(5)
The term ܨǡ differs for isotropic or anisotropic thermal conductivity [CD-Adapco, 2009]. The symbols in the equations: ui is the absolute fluid velocity in direction xi, p stands for piezometric pressure, sm is the mass source, si are the momentum source components, t stands for time (s), xi represents the Cartesian coordinate, ȡ is the density and IJ stands for stress tensor components.
2.2. 1D channel model
The spatially 1D model of the catalytic monolith channel consists of equations describing the mass balances in the flowing gas, in the washcoat pores and on the catalyst surface, and the energy balances of the flowing gas and the solid phase. The equations are [Štěpánek et al., 2010]:

$$ \frac{\partial y_k(z,t)}{\partial t} = -\frac{\partial (v\, y_k)}{\partial z} - \frac{k_c a}{\varepsilon^g}\,(y_k - y_k^s), \quad k = 1,\dots,K \qquad (6) $$

$$ \frac{\partial y_k^s(z,t)}{\partial t} = \frac{k_c a}{\varepsilon^s (1-\varepsilon^g)\varphi^s}\,(y_k - y_k^s) + \frac{1}{\varepsilon^s c^s}\sum_{j=1}^{J} \nu_{k,j} R_j, \quad k = 1,\dots,K \qquad (7) $$

$$ \frac{\partial \psi_m(z,t)}{\partial t} = \frac{1}{\Psi^{cap}}\sum_{j=1}^{J} \nu_{m,j} R_j, \quad m = 1,\dots,M \qquad (8) $$

$$ \rho^g c_p^g\, \frac{\partial T(z,t)}{\partial t} = -v\, \rho^g c_p^g\, \frac{\partial T}{\partial z} + \frac{k_h a}{\varepsilon^g}\,(T^s - T) \qquad (9) $$

$$ \rho^s c_p^s\, \frac{\partial T^s(z,t)}{\partial t} = \lambda^s\, \frac{\partial^2 T^s}{\partial z^2} + \frac{k_h a}{1-\varepsilon^g}\,(T - T^s) - \varphi^s \sum_{j=1}^{J} \Delta H_j R_j \qquad (10) $$
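For illustration, a method-of-lines evaluation of the right-hand sides of Eqs. (6)-(10) might look as follows (a sketch only; as noted below, the actual solver is a tailored Fortran implementation, and all names, shapes and parameter keys here are placeholders):

```python
import numpy as np

def channel_rhs(y, ys, psi, T, Ts, v, dz, par, rates):
    """Right-hand sides of Eqs. (6)-(10) on an axial grid of N nodes.
    y, ys: (K, N) gas and pore mole fractions; psi: (M, N) coverages;
    par: dict of transport/structural parameters; rates: (J, N) rates R_j."""
    R = rates(ys, psi, Ts)
    ddz = lambda q: np.gradient(q, dz, axis=-1)           # d/dz
    dy = -ddz(v * y) - par['kca'] / par['eg'] * (y - ys)                    # (6)
    dys = (par['kca'] / (par['es'] * (1 - par['eg']) * par['phis'])
           * (y - ys) + (par['nu'] @ R) / (par['es'] * par['cs']))          # (7)
    dpsi = (par['nu_s'] @ R) / par['psicap']                                # (8)
    dT = -v * ddz(T) + par['kha'] / (par['eg'] * par['rhocp_g']) * (Ts - T)  # (9)
    dTs = (par['lams'] * ddz(ddz(Ts))
           + par['kha'] / (1 - par['eg']) * (T - Ts)
           - par['phis'] * (par['dH'] @ R)) / par['rhocp_s']                # (10)
    return dy, dys, dpsi, dT, dTs
```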
The model is implemented in Fortran [Štěpánek et al., 2010]. The reaction rates Rj employed in this paper correspond to a NOx storage and reduction catalyst (NSRC). This catalyst type is designed for the abatement of nitrogen oxides, and is operated under periodically alternated lean and rich conditions. The NOx are adsorbed during a longer lean phase (oxidizing conditions), and then reduced by an excess of CO, H2 and CxHy within a short rich phase (reducing conditions). A detailed description of the individual reactions can be found in [Kočí et al., 2009].
2.3. Coupling of the models
In each time step, the CFD software calculates the gas flow, pressure and temperature distribution and passes the corresponding data to the 1D channel model, together with the stored flow, concentration and temperature profiles along the channel at the beginning of the time step [Weaver, 2010]. The 1D solver then integrates the evolution of the concentration, pressure and temperature profiles, and the enthalpy flux into the solid phase, along the monolith channel. This process is iterative – within each iteration, the calculated pressure losses and energy balances (after addition of radial heat transfer) obtained from the 1D channel model and the CFD are compared for each representative channel and corrected until they match, cf. Fig. 1. A sketch of this loop is given after Fig. 1.
Fig. 1. Scheme of the 3D-CFD and 1D-channel model coupling.
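A sketch of the iterative per-time-step coupling of Fig. 1 (`cfd` and `channels` are hypothetical interfaces introduced only for illustration, not the actual StarCD or Fortran APIs):

```python
def coupled_time_step(cfd, channels, dt, tol=1e-3, max_iter=20):
    """One time step: the CFD solve and the representative 1D channel models
    exchange boundary data until pressure losses and energy balances match."""
    for _ in range(max_iter):
        boundary = cfd.solve(dt)            # flow, p, T at each monolith face
        mismatch = 0.0
        for ch, data in zip(channels, boundary):
            dp, q = ch.integrate(data, dt)  # 1D profiles over [t, t + dt]
            mismatch = max(mismatch, cfd.update(ch, dp, q))
        if mismatch < tol:                  # pressure/energy balances agree
            break
    for ch in channels:
        ch.commit()                         # accept the converged profiles
```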
3. Results
Periodic lean/rich operation of a NOx storage and reduction converter [Kočí et al., 2009; Plát et al., 2010] has been simulated by the coupled 3D-CFD+1D-channel model, and the results were then compared with the standard 1D model of the converter. The test system geometry is shown in Fig. 2, together with the calculated profile of linear gas velocities in the inlet and outlet pipes (7 s after the simulation start, isothermal system, T = 200 °C). It can be seen that the used geometry leads to a non-uniform flow distribution. The inlet gas always contained 7% CO2, 7% H2O and 150 ppm NO, and in addition to that 9.5% O2 during the lean phase, and 1.9% O2, 3.3% CO, 1.1% H2 and 0.3% C3H6 during the rich phase (+ balance N2). The space velocity was 60 000 h-1 (at standard temperature and pressure, based on the net volume of the catalytic monolith).
Fig. 2. Test system geometry and gas velocity profile in 3D-CFD+1D-channel model.
Fig. 3. Outlet NO concentrations (left) and temperatures (right) during periodic lean (60 s) / rich (5 s) operation. The 4th rich phase (t = 255-260 s) and the consequent lean phase are shown during operation with incomplete NSRC regeneration.
Ten representative channels have been chosen. Due to the non-uniform flow distribution, the outlet concentrations and temperatures from the individual channels differ (Fig. 3). It can be seen that the outlet NO concentrations at the end of the lean adsorption period
vary from 15 ppm (channel No. 1, lower flow rate) to 27 ppm (channel No. 10, higher flow rate). A substantial amount of heat is generated during the rich phase; the outlet temperature peak is much broader due to heat capacity and conductivity of the monolith solid phase. Expectedly, the standard 1D model of the converter gives results near the middle of the range for individual channels from the 3D-CFD+1D-channel simulation, which demonstrates a correct coupling of the models.
4. Conclusions A novel methodology for direct coupling of a commonly used, effective 1D model of catalytic monolith channel with a CFD software has been presented. This approach enables 3D simulations of the flow and temperature distribution effects in the exhaust tract without introducing the limitations on complexity of chemical reaction kinetics in the catalytic device. In comparison with fully spatially 3D-models [Kumar and Mazumder, 2010] the 3D-CFD+1D-channel model is more efficient regarding computation time. The developed tools can be used for design of exhaust gas aftertreatment systems, optimization of flow and temperature distributions, maximization of conversions and minimization of pressure losses.
5. Acknowledgements
This work has been supported by the grant 104/08/H055 of the Czech Grant Agency and project MSM6046137306 of the Czech Ministry of Education. The authors thank Mike Weaver (CD-Adapco) for cooperation on the StarCD interface development and for technical support, and Harald Echtle (Daimler AG) for useful discussions.
References
CD-Adapco, 2009, StarCD User's Guide - Methodology, p. 15-31.
A. Güthenke, D. Chatterjee, M. Weibel, N. Waldbüsser, P. Kočí, M. Marek, M. Kubíček, 2007, Development and application of a model for a NOx storage and reduction catalyst, Chemical Engineering Science, 62, 5380-5385.
P. Kočí, F. Plát, Š. Bártová, M. Marek, M. Kubíček, V. Schmeisser, D. Chatterjee, M. Weibel, 2009, Global kinetic model for the regeneration of NOx storage catalyst with CO, H2 and C3H6 in the presence of CO2 and H2O, Catalysis Today, 147S, S257-S264.
A. Kumar, S. Mazumder, 2010, Toward simulation of full-scale monolithic catalytic converters with complex heterogeneous chemistry, Computers & Chemical Engineering, 34, 135-145.
F. Plát, Š. Bártová, P. Kočí, M. Marek, 2010, Dynamics of a combined DOC-NSRC-SCR exhaust gas aftertreatment system with periodic regenerations, Industrial and Engineering Chemistry Research, 49, 10348-10357.
D. Šnita, M. Kubíček, M. Marek, 1997, 3-D modelling of monolith reactors, Catalysis Today, 38, 39-46.
J. Štěpánek, P. Kočí, F. Plát, M. Marek, M. Kubíček, 2010, Investigation of combined DOC and NSRC diesel car exhaust catalysts, Computers & Chemical Engineering, 34, 744-752.
Z. W. A. Warsi, 1981, Conservation form of the Navier-Stokes equations in general nonsteady coordinates, AIAA Journal, 19, 240-242.
M. Weaver, 2010, Coupling 1-D aftertreatment models with 3-D flow and solid heat conduction simulations in STAR-CD, 2010 DOE Crosscut Workshop on lean emissions reduction simulation, University of Michigan, 20th-22nd April 2010.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A generic framework for stochastic dynamic simulation of chemical engineering systems using free/open source software
Carl Sandrock and Philip de Vaal∗
Department of Chemical Engineering, University of Pretoria, South Africa
Abstract
Chemical engineering process modelling and simulation pose significant challenges to the computer program developer. Chemical processes are invariably described by non-linear equations such as chemical reaction kinetics, flow-pressure relationships and physical properties. Dynamic simulation of such systems involves the solution of sets of non-linear differential and algebraic equations. There is also uncertainty associated with the model equations themselves (model uncertainty), their parameters (parametric uncertainty) and the inputs to the model (input uncertainty). Tools that aid chemical engineers in the solution of these problems have been successfully commercialised and enjoy a measure of success, although the adoption of dynamic and stochastic simulation packages lags behind that of steady-state flowsheeting tools. Commercial software solutions can be prohibitively expensive and confine users to proprietary standards. The Free Software and Open Source movements have made inroads into providing non-proprietary alternatives to many commercial software packages, which has encouraged the adoption of open standards. This work presents a framework for the development of stochastic dynamic simulations of chemical processes using only free and open source software. The large problem of stochastic dynamic simulation is broken down into stages:
1. Input modelling using Markov chain models, trained on process data or seeded by hand, combined with stationary distribution models. This enables dynamic scenarios to be handled with a minimum of special-case code.
2. Process modelling using an object-oriented approach in the Modelica language. Modelica is an open standard modelling language with an actively developed open source implementation, OpenModelica.
3. Monte Carlo simulation using extensions to the OpenModelica compiler that ease parallel simulations.
4. Postprocessing, including visualisation and statistical analysis. The generated statistics can be used for control evaluation purposes.
The Tennessee Eastman challenge process is used to illustrate the framework's capabilities.
Keywords: Segmentation, stochastic modelling, Open Source
∗ [email protected]
1. Introduction
We present a suite of tools and techniques for stochastic simulation of dynamic chemical engineering processes. The goal of the system is to serve as a test bench for control system analysis. In this phase of the work, real-time interfacing with physical equipment is not important; rather, we focus on a workflow. As such, the requirements of the system are not so much theoretical rigour of simulation or model integrity, but rather ease and speed of simulation, so that different control systems can be compared with one another. Of course, the system needs to accommodate common modelling approaches for unit operations and, where possible, leverage the large body of work that has been done on modelling, for instance, thermodynamic systems. Several open source simulation environments have been developed that are well suited to chemical engineering problems. For static flowsheeting, DWSIM provides a familiar environment, but is Windows-specific. For dynamic simulation the most notable open source environments are ASCEND IV from Carnegie Mellon and EMSO, which now forms part of the ALSOC project. Both of these environments can be used to create dynamic simulations of chemical systems reasonably easily, and EMSO features a large library of chemical engineering unit operations. However, they suffer from relatively small communities and a lack of diversity in the implementations of their languages. On the other hand, the Modelica language is gaining traction as a simulation standard. In the last few years, it has been incorporated in Mathematica (via MathModelica) and Maple (via MapleSim). A subset of Modelica is also used in the Scicos block diagram simulator as part of Scilab. The wide adoption of open standards is a desirable outcome for open source projects, as it provides some security against the whims of proprietary vendors.
2. Implementation
2.1. Input modelling
The input model is created using automated segmentation with a multi-objective optimisation technique as described in Sandrock and de Vaal (2008) and Sandrock (2010). After segmentation, the fitted curves are characterised as one of three events: constant, ramp or exponential response. The probabilities assigned to the state transitions are used in a Markov process to generate similar responses. To account for noise and some of the unfitted behaviour, a suitable stationary distribution is fitted to a histogram of the residuals. This part of the framework has been implemented in the open-source Python language, using the NumPy package extensively for the curve-fitting component. The process of reading a signal, performing the segmentation and developing an input model is entirely automated by the system.
2.2. Modelling
A set of models was developed that mimics the Modelica.Fluid package for fluid flow, with less comprehensive property models and without using Modelica stream variables, as these are not yet supported in OpenModelica.
2.3. Simulation
A simple linear congruential pseudo-random number generator, as suggested in Aiordachioaie et al. (2006), was implemented and used to supply the Markov process with values to determine the transition times between events. A procedure for generating normally distributed values was also used to generate the residual component.
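As an illustration, a minimal sketch of such a generator is given below (Python, the framework's implementation language). The LCG constants are a common textbook choice, and the exponential waiting-time law is our illustrative assumption; neither is necessarily the choice of Aiordachioaie et al. (2006).

```python
import math

class LCG:
    """Linear congruential generator with a Box-Muller normal transform."""
    def __init__(self, seed=12345):
        self.state = seed
        self.a, self.c, self.m = 1664525, 1013904223, 2**32  # illustrative constants

    def uniform(self):
        """Return a pseudo-random number in [0, 1)."""
        self.state = (self.a * self.state + self.c) % self.m
        return self.state / self.m

    def normal(self, mu=0.0, sigma=1.0):
        """Box-Muller transform of two uniform deviates."""
        u1 = max(self.uniform(), 1e-12)  # avoid log(0)
        u2 = self.uniform()
        z = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
        return mu + sigma * z

rng = LCG(seed=42)
wait = -60.0 * math.log(1.0 - rng.uniform())  # exponential waiting time (assumed law)
residual = rng.normal(sigma=0.5)              # residual noise, sigma from Section 3.1
```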
3. Case study
3.1. Input model
Figure 1 shows some of the results of the automated segmentation of the sample input data. Note that this is a subset of all the fits done, shown for illustrative purposes.

Figure 1. Segmentation and signal generation results for the TE valve input signal (valve fraction vs. time). The residuals are normally distributed.
The transitions were analysed for all the fits obtained (there were 72 points on the Pareto front), yielding the following Markov transition matrix (the probability of moving from the state in row i to the state in column j):

$$X = \begin{bmatrix} 0.012 & 0.699 & 0.289 \\ 0.020 & 0.453 & 0.527 \\ 0.078 & 0.511 & 0.411 \end{bmatrix} \qquad (1)$$
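The matrix in Eq. (1) can be sampled directly to produce synthetic event sequences. The sketch below (Python/NumPy, the tools named in Section 2.1) is a minimal illustration; the state labelling and the choice of initial state are our assumptions.

```python
import numpy as np

# Transition matrix of Eq. (1); states: 0 = constant, 1 = ramp, 2 = exponential
X = np.array([[0.012, 0.699, 0.289],
              [0.020, 0.453, 0.527],
              [0.078, 0.511, 0.411]])

def generate_events(n_events, seed=0, start_state=0):
    """Sample a sequence of segment types from the Markov chain."""
    rng = np.random.default_rng(seed)
    state, events = start_state, [start_state]
    for _ in range(n_events - 1):
        state = rng.choice(3, p=X[state])  # next state drawn from row of X
        events.append(int(state))
    return events

print(generate_events(10))  # e.g. [0, 1, 2, 2, 1, ...]
```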
The overall residuals were determined to be approximately normally distributed, with a standard deviation near 0.5, as shown in Figure 1. This increases confidence in the segmentation, as the TE code makes use of normally distributed noise signals.
3.2. Process Model
The TE process as originally proposed by Downs and Vogel (1993) is shown in Figure 2.
Figure 2. The open loop Tennessee Eastman process flowsheet
Each of the unit operations in the Tennessee Eastman process was modelled as a Modelica model. The relevant models are 11 valves, 4 heat exchangers (two in the reactor, the condenser and the stripper boiler), a basic tank model for the separator, the reactor model and the stripper model. For the last two units, the reaction kinetics and equilibrium behaviour were taken from the Downs and Vogel Fortran source code. These equations are also enumerated in Cruz (2004). In addition to the open loop process, simple PID controllers were implemented using the scheme originally included in the Downs and Vogel sources, with the exception of the driving valve modelled in the previous section. The reactor temperature was chosen as a significant variable for which to evaluate control. For illustrative purposes, a simple alternative scenario, in which all the controllers have 10% larger gain than recommended by Downs and Vogel, was investigated.
3.3. Monte Carlo Simulation
Figure 3 shows the results of the Monte Carlo simulation. The simulation covered 48 hours of simulated time, using the values from 2500 runs.
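The runs are embarrassingly parallel, so a driver can distribute them across the available cores. In the sketch below (Python), run_tennessee_eastman() is a hypothetical stand-in for the call that executes an OpenModelica-compiled TE model with a freshly sampled input sequence; its name and interface are not part of the actual framework.

```python
from multiprocessing import Pool

def run_tennessee_eastman(seed):
    """Hypothetical worker: sample an input sequence from the Markov input
    model with the given seed, run the compiled TE model, and return the
    reactor-temperature control error trajectory."""
    ...  # simulation call omitted in this sketch

if __name__ == "__main__":
    with Pool() as pool:  # one worker per CPU core
        errors = pool.map(run_tennessee_eastman, range(2500))
```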
Figure 3. Results of control evaluation (normalised counts of the control error in reactor temperature for the high gain and low gain cases): the high gain case is superior for rejecting the modelled input disturbances.
The results show clearly that the higher gain gives better control in this scenario.
4. Conclusions
Free and open source tools have been used together with open standards to create a simulation environment that eases stochastic evaluation of dynamic model behaviour. While the current implementations of some of these tools are not as polished as their commercial counterparts, it is clear that progress is being made on this important front. Work is currently under way to add graphical annotations to all the models in the unit operations library, to facilitate drag-and-drop model building similar to that of commercial tools.
References
Aiordachioaie, D., Nicolau, V., Munteanu, M., Sirbu, G., 2006. On the noise modelling and simulation. In: Modelica 2006. The Modelica Association.
Cruz, A. M. S., 2004. Tennessee Eastman plant-wide industrial process challenge problem. Complete model. Tech. rep., Technical University of Denmark.
Downs, J. J., Vogel, E. F., Mar. 1993. A plant-wide industrial process control problem. Computers & Chemical Engineering 17 (3), 245–255. URL http://dx.doi.org/10.1016/0098-1354(93)80018-I
Sandrock, C., 2010. Identification and generation of realistic input sequences for stochastic simulation with Markov processes. In: Cakaj, S. (Ed.), Modeling, Simulation and Optimization (Tolerance and Optimal Control). Intech Press, Ch. 9.
Sandrock, C., de Vaal, P., 2008. Determining state transition probabilities using multi-objective optimisation. In: Proceedings of the IASTED International Conference on Modelling and Simulation. IASTED, pp. 26–29.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Modelling of micro- and nano-patterned electrodes for the study and control of spillover processes in catalysis
I. Bonis, S. Valiño-Pazos, I.S. Fragkopoulos, C. Theodoropoulos*
School of Chemical Engineering and Analytical Science, University of Manchester, Sackville Street, Manchester M13 9PL, UK.
Abstract
This work is concerned with the formulation of macroscopic models for electrochemically promoted catalytic systems, i.e. systems where the catalytic activity is enhanced by applying a potential between the catalyst and a counter electrode, resulting in the formation of a double layer on the surface of the catalytic film. This double layer is formed of "backspillover" species, which migrate from the solid electrolyte support. The interaction between the catalyst and the support results in a reversible, non-Faradaic modification of the catalytic activity. This effect was observed by Stoukides and Vayenas (1981) and has been explored both experimentally and theoretically; however, few modelling studies have been performed. Here, we propose a systematic modelling framework, which will lead to better understanding and, ultimately, to efficient design strategies. We consider reaction-diffusion as well as electrostatic phenomena, and we apply this technique to the simulation of the oxidation of CO over Pt.
Keywords: Electrochemical Promotion of Catalysis, NEMCA effect, distributed parameter system modelling, heterogeneous catalysis, metal-support interaction.
1. Introduction
Electrochemical Promotion of Catalysis (EPOC), also referred to as Non-Faradaic Modification of Catalytic Activity (NEMCA), is the enhancement of the rate of surface reactions on a catalyst, in systems where certain ions of the solid electrolyte support migrate to the catalyst and act as promoters. This migration is called backspillover and is affected by the electrochemical conditions. The migrating ions are transformed into backspillover species, which form a double layer over the catalytic film; this layer strongly interacts with the adsorbed species, causing the Electrochemical Promotion (EP). The double layer, and consequently the enhancement of catalytic activity, is controlled via the potential applied between the catalyst and a counter electrode, the latter considered to be inert. The EPOC effect was first observed by Stoukides and Vayenas (1981). Since then, it has received increasing attention (Poulidi et al., 2007; Vayenas, 2001). The notion that the application of a small potential to an electrochemical system can result in an enhancement of catalytic activity by 1-5 orders of magnitude, which does not disappear at steady state, is very appealing indeed. EPOC has been extensively studied experimentally and a theoretical background has been formulated. Several modelling studies have been performed, and the NEMCA effect can be predicted and controlled. The approaches vary from atomistic level simulations (Leiva et al., 2008) to experiment-driven kinetic studies (Poulidi et al., 2007). For the promising EPOC
* Corresponding author, e-mail: [email protected].
technology to be exploited in production systems, further targeted studies need to be performed (Anastasijevic, 2009). To this end, a distributed parameter system (DPS) modelling approach is adopted to simulate this class of systems, taking both temporal and spatial variations into account. The DPS model can be used: (i) for system parameter estimation from experimental data; (ii) for system analysis, providing insights to theorists and identifying potential problems and optimisation possibilities for engineers to consider; (iii) for robust design of EP systems.
2. Modelling of EPOC Systems
A typical EPOC system is made up of a catalyst, an electrolyte support and a counter electrode. As an example we will consider the catalytic oxidation of CO over Pt. Gas species diffuse through the pores of the catalyst; hence the concentration, cj, of species j in the gas phase can be computed from a mass conservation equation (Bear, 1988):
$$\frac{\partial c_j}{\partial t} + \nabla \cdot \left(-D_j \nabla c_j\right) = R_j \qquad (1)$$
where Dj is the effective diffusion coefficient and Rj is the overall rate of the gas phase reactions in which species j participates. In essence, an EPOC system is a heterogeneous catalysis system; thus the reactants adsorb on the catalyst active sites: O2(g) + 2* → 2O* and CO(g) + * → CO*. Adsorbed species can diffuse on the surface of the catalyst, and atoms of reactants adsorbed on neighbouring sites can react: CO* + O* → CO2(g) + 2*. Thus, the surface concentration can also be computed from the mass conservation Eq. (1); here j refers to adsorbed species, Dj is the effective surface diffusion coefficient and Rj is the algebraic sum of the rates of the reactions in which species j participates. In EPOC systems, the catalyst and the electrode are deposited on either side of the support and a potential is applied between them. This incurs transfer of charge, due to transport of both electrons and ions. The governing elliptic Poisson equation is given by (Umashankar, 1989):

$$-\nabla \cdot \left(\sigma \nabla V\right) = Q \qquad (2)$$

where Q is the current source, σ is the conductivity and V is the potential. Eq. (2) is derived by combining the continuity equation ∇·J = Q, J being the current density, with the definition of the electric potential. The electric field can be computed from

$$E = J/\sigma \qquad (3)$$
Charge transport occurs in all 3 solid phases: in the metallic catalyst and the electrode electron transfer takes place, whereas the support only allows ion transfer. Eqs. (2) and (3) can be used to model both the ion and the electron transfer. Due to current transport, some interesting phenomena occur at the triple phase boundaries (TPB), i.e. at the points where a metal phase, the support and the gas phase are in contact. Ion transport through the support is induced by the applied current. Specifically, oxygen ions are incorporated at the cathode TPB (which involves the electrode), ½O2(g) + 2e− + Vö →(ra) O2−YSZ, and diffuse throughout the support. The O2−YSZ ions are excorporated at the anode TPB: O2−YSZ →(rd) ½O2(g) + 2e− + Vö. The application of current results in the formation of a backspillover species (BSS), [Oδ− − δ+]*, according to the reaction:
O2−YSZ + * →(r1) [Oδ− − δ+]* + 2e− + Vö   (4)
The BSS forms a double layer over the catalyst, which interacts strongly with the adsorbates and promotes the catalytic reaction, resulting in a non-Faradaic modification of the catalytic activity. The BSS diffuses over the catalyst and reacts towards the production of CO2:

[Oδ− − δ+]* + CO* →(r2) CO2(g) + 2*   (5)

BSS is also formed, at a lower rate, by O2−YSZ being incorporated into Pt from the support. The backspillover effect in this system is the controllable migration of O2−YSZ over the catalyst. All mass transport phenomena that occur in the electrode are neglected. Only electric current passes through the electrode, and in particular through its lower boundary, which has a set potential (V = Vcathode), whereas there is no charge transfer at all other boundaries (n·J = 0, where n is a vector normal to the boundary). The support constitutes the ionic phase of the system. Charge transfer (Eqs. (2), (3)) is due to the transport of the O2−YSZ ions, which are incorporated at the cathode TPB and excorporated at the anode TPB, as well as at the interface with the catalyst. This movement is also modelled by the mass conservation equation (1), for which no mass transfer is considered at any boundary except for the cathode TPB, where ∂[O2−YSZ]/∂t = ra, and the anode TPB, where ∂[O2−YSZ]/∂t = −rd − r1. Moreover, at the interface with the catalyst a given flux is specified, due to the diffusion of O2−YSZ to Pt: −n·(−D∇[O2−YSZ]) = r1. In this work we consider that the bulk of the catalyst is solid, thus allowing no mass transfer, although gas and surface diffusion can be observed close to the boundaries, due to its porous structure. Thus, the double layer, which should surround the catalyst, also has a concentration profile inside it. In Pt, both electrostatic (Eqs. (2), (3)) and mass transfer (Eq. (1)) phenomena occur. Part of the upper boundary of the catalyst allows mass transfer, so as to be in equilibrium with the gas phase above, cj = PXj(g)/RT, j = CO(g), O2(g), CO2(g), and does not allow current flow (n·J = 0), whereas the rest of the upper boundary is used to close the circuit and has a set potential (V = Vanode), not allowing mass transport: n·(−D∇c) = 0, where c is the concentration or coverage of any species. As far as the BSS is concerned, it is produced at the interface with the support, −n·(−D∇[Oδ− − δ+]) = r1, and at the anode TPB, ∂[Oδ− − δ+]/∂t = r1.
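Although the coupled system above is solved with COMSOL in this work, the structure of Eq. (1) with a TPB influx boundary can be illustrated with a minimal 1D explicit finite-difference sketch (Python). All numerical values below are placeholders, not fitted EPOC parameters, since reliable diffusion and kinetic constants are not available (cf. Section 3).

```python
import numpy as np

# 1D explicit finite-difference sketch of Eq. (1) for a surface coverage
# theta(x, t) with diffusion, a reaction sink and a TPB influx boundary.
nx, L = 200, 25e-6            # grid points; the ~25 um active surface layer [m]
dx = L / (nx - 1)
D, k_rxn = 1.0e-12, 5.0       # surface diffusivity [m2/s]; consumption rate [1/s]
r1 = 1.0e-6                   # BSS production flux at the TPB (illustrative)
dt = 0.4 * dx**2 / D          # explicit stability limit for 1D diffusion

theta = np.zeros(nx)          # BSS coverage, initially clean surface
for _ in range(1000):
    lap = np.zeros_like(theta)
    lap[1:-1] = (theta[2:] - 2.0 * theta[1:-1] + theta[:-2]) / dx**2
    theta += dt * (D * lap - k_rxn * theta)   # diffusion + consumption (r2)
    theta[0] = theta[1] + r1 * dx / D         # -D dtheta/dx = r1 at the TPB side
    theta[-1] = theta[-2]                     # zero flux at the far boundary
```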
3. Electrochemically Promoted oxidation of CO over Pt
We consider the oxidation of CO over a Pt catalyst on a YSZ solid electrolyte support with an Au counter electrode. The 2D computational domain consists of the cross-section perpendicular to the length of the system, which is made up of three slabs. This domain and its physical dimensions are depicted in Fig. 1. The model of the system consists of the coupled partial differential equations (PDEs) presented in the previous section. Transient simulation results will be presented for t = 0 s and t = 1 s, obtained using COMSOL Multiphysics® 3.5a. Due to the unavailability of diffusion coefficients and kinetic rate constants, the results presented can only be interpreted qualitatively. Further studies will be performed to compute accurate values for those parameters, using experimental data. The system is at T = 808 K, P = 11550 Pa and the applied potential is V = 500 mV. The results from the electrostatic phenomena modelling are illustrated in Fig. 2. As mentioned in Section 2, only electron transport occurs in the metal ducts, whereas
only ion transport is possible in the support. Fig. 2a depicts the electric current field magnitude profile as well as the corresponding streamlines in the electronic phase, and Fig. 2b is the corresponding plot for the ionic phase. In Fig. 2a the streamlines highlight the positions where the circuit closes, and in Fig. 2b the anode and cathode TPBs are evident.

Figure 1: The computational domain (Pt catalyst, YSZ electrolyte support, Au counter electrode)

The core of the catalyst is considered solid, hence no mass transfer takes place in it. However, a "boundary layer" of approximately 25 μm, in which surface diffusion takes place, is considered. This is the
Figure 2: (a) Current density profiles with electric field streamlines for t = 1 s and (b) the ion-transfer induced current density with the corresponding streamlines.
(porous) region of the catalyst where the double layer of the BSS is contained. Fig. 3 presents concentration profiles for t = 0 s (upper graphs) and t = 1 s (lower graphs); adsorbed oxygen and empty site coverages are shown. The effect of closing the circuit is observed in Fig. 2, via the blocking of mass transfer at the upper right part of the catalyst. In Fig. 3 the coverage of the BSS is also illustrated. At all times the contribution of the TPB to BSS production is more important than that of the Pt/YSZ interface. The profiles for the oxygen ion coverages on YSZ are presented in Fig. 4. For t = 0, only adsorption from the cathode is possible, since no O2−YSZ is present on the YSZ. As time progresses, both TPBs become conspicuous: the cathode provides O2−YSZ to the YSZ, which diffuses and desorbs at the anode TPB for the production of BSS.
4. Conclusions and future work
A PDE-based model for an electrochemically promoted system has been presented. It considers reaction-diffusion and electrostatic phenomena, and our modelling assumptions have been outlined. Preliminary results for an EP system for the catalytic CO oxidation over Pt have been presented. One obvious extension of the model is to also consider heat transfer. Furthermore, as the length scales considered are small, we will develop multi-scale models by coupling the macroscopic model with a microscopic simulator (e.g. lattice kMC-based) to model surface reaction and diffusion processes; along those lines we will direct our future research efforts. Moreover, since reliable values of diffusion coefficients for EPOC systems cannot be found in the literature, the multiscale model developed will be used for parameter estimation using data from patterned electrodes, currently under development for this reason. Preliminary results from the model are qualitatively consistent with the theory of EPOC and experimental data. Experimental validation of all the features of the model is infeasible due to the lack of
Figure 3: Coverage profiles for t = 0 (a, b, c) and t = 1 s (d, e, f) for O* (a, d), empty sites (b, e) and [Oδ− − δ+] (c, f) on the Pt catalyst.

Figure 4: Coverage profiles for t = 0 (a) and t = 1 s (b) for O2−YSZ on the YSZ support.

Figure 5: Effect of pO2 on the reaction rate at constant pCO.
suitable data. However, specific aspects may be examined in conjunction with macroscopic experiments. For example, in order to validate the effect of pO2 on the rate of reaction, we consider an inlet stream of reactants in N2 carrier gas with pCO = 0.01 bar to the system described above. This effect is illustrated in Fig. 5 and is compatible with the experimental data presented by Yentekakis et al. (1988).
Acknowledgements The financial support of EPSRC (EP/G022933/1) is gratefully acknowledged. We would also like to thank Prof. Ian Metcalfe and Dr. Danai Poulidi for valuable discussions on EPOC.
References
Anastasijevic NA. NEMCA: From discovery to technology. Catal. Today. 2009; 146: 308-311.
Bear J. Dynamics of fluids in porous media. Dover Publications; 1988.
Leiva EPM, Vazquez C, Rojas MI, Mariscal MM. Computer simulation of the effective double layer occurring on a catalyst surface under electro-chemical promotion conditions. J. Appl. Electrochem. 2008;38(8):1065-1073.
Poulidi D, Mather GC, Metcalfe IS. Wireless electrochemical modification of catalytic activity on a mixed protonic-electronic conductor. Solid State Ionics. 2007;178(7-10):675-680.
Stoukides M, Vayenas CG. The effect of electrochemical oxygen pumping on the rate and selectivity of ethylene oxidation on polycrystalline silver. J. Catal. 1981;70(1):137-146.
Umashankar K. Introduction to engineering electromagnetic fields. World Scientific Pub; 1989.
Vayenas CG. Electrochemical activation of catalysis: promotion, electrochemical promotion, and metal-support interactions. Springer US; 2001.
Yentekakis IV, Neophytides S, Vayenas CG. Solid electrolyte aided study of the mechanism of CO oxidation on polycrystalline platinum. Journal of Catalysis 111 (1988) 152-169.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) c 2011 Elsevier B.V. All rights reserved.
Multiscale Modeling of a Silicon Solar Wafer Manufacturing Process
Ruochen Liu, German Oliveros, Seetharaman Sridhar¹, B. Erik Ydstie∗
Dept. of Chemical (¹Material Science and) Engineering, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh PA 15213, USA
Abstract
In this paper we present the conceptual design of a new continuous process to fabricate silicon wafers for further use in solar cells. We develop a partial album of models which describe the behaviors of solidification, fluid flow and heat transfer. A multiscale modeling approach is used to solve the conservation equations in order to find the relevant design parameters for further control and optimization studies.
Keywords: multiscale modeling, silicon wafering, phase change, crystallization
1. Introduction and Conclusions
At CMU we are developing a novel float process which promises to reduce the cost of producing silicon wafers for the solar cell industry by a factor of 4 or more relative to current technology. The process is motivated by the fact that the sun supplies an enormous amount of energy every year ($10^{24}$ joules), and covering 0.05% of the surface of the earth with 20% efficient solar cells can easily satisfy all our energy needs. Laboratory scale systems have demonstrated conversion efficiencies of 40% and more. But the cost of making such cells remains an issue, and less than 1% of electricity use comes from direct conversion of solar energy to electricity. Several manufacturing innovations are needed before this technology can be expected to have a significant impact. The most popular process for making wafers for solar cells grows a monocrystalline ingot in Czochralski crystallizers by slowly pulling a silicon seed attached to a rotating rod from the silicon melt. The size of these cylinders is determined by the pulling rate, the heat supplied to the furnace and the rotation speed. Subsequent sawing produces wafers 150 to 250 microns in thickness. The Czochralski and Bridgman techniques together contribute more than 80% of the silicon substrate used in the solar cell industry. The main disadvantages of these processes are the large amount of material losses, imperfections, and the high costs caused by the ingot sawing, which makes up to one third of the production cost of the wafer and generates around 50% of the material losses [Luque and Hegedus (2003)]. Many alternative processes have been proposed to produce wafers continuously. The Edge-defined Film Growth (EFG) and continuous casting methods have been industrialized, but they have not yet had a significant economic impact, due to high production cost and a high defect density which produces many recombination sites and poor conversion efficiency (<14%). In this paper we describe models that capture different process scales ranging from the macro-scale to the micro-scale. The aim is to develop an album of multi-scale models
∗ [email protected]
[Marquardt (1996)] for pilot plant design, scale-up and process optimization. As described by Pantelides (2001), the interactions between different scales or levels can be aggregated successively by making approximations at one level and using them at the next. We have found that multi-scale modeling can be used to develop an understanding of the interaction between fluid flow and heat transfer in the melt, and of solidification and interfacial dynamics at the crystallization front. Heat conduction, convection and radiation are shown to play important roles in determining the fluid flow behavior in the melt. The models show that the process concept is feasible. Information obtained from the models will be used to design pilot scale systems planned for the near future.
2. Process Description
Figure 1 shows the proposed design, divided into three zones. In the first zone, small pellets of silicon are melted and conveyed into a stabilization zone (Zone II), where the molten silicon achieves steady fluid flow and temperature profiles, for further cooling in Zone III. The molten silicon solidifies due to radiative heat transfer to the cooling plates in Zone III. The rate of heat transfer is controlled by adjusting the flow rate of the coolant and the distance to the silicon. The process is somewhat similar to the Pilkington float process used to form flat glass [Ydstie and Jiao (2006)]. The main difference is that a float substrate is not needed, since solid silicon floats much like ice floats on water.
Figure 1: Conceptual design of the silicon wafer process system
The resulting sheet is slowly pulled out using a roller. Molten silicon can easily be recycled back to the furnace (Zone II). In order to avoid undesired reactions with ambient oxygen, argon flows along the furnace, creating an inert atmosphere between the melt and the cooling plate. The velocity of the melt should be low to guarantee the formation of a thin sheet, and the heating profiles in the furnace must be carefully controlled to maintain uniform thickness of the wafer. The main challenge lies in the technical difficulties and costs incurred in performing experiments at very high temperatures: silicon has a melting point of 1687 K, is very reactive to both oxygen and nitrogen, and is difficult to contain.
3. Fluid flow and heat transfer interaction dynamics
In this section, we develop thermal and fluid flow models which capture the phase change behavior. These models can be used for process design, control and optimization. A simplified geometry of the model (Fig. 2) is used to describe the silicon wafering process. Zone II represents the stabilization zone, and Zone III represents the cooling zone. The
formation of a thin silicon wafer will take place on the top of the molten silicon bath in Zone II. The thickness will be determined by the pulling and cooling rates. A 1:1 scale water model has shown that we can make 10 cm wide, 150 micron thick sheets of ice at a rate of about 15 cm per minute in such a system. The finite element method is used to describe the behavior of the silicon system.
Figure 2: Geometry of the silicon wafering process system

The two-dimensional Navier-Stokes and continuity equations applied to the molten silicon bath give:

$$\rho\left(\mathbf{u}(x,y)\cdot\nabla\right)\mathbf{u}(x,y) = \nabla\cdot\left[-p(x,y)\mathbf{I} + \eta\left(\nabla\mathbf{u}(x,y) + (\nabla\mathbf{u}(x,y))^{T}\right)\right] + \mathbf{F} \qquad (1)$$

$$\nabla\cdot\mathbf{u}(x,y) = 0 \qquad (2)$$

The body force F accounts for what we will call a "phase distinction term" [Voller (1987)]. It is given by the expression:

$$\mathbf{F} = \frac{(1-B)^{2}}{B^{3}+q}\,A\,\mathbf{u} \qquad (3)$$

where B is the liquid fraction, A a large number and q a small number to avoid division by zero. The function vanishes when B is equal to one (pure fluid) and tends asymptotically to infinity as B approaches zero, forcing u to be approximately zero, so that the velocity in the solid domain is essentially zero. The temperature of the system is given by the energy conservation equation:

$$\nabla\cdot\left(-k\nabla T(x,y)\right) = Q - \rho C_{p}\,\mathbf{u}(x,y)\cdot\nabla T(x,y) \qquad (4)$$
Molten silicon at 1773 K is fed to the system from the left boundary. The boundary conditions are trivial except at the top surface of the system, where the radiation effect is taken into consideration:

$$-\mathbf{n}\cdot\left(-k\nabla T(x,b_{3})\right) = \varepsilon\sigma\left(T_{amb}^{4} - T^{4}(x,b_{3})\right) \qquad (5)$$
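To make the switching behavior of the phase distinction term concrete, a small sketch is given below (Python). The values of A and q are illustrative assumptions; the text only requires A to be large and q small.

```python
A, q = 1.0e6, 1.0e-3   # illustrative: "A a large number and q a small number"

def phase_sink(B, u):
    """Phase distinction term of Eq. (3): F = (1-B)^2 / (B^3 + q) * A * u."""
    return (1.0 - B) ** 2 / (B ** 3 + q) * A * u

u = 0.01                          # melt velocity [m/s], illustrative
for B in (1.0, 0.5, 0.0):         # pure liquid, mushy zone, pure solid
    print(f"B={B}: F={phase_sink(B, u):.3g}")
# F vanishes in the liquid and becomes enormous in the solid,
# driving the velocity there to approximately zero.
```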
The simulation shows that silicon solidifies on top of the molten substrate as the temperature decreases below the melting point in the cooling zone (Fig. 3a). Under the current simulation conditions, we obtain a silicon thickness of around 300 μm. It is worth noting that silicon crystallizes into the melt along the vertical direction, which is orthogonal to the direction of the process flow. Therefore, a high production rate can be achieved, because the process rate and the rate limitation of crystallization are decoupled, as opposed to the Czochralski and EFG processes. Figure 3b reveals that, in order to achieve a thickness of 300 μm, the pulling rate can be around 20 cm/min. The model is helpful to develop, test and optimize the process concept, which can guide the development of small scale physical experiments of the float system.
Figure 3: a. The temperature profile of the process system and the velocity field of the molten silicon. b. Sensitivity analysis of the CFD model
4. Solidification dynamics
Realistic modeling of the solidification of a pure substance is usually described by assuming a process system composed of solid and liquid subdomains divided by a sharp interface. By performing a scaling analysis of our proposed parameters, we found that, due to the low velocities present in the system, the main mechanism of energy transport will be conduction. This observation allows us to project another orthogonal subspace of our system to study solidification dynamics based on heat diffusion in each phase. The sharp interface is represented mathematically as a discontinuity where heat conduction in each phase balances the latent heat release. From a Lagrangean perspective, as the solidification proceeds the moving interface will "sweep" the entire liquid domain at a velocity dI(y,t)/dt, as the observer sits on top of the moving fluid. This mathematical formulation can be stated as follows. In each phase:

$$\frac{\partial T_{i}}{\partial t} = \alpha_{i}\frac{\partial^{2} T_{i}}{\partial y^{2}}, \quad i = s, l \qquad (6)$$

At the interface:

$$k_{s}\frac{\partial T_{s}}{\partial y} - k_{l}\frac{\partial T_{l}}{\partial y} = \rho_{s}\,\Delta H\,\frac{\partial I(y,t)}{\partial t} \quad \text{at } y = I(y,t) \text{ and } t > 0 \qquad (7)$$

$$T = T_{melt} \quad \text{at } y = I(y,t) \text{ and } t > 0 \qquad (8)$$
As can be noted, we assume that solidification occurs mainly in the vertical direction. We furthermore note that in the Lagrangean formulation we exchange spatial with time dependence in the model. We integrate the calculation of the moving interface using finite differences.
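A minimal sketch of such a front-tracking finite-difference scheme is given below (Python), simplified to one phase: the melt is held at T_melt, so the liquid-side gradient in Eq. (7) vanishes. The silicon property values are rough literature figures used only to illustrate the scheme, not the parameters of our model.

```python
import numpy as np

# One-phase front-tracking sketch of Eqs. (6)-(8): a solid layer grows
# into a melt held at T_melt. Property values are approximate.
k_s, rho_s, cp_s = 22.0, 2330.0, 1000.0   # [W/m K], [kg/m3], [J/kg K]
dH = 1.8e6                                # latent heat [J/kg]
alpha_s = k_s / (rho_s * cp_s)

T_melt, T_cold = 1687.0, 1400.0           # melting point; cooled surface [K]
ny, depth = 100, 1.0e-3                   # grid points; domain depth [m]
dy = depth / ny
dt = 0.4 * dy**2 / alpha_s                # explicit stability limit

T = np.full(ny, T_melt)                   # everything at T_melt initially
T[0] = T_cold                             # radiatively cooled surface
I = dy                                    # interface position: one cell solid

for _ in range(20000):
    n = max(int(I / dy), 2)               # number of solidified cells
    T[1:n-1] += dt * alpha_s * (T[2:n] - 2*T[1:n-1] + T[:n-2]) / dy**2
    T[n-1] = T_melt                       # Eq. (8): interface at T_melt
    grad_s = (T[n-1] - T[n-2]) / dy       # solid-side gradient at the front
    I = min(I + dt * k_s * grad_s / (rho_s * dH), depth - dy)  # Eq. (7)
```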
5. Interfacial stability
The presence of impurities can influence the dynamics of the system because of the effect of solute rejection at the solidification front. The difference in mass diffusivities between
Figure 4: Evolution of temperature profile
the solid and the liquid will cause impurities to accumulate in a small boundary layer towards the liquid. In a system with more than one component, the solid-liquid interface does not have to be flat. Mullins and Sekerka (1964) studied the morphological stability of a binary alloy under purely diffusive transport. This theory predicts the onset of interfacial breakdown of a flat interface when a sinusoidal perturbation of infinitesimal amplitude is applied to the initially flat front. According to this theoretical development, surface tension and temperature gradients at each side of the interface provide stability to the system and will always favor the decay of the perturbation, whereas concentration gradients in the liquid, generated by solute rejection, enhance the perturbation effects. We have performed a stability analysis in order to calculate the maximum amount of impurities in the melt before the interface breaks down. The results from these studies are available from the authors upon request and will be reported in the conference presentation. This research was funded by grants from Industrial Learning Systems and the National Science Foundation.
References
Luque, A., Hegedus, S. (Eds.), 2003. Handbook of Photovoltaic Science and Engineering. John Wiley & Sons.
Marquardt, W., 1996. Trends in computer-aided process modeling. Computers & Chemical Engineering 20, 591–609.
Mullins, W. W., Sekerka, R. F., 1964. Stability of a planar interface during solidification of a dilute binary alloy. Journal of Applied Physics 35, 444–451.
Pantelides, C. C., 2001. New challenges and opportunities for process modelling. In: European Symposium on Computer Aided Process Engineering - 11, 34th European Symposium of the Working Party on Computer Aided Process Engineering. Vol. 9 of Computer Aided Chemical Engineering. Elsevier, pp. 15–26.
Voller, V. R., 1987. A fixed grid numerical modelling methodology for convection-diffusion mushy region phase-change problems. International Journal of Heat and Mass Transfer 30, 1709–1719.
Ydstie, B. E., Jiao, Y., 2006. Passivity based control of the float glass process: Multi-scale decomposition and real-time optimization of complex flows. IEEE Control Systems Magazine 26, 64–72.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Process modelling and model reduction for chemical engineering applications
Bogdan Dorneanu,a,b Johan Grievink,a Costin S. Bildeac
a Delft University of Technology, Dept. of Chemical Engineering, Julianalaan 136, 2628BL Delft, The Netherlands
b Delft University of Technology, Process and Energy Dept., Leeghwaterstraat 44, 2628CA Delft, The Netherlands
c University Politehnica of Bucharest, Dept. of Chemical Engineering, Str. Gh. Polizu 17, 001061 Bucharest, Romania
Abstract
Process systems and their models have structural and behavioural features which can be exploited for model reduction. A reduction approach is introduced that addresses both features while systematically going through the usual model development steps. Thus, the reduction is applied not only at the mathematical and numerical level, but at the physical and systemic level as well. The benefit is that the model reduction is achieved while retaining the topological structure of the process and the meaning of the variables. The application of the approach is investigated for two types of processes: (a) the iso-butane alkylation process, as a complex process with simple products, and (b) the freezing step in ice cream manufacture, as an example of a simple process with complex products.
Keywords: process modelling, model structure reduction, alkylation, ice cream freezing
1. Introduction
Models have become widely used during the last decades to support a broad range of chemical engineering activities, such as product and process design and development, process monitoring and control, real time optimization of plant operation and supply chain management. The use of models for such applications allows for rapid screening of design alternatives, better-informed operational decisions, and lower cost and shorter time-to-market for products and processes. Tremendous advancements have been made in the past years in the development of numerical techniques, in computing speed and in data storage. However, along with the increase in computing speed, process models have become more extensive and more complicated: they increasingly cover a wider scope, represent deeper process integration and intensification, and contain more detail at much smaller spatial scales. Rigorous models of complicated systems cannot always be used effectively for inverse-problem types of applications such as design and optimization. Inverse problems require repetitive solution of the model in numerous iterative passes and remain critical in computing time and use of resources. Achieving a short turnaround time in time-constrained work processes is desirable, and a reduction of the model complexity is often required to make a model-based solution practical. Current numerical approaches in systems engineering apply order reduction to a model in its entirety, without preserving the underlying network structure of the process or its multi-scale decomposition [1]. Retaining the meaningful structural features in a reduced model of a process is a necessity for long-term practical use of the model, certainly in an environment where adaptations of process unit models occur regularly.
In this contribution, a structure-retaining approach to model reduction is introduced, which aims to balance the structural and behavioural features of the model in order to obtain computationally feasible models.
2. Process modelling and model reduction
2.1. Dual way of modelling: decomposition and aggregation over detail levels
In chemical engineering, the mathematical modelling of physical systems is performed by decomposition and aggregation of the system across a hierarchy of appropriately chosen levels [2]. The decomposition is performed by breaking the process into smaller entities, which are modelled individually. The system-wide model is constructed by coupling these individual models, through aggregation. The complexity of a model depends on a series of attributes related to the process considered: the number of units, the physical resources (mass, energy, momentum, electrical charge, etc.), the number of domains and phases, as well as the number and type of phenomena that need to be taken into account. Secondly, there is another group of attributes related to the mathematical structure of the model (type of equations, sparsity, non-linearity, multiplicity, etc.). A model is simplified until its complexity becomes manageable, while still offering the required accuracy. The opposite operation, increasing the complexity of a model by adding detailed information, is called refinement.

Figure 1. Dual way of modelling over levels
To conclude, four operations can be performed on models, in sets of two:
- Decomposition and Aggregation of STRUCTURE over sub-systems
- Refinement and Reduction of detail/complexity of BEHAVIOUR
These four operations are shown in Figure 1, where the decomposition and the aggregation are vertically oriented, while the refinement and the reduction are presented horizontally. The approach has several advantages. By decomposing the model into smaller entities, the behaviour and the function of each hierarchical level are easier to understand. It is possible to standardize models of entities (reactors, distillation columns, etc.) that can easily be re-used for building a different aggregated (complex) model. Once such standardization is available, the modelling procedures become easier, faster and, at the same time, flexible. Lastly, refinement and reduction can be combined when some parts of the model are reduced and others are refined.
2.2. Model reduction with process knowledge
Model reduction consists of a series of approaches used for obtaining accurate, yet computationally less expensive models of process systems. The state of the art of model reduction involves a host of diverse methods and techniques, which can be classified in two distinct categories: (a) dedicated conceptual approaches in the physical-mathematical domain [3], [4], which relate to the level of detail of specific phenomena (physical or kinetic lumping, reduction of reaction networks, etc.) in process systems; and (b) numerical, data-driven techniques in systems engineering [1] (projection and balancing methods, etc.). An inventory of these approaches is presented in Table 1.

Table 1. Model reduction options and approaches in chemical engineering applications

Level        | Attribute                            | Approach
Physical     | Resource                             | Ignore resource
Physical     | Unit / Compartment                   | Lumping
Physical     | Domain / Region / Phase / Coordinate | Steady state assumption; Integrate/average out; Symmetry
Structure    | Connectivity; Transfer & transport   | Ignore terms; Average
Structure    | System / Unit                        | Lumping / Aggregation
Behaviour    | Sources & sinks                      | Lumping; Time-scale analysis; Sensitivity analysis; Steady state assumption
Mathematical | Differential equation                | Order-reduction (linear; non-linear); Simplification (linearization; approximation of functional expressions)
Mathematical | Algebraic equation                   | Simplification (linearization; approximation of functional expressions)
Numerical    | Numerical scheme                     | Discretization; Functional approximation
The traditional approach to reducing the complexity of process models does not always give the best results. Although a significant reduction in the number of equations is achieved, attention should be paid to the internal structure of the problem, as well as to retaining the physical meaning of the variables [5]. For this reason, a structure-retaining model reduction approach has been developed. The approach aims first at simplifying the physical (units, domains, phases, etc.) and systemic (connectivity) levels of the chemical process model, as well as the behavioural (phenomena) structure (first part of Table 1). Only then are additional mathematical (number of equations) and numerical scheme reductions selectively applied to
individual compartments or units (second part of Table 1). In the following step, the reduced models of the individual units are connected at the system level and the reduced model of the full process is obtained. In this way, the reduction procedure is able to preserve the essential structural features of the process. Moreover, the physical meaning of the variables and equations is preserved as much as possible. The success criteria for the reduction approach are to develop a model which: (a) is solved successfully in an acceptable amount of time, and (b) is used successfully for practical applications. The application of the approach is investigated for two types of processes: (a) the iso-butane alkylation process, as a complex process with simple (one-phase) products, and (b) the freezing step in ice cream manufacture, as an example of a process unit with complex products; both are presented in Figure 2.
Figure 2. a) iso-Butane alkylation plant; b) Ice cream freezing

Figure 3. a) Dynamic simulation results of the iso-butane alkylation plant for a 10% decrease of the fresh butene flowrate: recycle molar flow (deviation from the steady state, kmol/h) vs. time (h) for models R, M1 and M2; b) Ice particle cumulative distribution function vs. particle size along the freezer tube, with no particle melting (dashed line) and with particle melting (solid lines)
3. Model reduction for complex processes with simple products
For this first type of process, the simplification acts mostly at the systemic and the higher levels of the physical structure (units) of the chemical plant (Table 1). The process model is decomposed into smaller models of the units in the process flowsheet, as shown in Figure 2a, and the model reduction is applied to these units individually. The resulting reduced models of the units are then connected in order to obtain the reduced model of the full process. The behaviour of the reduced model (M2), obtained using the structure-retaining approach, is compared with a rigorous model of the process (R), as well as with a reduced model obtained by applying the traditional order-reduction approach (M1), with loss of structure knowledge. It can be observed from Figure 3a that the model M2
correctly predicts the high sensitivity to disturbances of the process R, while the model M1 wrongly indicates that a new steady state is reached. More detailed information on the development of the reduced model version M2 can be found in [6].
4. Model reduction for simple processes with complex products
For this type of process, the reduction approach acts on the lower levels (Table 1) of the physical structure (species, phases, domains, compartments), as well as on the behavioural level (phenomena). The development of a rigorous model for the freezer presented in Figure 2b is conceptually conceivable. Nevertheless, it is currently impossible to solve such a rigorous model, because many high-dimensional conservation laws would arise and the knowledge regarding the rate phenomena that occur inside the system is still limited. For this reason, the reduction is performed in this case both conceptually and numerically during the stages of the modelling procedure. The product side is divided into two domains: the fluid bulk (B), composed of a liquid matrix (water, sugar, fat, air, other components) and dispersed ice particles, and a solid frozen layer (FL) of ice at the freezer wall, as shown in Figure 2b. The number of coordinates is reduced from five to three by averaging out the radial and angular coordinates, while the conservation equations for the resources are complemented with simple rate laws for the relevant physical phenomena. These rate laws are deliberately kept simple by coupling each flux to only one dominant driving force and allowing for experimentally adjustable rate parameters. The obtained model has 7 partial differential equations (PDEs in 3-D), with 7 initial and 9 boundary conditions, and 54 nonlinear algebraic equations (NLAEs). Its steady state version is used successfully for predicting the cumulative ice particle size distribution along the freezer (Figure 3b).
5. Conclusions
The paper proposes an approach to systematize and exploit the structural and behavioural features of a process for model reduction. The model reduction simplifies the physical structure and behaviour, as well as the systemic level of the chemical process model, before selectively applying mathematical and numerical reductions to individual entities of the model. The reduced models of these entities are then connected in order to obtain the reduced model of the full process. The application and the advantages of the approach are presented for two types of processes: (a) complex processes with simple products and (b) simple processes with complex products. The reduction works well for the cases considered, and the resulting models are solved in acceptable amounts of time. However, the optimal decomposition of the process structure when developing the reduced model remains an open question.
References
[1]. W. Marquardt, 2001, Nonlinear model reduction for optimization based control of transient chemical processes, Proceedings of the 6th International Conference of Chemical Process Control, p. 12
[2]. K.U. Klatt, W. Marquardt, 2009, Perspectives for process systems engineering – Personal views from academia and industry, Computers and Chemical Engineering 33, p. 536
[3]. M.S. Okino, M.L. Mavrouniotis, 1998, Simplification of mathematical models of chemical reaction systems, Chemical Reviews 98, p. 391
[4]. I. Dones, H.A. Preisig, 2010, Model simplification and time-scale assumptions applied to distillation modelling, Computers and Chemical Engineering 34, p. 732
[5]. S.K. Wattanwar, 2010, Identification of low order models for large scale processes, PhD thesis, Eindhoven University of Technology, The Netherlands
[6]. B. Dorneanu, C.S. Bildea, J. Grievink, 2009, On the application of model reduction to plantwide control, Computers and Chemical Engineering 33, p. 699
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) c 2011 Elsevier B.V. All rights reserved.
General-purpose graphics processing units application for diffusion simulation using cellular automata
A. Kolnoochenko, P. Gurikov, N. Menshutina∗
CAPE Department; MUCTR; Moscow, Russia
Abstract
An approach to general-purpose computing on graphics processing units (GPGPU) using the nVidia CUDA technology¹ to increase the performance of cellular automata is presented in this paper. It is shown that calculations on the GPU are faster (by more than one order of magnitude) than CPU calculations with well-optimized C code. Cellular automata with the Margolus neighborhood are used to demonstrate the general steps of CUDA simulations.
Keywords: GPGPU, CUDA, cellular automata, parallel computing
1. Introduction
Cellular automata (CA) theory was developed in the early 1940s, and nowadays it has become a reliable way to model a variety of real systems, both discrete and continuous [1]. CA are used as a simulation tool in many fields: from active hydrodynamics in chemical apparatus to road traffic simulation, and from modeling of protein structures to neural system activity. However, as task complexity rises, the cellular automata approach requires large computing capacity to model significant time intervals in dynamic systems. That is why 3D cellular automata are used rather rarely, primarily because of the necessity to store and process large amounts of data. On the other hand, the application of 3D CA makes it possible to come closer to detailed and precise modeling of real systems. This is why it is very important to look for new ways to increase calculation speed and computing capacity. One of the remarkable properties of the CA approach is the representation of the whole system as a set of independent cells, which are influenced only by their neighbors. This allows us to use parallel computing algorithms. Today, CPUs (central processing units) have reached their peak performance, and further increases are possible only by increasing the number of processor units and by using them more efficiently. However, modern computers have other, more powerful computing hardware: new GPU (graphics processing unit) models incorporate up to 256 processing units per board, and there can be 4 GPUs in one computer, resulting in more than 1000 processing units.
∗ [email protected]
¹ See http://www.nvidia.com/object/cuda_home_new.html for details.
This allows achieving extremely high performance of more than 4 Tflops (1 Tflops = $10^{12}$ floating-point operations per second). In addition, GPUs are designed from the outset for parallel computing with the SIMT (single instruction – multiple threads) method, which can be successfully applied to modeling with cellular automata. On the one hand, diffusion processes are basic to the modeling of complex physical-chemical systems; on the other hand, they are well studied, which makes it easier to assess the quality of a model. Models of diffusion at the micro level were considered, namely CA with the Margolus neighborhood and stochastic CA. Modeling real systems therefore requires lattices of large size. Additionally, in real systems the molecules in a cell interact with those in neighboring cells, which further slows down the calculations.
2. Cellular automata with Margolus’ neighborhood
Figure 1. Two ways of dividing the lattice: even (a) and odd (b)

Figure 2. Right (a) and left (b) angle block turns
Let us examine the Margolus cellular automaton (CA) for diffusion simulations [2]. A square lattice represents "space", and with each site of this lattice, or cell, there is associated a state variable, called the cell state, ranging over a finite set called the state alphabet. In this case the state alphabet consists of 2 values: S1 = 1 (there is a particle in this cell) and S2 = 0 (empty cell). The whole lattice is divided into blocks of 2 × 2 sites each. There are two ways of making this division: even (fig. 1a) and odd (fig. 1b). Each time step (computational iteration) consists of two half-iterations: for the even and then for the odd division, a right or left angle turn (π/2) is applied to each block (fig. 2). Thus the transition rules for a lattice of size M × M (where M is even) consist in turning the M/2 × M/2 blocks of the even division and then the M/2 × M/2 blocks of the odd division.
Here we consider only this “classical” automaton; it is able to simulate diffusion with a specific diffusion coefficient (D = 3/4). A more advanced version could also take into account interactions between different kinds of cells.
3. Division by threads
When working with CA in a massively parallel multithreading environment such as CUDA, each processed element must be computed by an individual thread, not connected with the others. The Margolus CA is a special case in which one element is an entire 2 × 2 block, so the necessary number of threads equals the number of blocks in the lattice. Each step of a generalized cellular automaton consists of two stages:
1. Neighborhood projection: data are collected from all neighbor cells and the current element.
2. Evaluation of the transition function via the transition rule, and calculation of the new element state (which depends on the element state and its neighborhood).
Ideally these operations should be implemented independently of one another, which makes it possible to modify the transition rules and the neighborhood-projection rules separately. Functions in a GPU computation have different types:
1. Functions computed on and called from the GPU, marked in source code as __device__.
2. Functions computed on a traditional CPU (sequential subprograms), marked in source code as __host__.
3. Functions called from the CPU but computed on the GPU, marked __global__ and also called “kernels” in CUDA notation.
Probably the most logical structure of a GPGPU program for CA computations is shown in fig. 3. Notice that the main computational work is passed to the device (GPU), while the host handles only the program logic. This general pattern fits substantially all CA; a minimal skeleton is sketched below.
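The skeleton below illustrates the three function types and the host-side driver loop (a sketch under the assumptions above; the kernel body is left empty on purpose):

__device__ inline int idx(int x, int y, int M) { return y * M + x; }  // GPU-only helper

__global__ void caStep(unsigned char* lat, int M, int offset)
{
    // "kernel": launched from the host, executed by one thread per 2x2 block;
    // neighborhood projection and the transition rule would go here
}

__host__ void runSimulation(unsigned char* devLat, int M, int steps)
{
    dim3 threads(16, 16);
    dim3 blocks((M / 2 + 15) / 16, (M / 2 + 15) / 16);
    for (int t = 0; t < steps; ++t) {                 // the host only drives the logic
        caStep<<<blocks, threads>>>(devLat, M, 0);    // even division
        caStep<<<blocks, threads>>>(devLat, M, 1);    // odd division
    }
    cudaThreadSynchronize();                          // wait until the GPU finishes
}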
4. Grouping threads into blocks
The CUDA architecture has a strongly hierarchical structure. Multiple threads are grouped into blocks (up to 1024 threads per block) and multiple blocks are combined into a grid. All threads inside one block are handled by one multiprocessor and can share memory (marked in source code as __shared__). Context switching between threads is very fast (only 1 clock cycle), because all threads physically perform the same operation on different data (CUDA has a specific type of parallel architecture, Single Instruction Multiple Thread, a subtype of the more general Single Instruction Multiple Data). One of the main difficulties in grouping threads is that the slowest operation on a GPU is memory access (RAM read/write): it can take more than 200 clock cycles. These operations can, however, be coalesced and executed up to 16 times faster when memory access is sequential (all threads in a block access neighboring memory addresses). In many cases the preferred strategy is sequential copying of the input data to the fast shared memory located on the multiprocessor, random access and computation on shared memory, and
sequential writing of the results to the global device RAM. In the general case the fastest way is to group threads in such a manner that elements allow sequential reading and writing, forming a kind of “line”. This approach is also advantageous when neighbors' states are taken into account, since it minimizes read operations; however, it occupies more memory, because the perimeter (the number of neighbors whose states must be copied into shared memory) of a rectangle is greater than, for example, that of a square.
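A typical realization of this pattern for a one-dimensional line of cells is sketched below (illustrative only; LINE, the one-cell halo and the toy update rule are assumptions of this example):

#define LINE 256                                      // threads per block = cells per "line"

__global__ void lineStep(const unsigned char* in, unsigned char* out, int n)
{
    __shared__ unsigned char s[LINE + 2];             // line plus one-cell halo
    int g = blockIdx.x * LINE + threadIdx.x;          // global cell index
    bool active = (g < n);

    if (active) {
        s[threadIdx.x + 1] = in[g];                   // coalesced load: a warp reads
                                                      // neighboring addresses
        if (threadIdx.x == 0)
            s[0] = in[(g + n - 1) % n];               // left halo (periodic boundary)
        if (threadIdx.x == LINE - 1 || g == n - 1)
            s[threadIdx.x + 2] = in[(g + 1) % n];     // right halo
    }
    __syncthreads();                                  // all loads are now visible

    if (active)                                       // random access on fast memory
        out[g] = s[threadIdx.x] | s[threadIdx.x + 2]; // toy rule; coalesced write-back
}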
5. Acceleration of computations
Coalesced memory access is one of the most efficient CUDA optimizations. To also achieve a speed-up for repeated (read) access to random elements, one can use a special array type: textures. A texture is cached, read-only memory, and the main steps in using it are:
1. Texture declaration (program-file level), for example using C++ templates: texture<...> tex;
2. Creation of a pitched array on the GPU (initialization __host__ function level), using the function cudaMallocPitch;
3. Binding the texture to the array (main __host__ function level), using the function cudaBindTextureToArray;
4. Reading data from the texture inside a kernel function, using the functions tex1D/tex2D/tex3D.
One more optimization is the use of asynchronous operations. A kernel call returns control to the __host__ function immediately, and during the GPU computation the CPU can perform additional service computations. Synchronization between the computational streams (CPU and GPU) can be done with the function cudaThreadSynchronize().
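Put together, the four steps might look as follows (a sketch based on the texture-reference API of CUDA 3.x, since deprecated; array names and sizes are illustrative):

texture<float, 2, cudaReadModeElementType> tex;       // 1. file-level declaration

__global__ void readKernel(float* out, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < w && y < h)
        out[y * w + x] = tex2D(tex, x, y);            // 4. cached read inside a kernel
}

__host__ void setup(const float* hostData, int w, int h)
{
    cudaArray* arr;
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
    cudaMallocArray(&arr, &desc, w, h);               // 2. device-side array
    cudaMemcpyToArray(arr, 0, 0, hostData, w * h * sizeof(float),
                      cudaMemcpyHostToDevice);
    cudaBindTextureToArray(tex, arr);                 // 3. bind texture to the array
}

The asynchronous pattern mentioned above needs no extra machinery: the kernel launch returns immediately, so the host can do service work (e.g. random-number generation) before calling cudaThreadSynchronize().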
6. Results
Program performance was tested, and the GPU version proved to be 125 times faster than the analogous CPU program. In this version of the program, random-number generation was handled by the CPU while the other computations took place on the GPU (asynchronous calls). In November 2010 CUDA 3.2 was released; it contains the CURAND library, so these computations could also be handled by the GPU, which would probably increase program performance further, because pseudorandom-number generation becomes faster and it is no longer necessary to copy additional data between host and device. The Margolus cellular automaton proved to be a great tool for diffusion simulation, and its more complicated and advanced version allows relevant results to be achieved when modeling diffusion in porous media [3].
References
1. T. Toffoli, N. Margolus. Invertible cellular automata: a review. Physica D, 45:229-253, 1990.
2. T. Toffoli, N. Margolus. Cellular Automata Machines: A New Environment for Modeling. MIT Press, 1987.
3. P. Gurikov, A. Kolnoochenko, N. Menshutina. 3D reversible cellular automata for simulation of the drug release from aerogel-drug formulations. Computer Aided Chemical Engineering, 26:943-947, 2009.
Figure 3. Cellular automaton’s lattice
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Mercury Transformation Modelling with Bromine Addition in Coal Derived Flue Gases
Kevin J. Hughes, Lin Ma, Richard T.J. Porter and Mohamed Pourkashanian
Centre for Computational Fluid Dynamics, School of Process, Environmental and Materials Engineering, University of Leeds, LS2 9JT, UK
Abstract
In this work we present a gas-phase mechanism describing the oxidation of mercury by bromine and chlorine for application to coal-derived flue gases. Developed sub-models for Hg-Cl and Hg-Br oxidation are combined with NOx, sulphur, chlorine and bromine chemistry in order to form an overall homogeneous mechanism that is applicable to post-combustion flue gases. The mechanism is compared to experimental data obtained from flow reactor studies and then investigated using rate-of-production and sensitivity analyses in order to identify the most important and influential reaction channels.
Keywords: Combustion, pollution control, mercury transformation, chemical kinetics.
1. Introduction
Mercury emissions from coal-fired power stations are coming under increasing scrutiny in Europe. In 2005, the Community Strategy Concerning Mercury was drawn up by the European Union, aiming at a reduction of emissions by member states. Germany became the first European country to regulate mercury emissions from power plants (0.03 mg/m3 STP). During coal combustion, mercury is vaporised and released to the gas phase in its elemental form (Hg0). Hg0 can be converted to oxidised mercury (Hg2+), where reaction with chlorine compounds present in coal to form HgCl2 is the predominant channel under normal power plant operation. In addition, particulate-bound mercury (HgP) can be formed by adsorption onto fly ash or activated carbon. Unburned carbon particles in fly ash are believed to provide reactive sites for mercury adsorption and also heterogeneous oxidation. Hg2+ can easily be removed by flue gas desulphurisation scrubbers, while HgP can be removed by electrostatic precipitators or other particle separation methods. Mercury transformation is extremely complicated and can depend on a variety of factors, such as the coal's chemical and mineralogical composition, the combustion conditions, the plant configuration, the flue gas composition and the temperature-time history from combustion zone to stack. Bromine compounds are strong oxidisers of mercury, so there is strong interest in using them as additives in the form of Br2, HBr, bromine salts or impregnated activated carbons. A number of pilot-scale and industrial demonstrations have been reported [1,2]. In addition, a few kinetic modelling studies of mercury oxidation by bromine species have been reported [3,4], but there is still a significant lack of fundamental understanding of the associated physical and chemical interactions. This paper presents a homogeneous mechanism for mercury oxidation by chlorine and bromine species, consolidating previous work on the development of Hg-Cl [5] and Hg-Br [6] mechanisms. The impact of different halogen mixtures on mercury oxidation is assessed by numerical simulation and comparison to experimental data. The interactions of Hg/Cl/Br are analysed and discussed.
2. Model development
The detailed chemical mechanism was assembled by combining mechanisms for Hg-Cl [5] and Hg-Br [6] oxidation, which include supporting mechanisms for NOx, sulphur, small C/H/O species, chlorine and bromine. The Hg-Cl and analogous Hg-Br sub-mechanisms consist of 4 reversible interactions for the promotion of Hg0 to an intermediate (HgCl or HgBr) by Cl, Cl2, HCl and HOCl or the equivalent bromine species, followed by another 4 reactions for the further oxidation of the intermediate to the mercury dihalide (HgCl2 or HgBr2) by the same halogen-containing species. The Arrhenius rate parameters for the initiating bi-molecular recombination reaction
Hg + Br + M ⇌ HgBr + M   (1)
where M is any third-body participant molecule, and for its analogous chlorine reaction, were derived from the direct experimental determinations by Donohoue et al. [7,8] by recasting the expression in a standard three-parameter form. Five Hg-Br-Cl cross reactions proposed by Niksa et al. [3] were added, in addition to two reactions for the further oxidation of HgBr by BrCl:
HgBr + BrCl ⇌ HgBr2 + Cl   (2)
HgBr + BrCl ⇌ HgBrCl + Br   (3)
which take the same rates as the equivalent reactions with diatomic bromine proposed by Niksa et al. [3]. A further 8 Br-Cl cross reactions taken from the NIST chemical kinetics database [9] were added. Thermochemical data in the form of NASA polynomials for the new species BrCl were taken from Burcat's database [10] but were unavailable for HgBrCl, so a fitting procedure was employed to average the values for HgCl2 and HgBr2 in order to estimate the required data. In total, the mechanism comprises 87 species in 403 reversible and 8 irreversible reactions. The mechanism is available on request.
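For reference, a standard reading of the "three-parameter form" mentioned above is the modified Arrhenius expression, with a fitted temperature exponent n:

k(T) = A \, T^{n} \exp\!\left(-\frac{E_a}{RT}\right)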
3. Application of reaction mechanism under experimental conditions
Experimental data presented by Silcox et al. [11] have been used for validation of the gas-phase model. The data were obtained in a tubular glass reactor fed with exhaust gases from a natural gas burner, to which small quantities of mercury, HCl and HBr may be added. Simulation of the species profiles was performed by the PREMIX module of the CHEMKIN II package [12]. Hg oxidation simulations begin at the reactor inlet and used one of the two imposed distance-temperature histories obtained experimentally, which consist of an initial high-temperature region up to 1350 K for about 2 s, followed by a quench to ~600 K at approximately 440 or 210 K/s. The reactor length (132.08 cm), internal diameter (47 mm) and flow rate (6 SLPM) were set to those of the experiment, resulting in residence times of ~7 s. The reactor inlet composition was assigned based on experimental data [11], as shown in Table 1, and then refined using an equilibrium calculation with the gaseq software [13] in order to generate quantities of representative radicals in the flue gas matrix. Chlorine-containing species consist largely of HCl (98 %) and were specified at 0 or 500 ppmv equivalent HCl. By contrast, HBr dissociates much more readily than HCl at post-combustion conditions to produce a greater quantity of Br radicals, accounting for around 34 vol% of bromine species at the reactor inlet. This is a primary reason for the enhanced gas-phase mercury oxidation by bromine compared to chlorine. Bromine species were specified at 0 or 25 ppmv equivalent HBr.
Table 1. Composition for gas-phase kinetic simulations
Species  Composition
Hg       3.3 ppbv
O2       0.8 vol%
H2O      16.5 vol%
CO2      7.7 vol%
NO       30 ppmv
HCl      0, 100, 400 ppmv
HBr      0, 25 ppmv
Figures 1 and 2 present comparisons of model predictions and experimental data concerning mercury oxidation for HBr and HCl addition. The model predicts homogeneous oxidation to begin at high temperatures of around 1100 K. The results show clearly that the extents of oxidation are much greater when HBr is added, and at lower quantities than HCl. At the lower quench rate (Figure 1) the model displays reasonable qualitative agreement with the experimentally determined effect of different levels of halogen addition; however, it quantitatively under-predicts the extent of mercury oxidation under all scenarios. The impact of adding HCl when HBr is present, as displayed by the model, is much less pronounced than is observed experimentally at this quench rate. The comparison at the higher quench rate shows good qualitative agreement for the impact of different levels of halogen addition and reasonable quantitative agreement for the cases of oxidation by HCl species only; in contrast to the lower quench rate, however, the model over-predicts the impact of HBr addition. This means that the model is unable to capture the dependence of mercury oxidation on quench rate, which may imply mechanistic or parametric uncertainty, e.g. deficiencies in the bromine chemistry or insufficient reduction routes for oxidized mercury species. The qualitative disagreement may also reflect the difficulties associated with mercury measurement.
Fig. 1. Comparison of measured and predicted mercury oxidation with HCl and HBr addition at the lower quench rate (210 K/s). Experimental conditions were repeated, and error bars were calculated from the standard deviation of the measurements.
4. Kinetic analysis of reaction mechanism
In order to investigate the kinetic pathways that lead to gas-phase mercury oxidation, the mechanism was further analyzed using rate-of-production and brute-force sensitivity analyses. In the brute-force sensitivity analysis, a simulation is run with the inlet composition of Table 1, with 500 ppm of equivalent HCl and 25 ppm of equivalent HBr, using the integration package SPRINT [14].
Fig. 2. Comparison of measured and predicted mercury oxidation with HCl and HBr addition at the higher quench rate (440 K/s).
In this analysis a simplified temperature ramp approximating the experimental profile is imposed, with a total integration time of 7.2 s, resulting in an extent of Hg oxidation of 58.3 %. After the first simulation with unaltered values, a series of successive simulations is carried out with the pre-exponential factor of each reaction increased by 50 % whilst holding all others at their given values, in order to assess the sensitivity of each reaction; a sketch of this scan is given below. The collated results are then analyzed graphically by comparison with the simulation with unaltered values. The rate-of-production analysis was performed using the KINALC post-processor [15] at a number of points along the simulated reactor trajectories of Silcox et al. [11], with the same inlet compositions as used in the sensitivity analysis. Further details of the computational analyses are given elsewhere [5].
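In pseudo-code terms the brute-force scan is a simple loop (illustrative only; simulateHgOxidation stands in for the SPRINT-based integration and is hypothetical):

#include <cstdio>
#include <vector>

// Hypothetical wrapper around the mechanism integration: runs with the given
// pre-exponential factors and returns the resulting % of Hg oxidised.
double simulateHgOxidation(const std::vector<double>& A);

void bruteForceSensitivity(std::vector<double> A)
{
    const double base = simulateHgOxidation(A);        // unaltered mechanism (58.3 %)
    for (std::size_t j = 0; j < A.size(); ++j) {
        const double keep = A[j];
        A[j] *= 1.5;                                    // +50 % on reaction j only
        std::printf("reaction %zu: %+.2f %% change in Hg oxidation\n",
                    j, simulateHgOxidation(A) - base);
        A[j] = keep;                                    // restore before the next run
    }
}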
Fig. 3. Sensitivity analysis results with HBr addition, change in Hg conversion percentage for 50% increase in each reaction.
The top 13 most sensitive reactions are presented in the bar plot of Figure 3. The recombination reaction of bromine radicals is the most sensitive reaction, due to its control of Br and Br2. Reactions of the Hg-Br sub-mechanism involving bromine radicals and diatomic bromine display high sensitivities, as does a reaction involving further oxidation by BrCl to form HgBrCl. Lower sensitivities are found for reactions of O- and H-containing species that control OH, which in turn affects the concentration of bromine species. The rate-of-production analysis reveals that at high temperatures Br radical formation is controlled by the interaction of HBr and OH radicals; however, as the temperature drops below 1000 K, the interaction of BrO with HBr becomes more significant. At temperatures below 800 K, the reactions of HgBr with Br or Br2 compete for the second-stage oxidation, with Br2 becoming the more dominant oxidiser as the temperature drops.
5. Conclusions
A mechanism for the prediction of homogeneous mercury oxidation by chlorine and bromine species in coal-air combustion flue gases has been developed and compared to experimental data. Sensitivity and rate-of-production analyses have revealed the dominance of Hg oxidation by bromine over chlorine.
Acknowledgement The authors gratefully acknowledge financial support from EPSRC.
References
[1] Y. Cao, Q. Wang, J. Li, J. Cheng, C. Chan, M. Cohron and W. Pan, 2009, Environ. Sci. Technol., 43, 2812-2817.
[2] B. Vosteen and L. Lindau, 2006, VGB PowerTech, 86, 70-75.
[3] S. Niksa, B. Padak, B. Krishnakumar and C.V. Naik, 2010, Energy Fuels, 24, 1020-1029.
[4] B. Cauch, C.L. Senior, G.D. Silcox, J.S. Lightly, 2008, Effects of Quench Rate, NO, and Quartz Surface Area on Gas Phase Oxidation of Mercury by Bromine, 2008 Annual Conference of the American Institute of Chemical Engineers, Philadelphia, Paper #123b.
[5] M. Gharebaghi, K.J. Hughes, R.T.J. Porter, M. Pourkashanian and A. Williams, 2010, Proc. Combust. Inst., available online.
[6] M. Gharebaghi, J. Gibson, K.J. Hughes, R. Irons, R.T.J. Porter, M. Pourkashanian and A. Williams, 2010, A Modeling Study of Mercury Transformation in Coal-fired Power Plants, Proceedings of the American Flame Research Committee 2010 Pacific Rim Combustion Symposium, Maui.
[7] D.L. Donohoue, D. Bauer and A.J. Hynes, 2005, J. Phys. Chem. A, 109, 7732-7741.
[8] D.L. Donohoue, D. Bauer, B. Cossairt and A.J. Hynes, 2006, J. Phys. Chem. A, 110, 6623-6632.
[9] http://kinetics.nist.gov/kinetics/
[10] http://garfield.chem.elte.hu/Burcat/burcat.html
[11] G. Silcox, P. Buitrago, C. Senior and B. Van Otten, 2010, Gas-Phase Mercury Oxidation by Halogens: Effects of Bromine and Chlorine, Proceedings of the Air & Waste Management Association, Calgary, Paper #1048.
[12] R.J. Kee, F.M. Rupley, J.A. Miller, 1993, CHEMKIN-II: A Fortran Chemical Kinetics Package for the Analysis of Gas-Phase Chemical Kinetics, Sandia Laboratories Report SAND89-8009B.
[13] http://www.arcl02.dsl.pipex.com/
[14] M. Berzins, R.M. Furzeland, 1985, Shell Research Ltd., TNER 85058.
[15] T. Turányi, http://www.chem.leeds.ac.uk/Combustion/kinalc.htm
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Optimal design of multiple dividing wall columns based on genetic programming
Fernando I. Gómez-Castro,a,b Mario A. Rodríguez-Ángeles,b Juan G. Segovia-Hernández,b Claudia Gutiérrez-Antonio,c Abel Briones-Ramírezd
a Instituto Tecnológico de Celaya, Departamento de Ingeniería Química, Av. Tecnológico y García Cubas S/N, Celaya, Guanajuato, 38010, México
b Universidad de Guanajuato, Campus Guanajuato, División de Ciencias Naturales y Exactas, Departamento de Ingeniería Química, Noria Alta S/N, Guanajuato, Guanajuato, 36050, México
c CIATEQ, A.C., Av. del Retablo 150 Col. Fovissste, Querétaro, Querétaro, 76150, México
d Exxerpro Solutions, Av. del Sol 1B Interior 4B, Plaza Comercial El Sol, El Tintero, Querétaro, Querétaro, 76134, México
Abstract
In this work, two schemes are analyzed for the reduction of energy consumption in ternary distillation: a Petlyuk column, PC, and a Petlyuk column with a postfractionator, PCP. To perform the optimal design of the analyzed systems, the use of multiobjective genetic algorithms has been considered. Moreover, a strategy for diameter calculation is proposed for the dividing wall column, DWC, and the double dividing wall column, DDWC, based on their distribution of internal flows. Results show that the genetic algorithm allows obtaining optimal designs for the PC and PCP systems with low energy consumption. Furthermore, the design strategy for the DWC and DDWC shows that the physical structure required for one or two dividing walls is quite similar; it therefore appears to be an adequate method for the sizing of dividing wall systems.
Keywords: Multiple dividing wall columns, stochastic optimization, column sizing.
1. Introduction
Thermally coupled distillation sequences are a good option to reduce energy consumption in the separation of fluid mixtures. One of the more important thermally coupled schemes is the Petlyuk column, which may reduce energy requirements by up to 30% in comparison to conventional sequences [1]. An alternative system, analyzed recently, consists of a Petlyuk system with an additional column attached, known as a postfractionator; in some cases this system can achieve even lower heat duties than the Petlyuk column [2]. Because of mechanical issues, a thermodynamically equivalent system known as the dividing wall column is used instead of the Petlyuk column; for
the Petlyuk column with postfractionator it has been proposed that its equivalent could be a double dividing wall column [3]. A dividing wall column consists of a shell in which a metallic wall is inserted; thus an appropriate diameter must be used to support the maximum vapor flow rate, allowing a proper pressure drop along the column and avoiding flooding. A strategy to calculate the diameter of the DWC has recently been proposed [4], based on the vapor flow rate distribution in the column; nevertheless, no such methodology exists for the DDWC. Therefore, in this work an extension of the methodology for the DWC is proposed to obtain proper diameter calculations for the DDWC. To obtain low-energy designs for the dividing wall systems, a multiobjective genetic algorithm has been used to find the Pareto front of optimal designs for the DWC and DDWC. The optimal designs obtained offer a good distribution of the vapor flows, which allows trays with a smaller diameter.
2. Design and optimization tool: multiobjective genetic algorithm
The design and optimization of the analyzed systems have been performed by using a multiobjective genetic algorithm with constraints, coupled to the process simulator Aspen Plus. Due to the characteristics of the search space, conventional derivative-based optimization methodologies may present considerable difficulties in finding a solution near the global optimum, while stochastic optimization algorithms are robust and efficient tools for solving such optimization problems. When a multiobjective optimization is considered, the set of solutions found by the genetic algorithm is known as the Pareto front. In the case of Petlyuk-like distillation columns, the multiobjective optimization considers the simultaneous minimization of the heat duty of the sequence and of the number of stages in each column of the scheme. The minimization problem is formulated as:

\min (Q, N_i) = f(Q, R, N_i, N_j, N_s, N_F, F_j)   (1)

s.t. \quad \vec{y}_k \geq \vec{x}_k

where R is the reflux ratio, N_i is the total number of stages in column i, N_j is the stage number of interlinking flow j, N_s is the side-stream stage, N_F is the feed stage number in the prefractionator, F_j is the interlinking flow, and \vec{x}_k and \vec{y}_k are the vectors of required and obtained purities or recoveries. During the optimization process, the most time-consuming activity is the evaluation of the objectives and constraints in Aspen Plus. For that reason, we speed up the multiobjective strategy using neural networks, decreasing the computational time by at least 50% [6].
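For illustration, a design belongs to the Pareto front only if no other design dominates it in both objectives; a minimal dominance test (heat duty Q and total stages, both minimized) is sketched below. This illustrates the concept only and is not the published implementation of [6].

#include <vector>

struct Design { double Q; int stages; };   // heat duty and total number of stages

// true if a dominates b: no worse in both objectives, strictly better in one
bool dominates(const Design& a, const Design& b)
{
    return a.Q <= b.Q && a.stages <= b.stages &&
           (a.Q < b.Q || a.stages < b.stages);
}

std::vector<Design> paretoFront(const std::vector<Design>& pop)
{
    std::vector<Design> front;
    for (const Design& d : pop) {
        bool dominated = false;
        for (const Design& o : pop)
            if (dominates(o, d)) { dominated = true; break; }
        if (!dominated) front.push_back(d);            // keep non-dominated designs
    }
    return front;
}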
3. Calculation of the diameter of the dividing wall columns
For the determination of the diameters of these systems, the strategy presented by Premkumar and Rangaiah [4] for the DWC has been extended to the DDWC. For a single tray:
D = \left( \frac{4G}{0.8\,\pi\,\rho_V\,V_{max}} \right)^{1/2}   (2)

In Eq. 2, D is the diameter of the tray (m), G is the total vapor flow rate (kg/s), and ρ_V is the vapor density (kg/m3). It has been considered that the actual vapor velocity corresponds to 80% of the maximum vapor velocity, V_max. The equivalencies for the vapor flows of the DDWC are shown in Figure 1.
Figure 1. Vapor flow distribution for the double dividing wall column.
The two dividing walls in the DDWC are represented in the PCP system by the prefractionator and the postfractionator. Thus, the trays of the DDWC must be designed to support the vapor flowing not only through the main column, C2, but also through the side columns, C1 and C3. A similar approach is considered for the DWC, where only columns C1 and C2 exist. The diameter of the DWC and DDWC is taken as equal to that of the largest tray, since it has the highest vapor rate flowing across it. V_max is the flooding vapor velocity, given by:

V_{max} = K_1 \left( \frac{\rho_L - \rho_V}{\rho_V} \right)^{1/2}   (3)

K_1 has been taken as 0.07 m/s, as proposed by Premkumar and Rangaiah for the DWC using sieve trays [4].
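Eqs. 2 and 3 translate directly into a short routine; the sketch below evaluates the tray diameter and takes the column diameter as that of the largest tray, as described above (illustrative code only, with K1 = 0.07 m/s for sieve trays [4]):

#include <algorithm>
#include <cmath>
#include <vector>

const double PI = 3.14159265358979;
const double K1 = 0.07;                               // m/s, sieve trays [4]

// Eq. 3: flooding vapor velocity from liquid/vapor densities (kg/m3)
double vMax(double rhoL, double rhoV) {
    return K1 * std::sqrt((rhoL - rhoV) / rhoV);
}

// Eq. 2: tray diameter (m) from vapor mass flow G (kg/s) at 80% of flooding
double trayDiameter(double G, double rhoL, double rhoV) {
    return std::sqrt(4.0 * G / (0.8 * PI * rhoV * vMax(rhoL, rhoV)));
}

struct Tray { double G, rhoL, rhoV; };

// column diameter = that of the largest tray (highest vapor load)
double columnDiameter(const std::vector<Tray>& trays) {
    double d = 0.0;
    for (const Tray& t : trays)
        d = std::max(d, trayDiameter(t.G, t.rhoL, t.rhoV));
    return d;
}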
4. Case study
The analyzed mixtures are shown in Table 1. Mixture M1 has a low molar feed composition of the middle-boiling component, n-hexane, while mixture M2 has a high feed composition of the middle-boiling component, methanol. Pareto fronts of both Petlyuk-like schemes have been generated with the multiobjective genetic algorithm; the parameters of this optimization were 50
generations of 1000 individuals each. From the Pareto front, 10 optimal designs were selected and their simulations were analyzed in more detail.

Table 1. Analyzed mixtures
Mixture  Component       Feed composition  Feed flow rate, kmol/h  Purity
M1       n-pentane       0.40              18.14                   98.70%
         n-hexane        0.20              9.07                    98.00%
         n-heptane       0.40              18.14                   98.60%
M2       methyl formate  0.06              6.0                     98.60%
         methanol        0.913             91.3                    99.97%
         n-butanol       0.027             2.7                     98.30%
5. Results
Since this work has been developed to observe the performance of the DDWC, Figure 2 shows the Pareto fronts for the DDWC.
Figure 2. Pareto fronts (a) M1, (b) M2
For mixture M1 the energy consumption of the DDWC has been observed to be lower than or similar to that of the DWC. For mixture M2, in most cases the DWC shows lower energy consumption, and even small changes in the structure of the DDWC may have a great impact on the heat duty of the system, as can be seen in Table 2, where the distribution of stages over the columns is shown.

Table 2. Distribution of stages, M2
Case  N_MC,DWC  N_PRE,DWC  N_MC,DDWC  N_PRE,DDWC  N_POST,DDWC
1     59        11         57         6           12
2     59        10         56         9           12
3     58        11         54         6           12
4     58        10         57         7           11
5     57        11         53         8           12
6     57        10         57         6           11
7     54        10         51         7           12
8     53        12         57         6           10
9     53        10         49         6           12
10    51        10         56         6           10
Table 3 shows the calculated diameters for the lowest energy consumption cases; for comparison, the diameters calculated for the initial, non-optimized designs (obtained by short-cut methods) are also given. It is clear that the diameters required for a single and a double dividing wall column do not differ much. It can also be seen that the diameters of the optimal designs are lower than those required for the non-optimal designs. Thus, the design and optimization methodology presented allows a better vapor flow distribution and lower diameters, which will have a direct impact on equipment costs.

Table 3. Calculated diameters (m)
Mixture  DWC, opt  DWC, init  DDWC, opt  DDWC, init
M1       1.02      2.15       0.94       2.05
M2       0.95      1.91       1.08       1.67
6. Conclusions
In this work a design and optimization strategy based on evolutionary techniques has been presented. The multiobjective genetic algorithm provides a set of optimal solutions to the design and optimization problem. It has been found that a good design of the DDWC presents lower energy requirements than the DWC for mixtures in which the middle-boiling component appears at low concentration in the feed stream. On the other hand, when the feed composition of the middle-boiling component is high, the DWC is the best alternative, since even small changes in the structure of the DDWC may considerably increase the energy requirements. According to the diameter calculations, the diameter required for a dividing wall column is quite similar whether one or two dividing walls are used; thus, it is expected that shell construction costs will not differ between the two cases. Furthermore, the optimization methodology yields designs with a good vapor flow distribution, thus requiring smaller diameters for the shell of the dividing wall systems. The designs were compared in terms of energy, trays and diameter for the same separation, and interesting trends have been obtained.
References
[1] G. Dünnebier and C. Pantelides, Ind. Eng. Chem. Res., 38 (1999) 162.
[2] F.I. Gómez-Castro, J.G. Segovia-Hernández, S. Hernández, C. Gutiérrez-Antonio and A. Briones-Ramírez, Chem. Eng. Technol., 31 (2008) 1246.
[3] Y.H. Kim, Chem. Eng. Proc., 45 (2006) 254.
[4] R. Premkumar and G.P. Rangaiah, Chem. Eng. Res. Des., 87 (2009) 47.
[5] C. Gutiérrez-Antonio and A. Briones-Ramírez, Comp. and Chem. Eng., 33 (2009) 454.
[6] C. Gutiérrez-Antonio and A. Briones-Ramírez, Computer Aided Chemical Engineering, 28 (2010) 391.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Retrofit design of a pharmaceutical batch process considering green chemistry and engineering principles
Alireza Banimostafa,a Stavros Papadokonstantakis,a Konrad Hungerbühlera
a Swiss Federal Institute of Technology (ETH), Zurich 8093, Switzerland
Abstract
The objective of this paper is to address retrofitting actions that enhance both monetary and non-monetary aspects of chemical process design. This is done using a systematic path flow decomposition (PFD) method, multidimensional process assessment indicators, heuristics, and model-based methodologies for the generation of inherently safer design alternatives. In this study a set of so-called EHS and LCIA indicators is newly introduced into the path flow decomposition method and evaluated in parallel with previously developed cost-oriented indicators at process and unit-operation level. The developed methodology is applied to a pharmaceutical batch process as a case study in order to highlight the multi-objective potential for improvements. Based on the calculated indicators and a table of heuristics, retrofit actions are suggested for an inherently safer design that meets green chemistry metrics while improving the economic performance at the same time.
Keywords: Retrofit design, PFD, batch process, green chemistry, inherently safe.
1. Introduction
Rapid globalization during the last decades and increasing regulatory efforts from governments have forced the chemical industry to address the rising concern about green chemistry and engineering aspects of chemical process design. Besides these non-monetary issues, the economic performance remains a crucial aspect of design; therefore, considering the capital-intensive nature of the chemical industry, redesign of existing production plants has recently been regarded as a key strategic decision. This retrofitting task has been performed on continuous processes for the production of bulk chemicals; however, in the case of "low volume - high value" batch plants only few studies address the importance of improving process design through systematic retrofitting methodologies. Fig. 1 illustrates a selection of retrofit incentives typically arising in the chemical industry. So far, different retrofit design methods, classified by the type of retrofit incentive, have been established in the literature, e.g., methods targeting energy savings [1][2], overall cost efficiency [3][4], waste minimization [5], increased production capacity [6] and improved flexibility [7]. However, some of the available methods have drawbacks, i.e.:
Figure 1. Typical retrofit incentives for chemical process design
• In most cases the methods have been applied to rather simplified process models, diminishing the complexities of a real process retrofitting design.
• Currently not all retrofitting incentives have been exploited; e.g., there is little effort to meet green chemistry and engineering principles, and therefore the concept of inherently safe process design has not been fully exercised.
In the current study the aforementioned limitations are tackled in depth by carrying out a systematic path flow decomposition (PFD) method, multidimensional process assessment indicators, heuristics, and model-based methodologies on a case study from the pharmaceutical industry.
2. Methodology
The original PFD method presented by Uerdingen et al. [8] contains a set of indicators focusing on economic efficiency. Fig. 2 depicts the modified methodology, which combines the previously defined path flow indicators (PFIs), i.e., "MVA, EWC, RQ, and AF",* with newly introduced PFIs, namely "EHS [9], LCIA-CED and LCIA-GWP",** representing four (i.e., prevention of waste, design for energy efficiency, safer solvents and auxiliaries, and inherently safer chemistry for accident prevention) of the twelve principles of green chemistry developed by Anastas & Warner [10]. After identification of the open and cycle path flows and establishment of the mass balance at each vertex and for each component, the PFIs can be calculated as follows. MVA is only applicable to open component path flows and is represented by Eq. 1:

MVA_{op}^{c} = \dot{m}_{op}^{c}\,(PP^{c} - RP^{c})   (Eq. 1)

in which \dot{m}_{op}^{c}, PP^{c} and RP^{c} correspond, respectively, to the flow rate of component c in open path flow op, the specific value outside the process boundaries (e.g. fuel or sales price), and the purchase price.
* Material Value Added; Energy & Waste Cost; Reaction Quality; Accumulation Factor.
** Environment Health & Safety; Life Cycle Impact Assessment; Cumulative Energy Demand; Global Warming Potential.
Figure 2. The modified path flow decomposition (PFD) method with newly introduced path flow indicators (PFIs).
EWC is applicable to both open and cycle path flows and is represented by Eq. 2:

EWC_{p}^{c} = \sum_{u=1}^{U} PE_u\,Q_u\,\frac{A_{u,p}(\bar{T}_u,\bar{P}_u)}{\sum_{up=1}^{UP} A_{u,up}(\bar{T}_u,\bar{P}_u)} + \dot{m}_{p}^{c}\,AMW_{p}^{c}   (Eq. 2)

in which u is a sub-operation for cost allocation along path flow p, U is the total number of sub-operations, up is a path flow contributing to the corresponding sub-operation and UP is the total number of path flows for this sub-operation, PE_u and Q_u are, respectively, the utility price and the specific energy consumption required for sub-operation u, A_u is the allocation factor at mean temperature \bar{T}_u and pressure \bar{P}_u, and AMW_{p}^{c} is the mass-specific allocation factor for the calculation of the waste cost of component c in path flow p. RQ is also applicable to both open and cycle path flows and is calculated by Eq. 3:

RQ_{p} = \sum_{r=1}^{R} \sum_{rp=1}^{RP} \frac{\xi_{r,rp,p}\,E_{r,rp,p}}{\sum_{fp=1}^{FP} \xi_{r,rp,fp}}   (Eq. 3)

where r is a reactive unit-operation and R is the total number of reactive unit-operations along path flow p, rp is a reaction and RP is the total number of reactions in the reactive unit-operations, \xi_{r,rp} is the extent of reaction, fp is the final product and FP is the total number of final products.
AF is only calculated for cycle path flows, according to Eq. 4:

AF_{cp}^{c} = \frac{\dot{m}_{cp}^{c}}{\sum_{i=1}^{I}\left(\sum_{a=1}^{Ep} \dot{m}_{c,a} + \sum_{po=1}^{PO} \dot{m}_{c,po}\right)}   (Eq. 4)

where component c leaves the cycle path cp from its vertex i, with a total number of I vertices, and Ep and PO are the number of positively incident edges and the number of process output flows in i (see Fig. 2, step 2). EHS is also appropriate for open and cycle path flows and is defined by Eq. 5:

EHS(ef)_{p} = \dot{m}_{p} \cdot \max_{u}\left(Ind(ef)_{p}(T_u)\right) \cdot N_{p}   (Eq. 5)
where Ind(ef)_{p}(T) is the EHS index value of the path flow, calculated according to nine different effect categories ef (i.e. persistency, air hazard, water hazard, irritation, acute toxicity, chronic toxicity, mobility, fire/explosion, reaction/decomposition), depending for some effect categories on the temperature of unit u, and N_{p} is the number of units the path flow passes through. Finally, LCIA is only valid for open path flows and is calculated for CED and GWP according to Eq. 6.1 and Eq. 6.2:

LCIA_{p,CED} = \dot{m}_{p}\,(CED_{p,prod} - CED_{p,treat})   (Eq. 6.1)

LCIA_{p,GWP} = \dot{m}_{p}\,(GWP_{p,prod} - GWP_{p,treat})   (Eq. 6.2)

where CED_{p,prod} and CED_{p,treat} are, respectively, the cumulative energy demand (expressed in MJ-Eq) for production and for treatment of the open path, e.g., in a recovery, incineration or waste-water treatment plant. The same concept applies in the case of GWP (expressed in CO2-Eq).
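Once the open path flows have been enumerated, indicators such as MVA (Eq. 1) and LCIA-CED (Eq. 6.1) reduce to one-line evaluations per path; a minimal sketch (field names are hypothetical placeholders, not from the original method) is:

// Hypothetical data record for one open component path flow
struct OpenPath {
    double mdot;      // flow rate of component c in open path op
    double PP;        // specific value outside process boundaries (fuel/sales price)
    double RP;        // purchase price
    double CEDprod;   // cumulative energy demand of production (MJ-Eq per unit mass)
    double CEDtreat;  // cumulative energy demand of treatment (MJ-Eq per unit mass)
};

double MVA(const OpenPath& p)      { return p.mdot * (p.PP - p.RP); }            // Eq. 1
double LCIA_CED(const OpenPath& p) { return p.mdot * (p.CEDprod - p.CEDtreat); } // Eq. 6.1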
3. Results
According to the PFI values and a table of heuristics based on the Kletz [11] rules,* four general classes of retrofit actions were established in this case study, namely solvent recovery, solvent substitution, time/volume debottlenecking, and adjustment of operating parameters. Fig. 3 is an example of terminal results based on five sets of different retrofit actions (RA1-RA5) compared to the base case scenario. The final step is to study the trade-offs between the retrofit objectives by establishing the respective Pareto fronts and studying experience-based and statistically derived (PCA-based, i.e. principal component analysis) weighting schemes.
* Intensification, Substitution, Attenuation, Simplification.
Figure 3. Final evaluation categories based on five different retrofit actions compared to base case.
References
[1] A. R. Ciric and C. A. Floudas, "A Retrofit Approach for Heat-Exchanger Networks," Computers & Chemical Engineering, vol. 13, pp. 703-715, 1989.
[2] D. M. Fraser and N. Hallale, "Retrofit of Mass Exchange Networks Using Pinch Technology," AIChE Journal, vol. 46, pp. 2112-2117, 2000.
[3] W. R. Fisher, M. F. Doherty, and J. M. Douglas, "Screening of Process Retrofit Alternatives," Industrial & Engineering Chemistry Research, vol. 26, pp. 2195-2204, 1987.
[4] D. A. Nelson and J. M. Douglas, "A Systematic Procedure for Retrofitting Chemical Plants to Operate Utilizing Different Reaction Paths," Industrial & Engineering Chemistry Research, vol. 29, pp. 819-829, 1990.
[5] M. M. Dantus and K. A. High, "Economic Evaluation for the Retrofit of Chemical Processes through Waste Minimization and Process Integration," Industrial & Engineering Chemistry Research, vol. 35, pp. 4566-4578, 1996.
[6] H. Rapoport, R. Lavie, and E. Kehat, "Retrofit Design of New Units into an Existing Plant - Case Study - Adding New Units to an Aromatics Plant," Computers & Chemical Engineering, vol. 18, pp. 743-753, 1994.
[7] E. N. Pistikopoulos and I. E. Grossmann, "Optimal Retrofit Design for Improving Process Flexibility in Linear Systems," Computers & Chemical Engineering, vol. 12, pp. 719-731, 1988.
[8] E. Uerdingen, U. Fischer, K. Hungerbühler, and R. Gani, "Screening for Profitable Retrofit Options of Chemical Processes: A New Method," AIChE Journal, vol. 49, pp. 2400-2418, 2003.
[9] G. Koller, U. Fischer, K. Hungerbühler, and R. Gani, "Assessing safety, health, and environmental impact early during process development," Ind. Eng. Chem. Res., vol. 39, pp. 960-972, 2000.
[10] P. T. Anastas, J. C. Warner, "Green Chemistry: Theory and Practice," Oxford University Press, New York, p. 30, 1998.
[11] T. Kletz, P. Amyotte, "Process Plants: A Handbook for Inherently Safer Design," 2nd edition, CRC Press, Boca Raton, 2010.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Design and control of an energy integrated biodiesel process
Anton A. Kiss,a Costin Sorin Bildeab
a AkzoNobel – Research, Development and Innovation, Velperweg 76, 6824 BM, Arnhem, The Netherlands. E-mail: [email protected]
b University “Politehnica” of Bucharest, Centre for Technology Transfer in Process Industries, Polizu 1-7, 011061 Bucharest, Romania. E-mail: [email protected]
Abstract
This work presents the design and plantwide control of energy integrated biodiesel processes based on reactive distillation (RD) and reactive absorption (RA). In spite of the high degree of integration, the processes are well controllable using efficient control structures, as clearly shown by the results of rigorous simulations in Aspen Plus Dynamics. These heat-integrated RD and RA processes eliminate all conventional catalyst-related operations, use the raw materials and the reactor volume efficiently, offer complete conversion of the fatty acids, and allow significant energy savings.
Keywords: reactive distillation, reactive absorption, plantwide control, heat-integration
1. Introduction
Biodiesel is a renewable and biodegradable fuel with properties similar to petro-diesel. The most common technologies for its production employ homogeneous catalysts, in batch or continuous processes where both the reaction and separation steps can create bottlenecks. Several processes are currently in use at pilot or industrial scale: batch, continuous, supercritical, enzymatic, and two-step processes (Kiss and Bildea, 2011). Catalytic reactive separation offers novel opportunities for manufacturing fatty acid alkyl esters, used in specialty chemicals and, at a larger scale, in biodiesel. The integrated design, corroborated with the use of a solid catalyst (Kiss et al., 2006), provides major benefits such as high productivity and low investment and operating costs. This work presents novel integrated processes based on reactive distillation and absorption, aiming to reduce further the energy requirements for biodiesel production and leading to competitive operating costs. Despite the high degree of integration, the processes are well controllable using efficient control structures. Rigorous simulations embedding experimental results were performed using computer-aided process engineering tools, such as AspenTech Aspen Plus and Aspen Dynamics. The columns were simulated using the rigorous RADFRAC unit with the RateSep (rate-based) model, explicitly considering three-phase balances. Simulation results are given for a plant producing 10 ktpy fatty acid methyl esters (FAME) from methanol and waste vegetable oil with high free fatty acid (FFA) content, using sulfated zirconia as a green catalyst. The heat-integrated process eliminates all conventional catalyst-related operations, efficiently uses the raw materials and the reactor volume, offering complete conversion of the fatty acids and allowing significant energy savings. Remarkably, the energy requirements of this process are 45-85% lower than the reference cases, while the capital investment cost remains the same, as no additional equipment is required.
2. Problem statement
Biodiesel is an attractive but still costly alternative fuel. The market pressure demands further reduction of the operating costs, by reducing the energy requirements and by replacing the raw materials with inexpensive waste oils with high FFA content. The common problem of all conventional processes is the use of liquid catalysts, which require neutralization and an expensive multi-step separation that generates salt waste streams. To solve these problems, in this work we make use of solid acids applied in an esterification process based on catalytic reactive separation. Such an integrated process is able to shift the chemical equilibrium to completion and to preserve the solid catalyst activity by continuously removing the products. Moreover, the investment and operating costs are lower as compared to conventional processes. Several reactive separation processes based on fatty acid esterification were reported recently, aiming at high performance and productivity as well as low energy requirements:
• Reactive distillation: 191.2 kWh / ton biodiesel (Kiss et al., 2008)
• Dual reactive distillation: 166.8 kWh / ton biodiesel (Dimian et al., 2009)
• Reactive absorption: 138.4 kWh / ton biodiesel (Kiss, 2009).
This work goes one step further by heat-integrating the reactive separation processes, thus allowing a significant reduction of the energy requirements: from 191.2 down to 108.8 kWh / ton biodiesel (a 45% decrease in the case of reactive distillation), and from 138.4 down to 21.6 kWh / ton biodiesel (85% savings in the case of reactive absorption). Moreover, considering the high degree of integration, the controllability and dynamics of the process are also investigated. An economic evaluation is also provided, in order to allow a straightforward comparison with other biodiesel processes reported in the literature.
3. Design and simulation
The integrated biodiesel processes were designed according to previously reported process-synthesis methods for reactive separations. Rigorous simulations embedding experimental results were performed using Aspen Plus Dynamics. The reactive columns were simulated using the rigorous RADFRAC unit with the RateSep (rate-based) model enabled, explicitly considering three-phase balances. The physical properties required for the simulation and the binary interaction parameters for the methanol-water and acid-ester pairs were available in the Aspen Plus database of pure components, while the other interaction parameters were estimated using the UNIFAC-Dortmund modified group contribution method. The fatty components were conveniently lumped into one fatty acid and one ester, as follows: R-COOH + CH3OH ⇌ R-COO-CH3 + H2O. The kinetic expression used is r = A exp(-Ea / (RT)) C_Acid C_Alcohol, where A is the Arrhenius factor (250 kmol·m3·kg-2·s-1) and Ea is the activation energy (55000 kJ/kmol).
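For reference, the quoted kinetics can be evaluated directly; the sketch below uses R = 8.314 kJ/(kmol·K), so that the units match the stated activation energy (illustrative code, not part of the simulation models):

#include <cmath>

const double R  = 8.314;      // kJ/(kmol K), consistent with Ea in kJ/kmol
const double A  = 250.0;      // Arrhenius factor, as quoted above
const double Ea = 55000.0;    // activation energy, kJ/kmol

// r = A exp(-Ea/(R T)) C_acid C_alcohol
double rate(double T, double cAcid, double cAlcohol) {
    return A * std::exp(-Ea / (R * T)) * cAcid * cAlcohol;
}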
4. Heat-integrated reactive distillation
The reference flowsheet (Kiss et al., 2008) based on reactive distillation (RD) is relatively simple, with just a few operating units, two cold streams that need to be pre-heated (fatty acid and alcohol) and two hot streams that have to be cooled down (top water and bottom fatty esters). Therefore, the heat-integration was performed by applying previously reported heuristic rules (Dimian and Bildea, 2008). Consequently, feed-effluent heat exchangers (FEHE) could be used to bring the reactants to the required temperature. Remarkably, no additional heat exchanger is required by the heat-integrated RD setup; the capital investment for the heat-integrated design is therefore the same as for the base case (643.1 kEuro). However, the operating costs can be significantly reduced due to the lower requirements for cooling and heating utilities.
Figure 1. Plantwide control structure for the process based on reactive distillation.

Figure 1 illustrates the improved process design, including heat-integration around the RD column: the hot bottom product of the column is used to pre-heat both reactants. High conversion of the reactants is achieved, with the productivity of the RD unit exceeding 20 kg fatty ester per kg catalyst per hour and a purity over 99.9 %wt for the biodiesel product (FAME stream). Note that the total amount of the optional recycle streams is not significant, representing less than 0.1% of the biodiesel production rate. Feeding the reactants according to their stoichiometric ratio is essential to achieve high product purity. This constraint must be fulfilled also during the transitory regimes arising from planned production rate changes or unexpected disturbances. To meet this target, the control structure employs inventory control loops and uses the reactant ratio to control the concentration of acid in the bottom stream, achieving total conversion of the reactants and preventing difficult separations; a conceptual sketch is given below. In spite of the high degree of integration, this heat-integrated RD process is very well controllable under the disturbances considered. Figure 2 depicts the dynamic simulation results. The simulation starts from the steady state. At time t = 1 h, the acid flow rate is increased by 10%; then, at time t = 5 h, the acid flow rate is decreased by 10% with respect to the nominal value. The new production rate is achieved in about 2 hours, hence the settling time is short and within the acceptable range of operation. The purity of FAME remains practically constant throughout the dynamic regime, the main impurity being methanol, and the acid concentration stays below the 2000 ppm requirement of the ASTM D6751-08 standard, i.e. an acid number of less than 0.50 mg KOH/g biodiesel (Kiss, 2011).
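Conceptually, the ratio scheme behaves as in the sketch below: the alcohol feed follows the measured acid feed through a ratio that is trimmed by a concentration controller (a simplified illustration only; the actual loops are implemented in Aspen Dynamics, and all names and tunings here are hypothetical):

struct PI {                                  // position-form PI controller (conceptual)
    double Kc;                               // controller gain
    double tauI;                             // integral time
    double integral = 0.0;                   // accumulated integral of the error
    double update(double error, double dt) {
        integral += error * dt / tauI;
        return Kc * (error + integral);      // PI law: Kc*(e + (1/tauI) * int e dt)
    }
};

// One control interval: the alcohol feed follows the acid feed (feedforward
// ratio), while the ratio itself is trimmed by the acid-impurity controller.
double alcoholSetpoint(PI& cc, double acidFeed, double acidImpurity,
                       double impuritySP, double baseRatio, double dt)
{
    double ratio = baseRatio + cc.update(acidImpurity - impuritySP, dt);
    return ratio * acidFeed;
}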
Figure 2. Simulation results: acid flow rate disturbance of +10% at 1h, and –10% at 5h.
Figure 3. Plantwide control structure for the process based on reactive absorption.
5. Heat-integrated reactive absorption
Reactive absorption (RA) offers additional advantages on top of the RD benefits, such as lower CapEx and OpEx, due to the absence of a reboiler and condenser, as well as no thermal degradation of the products, due to the lower temperature profile in the column. As a consequence, the process becomes more energy-efficient, leading to low capital (533.4 kEuro) and operating costs. The reference flowsheet (Kiss, 2009) was improved by adding heat-integration around the RA column (Figure 3). Compared to RD, the integrated biodiesel processes based on RA have fewer degrees of freedom. This makes it challenging to set the reactant feed ratio correctly and thus avoid impurities in the products (Bildea and Kiss, 2011). Nevertheless, in spite of the high degree of integration, this RA process is also very well controllable. The concentration of acid in the bottom stream is controlled by manipulating the alcohol flow rate, while the temperature of the column-inlet acid stream is manipulated by a methanol concentration controller (Kiss et al., 2011). Figure 4 depicts the dynamic simulation results for the flowsheet presented in Figure 3, when the recycle of methanol vapours is considered. At time t = 1 h, the acid flow rate is increased by +10%; then, at time t = 5 h, the acid flow rate is decreased by -10%. Production rate changes are easily achieved and the product purity is maintained high, with the acid concentration on-spec according to the ASTM D6751-08 standard.
Figure 4. Dynamic simulation results for the flowsheet with methanol recycle (stream Rec-Btm): acid flow rate disturbance of +10% at 1h, and –10% at 5h.
Figure 5. Dynamic simulation results for the flowsheet without methanol recycle: acid flow rate disturbance of +10% at 1h, and –10% at 5h.
Similarly, Figure 5 shows the results for the flowsheet without the bottom recycle of methanol. The same scenario is assumed for the dynamic simulation, but a degradation of the dynamic performance is observed when the purities of the product streams are taken into account. Therefore, the alternative with methanol recycle is preferable. Figure 6 illustrates the energy requirements per ton of biodiesel for the main processing steps encountered in conventional two-step processes vs. reactive separation processes.
[Figure 6 data, kWh/ton biodiesel: 1st methanol recovery 1772.6; 2nd methanol recovery 622.1; glycerol purification 90.3; FAME purification 206.6; reactive distillation 191.2; heat-integrated RD 108.8; dual reactive distillation 166.8; reactive absorption 138.4; heat-integrated RA 21.6.]
Figure 6. Energy requirements: conventional two-step process vs reactive-separations.
6. Conclusions
The novel heat-integrated reactive separation processes proposed in this work eliminate all conventional catalyst-related operations, improve efficiency and considerably reduce the energy requirements for biodiesel production. A key result is an efficient control structure that ensures the required reactant ratio, achieving total conversion and preventing difficult separations. Bypasses on the heat exchangers may be used to further improve the controllability and flexibility of the system. Remarkably, in spite of the high degree of integration, these reactive separation processes are well controllable under the disturbances considered, as demonstrated by the results of rigorous simulations.
References
1. C. S. Bildea, A. A. Kiss, 2011, Chem. Eng. Res. Des., 89, 187-196.
2. A. C. Dimian, C. S. Bildea, 2008, Chemical Process Design, Wiley-VCH, Weinheim.
3. A. C. Dimian, C. S. Bildea, F. Omota, A. A. Kiss, 2009, Comp. & Chem. Eng., 33, 743-750.
4. A. Kiss, A. C. Dimian, G. Rothenberg, 2006, Adv. Synth. Cat., 348, 75-81.
5. A. Kiss, A. C. Dimian, G. Rothenberg, 2008, Energy & Fuels, 22, 598-604.
6. A. Kiss, 2009, Sep. Purif. Technol., 69, 280-287.
7. A. Kiss, 2011, Fuel Proc. Technol., article in press, DOI: 10.1016/j.fuproc.2011.02.003.
8. A. Kiss, C. S. Bildea, 2011, Biores. Technol., 102, 490-498.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A systematic approach towards applicability of reactive distillation
Anton A. Kiss,a Prachi Singh,b Cornald J. G. van Striena
a AkzoNobel – Research, Development and Innovation, Velperweg 76, 6824 BM, Arnhem, The Netherlands. E-mail: [email protected]
b Dutch Separation Technology Institute (DSTI), Stationsstraat 77, 3811MH, Amersfoort, The Netherlands. E-mail: [email protected]
Abstract
The novel systematic approach proposed in this work determines the boundary conditions for reactive distillation (e.g. relative volatilities, target purities, equilibrium conversion, and equipment restrictions), checks the integrated process constraints, evaluates the economic feasibility and provides guidelines applicable to any potential RD process application. This approach allows a quick and easy RD feasibility analysis for a wide range of chemical processes. In this study, three industrially relevant case studies (HDA, MeAc hydrolysis, FAME) illustrate the validity of the proposed approach.
Keywords: reactive distillation, case studies, economics, feasibility methodology
1. Introduction
Integrating reaction and separation into one unit provides important benefits such as reduced CapEx and lower OpEx, simplified operation, no recycle streams and no or reduced waste. Reactive distillation (RD) is a process intensification technology that combines both chemical reaction and distillation in a single processing unit. This technique is especially useful for equilibrium-limited reactions such as esterification and ester hydrolysis. Conversion and selectivity can be increased far beyond what is expected from the chemical equilibrium, due to the continuous removal of the products from the reactive zone of the integrated column. This helps not only to reduce the capital and investment costs, but is also important for sustainable development, due to a lower consumption of valuable resources (e.g. chemicals, utilities). Currently, the typical design of RD is still based on expensive and time-consuming sequences of laboratory and pilot-plant experiments, the main reason being the absence of an established RD design procedure suitable for a straightforward evaluation. Hence, the problem is how to determine quickly but reliably when RD is also an economically feasible and attractive alternative to an existing or a new chemical process. To solve this problem, a systematic approach was developed through which the RD economic feasibility of a process can be quickly and reliably evaluated. A major advantage of this approach is its applicability to a wide range of all-scale processes and multiproduct environments. The systematic approach proposed in this work determines the boundary conditions for RD (relative volatilities, target purities, equilibrium conversion, equipment restrictions), checks the integrated process constraints, evaluates the economic feasibility and provides guidelines applicable to almost any potential RD process application. In order to illustrate the applicability of the proposed methodology, three industrially relevant case studies are examined: methyl acetate hydrolysis, hydrodealkylation (HDA) of toluene and fatty acid methyl esters (FAME) production.
2. RD feasibility analysis
The methodology to evaluate the economic feasibility of RD is shown in Figure 1. The proposed approach requires some basic information on the chemical process, such as the vapour-liquid equilibrium, the reaction stoichiometry, the kinetics and the enthalpy of reaction. The next step is to check the number of products and the reaction type. If there is only one product, the last step of the main reaction is irreversible and no side reactions are present, then there is no advantage in using RD over a simple reactor.1,2 As both operations occur simultaneously in the same unit, there must be a proper match between the temperatures required for reaction and separation.3 For example, in the conventional HDA process the temperature difference between the reaction and the separation process is 120°C. In this case, RD was found to be technically applicable, yet not economically attractive.4 Therefore, for a feasible RD process the temperature difference between separation and main reaction should be lower than 50°C. Moreover, the operating pressure and temperature should not be close to the critical point of the key components.5 If the column operates at the critical pressure of the key components, these will be present in vapour form – although in the vast majority of RD processes the reaction takes place in the liquid phase.5 For example, the synthesis of dimethyl carbonate (DMC) by catalytic esterification of carbon dioxide and methanol occurs near the critical region of carbon dioxide, at 73 bar and 80-100°C.6 Hence, RD is not applicable to this process, as the main reaction takes place in the gas phase. The relative volatility of the key chemical components is also a crucial parameter for the RD feasibility analysis. The temperature dependence of the vapour pressures of the individual components can result in a decreased relative volatility as the temperature increases in multicomponent systems. This can create a mismatch between the temperatures favourable for the kinetics and for the relative volatilities, which can make an RD process unattractive.7 For instance, in the hydrolysis reaction of methyl-acetate, the reactant (MeAc) is the lightest component and it is difficult to keep it in the reactive zone. Therefore, a conventional RD process is not applicable in this case. A minimum relative volatility of 1.1 was chosen for this RD feasibility analysis, as this is the typical value for distillation processes.8 The chemical equilibrium constant influences the reactant flow and the residence time in an RD column. A low equilibrium constant requires an excess of reactants and a higher number of reactive trays, which results in an increased total annual cost (TAC) of the RD process. In the case of methyl-acetate hydrolysis, the equilibrium constant is small (KEQ ~0.0013 at 50°C).9 Still, non-conventional RD processes such as the reactive dividing-wall column could be an attractive alternative for this process.10 Nevertheless, for conventional RD processes a KEQ > 1 is recommended. Inerts are present in many processes, and in some commercial RD systems the lighter reactant is fed together with an inert. The presence of inerts reduces the concentration of the reactants and results in lower reaction rates; the attainable conversion is reduced, which increases the loss of reactant. However, a certain amount of inerts can be beneficial for optimum conversion – e.g. in MTBE production, where n-butene serves as a coolant for the reactive zone, thereby keeping the temperature of the reactive zone at a level where the equilibrium is favourable for MTBE conversion.11 In RD processes, the specific rate of the main reaction cannot be too low, as low rates require large liquid holdups, large amounts of catalyst on each reactive tray and eventually a larger column.5 Therefore, the rate of the main reaction should be higher than 10-5 kmol/(kgcat·s) (e.g. methyl-acetate hydrolysis).9 Furthermore, the desired column temperature should be lower than the temperature of side reactions. For instance, in the methyl tert-butyl ether (MTBE) process, two consecutive side reactions are present: the irreversible dimerization of isobutene to di-isobutene (Tb = 101.45°C) and the reversible dehydration of methanol to dimethyl
ether (DME, Tb = –24.83°C). Therefore, the purity of the desired products, isobutene and methanol, could be reduced by the by-products formed.12 The heat of reaction should be lower than the heat of vaporization of the key components; a higher heat of reaction would dry out the trays and reduce the conversion. The last criterion is the production rate: if it is above 0.5-1 kton/yr, then an RD process is feasible. For lower production rates it is important to evaluate the gross profit of the process; a gross profit higher than 15% makes small-scale production suitable for RD application (e.g. pharmaceutical industry). Ultimately, if all conditions are fulfilled, then the RD process is not only feasible but also attractive (e.g. fatty acids esterification to FAME).13-15
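To make the chain of checks concrete, the short sketch below strings the thresholds quoted in this section into a single screening function; the function, its input keys and the sample data are illustrative stand-ins, not part of the published methodology.

```python
# Hedged sketch of the screening logic of Figure 1; thresholds as quoted above.
def rd_feasibility(d):
    """d: dict of process data (keys invented for this illustration)."""
    reasons = []
    if d["n_products"] == 1 and d["irreversible_no_side_reactions"]:
        reasons.append("single product, irreversible: a simple reactor suffices")
    if d["dT_reaction_separation"] > 50:        # degC
        reasons.append("reaction/separation temperature mismatch > 50 degC")
    if d["near_critical_key_components"]:
        reasons.append("operation close to critical point of key components")
    if d["relative_volatility"] < 1.1:
        reasons.append("relative volatility below 1.1")
    if d["k_eq"] <= 1.0:
        reasons.append("K_EQ <= 1: unsuitable for conventional RD")
    if d["reaction_rate"] < 1e-5:               # kmol/(kg_cat*s)
        reasons.append("main reaction rate below 1e-5 kmol/(kg_cat*s)")
    if d["dH_reaction"] > d["dH_vaporization"]:
        reasons.append("heat of reaction exceeds heat of vaporization")
    if d["production_kton_yr"] < 0.5 and d["gross_profit"] < 0.15:
        reasons.append("small scale with gross profit below 15%")
    return (not reasons), reasons

# Made-up FAME-like data set that passes every check:
ok, why = rd_feasibility({
    "n_products": 2, "irreversible_no_side_reactions": False,
    "dT_reaction_separation": 20, "near_critical_key_components": False,
    "relative_volatility": 2.0, "k_eq": 10.0, "reaction_rate": 1e-3,
    "dH_reaction": 30.0, "dH_vaporization": 40.0,
    "production_kton_yr": 100.0, "gross_profit": 0.30})
print(ok, why)
```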
Figure 1. Flow diagram of the RD process application feasibility analysis.
3. Case studies
The following examples were selected as case studies to evaluate RD process feasibility.
Methyl acetate hydrolysis. In the production of acetic acid and methanol by hydrolysis of methyl acetate, the ester reactant is the lightest component, hence it is very difficult to keep it in the reactive zone. Figure 2 (left) shows the residue curve map (RCM) of methyl-acetate / methanol / acetic acid. Methanol and its ester form a binary azeotrope; hence it is not likely that high-purity methanol can be obtained under 'neat' RD operation. Due to these technical constraints a conventional RD process is not technically feasible,9 although RD is suitable for the reverse reaction. However, non-conventional RD processes such as the reactive dividing-wall column (RDWC) could be a viable alternative.10
Figure 2. Residue curve map (RCM) of methyl-acetate / methanol / acetic acid (left). Cost comparison for a toluene hydro-dealkylation plant (right).

HDA process. In the conventional process of hydro-dealkylation (HDA) of toluene to benzene, the reaction requires 20-25 bar and 400°C, whereas the RD column is operated at 30 bar and about 280°C in the reactive zone. As the optimal reaction and separation conditions are significantly different and the pressure required by the RD column is higher, these drawbacks cancel out the overall advantages of the RD process. Figure 2 (right) summarizes the key capital cost elements for both the RD and the conventional process, for a design basis of 150 kton/yr xylenes and a costing basis in thousands of pounds (kGBP).4 The net effect is that the estimated capital saving is only of the order of 4%, well below the 25-50% improvement typically required to drive a new technology development.4 Therefore, in this case the RD application is not economically attractive in spite of being technically feasible and applicable.

Biodiesel production. Fatty acid methyl esters – the main components of biodiesel – can be produced directly by esterification of free fatty acids (FFA) with methanol or bioethanol.13-15 Conventionally, biodiesel is produced batch-wise using homogeneous catalysts that have many associated problems (neutralization, separation, salt waste streams). The RD process powered by solid catalysts offers unique advantages, such as higher productivity, efficient use of raw materials and equipment, no catalyst-related issues, elimination of the alcohol excess and recycle, and lower capital and operating costs. Therefore, in this case the RD process proves to be technologically feasible (Figure 3) and at the same time economically attractive, using only ~109 kW·h per ton of biodiesel.14-16
Figure 3. RD process for fatty acids esterification (left). Composition, temperature and reaction rate profiles along the RD column (right).

The three industrially relevant case studies briefly presented here illustrate the potential applicability range of the proposed methodology. Based on our previous experience, we are also confident that virtually any potential RD application can be quickly and reliably evaluated using this approach, which checks whether a process is not only technically feasible but also economically attractive – a very important criterion at industrial scale.
4. Conclusions
The novel systematic approach proposed in this study evaluates RD process feasibility based on minimal knowledge of kinetics, thermodynamics and economics. The main advantage of this approach is that all important parameters influencing the design of an RD process are taken into account. A major requirement is that the process conditions suitable for the chemical reaction must be in line with the conditions required for the vapour-liquid separation. The industrially relevant case studies analyzed in this study validate the proposed approach for RD process feasibility analysis and make clear when an RD process is technically feasible and also economically attractive.
References
1. Kenig E., Jakobsson K., Banik P., Aittamaa J., Gorak A., Koskinen M., Wettmann P., 1999, Chem. Eng. Sci., 54, 1347-1352
2. Aittamaa J., Kenig E., Jakobsson K., Banik P., Schembecker G., Górak A. et al., 1996, Brite-Euram Project Reactive Distillation BE95-1335
3. Kaymak D. B., Luyben W. L., 2004, Comp. & Chem. Eng., 32, 1456-1470
4. Stitt E. H., 2002, Chem. Eng. Sci., 57, 1537-1543
5. Kaymak D. B., Luyben W. L., 2004, Ind. Eng. Chem. Res., 43, 3666-3671
6. Cao F., Fang D., Liu D., Ying W., 2002, Fuel Chem. Div. Preprints, 74 (1), 295-297
7. Luyben W. L., Yu C. C., 2008, Reactive Distillation Design and Control, Wiley, ISBN 978-0-470-22612-4
8. Perry's Chemical Engineers' Handbook (8th Edition), 2008, McGraw-Hill
9. Lin Y., Chen J., Cheng J., Huang H., Yu C., 2008, Chem. Eng. Sci., 63, 1668-1682
10. Sander S., Flisch C., Geissler E., Schoenmakers H., Ryll O., Hasse H., 2007, Chem. Eng. Res. Des., 85 (A1), 149-154
11. Higler A. P., Taylor R., Krishna R., 1999, Chem. Eng. Sci., 54, 1389-1395
12. Qi Z., Kienle A., Stein E., Mohl K., Tuchlenski A., Sundmacher K., 2004, Chem. Eng. Res. Des., 82 (A2), 185-191
13. Dimian A. C., Bildea C. S., Omota F., Kiss A. A., 2009, Comp. & Chem. Eng., 33, 743-750
14. Kiss A. A., Dimian A. C., Rothenberg G., 2008, Energy & Fuels, 22, 598-604
15. Kiss A. A., 2010, Comp. & Chem. Eng., 34, 812-820
16. Kiss A. A., 2011, Fuel Proc. Technol., Article in press, DOI: 10.1016/j.fuproc.2011.02.003
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Strategies for the Robust Simulation of Thermally Coupled Distillation Sequences
Miguel A. Navarro,a José A. Caballero,a Ignacio E. Grossmannb
a Department of Chemical Engineering, University of Alicante, Ap. Correos 99, 03080 Alicante, Spain
b Department of Chemical Engineering, Carnegie Mellon University, 5000 Forbes Av., 15213 Pittsburgh, PA, USA
Abstract
This paper presents a new strategy for the simulation of thermally coupled distillation sequences using process simulators. First, we show that the two side-stream connections that produce a 'thermal couple' can be accurately substituted by a combination of a material stream and a heat flow. In this way, a sequence of thermally coupled distillation columns can be simulated without recycle streams, like any conventional simulation of zeotropic distillation sequences, and is therefore no more difficult to converge than any other distillation system without recycles. In most situations this approach introduces negligible errors, and in any case it provides excellent initial points for the rigorous simulations with recycle streams. Different examples are presented, including mixtures of hydrocarbons (C4's – C5's – C6's), aromatics (BTX), alcohols, non-ideal azeotropic systems (acetone, benzene, chloroform) and systems involving 4 or 5 components. Different thermodynamically equivalent configurations, corresponding to different alternatives for implementing this approach, are also described.
Keywords: Distillation; Simulation; Thermally Coupled Distillation.
1. Introduction
Sustainable development of process systems motivates the pursuit of design solutions that achieve efficient use of energy. Distillation consumes about 3% of the energy worldwide1. Thermally Coupled Distillation (TCD) systems have acquired a renewed interest in the last years because of the possible savings in energy and total costs – in some cases over 30-40% – when compared to systems with conventional columns. Furthermore, TCD has a richer space of alternative designs than conventional separation systems2. Most chemical process simulators include models for side columns or even Petlyuk-type configurations. But when thermally coupled systems involve more than two columns (and in some cases even with two columns), their simulation is difficult because the two side flows connecting the columns produce flowsheets with a large number of 'recycle' streams (in a modular simulator these recycles of material or information are converged through tear streams). Whatever the method used to converge the cyclic structure of the flowsheet (direct substitution, Newton or quasi-Newton methods), good initial values, close to the final solution, are mandatory to converge the system while maintaining the product specifications. A large number of tear streams slows down the simulation and makes the problem difficult to converge.
2. Application of the new strategies of simulation: "acyclic system simulation"
The basic idea used in this paper is to avoid the recycle structures that appear in TCD systems in modular process simulators. This idea is based on the work by Carlberg and Westerberg3,4. They proved, in the context of the Underwood shortcut method, that the two side streams in a TCD system connecting the rectifying section of the first column (see Figure 1a) with column 2 are equivalent to a superheated vapour stream whose flow is the net flow (the difference between the vapour exiting the column and the liquid entering it) – Figure 1b. If the two side streams connect the stripping section of the first column with the second column, then these two streams are equivalent to a single subcooled liquid stream whose flow is the net flow (in this case liquid minus vapour flows) – see Figures 1c,d.
Figure 1. (a), (b), (e) equivalent configurations; (c), (d), (f) equivalent configurations.

However, this approach cannot, in general, be implemented in modular process simulators, because the degree of superheating and/or subcooling could be so large that it produces results without physical meaning, which can lead to failure in the convergence of the simulator. Fortunately, it is possible to solve this problem by substituting the superheated or subcooled stream with a combination of a material and an energy stream. In the rectifying section, the material stream is vapour at its dew point and the energy stream is equivalent to the energy removed if we include a partial condenser to provide reflux to the first column – see Figure 1e. In the stripping section, the material stream is liquid at its bubble point and the energy stream is equivalent to the energy added if we include a reboiler to provide vapour to the first column – see Figure 1f. Although this strategy is only an artificial representation used to simulate the behavior of thermally coupled systems without requiring recycles, the results are good if the streams
introduced/withdrawn in/from column 2 were in equilibrium with the liquid and vapor flowing through this column (V1C1 with L2C2) -See Figure 2-.
Figure 2. Details of the connection between columns, "Cyclic system simulation"
Figure 3. Details of the connection between columns, "Acyclic system simulation"
Unfortunately, the equilibrium assumption is not entirely true. The Carlberg & Westerberg approximation considers that there is no mass exchange between the vapor and liquid streams. In the rigorous simulation, the energy stream is used to simulate the removal of the liquid that is withdrawn from column 2 to column 1: it vaporizes part of the liquid stream, in an amount equivalent to the liquid removed – see Figure 3. This is the main source of error. But if the vapor and liquid streams are introduced/withdrawn in/from the same tray, the error introduced is small and can usually be neglected. In any case, even in the worst case the values obtained with this technique provide excellent initial points to converge the rigorous simulations of the original system.
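The substitution itself reduces to a small calculation. The sketch below, with illustrative flows and a single assumed latent heat, returns the net material stream and the accompanying energy stream for a rectifying-side and a stripping-side couple; function names and numbers are hypothetical.

```python
# Rectifying side: net stream F = V - L (saturated vapour) plus the duty of a
# fictitious partial condenser generating the reflux L. Stripping side: net
# stream F = L - V (saturated liquid) plus the duty of a fictitious reboiler
# generating the vapour V. Latent heat assumed constant for simplicity.

def rectifying_couple(V, L, latent_heat):
    """V, L in kmol/h; latent_heat in kJ/kmol. Q < 0 means heat removal."""
    return V - L, -L * latent_heat

def stripping_couple(V, L, latent_heat):
    return L - V, V * latent_heat

print(rectifying_couple(V=180.0, L=120.0, latent_heat=3.0e4))  # (60.0, -3.6e6)
```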
3. Examples and results
Different examples are presented, including mixtures of hydrocarbons (C4's – C5's – C6's), aromatics (BTX), alcohols, non-ideal azeotropic systems (acetone, benzene, chloroform) and systems involving 4 or 5 components. Different thermodynamically equivalent configurations, corresponding to different alternatives for implementing this approach, are also described. All the simulations were performed using ASPEN HYSYS. The parameters used to compare the results between cyclic and acyclic simulations are the reboiler and condenser duties and the internal vapor/liquid flows. First, three-component systems were studied: a mixture of aromatics (benzene, toluene, p-xylene), alcohols (methanol, ethanol, butanol) and hydrocarbons (n-hexane, n-heptane, n-octane). In all cases, the cyclic and acyclic simulations were solved independently, although the results of the acyclic simulation were used as initial points for the actual system (with cyclic structure) – see Figure 4.
Figure 4: Simulations of acyclic and cyclic system configurations.
Difficult separations involving three-component systems were also studied: a) similar volatilities (i-butane, n-butane, cyclobutane); b) azeotropic systems (benzene-acetone-chloroform); c) a multi-component mixture that must be separated into C3's, C4's and C5's groups. The separation of 4 components (butane-pentane-hexane-heptane) was also studied, in a sequence with 16 thermodynamically equivalent configurations; in this case, 3 configurations were studied using the same methodology previously explained – see Figure 5. Finally, the separation of a mixture of 5 components is also presented.
Figure 5: Thermodynamically equivalent configurations
The results obtained in these systems are shown in the following table:
Table 1: Results obtained in all the different systems

                                            INTERNAL FLOW                         ENERGY
                                            Max. Error  Average Error  St. Dev.   Max. Error
3 Components
  i-Butane / n-Butane / cyclobutane         3.42%       0.74%          0.87%      0.04%
  Hexane / Heptane / Octane                 3.47%       0.96%          0.96%      0.08%
  Methanol / Ethanol / Butanol              3.94%       0.86%          0.92%      0.08%
  Benzene / Toluene / Xylene                4.55%       1.46%          1.40%      0.19%
  Azeotropic distillation                   4.04%       1.09%          0.98%      0.36%
  C4's / C5's / C6's                        6.72%       1.43%          1.72%      0.05%
  Acetone / Acetic Acid / Acetic Anhydride  18.40%      3.91%          4.61%      0.91%
4 Components (Butane-Pentane-Hexane-Heptane)
  Configuration 1                           21.16%      4.88%          6.00%      0.41%
  Configuration 2                           22.42%      5.03%          6.16%      0.12%
  Configuration 3                           22.42%      4.86%          6.07%      0.41%
5 Components (Butane-Pentane-Hexane-Heptane-Octane)
  Configuration 1                           79.27%      23.61%         31.46%     0.19%
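For reference, the statistics reported in Table 1 can be reproduced from tray-by-tray profiles as in the sketch below; the two profiles shown are synthetic stand-ins, not simulation output.

```python
import numpy as np

def profile_errors(cyclic, acyclic):
    """Max, mean and standard deviation of the relative deviation per tray."""
    rel = np.abs(acyclic - cyclic) / np.abs(cyclic)
    return rel.max(), rel.mean(), rel.std()

cyclic = np.array([150.0, 148.0, 151.0, 160.0, 158.0])   # kmol/h, made up
acyclic = np.array([152.0, 147.0, 153.0, 158.5, 159.0])
mx, avg, sd = profile_errors(cyclic, acyclic)
print(f"max {100*mx:.2f}%, avg {100*avg:.2f}%, std {100*sd:.2f}%")
```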
The internal flows of the worst case studied (Acetone - Acetic Acid - Acetic Anhydride) can be seen in the following figures:
[Figure graphics: tray-by-tray liquid and vapour flow profiles (kmol/h) in columns 1 and 2, comparing the cyclic and acyclic simulations.]
Figure 12: Comparison of flows for AAA separation
4. Conclusions
The application of the new strategy for simulating thermally coupled distillation sequences with process simulators to several case studies has shown that the results obtained with the acyclic sequence technique are very close to those obtained with the recycle calculation, with average errors below 2% for 3-component mixtures. The average error increases slightly with the number of components, due to error propagation as a consequence of the larger number of thermally coupled columns in the system. However, in all cases the "acyclic simulation" produces excellent results, comparable with those of the actual system. Furthermore, this new strategy provides very good starting points for converging the rigorous simulations of these systems. In conclusion, the new technique works very well to quickly and easily study thermally coupled distillation systems for the separation of 3-, 4- or 5-component mixtures.
Acknowledgements
The authors gratefully acknowledge the financial support from the "Ministerio de Ciencia e Innovación" of Spain under project CTQ2009-14420-C02-02.
References
1. Soave, G.; Feliu, J. A., 2002, Saving energy in distillation towers by feed splitting, Applied Thermal Engineering, 22 (8), 889.
2. Giridhar, A.; Agrawal, R., 2010, Synthesis of distillation configurations. II: A search formulation for basic configurations, Computers & Chemical Engineering, 34 (1), 84.
3. Carlberg, N. A.; Westerberg, A. W., 1989, Temperature-Heat Diagrams for Complex Columns. 2. Underwood's Method for Side Strippers and Enrichers, Industrial & Engineering Chemistry Research, 28 (9), 1379-1386.
4. Carlberg, N. A.; Westerberg, A. W., 1989, Temperature-Heat Diagrams for Complex Columns. 3. Underwood's Method for the Petlyuk Configuration, Industrial & Engineering Chemistry Research, 28 (9), 1386-1397.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Spatiotemporal pattern formation in an electrochemical membrane reactor during deep CO removal from reformate gas
Richard Hanke-Rauschenbach,a,* Sebastian Kirscha and Kai Sundmachera,b
a Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstr. 1, 39106 Magdeburg, Germany
b Process Systems Engineering, Otto-von-Guericke University, Universitätsplatz 2, 39106 Magdeburg, Germany
*E-mail address: [email protected]
Abstract
The preferential oxidation of CO from reformate gas in a spatially distributed electrochemical membrane reactor has been investigated. The reactor shows oscillations of the electric potential in space and time when operated in the galvanostatic mode. The operating behavior is complex and not straightforward to predict, which hampers the application of classical methods for the design of such a reactor. Within the present work, a model-based approach is discussed to characterize the oscillations and their influence on the performance of the reactor.
Keywords: H2 production, electrochemical membrane reactor, nonlinear dynamics.
1. Introduction
One of the key issues limiting the application of proton exchange membrane fuel cells is their susceptibility to traces of carbon monoxide within the hydrogen used as fuel. CO is produced in substantial amounts during the conversion of hydrocarbons to hydrogen-rich syngas. Typically, the fuel processor is coupled to or followed by a water-gas shift system, which reduces the CO content to a level of 1-3 vol%. In a subsequent removal step the CO content has to be decreased to tolerable levels of 10-30 ppm. Regarding this final purification step, the preferential oxidation (PrOx) of CO currently seems to be the most promising option for fuel cell systems with on-site hydrogen production. Recently, Zhang and Datta [1] suggested a novel approach involving the electrochemical preferential oxidation (ECPrOx) of CO, which might have the potential to replace the PrOx in the above mentioned scheme. The main advantage, in comparison to the PrOx concept, is that non-selectively oxidized hydrogen is converted into electrical energy instead of being burned.

Anode reactions:
CO + H2O ↔ CO2 + 2H+ + 2e−  (desired)
H2 ↔ 2H+ + 2e−  (undesired)

Cathode reaction:
O2 + 4H+ + 4e− → 2H2O
The design of such an ECPrOx reactor is similar to a PEM fuel cell. In contrast, instead of platinum, a platinum-ruthenium alloy is used as anode catalyst. When operated in the galvanostatic mode the reactor exhibits oscillations of the cell voltage, which allow for a selective electro-oxidation of CO at relatively low overpotentials [1-3]. The reason for this behavior is a cyclic interplay of the CO surface coverage θCO and the anode overpotential ηA (Fig. 1).
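For illustration only, the following sketch integrates a schematic two-variable coverage/overpotential model with the activator-inhibitor structure sketched in Fig. 1; the rate expressions and all parameter values are invented stand-ins, not the kinetics of refs. [1-3].

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy galvanostatic model: theta (CO coverage) blocks sites, eta (anode
# overpotential) rises until CO is burnt off, in the spirit of Fig. 1.
# Every rate constant below is an invented placeholder.
i_app, C_dl, b = 0.10, 0.20, 19.4      # A/cm2, F/cm2, 1/V (b ~ alpha*F/RT)
k_ads, k_co, k_h2, s = 2.0, 2e-4, 5e-4, 5.0

def rhs(t, y):
    theta, eta = y
    i_co = k_co * theta * np.exp(b * eta)          # CO electro-oxidation current
    i_h2 = k_h2 * (1.0 - theta) * np.exp(b * eta)  # H2 oxidation on free sites
    dtheta = k_ads * (1.0 - theta) - s * i_co      # adsorption vs. oxidative removal
    deta = (i_app - i_co - i_h2) / C_dl            # galvanostatic charge balance
    return [dtheta, deta]

sol = solve_ivp(rhs, (0.0, 60.0), [0.5, 0.2], max_step=0.01)
print("final state (theta, eta):", sol.y[:, -1])
```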
Figure 1. Two-phase mechanism to explain the autonomous potential oscillations of the system: (a) graphic representation of the interplay between the key variables; (b) and (c) time evolution of the CO surface coverage θCO and the anode overvoltage ηA.

In our previous contributions [4-6], the gradient-less system was investigated. It has been shown that the selectivity of O2 towards CO2 decreases with increasing O2 conversion (i.e. with increasing cell current). This suggests either the use of a reactor cascade or the use of a spatially distributed reactor. For the cascade, two different electrical configurations exist: (i) an electrical series connection and (ii) an electrical parallel connection of the reactors. While the former leads to a significant increase in the selectivity, the latter does not (Fig. 2). As a reason for this behavior, rigid electrical coupling between the reactors, introduced by the electrical parallel connection, was identified. This coupling causes a synchronization of the oscillation frequencies of the single reactors, which leads to an enslavement of all the upstream reactors by the last reactor downstream in the cascade [5].

Figure 2: Hydrogen recovery degree εH2 as function of CO conversion XCO for two coupled ECPrOx reactors (the current density is the parameter of the curves): a) qualitative prediction [5] and b) experimental proof [6].

In the present contribution, the behavior of the spatially distributed reactor is analyzed. The above-mentioned unfavorable electrical coupling is an intrinsic property of this system, introduced by the lateral electrolytic resistance. The system shows complex patterns in space and time [7,8]. Parameters influencing the patterns, and thereby the reactor performance, are the residence time, the CO mole fraction at the reactor inlet, the electrolyte conductivity and its geometry, as well as the applied current. The prediction of the operating behavior is not straightforward, which hampers the design of such a reactor by means of classical methods. Therefore, a model-based approach has been chosen here, in order to gain insight into the underlying phenomena and to prepare a validating experiment. The model is briefly introduced in the following section. Subsequently, the influence of selected design and operating parameters on the type of pattern and the reactor performance is discussed.
2. Model
To capture the essential dynamics of the system, a transient, isothermal, spatially one-dimensional distributed reactor model has been employed [7]. It primarily considers changes of the state variables in the direction along the channel (z-coordinate). For the membrane, changes in the through-plane direction (y-coordinate) are additionally accounted for. A couple of simplifications have been made during model development (see [7] for details). Here only the governing equations are briefly collected. The unknown profiles of the CO mole fraction xCO(z,t) and the gas velocity v(z,t) within the anode channel are determined by the following material balances:

$$\frac{p}{RT}\frac{\partial x_{\mathrm{CO}}}{\partial t} = -\frac{p}{RT}\frac{\partial (x_{\mathrm{CO}}\, v)}{\partial z} - \sigma_{\mathrm{CO}}, \qquad \mathrm{BC:}\;\; x_{\mathrm{CO}}(t, z=0) = x_{\mathrm{CO,in}}$$

$$0 = -\frac{p}{RT}\frac{\partial v}{\partial z} - \sum_{\alpha}\sigma_{\alpha}, \quad \alpha = \{\mathrm{H_2, CO, H_2O}\}, \qquad \mathrm{BC:}\;\; v(t, z=0) = v_{\mathrm{in}}$$
The balances for the species at the catalyst surface yield the local profiles of the coverages θCO(z,t), θH(z,t) and θOH(z,t) and the fraction of free adsorption sites θ0(z,t):

$$\gamma\, C_t^{*}\, \frac{\partial \theta_{\beta}}{\partial t} = \sigma_{\beta}, \quad \beta = \{\mathrm{CO, H, OH}\}, \qquad \theta_0 = 1 - \theta_{\mathrm{CO}} - \theta_{\mathrm{H}} - \theta_{\mathrm{OH}}$$
The material balances are completed by a set of nonlinear expressions describing the kinetics of the sorption processes and of the electrochemical electrode reactions. They are given in [7] and enter the model through the source terms σ. The rates of the electrochemical reactions depend on the electrode potential, which needs to be determined from the charge balances for the electrolyte and for the anodic and cathodic electrochemical double layers:
$$0 = -\kappa\,\frac{\partial^2 \varphi_m}{\partial z^2} - \kappa\,\frac{\partial^2 \varphi_m}{\partial y^2}$$

$$\mathrm{BCs:}\quad \left.\frac{\partial \varphi_m}{\partial z}\right|_{z=0} = \left.\frac{\partial \varphi_m}{\partial z}\right|_{z=L} = 0 \;\;\forall y,t; \qquad \varphi_m(z, y=0, t) = \varphi_{ma}(z,t), \quad \varphi_m(z, y=d, t) = \varphi_{mc}(z,t)$$

$$c_{dl}^{a}\,\frac{\partial (\varphi_a - \varphi_{ma})}{\partial t} = -\kappa \left.\frac{\partial \varphi_m}{\partial y}\right|_{y=0} + \sigma_{e^-}^{a}, \qquad c_{dl}^{c}\,\frac{\partial (\varphi_c - \varphi_{mc})}{\partial t} = \kappa \left.\frac{\partial \varphi_m}{\partial y}\right|_{y=d} + \sigma_{e^-}^{c}$$

$$\varphi_a:\quad 0 = \frac{I}{A} - \int_{z=0}^{z=L} \left(-\kappa \left.\frac{\partial \varphi_m}{\partial y}\right|_{y=0}\right) \mathrm{d}z; \qquad \varphi_c:\quad \varphi_c = 0 \;\;\forall z,t \;\;\text{(grounded cathode)}$$
The final model consists of eight nonlinear, coupled partial differential equations and one implicit algebraic relation. Due to the high numerical effort for the solution of the model (up to four days on an up-to-date desktop computer), the set of equations has been further simplified [8]. For this purpose a couple of quasi-stationarity assumptions have been incorporated, which allow for the analytical solution of some parts of the model.
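As a minimal numerical illustration of this class of model, the sketch below integrates a single channel balance of the above form by the method of lines with first-order upwinding; the constant gas velocity and the first-order sink standing in for σCO are simplifying assumptions, not the coupled source terms of [7].

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method-of-lines sketch for (p/RT) dx/dt = -(p/RT) d(x v)/dz - sigma with
# constant velocity v, so the p/RT factors cancel; sigma is replaced by a
# first-order consumption k*x. First-order upwind differences in z.
nz, L = 100, 0.1                    # grid points, channel length (m) - made up
z = np.linspace(0.0, L, nz)
dz = z[1] - z[0]
v, k, x_in = 0.5, 20.0, 0.01        # m/s, 1/s, inlet CO mole fraction - made up

def rhs(t, x):
    x_up = np.concatenate(([x_in], x[:-1]))   # upstream values (inlet BC at z=0)
    return -v * (x - x_up) / dz - k * x       # convection + consumption

sol = solve_ivp(rhs, (0.0, 1.0), np.zeros(nz), method="BDF")
print("steady outlet CO mole fraction:", sol.y[-1, -1])
# Analytical steady state for comparison: x_in * exp(-k*L/v) = 0.01 * exp(-4)
```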
3. Results and Discussion
To elucidate the impact of the intrinsic electric couplings on the reactor performance, four qualitatively different scenarios are considered (Tab. 1). In scenario "A" the model is studied in the limit of zero conductivity, representing a situation without electric coupling. Scenarios "B" and "C" represent situations with increased conductivity (B: 1 S/m
and C: 10 S/m) to study the influence of migration- and mean-field-coupling, respectively. In scenario "D" complete backmixing is considered, representing the spatially lumped system as a reference scenario.

Table 1: Definition of the different scenarios to study the influence of the electric couplings.
scenario   reactor type   electric couplings
A          PFTR           none
B          PFTR           migration coupling
C          PFTR           mean-field coupling
D          CSTR           none

The performance of the reactor in the respective scenarios is compared by means of the time-averaged hydrogen recovery degree εH2 and carbon monoxide conversion XCO, which are defined as:
$$\varepsilon_{\mathrm{H_2}} = \lim_{T\to\infty}\frac{1}{T}\int_{t}^{t+T}\frac{G_{\mathrm{H_2,out}}(\tau)}{G_{\mathrm{H_2,in}}}\,\mathrm{d}\tau, \qquad X_{\mathrm{CO}} = \lim_{T\to\infty}\frac{1}{T}\int_{t}^{t+T}\frac{G_{\mathrm{CO,in}} - G_{\mathrm{CO,out}}(\tau)}{G_{\mathrm{CO,in}}}\,\mathrm{d}\tau$$
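Numerically, these time averages amount to integrating the periodic outlet signals over a sufficiently long horizon, as in the minimal sketch below (all signals are synthetic placeholders):

```python
import numpy as np

# Time-averaged H2 recovery and CO conversion from (synthetic) outlet signals.
t = np.linspace(0.0, 10.0, 2001)                  # s
G_h2_in, G_co_in = 1.0, 0.01                      # mol/s, made-up inlet flows
G_h2_out = 0.95 + 0.03 * np.sin(2 * np.pi * t)    # periodic outlet flow rates
G_co_out = 0.002 + 0.001 * np.cos(2 * np.pi * t)

eps_h2 = np.trapz(G_h2_out / G_h2_in, t) / (t[-1] - t[0])
x_co = np.trapz((G_co_in - G_co_out) / G_co_in, t) / (t[-1] - t[0])
print(f"eps_H2 = {eps_h2:.3f}, X_CO = {x_co:.3f}")
```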
Both quantities take values between zero and one. The most desirable operating point would be XCO = 1 and εH2 = 1, meaning that all CO is oxidized and all hydrogen is recovered (corresponding to a selectivity of O2 towards CO2 of SCO2,O2 = 1). The symbols GH2,in and GCO,in stand for the molar flow rates of hydrogen and carbon monoxide at the inlet of the reactor. GH2,out and GCO,out are the periodically changing H2 and CO molar outlet flow rates, which can easily be measured in an experiment or calculated from the model above.

Figure 3: Hydrogen recovery degree εH2 as function of CO conversion XCO for the different scenarios.

Fig. 3 compares the different scenarios for various applied currents. In general, by increasing the current XCO increases, while εH2 decreases due to increasing hydrogen consumption. However, significant losses in performance are seen for scenarios B-D, meaning that electric coupling as well as stirring have a negative impact. To relate the performance degradation to pattern formation, space-time plots of the anode overpotential Δφa and derived amplitude spectra are compared at a given current density (0.27 A/cm2) in Fig. 4. The amplitude spectra show the amplitudes of the frequencies contributing to the local oscillations (in logarithmic grey scale) along the spatial coordinate [7]. From Fig. 4 A (left column) it can be seen that in scenario A each reaction site oscillates with an intrinsic frequency (the lowest curve in the amplitude spectra; approx. 3 Hz at the reactor inlet and 0.6 Hz at the outlet). Other contributions to the signal mark the higher harmonics. The reason for the observed behavior is the oxidation of carbon monoxide and the resulting decrease of the CO content along the channel. Due to the missing electric interaction, the anode overpotential lacks any spatial order, and the mean anode overpotential Δφa (the most easily accessible variable in the experiment) shows a steady signal because the local oscillations cancel out on average.
Figure 4: Spatio-temporal profiles of Δφa (top row) and the respective amplitude spectra (bottom row).
In Fig. 4 B (center column) the impact of migration-coupling can be studied. For the given parameters migration-coupling is the dominant electric interaction and leads to spatiotemporal chaotic behavior (see, e.g., the broad distribution of frequencies). However, its impact on the performance (Fig. 3) is limited, because the oscillation frequency of each reaction site remains approximately its intrinsic frequency. Finally, in scenario C (Fig. 4, right column) the performance loss is more dramatic. At the given parameters mean-field-coupling is the dominant coupling process, which leads to a strict entanglement of the oscillations. As the downstream part of the reactor effectively forces the upstream part to slow down its oscillations (compare amplitude spectra A and C), the inlet region is CO-saturated most of the time, to such a degree that even during the short oxidation phase (Fig. 1, "Phase 2") no CO can be oxidized, as no free sites for H2O dissociation are left. The upstream part is dead, in accordance with the results for two ECPrOx reactors connected in parallel (see introduction). To summarize, the impact of the membrane conductivity on pattern formation was studied; the patterns themselves influence the reactor performance as seen above. However, the conductivity also influences the ohmic losses in the reactor; from that point of view, the conductivity should be maximal. The partial saturation of the 1D reactor, or the enslavement of upstream reactors (if many are connected in parallel) due to intrinsic mean-field-coupling, is therefore a problem with high relevance for the design of a future ECPrOx system.
References
[1] J.X. Zhang and R. Datta, J. Electrochem. Soc. 152, A1180 (2005)
[2] J.X. Zhang and R. Datta, J. Electrochem. Soc. 149, A1423 (2002)
[3] J.X. Zhang, J.D. Fehribach and R. Datta, J. Electrochem. Soc. 151, A689 (2004)
[4] H. Lu, L. Rihko-Struckmann, R. Hanke-Rauschenbach et al., Top. Catal. 51, 89 (2008)
[5] R. Hanke-Rauschenbach, C. Weinzierl, M. Krasnik et al., J. Electrochem. Soc. 156, B1267 (2009)
[6] H. Lu, L. Rihko, R. Hanke-Rauschenbach et al., Electrochim. Acta 54, 1184 (2009)
[7] R. Hanke-Rauschenbach, S. Kirsch, R. Kelling et al., J. Electrochem. Soc. 157, B1521 (2010)
[8] S. Kirsch, R. Hanke-Rauschenbach and K. Sundmacher, J. Electrochem. Soc. (2011), accepted
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Optimization of Design and Operation of Reverse Osmosis Based Desalination Process Using MINLP Approach Incorporating Fouling Effect
Kamal M. Sassi,a Iqbal M. Mujtabaa
a School of Engineering Design and Technology, University of Bradford, Bradford, West Yorkshire, BD7 1DP, UK. E-mail: [email protected]
Abstract
The synthesis of reverse osmosis (RO) networks for water desalination is investigated here by a state-space approach via a superstructure problem. The proposed superstructure considers every possible connection between the process units. The effect of fouling is described by an exponential function representing the decline of the water permeability coefficient. Optimal designs of the RO layout are obtained with an MINLP technique for brackish water desalination utilizing spiral wound membrane elements. In this work, a variable fouling profile along the membrane stages is considered. The total annualized cost of the RO network is minimized in order to find the optimal operation and configuration of the RO system. It was found that the optimal design and operation of the RO process are sensitive to the fouling distribution between stages, even though the overall fouling remains constant.
Keywords: Reverse osmosis, optimum design, fouling
1. Introduction
One of the most pervasive problems troubling people throughout the world is the lack of fresh water. Recently, seawater desalination by RO has become the main source of drinking water supply in many regions of the world. The RO membranes used in desalination are capable of producing good water quality by removing most of the salts and some other contaminants from water sources. The most critical obstacle restricting further growth and wider application of membrane separation processes is fouling. Fouling generally results in a decreased permeate flux, decreased product quality and an increased feed pressure to maintain the fresh water demand. Fouling usually increases the energy and chemicals consumption due to the frequent membrane cleaning required to remove foulants, and consequently results in a higher treatment cost (Seidel and Elimelech, 2002). Several researchers have optimized RO network design using an MINLP approach (ElHalwagi, 1992; Zhu et al., 1997; Lu et al., 2007). Uppaluri et al. (2004) used a stochastic optimization technique based on a simulated annealing procedure for the design of membrane networks. In this work, the RO network design problem, based on a superstructure, is formulated as an MINLP problem which is solved using the outer approximation algorithm within the gPROMS software. Most of the previous studies treated the permeability decline rate due to fouling as an average, equal for all RO stages, for design and optimization purposes. Here, different cases with varying fouling percentages in each stage are considered in order to predict their impact on the design and the cost of the RO process.
2. RO Network
The superstructure for the RO process configuration is shown in Fig. 1. Some connections are excluded from the general superstructure for simplification; for example, brine recycle to the same stage and mixing of a permeate stream with the brine are not shown. Each RO stage is assumed to contain parallel RO modules that accommodate the same type of membrane element and operate under the same operating conditions. Note that mixing of streams is only allowed for streams at equal pressure.

Fig. 1 Superstructure for the two-stage RO process
3. Optimization Problem Formulation
The optimization problem is described as follows. Given: a fixed water demand and salt concentration, the feed specification and the design specifications of each membrane element. Optimize: the number of stages, the number of pressure vessels (PV) in each stage, the feed pressure and flow, the brine recycle stream from the last stage to the feed inlet, the existence of the turbine and of a brine bypass stream leaving the first stage, and the existence of the inter-stage booster pump and its outlet pressure. Minimize: the total annualized cost. Subject to: equality constraints, such as the process model equations and the product demand and quality; and inequality constraints, such as linear bounds on the optimization variables. Mathematically, the optimization problem is thus to minimize TAC subject to these equality and inequality constraints, where TAC is the objective function representing the total annualized cost of the candidate configuration satisfying the operation and design restrictions. The most important cost components affecting the produced water price are included and are given in Fig. 2 (Lu et al., 2007). The feed pressure, feed flow, brine bypass fraction and brine recycle fraction are continuous variables. S, N and d are integer variables representing the number of stages, the number of PV's in
each stage and the numbers of pumps and turbines. The mathematical model equations for the RO module used in this work are given in Sassi and Mujtaba (2010).

Fig. 2 Cost equations: total annualized cost ($ y-1); feed pre-treatment cost ($); pump or turbine capital cost ($); membrane modules cost ($); net pumping cost ($ y-1); chemical treatment cost ($ y-1); membrane replacement cost ($ y-1); annual spares cost ($ y-1)
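A schematic skeleton of such a superstructure MINLP is sketched below in Pyomo; the original work used gPROMS with an outer-approximation algorithm, and the two-stage structure, permeate correlation and all cost coefficients here are invented placeholders.

```python
from pyomo.environ import (ConcreteModel, Var, Binary, NonNegativeIntegers,
                           NonNegativeReals, Objective, Constraint, minimize)

# Toy two-stage RO superstructure: integer PV counts, a binary for the
# booster pump, continuous feed pressure. Every coefficient is invented.
m = ConcreteModel()
m.n1 = Var(within=NonNegativeIntegers, bounds=(1, 20))  # PVs in stage 1
m.n2 = Var(within=NonNegativeIntegers, bounds=(0, 20))  # PVs in stage 2
m.yb = Var(within=Binary)                               # booster pump exists?
m.Pf = Var(within=NonNegativeReals, bounds=(10, 41))    # feed pressure, bar

# Invented permeate correlation: production grows with pressure and PV count;
# the booster pump raises the effective driving force of stage 2.
m.demand = Constraint(
    expr=0.08 * m.Pf * (m.n1 + (0.6 + 0.4 * m.yb) * m.n2) >= 40.0)  # m3/h

# Invented annualised cost: membranes + pump capex + pumping energy.
m.tac = Objective(
    expr=900 * (m.n1 + m.n2) + 15000 * m.yb + 2000 * m.Pf, sense=minimize)

# A mixed-integer nonlinear solver is required, e.g.:
# from pyomo.environ import SolverFactory; SolverFactory('bonmin').solve(m)
```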
4. Case Study
The MINLP problem was solved to optimize the configuration and operating parameters of the RO process at a given demand. The characteristics of the spiral wound membrane used here are presented in Abbas (2005). The parameters used in the optimization calculations are given in Table 1. For 180 days of operation, several cases were solved in which the fouling percentages of the stages vary while the total production is maintained at about 40 m3 h-1 with a maximum salt concentration of 100 ppm.

Table 1 Input parameters
Parameter                               Value   Ref.
Membrane module cost ($)                900     (Lu et al., 2007)
PV cost ($)                             1000    (Lu et al., 2007)
Feed temperature (°C)                   25      (Abbas, 2005)
Maximum operating pressure (bar)        41      (Abbas, 2005)
Maximum flow rate per module (m3 h-1)   19      (Lu et al., 2007)
Turbine efficiency (%)                  80      Assumed
Pump efficiency (%)                     75      (Lu et al., 2007)
Electricity cost ($ kWh-1)              0.08    (Lu et al., 2007)
4.1. Membrane Fouling
Most previous models of reverse osmosis do not take into account the effect of fouling (Oh et al., 2009). In this work, an exponential function proposed by Al-Bastaki and Abbas (2004) is used to represent the decline of the water permeability coefficient: the water permeability is approximated as the initial water permeability multiplied by a fouling factor F, where F represents the exponential decay in the permeability coefficient caused by the effects of fouling and scaling (Al-Bastaki, 2004).
In the past, for simulation and optimization purposes, the effect of fouling on an RO unit with more than one stage has been assumed to be equal across stages, i.e. the water permeability decreases with time at the same rate in all stages, and the prediction of the water flux through the membrane surface has been carried out using a fixed permeability decline rate for all stages. However, many researchers have shown that the extent of fouling in the different stages of an RO process varies and depends on the stage location in the process layout (Zhu et al., 1997; Huiting et al., 2001; Vrouwenvelder et al., 2009).
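As a minimal sketch, the stage-wise decline can be written as A_s(t) = A0·F_s(t) with an exponential fouling factor; the decay constant, the initial permeability and the way Xf1 splits the overall decay between the two stages below are illustrative assumptions, not the fitted values of Al-Bastaki and Abbas (2004).

```python
import numpy as np

# Stage-wise exponential permeability decline: stage 1 receives a share xf1
# of the overall decay, stage 2 the remainder; the product F1*F2 (overall
# fouling) is then independent of xf1. All numbers are invented.
def permeability(t_days, stage, A0=2.0e-12, k=2.0e-3, xf1=0.60):
    """A0 in m/(s*Pa), k in 1/day; xf1 = fouling share of stage 1 (0..1)."""
    share = xf1 if stage == 1 else 1.0 - xf1
    return A0 * np.exp(-2.0 * share * k * t_days)

for d in (0, 90, 180):
    print(d, permeability(d, stage=1), permeability(d, stage=2))
```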
In this work, the fouling effect is incorporated within the MINLP optimization formulation. For the two-stage RO process, different fouling percentages are assumed in the membrane stages: the permeability coefficients take different values depending on the stage position in the processing array. Xf1 represents the percentage of the overall fouling assigned to the first stage. For an RO process with two stages, if the fouling extent is assumed equal in all stages (the average case), the permeability coefficient of the first stage equals that of the second stage, i.e. Xf1 = 50.

4.2. MINLP Optimization Results
Table 2 shows the optimization results obtained for the different fouling distributions. The optimization results are oriented towards a section of the search region where installing a new booster pump is not favored in any of the fouling distribution scenarios, because the added cost of the pump prevails over the gain from the extra quantity of permeate produced, as shown in Fig. 3. Bypassing part of the brine is also not desirable, because the brine bypass increases the operating cost without a considerable enhancement of the product quantity. Since mixing between streams at different pressures was excluded as a condition in constructing the superstructure, any recycle of brine must take place after the stream has passed through the turbine; this makes the brine recycle not always an attractive option. In an attempt to minimize pre-treatment and chemical costs, about 4.6% of the brine is recycled at Xf1 = 50 (Fig. 3a).

Table 2 Summary of MINLP optimization results
Xf1                                 50 (avg.)   60        80
Optimum process layout              Fig. 3a     Fig. 3b   Fig. 3b
Number of PV in stage 1             8           7         3
Number of PV in stage 2             3           5         9
Permeate concentration (ppm)        64          73        79
Feed flow (m3 h-1)                  45.5        44.4      44.3
Feed pressure (bar)                 22.6        21.1      17.9
Overall water recovery (%)          88.8        89.9      90.0
Outlet brine recycle (%)            4.6         0         0
Total annualized cost ($ y-1)       88301       87254     83087
Product cost ($ m-3 of permeate)    0.252       0.249     0.237
Fig. 3 Optimum process arrangements

The identified process layout changes as the fouling level in the first stage varies. As the feed salinity is relatively low, a two-stage configuration was selected in all cases, whereas a one-stage layout is appropriate for processes with a higher feed concentration (Abbas, 2005). In Table 2, it can be observed that the number of PV's in the second stage increases to compensate the flux reduction in the first stage caused by its rising fouling percentage, while the number of PV's in the first stage decreases.
Fig. 4 shows the optimal feed pressure trajectories for the different fouling distributions between stages. It can be seen that for a lower fouling percentage in the first stage, more feed pressure is needed to meet the water demand; the maximum feed pressure is reached at Xf1 = 50. The variations in the initial values of the feed pressure are caused by the diversity of process configurations adopted for each fouling level. The feed pressure clearly decreases with decreasing fouling in the second stage, i.e. with an increasing fouling share in the first stage. Fig. 5 shows the annual operating cost profiles for the different fouling scenarios as a function of time at a fixed demand. It can be seen that the operating cost is inversely related to the fouling level in the first stage, which may be explained by the corresponding decrease in feed pressure (Fig. 4).
Fig. 4 Feed pressure at different fouling conditions
Fig. 5 Annual operating cost profiles at different fouling conditions
5. Conclusion
The optimum RO design with spiral wound membranes for a desalination process is studied here for different fouling levels in the RO stages. For each fouling level, the optimal operating parameters are also determined. A continuous/discrete simultaneous superstructure for the RO process, containing all possible alternatives of a potential RO network, is presented. The total annualized cost used as objective function contains the most important cost contributions of the reverse osmosis process, including capital and operating costs. The optimal designs of the RO layout are obtained using an MINLP approach while simultaneously optimizing the operating parameters. The study shows that the optimal design and operation of the RO process are sensitive to the fouling distribution between stages, although the overall fouling remains constant.
References
A. Seidel, M. Elimelech, 2002, Journal of Membrane Science, 203 (1-2), 245-255.
M.M. ElHalwagi, 1992, AIChE J., 38, 1185-1198.
M.J. Zhu, M.M. ElHalwagi, M. AlAhmad, 1997, Journal of Membrane Science, 129, 161-174.
Y.Y. Lu, Y.D. Hu, X.L. Zhang, L.Y. Wu, Q.Z. Liu, 2007, Journal of Membrane Science, 287, 219-229.
R.V.S. Uppaluri, P. Linke, A.C. Kokossis, 2004, Ind. Eng. Chem. Res., 43, 4305-4322.
K.M. Sassi, I.M. Mujtaba, 2010, Computer Aided Chemical Engineering, Elsevier, 28, 895-900.
A. Abbas, 2005, Chemical Engineering and Processing, 44, 999-1004.
H.J. Oh, T.M. Hwang, S. Lee, 2009, Desalination, 238, 128-139.
N. Al-Bastaki, A. Abbas, 2004, Chemical Engineering and Processing, 43, 555-558.
H. Huiting, J. Kappelhof, T.G.J. Bosklopper, 2001, Desalination, 139, 183-189.
J.S. Vrouwenvelder, J.A.M. van Paassen, J.C. Kruithof, M.C.M. van Loosdrecht, 2009, Journal of Membrane Science, 338, 92-99.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Logic-Sequential Approach to the Synthesis of Complex Thermally Coupled Distillation Systems
José A. Caballero,a Ignacio E. Grossmannb
a Department of Chemical Engineering, University of Alicante, Ap. Correos 99, 03080 Alicante, Spain
b Department of Chemical Engineering, Carnegie Mellon University, 5000 Forbes Av., 15213 Pittsburgh, PA, USA
Abstract
In this work a methodology is presented for the rigorous optimization of complex thermally coupled distillation systems using a sequential logic-mathematical programming approach. In order to explicitly include in the search space the possibility of divided wall columns (which are thermodynamically equivalent to three fully thermally coupled separation tasks), we use a hybrid logical representation of the system that takes into account the separation tasks, the states (mixtures) produced by the tasks, and the possibilities of aggregating tasks (states) to generate divided wall columns (DWC). Once the sequence of tasks, and the DWCs that these tasks produce, is determined, it is possible to synthesize the sequence of actual columns by searching the space of thermodynamically equivalent configurations. An example of a five-component mixture illustrates the procedure.
Keywords: Distillation; Thermally Coupled Distillation; Process Synthesis; Disjunctive Programming; Divided Wall Columns.
1. Introduction
Distillation is the most common separation and purification technique. Around 90-95% of all separations and purifications in the chemical industry are based on distillation, and this situation is not likely to change in the near future1. However, distillation is also one of the most energy-inefficient unit operations. In the last years Thermally Coupled Distillation (TCD) has acquired a renewed interest because, when compared to conventional systems, it is possible to reach over 30% in energy reduction. Besides, some thermally coupled configurations are 'thermodynamically equivalent' to divided wall columns (DWC), which produce important savings in investment. One of the major difficulties in synthesis involving TCD is that the number of alternatives grows much faster than when only conventional columns are considered: for a 5-component mixture there are 203 basic configurations2,3 (a basic configuration is a sequence of separation tasks that does not take into account the thermal state of the streams connecting the separation tasks). If we also consider the internal structure of heat exchangers there are around 10^4 alternatives, and if we consider the thermodynamically equivalent configurations the number of alternatives is greater than 2·10^5.2,4
In view of the huge number of alternatives, it is not practical (and maybe not possible) to try to generate a single sequence of columns considering all the alternatives directly. Giridhar & Agrawal3,4 and Caballero & Grossmann2 proposed an initial search considering only the basic configurations, followed by a refinement that optimizes the heat exchanger structure. However, when DWCs are considered, the total cost of a DWC cannot be approximated by the sum of the costs of the individual tasks; although this approach generally yields good solutions, important improvements can usually be obtained by explicitly introducing the DWCs in the initial search.
2. Logic approach to TCD sequences with DWCs
The problem we are dealing with can be stated as follows: given an M-component mixture without azeotropes, generate a sequence of distillation columns to separate the mixture into N (N < M) fractions, where each fraction must not contain components of the other fractions (sharp split between key components), considering all the alternatives from conventional to fully thermally coupled configurations and explicitly including DWCs. Without losing generality, and for the sake of simplicity, we consider the sharp separation of N components. Caballero & Grossmann2,5 presented a set of logical relations between separation tasks that assure feasible sequences; the first step is then to extend those logical equations to take into account the possibility of including DWCs. It is important to remark that there exists a one-to-one relationship between the sequence of separation tasks and the states formed by those tasks. Therefore, we can take advantage of this duality and express the logical relations that include DWCs in terms of states (instead of tasks). Although from a theoretical point of view it is possible to generate a multi-wall column – which in the extreme case separates all the components in a single column – to date only columns with one wall have been built and operated, so we constrain each column to a maximum of one wall. A divided wall column is formed by the union of three separation tasks (or by four states: a feed state, the two states produced by the first separation task, and the intermediate product state). For example, Figure 1 shows 2 out of the 4 alternatives that produce C as intermediate product (state).
Figure 1. Two alternatives to generate a divided wall column with C as intermediate product.

To generate the logical relations that assure that all alternative DWCs are taken into account, it is first necessary to identify which combinations of states (or tasks) are able to generate a DWC. The following conditions assure that all the DWCs are taken into account (a small enumeration sketch is given after the list):
1. All intermediate states (those that do not include a component with extreme volatility) could form part of a DWC.
2. The intermediate product state in the DWC must be produced by two different states.
3. The two states that produce the intermediate product must be generated by a single contribution.
4. The states that generate the intermediate product in the DWC must themselves be produced by the same state.
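These conditions can be checked mechanically. The enumeration sketch below encodes them for a five-component mixture, representing every state as a contiguous component range; it recovers the count of 15 DWC candidates quoted in the next paragraph. The range representation is an illustration, not the disjunctive formulation used in the paper.

```python
# Enumeration sketch of conditions 1-4 for a five-component mixture A..E,
# assuming every state is a contiguous component range [i..j] and that a DWC
# is defined by (feed, top state, bottom state, intermediate product), where
# the first (sloppy) split distributes the intermediate product to both sides.
N = 5                                      # components 1..5 = A..E
dwcs = []
for i in range(2, N + 1):                  # intermediate product = range [i..j]
    for j in range(i, N):                  # condition 1: exclude components 1 and N
        for a in range(1, i):              # top state [a..j] extends P on the light side
            for b in range(j + 1, N + 1):  # bottom state [i..b] extends P on the heavy side
                dwcs.append(((a, b), (a, j), (i, b), (i, j)))  # feed, top, bottom, product

print(len(dwcs))   # prints 15: 3 with B, 4 with C, 3 with D, 2 with BC, 2 with CD, 1 with BCD
```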
These logical equations can be added to the set of logical relationships previously presented by Caballero & Grossmann2,5 and included in an MINLP model. For example, in a 5-component mixture it is possible to identify 15 different DWCs (3 with B as intermediate product, 4 with C, 3 with D, 2 with BC, 2 with CD and 1 with BCD). As commented above, due to the huge number of alternatives it is not practical to try to search the full space of alternatives (more than 2·10^5); instead, we propose the following sequential approach:
1. First, instead of searching only in the space of basic configurations, a simultaneous search including also the internal structure of heat exchangers is performed. The model is posed in terms of 'separation sections' and 'pseudo-columns' using the STN formalism6, and formulated and solved as a disjunctive programming problem. The cost is evaluated for each individual section. An extra set of logical relationships allows the potential existence of a DWC to be determined; if this is the case, the sections that form the DWC are grouped and the cost is evaluated taking this possibility into account. In that way a tight lower bound to the actual cost of the system is obtained.
In previous works5 this problem was solved sequentially: first the basic configurations and then, with the sequence of tasks fixed, the internal heat exchanger structure. This approach guarantees a 'good' solution, but when DWCs are considered the quality of this approach tends to worsen, because DWCs modify not only the heat exchanger structure but also the column configurations.
2. Once the sequence of separation tasks is established, the sequence of actual columns must be generated. Basically, here we must consider all the thermodynamically equivalent configurations that allow the distribution of sections in the minimum number of actual columns. In this stage operability considerations can be included (e.g. consider only configurations in which the vapour always flows from higher to lower pressures, which are easier to control7,8) as well as some building constraints, e.g. try to design columns with a single diameter or with similar diameters. This provides an upper bound to the total cost.
3. A rigorous simulation of the configuration is performed in order to validate the model, check the assumptions and simplifications, and correct, if necessary, any parameter or data.
4. Add a binary cut to avoid repeated solutions and go back to stage 1 until the lower and upper bounds cross each other.
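The overall iteration can be summarized by the following Python-style sketch (our own rendering of the four stages above; all helper functions are hypothetical placeholders for the corresponding optimization and simulation steps):

def logic_sequential_synthesis(solve_task_model, realize_columns,
                               validate, add_binary_cut):
    """Sketch of the lower/upper-bounding loop described in stages 1-4.
    solve_task_model: stage 1, returns (lower_bound, task_sequence)
    realize_columns:  stage 2, returns (upper_bound, column_configuration)
    validate:         stage 3, rigorous simulation / data correction
    add_binary_cut:   stage 4, excludes the current task sequence."""
    best_upper = float('inf')
    best_config = None
    while True:
        lower_bound, tasks = solve_task_model()
        if lower_bound >= best_upper:
            break  # bounds have crossed: best_config is optimal
        upper_bound, config = realize_columns(tasks)
        validate(config)
        if upper_bound < best_upper:
            best_upper, best_config = upper_bound, config
        add_binary_cut(tasks)
    return best_config, best_upper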
3. Example
In order to illustrate the procedure, consider a 5-component mixture (data are shown in Table 1). Cost data are taken from Turton et al.9. Physical properties of the compounds were obtained from open databases. The optimal solution was obtained using GAMS in around 20 minutes of CPU time under Windows 7 (2.4 GHz, 8 GB of RAM). The optimal solution (see Figure 2) includes a divided wall column together with two other columns with complex thermal coupling. Note that although configurations with two DWCs are possible, the total cost of this option is larger because it would produce a huge column. On the other hand, the presence of DWCs reduces the number of thermodynamically equivalent alternatives, because most of the degrees of freedom introduced by the thermal couples are used to generate the DWC. In this example there are only 2 thermodynamically equivalent configurations.

Table 1. Some basic data for the example.
Component          Feed molar fraction
Benzene            0.3
Toluene            0.2
Ethylbenzene       0.1
Styrene            0.2
α-methyl styrene   0.2
Total feed = 200 kmol/h; key components recovery = 0.98
[Figure 2 (diagram): optimal configuration for the 200 kmol/h feed, with column diameters D = 1.11 m, 2.59 m and 1.16 m and heat duties of 833, 777, 3083, 4472 and 1250 kW.]
Figure 2. Optimal solution of the example.

Acknowledgements
The authors gratefully acknowledge the financial support of the Spanish "Ministerio de Ciencia e Innovación" under project CTQ2009-14420-C02.
References
1. Soave, G.; Feliu, J. A., Saving energy in distillation towers by feed splitting. Applied Thermal Engineering 2002, 22, (8), 889.
2. Caballero, J. A.; Grossmann, I. E., Structural considerations and modeling in the synthesis of heat-integrated-thermally coupled distillation sequences. Industrial & Engineering Chemistry Research 2006, 45, (25), 8454-8474.
3. Giridhar, A.; Agrawal, R., Synthesis of distillation configurations: I. Characteristics of a good search space. Computers & Chemical Engineering 2010, 34, (1), 73.
4. Giridhar, A.; Agrawal, R., Synthesis of distillation configurations. II: A search formulation for basic configurations. Computers & Chemical Engineering 2010, 34, (1), 84.
5. Caballero, J. A.; Grossmann, I. E., Design of distillation sequences: from conventional to fully thermally coupled distillation systems. Computers & Chemical Engineering 2004, 28, (11), 2307-2329.
6. Yeomans, H.; Grossmann, I. E., A systematic modeling framework of superstructure optimization in process synthesis. Computers & Chemical Engineering 1999, 23, (6), 709-731.
7. Agrawal, R., More operable fully thermally coupled distillation column configurations for multicomponent distillation. Chemical Engineering Research & Design 1999, 77, (A6), 543-553.
8. Caballero, J. A.; Grossmann, I. E., Thermodynamically equivalent configurations for thermally coupled distillation. AIChE Journal 2003, 49, (11), 2864-2884.
9. Turton, R.; Bailie, R. C.; Whiting, W. B.; Shaeiwitz, J. A., Analysis, Synthesis and Design of Chemical Processes. 2003, Prentice Hall PTR, New Jersey.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Computer Aided Design and Analysis of Continuous Pharmaceutical Manufacturing Processes
Fani Boukouvala,a Rohit Ramachandran,a Aditya Vanarase,a Fernando J. Muzzio,a Marianthi G. Ierapetritou a
a Dept. Chemical and Biochemical Engineering, Rutgers University, Piscataway, NJ, 08854
Abstract
Dynamic flowsheet modeling and simulation is a pre-requisite for the design, analysis, control and optimization of an integrated process. Whilst several integrated modeling and simulation tools (commercial and non-commercial) have proven to be effective for fluids processes, their use has been fairly limited for solids processes. The objective of this study is to build a dynamic flowsheet simulation of an integrated continuous downstream pharmaceutical process, using a combination of fundamental and empirical models. Using two cases, the results elucidate (i) the evolution of key particle properties during the transient state (start-up and shutdown), (ii) the effect of changes in process parameters and/or material properties, which typically can vary during continuous manufacturing, and (iii) the dynamic response and recycle dynamics of an integrated blender and a recirculation tank. Simulation results lend credence to developing a dynamic flowsheet simulation of a fully integrated downstream pharmaceutical process which can be further extended to the general class of solids processes.
Keywords: continuous manufacturing, pharmaceutical, dynamic flowsheet modeling.
1. Introduction
The pharmaceutical industry is a tightly regulated industry where all production must comply with good manufacturing practices (GMP) and quality requirements must be strictly satisfied. Historically, manufacturing in the pharmaceutical industry has been carried out in batch mode, which potentially results in expensive, inefficient and poorly controlled processes [1-2]. Recently, both pharmaceutical industries and regulatory authorities have recognized that continuous manufacturing has significant potential to improve product quality [3]. Moreover, environmental, health and safety issues are driving the industry towards more efficient and more predictive manufacturing. Therefore, a great opportunity arises for developing a generic continuous manufacturing platform that will benefit from state-of-the-art strategies, modelling tools and enabling technologies to implement this transition. In this work we focus on the manufacturing of oral solid dosage drugs, which constitute approximately 85% of the entire pharmaceutical production. A typical manufacturing process for a powder-based product (e.g., tablets and most capsules) involves multiple processing steps, of which the most common are powder feeding, blending, granulation and tableting or capsule-filling. The integrated design of such a continuous system requires the detailed characterization of the unit operations involved, with the purpose of resolving the flow and stress fields within the equipment and quantifying the
functional relationship of key quality attributes with process parameters and material properties. Computer aided process design and simulation tools have been successfully used in a plethora of chemical industries to expedite development and optimize the design and operation of integrated processes [4]. Specifically for pharmaceutical manufacturing, however, there is a lack of simulation tools that can handle particulate processes and that could be used for evaluating process design alternatives, scheduling, control, optimization and debottlenecking of an integrated production line.
2. Motivation and Objectives
The overall philosophy of this study is to simulate an integrated continuous pharmaceutical downstream process whose models are based on the underlying physics and chemistry of the process and have been experimentally validated for various formulations and operating conditions. Challenges that need to be overcome for the implementation of this initiative, owing to the complexity and variability that particulate processes introduce to the overall system, include: (1) the characterization of all unit operations, identification of their important kinetic and thermodynamic parameters and development of models that describe their mechanisms; (2) experimental studies and data acquisition of multi-dimensional evolutions and distributions of key particle properties; (3) identification and elimination of primary bottlenecks of the integrated system to maximize throughput; (4) identification of all the possible manipulated and controlled variables and their interactions; (5) accounting for recycle dynamics and their impact on control structure selection to achieve a robust, controllable and controlled process that is maintained within the desired design space; and (6) integration of process design and control to identify globally valid operating conditions. As a first attempt to model and dynamically simulate an integrated downstream pharmaceutical process, this study will focus on developing an integrated model of a feeder-blender-granulator system in a dynamic simulation environment. Specific objectives will be to track the evolution of key particle properties during (i) the transient state, (ii) the transition from one formulation and/or operating condition to another (i.e., the effect of material properties and/or process parameters) and (iii) when a recycle loop is implemented for the blending process.
3. Dynamic model development and integration
To illustrate the dynamics of an integrated flowsheet model, 2 case studies are considered.
Figure 1. Proposed integrated flowsheet model
3.1. Case 1: Integration of Feeder, Blender, Granulator
The feeder-blender-granulator system (see Figure 1a) consists of two feeders (API + excipient) that feed into a blender, where the API and excipient are mixed due to convective/diffusive forces. The API/excipient mixture is then continuously transported into a granulator where, through the addition of liquid binder, the particles are formed into larger granules to improve their flow and dissolution properties.
Each continuous feeder operates under closed-loop proportional-integral (PI) control, whereby the feedrate is specified as the set-point and the feeder RPM is manipulated to ensure that the set-point is met. To model each feeder, set-point changes were made to the feedrate and the dynamic response was observed to follow a first-order profile. Therefore, a first-order plus time delay (FOPDT) model was used to fit the data. A population balance model was used to describe the dynamics of the blending and granulation process (Equation 1):

$$\frac{\partial F(z,r,t)}{\partial t} + \frac{\partial}{\partial z}\!\left[F(z,r,t)\,\frac{dz}{dt}\right] + \frac{\partial}{\partial r}\!\left[F(z,r,t)\,\frac{dr}{dt}\right] = \Re_{formation}(z,r,t) - \Re_{depletion}(z,r,t) \quad (1)$$

Here r is the vector of internal variables used to characterize the distribution and z is the vector of external coordinates used to depict spatial position. F(z,r,t) is the population distribution function (a.k.a. the number density function). The term ∂/∂r [F(z,r,t) dr/dt] accounts for the rate at which the distribution evolves due to the rate of consolidation, while the term ∂/∂z [F(z,r,t) dz/dt] accounts for the evolution of the distribution of the particle population with respect to spatial position. The functions ℜ_formation(z,r,t) and ℜ_depletion(z,r,t) account for the formation and depletion of particles, respectively, due to discrete aggregation and breakage phenomena. In the blender model, aggregation and breakage are neglected; therefore the PBM reduces to a two-dimensional model with respect to the vector z, where z denotes the axial and radial directions. In the granulation model, the granulator is assumed to be well-mixed; therefore the PBM is a four-dimensional model with respect to r, where r denotes the volume fractions of the API, excipient, liquid and gas. Details of the blending and granulation models can be found in [5-6].
Figure 2. Dynamic integrated simulation results: (a) total mass flowrate, (b) average particle diameter, (c) particle bulk density, (d) API concentration.
Figure 2a depicts the total mass flowrate of powder that exits the blender and enters the granulator. It can be seen that steady state is reached by t = 150 s, at which point a step change is introduced to the rpm (the rpm is doubled). This results in a sharp increase in the mass flowrate, which gradually returns to the original value. Figure 2b depicts the average granule diameter of particles exiting the granulator. It can be seen that in the first few seconds of operation no powder enters the granulator, as it is still being processed in the blender. Upon powder entering the granulator, there is a sharp increase in granule diameter as granules undergo aggregation, and eventually a steady state is attained by t = 150 s, whereby the step change in rpm results in a slight immediate decrease in granule diameter (due to the sudden influx of more fine powder into the granulator). Eventually, more fine powder is aggregated, and this results in an increase in granule diameter until a new steady state is achieved. Similar transient profiles are obtained for granule bulk density and granule API concentration (see Figures 2c and 2d).
3.2. Case 2: Effect of Recirculation tank
In a separate simulation, the integration of a blender with a recirculation tank was considered, to ascertain the effect of recycle dynamics on a unit operation. A system such as the one shown in Figure 1 (with the second blender acting as the recirculation tank) could be advantageous in any continuous powder processing line. It offers several advantages, including recirculation of the excess or out-of-specification material produced by the continuous mixer, providing material to other subsequent units operating under the desired capacity, and minimizing the effect of overshoot in the feed rate (which occurs during the refill operation) on blend uniformity. In order to obtain the dynamic response of the integrated system, mass balance equations were solved for the system of the two blenders, while the RTD model E_i(θ) was assumed to be a 1-D axial dispersion model (Equation 2):

$$E_i(\theta) = \frac{1}{2\sqrt{\pi(\theta-\tau_{0i})/(\tau_i\,\mathrm{Pe}_i)}}\,\exp\!\left[-\frac{\left(1-(\theta-\tau_{0i})/\tau_i\right)^2}{4(\theta-\tau_{0i})/(\tau_i\,\mathrm{Pe}_i)}\right],\quad i = 1,2 \qquad (2)$$

The RTD model parameters (residence time τ, dead time τ₀ and Peclet number Pe) for both blenders were obtained by fitting the experimental impulse response data.
Figure 3. (a) Dynamic response of the integrated system undergoing feeder refills, (b) Sluggish response of the system undergoing excessive recycle flow rates
Dynamic response of the integrated system was computed for a particular input feed rate dataset (feeder undergoing refills). As shown in Figure 3a, the overshoot in the feed rate at the outlet of the mixer decreases with increasing recirculation flow rate. For
excessive recycle flow rates (10 or 50 times the input flow rate), the system response became sluggish. A shift in the baseline can be observed in Figure 3b. In conclusion, recycle was found to improve the overall performance only up to a certain extent.
4. Conclusions and Current work
This work aims to attain the knowledge required to design and optimize continuous manufacturing systems for a variety of pharmaceutical powder-based products. In this work, the recently developed gPROMS-SOLIDS [7] simulation package and MATLAB will be used to perform dynamic flowsheet simulation of a multi-component integrated tablet manufacturing system. One of the major difficulties when dealing with solids processes is the lack of knowledge of the properties that characterize the powder mixtures and of how these properties affect the final product properties. However, recent powder characterization studies have provided significant insight into how the material properties of active ingredients and excipients, and their compositions in a mixture, affect the behavior of the powders in different apparatus. Thus, a great opportunity arises for merging all the knowledge, experience, experimental and modeling work available for the development of a detailed flowsheet model for tablet production. Furthermore, the unification in a single flowsheet modeling platform of all the different available modeling techniques, which range from empirical, population-balance and first-principle models to Discrete Element Method (DEM) models that describe different unit operations, is another area which requires significant effort. Lastly, a detailed simulation will facilitate the following: (1) Quality by Design (QbD) features, by the use of intrinsic process knowledge that will establish the functional relationships between key quality attributes, process parameters and material properties; the development of a hybrid superstructure model will also be used to define the design space of the system; and (2) Process Analytical Technology (PAT) features, through the development of online sensors, process control, multivariate analysis, statistical analysis and real-time quality control.
References
1. Gorsek, A. and P. Glavic, Design of Batch Versus Continuous Processes: Part I: Single-Purpose Equipment. Chemical Engineering Research and Design, 1997. 75(7): p. 709-717.
2. Leuenberger, H., New trends in the production of pharmaceutical granules: batch versus continuous processing. European Journal of Pharmaceutics and Biopharmaceutics, 2001. 52(3): p. 289-296.
3. Plumb, K., Continuous Processing in the Pharmaceutical Industry: Changing the Mind Set. Chemical Engineering Research and Design, 2005. 83(6): p. 730-738.
4. Biegler, L.T., I.E. Grossmann and A.W. Westerberg, Systematic Methods of Chemical Process Design. International Series in the Physical and Chemical Engineering Sciences. 1997, New Jersey: Prentice Hall.
5. Boukouvala, F., et al., Computational approaches for studying the granular dynamics of continuous blending processes II: Population balance and data-based methods. Manuscript under review, 2010.
6. Poon, J.M.H., et al., Experimental validation studies on a multi-dimensional and multi-scale population balance model of batch granulation. Chemical Engineering Science, 2009. 64(4): p. 775-786.
7. Process Systems Enterprise, gPROMS Advanced User Guide. 2003: London, UK.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Phenomena-based Process Synthesis and Design to achieve Process Intensification
Philip Lutze,a Rafiqul Gani,b John M. Woodley a,b
a PROCESS, Department of Chemical and Biochemical Engineering, Technical University of Denmark, Soltofts Plads, DK-2800 Lyngby, Denmark
b CAPEC, Department of Chemical and Biochemical Engineering, Technical University of Denmark, Soltofts Plads, DK-2800 Lyngby, Denmark
Abstract
In order to improve processes by incorporating process intensification, and to allow them to go beyond pre-defined unit operations, the process has to be viewed at a lower level of aggregation, namely the phenomena scale. In this contribution, an approach for aggregating processes through phenomena building blocks in a systematic methodology is presented. First, all potential phenomena are identified and then synthesized into phenomena-based flowsheets, which are screened against pre-defined constraints before the most promising options are identified, optimized and verified at the unit operation level. This phenomena-based synthesis/design methodology is tested through a case study.
Keywords: Process Synthesis, Phenomena, Process Intensification.
1. Introduction
Process Intensification (PI) has attracted considerable interest as a potential means of process improvement and a way to meet the increasing demand for sustainable production. PI aims to improve processes without sacrificing product quality by increasing efficiency, reducing energy consumption, costs, volume and waste, as well as improving safety. In previous work [1], we reported the development of a general computer-aided systematic synthesis and design methodology incorporating PI. Even though process improvements were achieved, this methodology is limited to pre-defined PI unit operations which are retrieved from a knowledge base. In order to invent new unit operations, going beyond those currently in existence, to achieve potentially even higher improvements, the process should be viewed at a lower level of aggregation [2, 3]. The similarity of the structure of flowsheets and molecules has been reported before [4], comparing molecules to processes and groups in molecules to unit operations, respectively. This analogy can be extended through phenomena, since they can be compared to atoms. That is, different combinations of phenomena lead to different characteristics/performance and therefore to different physical unit operations/flowsheets, just as different combinations of atoms lead to different molecules with
different characteristics/performance. Hence, in order to extend the search space for process improvement, process synthesis and design incorporating PI needs to be investigated at the phenomenological level.
2. General Phenomena-based Synthesis Framework
The developed phenomena-based synthesis and design approach is based on two contributions: a) the use of phenomena building blocks together with connection equations to represent a process; b) the use of a methodology for identification, generation and screening of phenomena-based flowsheets that systematically reduces the search space for the optimal solution.
2.1. Concept of phenomena-based aggregation
Phenomena building blocks consist of mass, component, energy and momentum balances as well as constraint equations describing the phenomenon and the inlet and outlet stream conditions. In general, phenomena building blocks can be classified by the number of distinct phases involved and are further sub-classified into mixing, stream dividing, phase creation, phase transition, phase separation, reaction and energy transfer phenomena. Mixing phenomena have a minimum of two inlet streams and one outlet stream, while dividing phenomena have one inlet and a minimum of two outlet streams. Phase creation implies the appearance of a new phase; it has a single-phase inlet and a two-phase outlet. Phase transition blocks are defined to have one inlet and one outlet stream, each of mixed phases. Two-phase separation phenomena have one two-phase inlet stream and two outlets of pure-phase streams. Energy transfer phenomena are defined to have either one inlet and one outlet, for example for simplification of heating/cooling by an external source, or two inlet and two outlet streams, like the phenomena of convective or conductive heat transfer between two streams. Phenomena blocks can be connected through the use of suitable connection rules by streams or linking streams (for simultaneous occurrence of phenomena). Streams connecting phenomena building blocks may contain one or more phases. For example, a dividing block can be connected to any building block, while a phase transition or phase separation cannot, because they require an inlet stream with a two-phase mixture (see Table 1). Another example of a connection rule is that an L-L phase split should be linked before a phase transition (V-L equilibrium) block, since the vapor is in equilibrium with each of the two liquid phases and not with the total liquid mixture.

Table 1. Examples of direct connectivity between phenomena building blocks.

First phenomena block        Second (following) phenomena block
                             Mixing (ideal)   Phase transition V-L (EQ)   Phase creation L-L split
Mixing (2-phase, ideal)      Yes              Yes                         No
Phase transition V-L (EQ)    Yes              No                          No
Phase creation L-L split     Yes              Yes                         No
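These connectivity rules lend themselves to a simple lookup encoding; the sketch below (our own illustration, with invented block identifiers) shows how Table 1 could be checked programmatically during flowsheet generation:

# Hypothetical encoding of Table 1: (first block, following block) -> allowed?
CONNECTIVITY = {
    ("mixing_2phase_ideal", "mixing_ideal"): True,
    ("mixing_2phase_ideal", "phase_transition_VL_EQ"): True,
    ("mixing_2phase_ideal", "phase_creation_LL_split"): False,
    ("phase_transition_VL_EQ", "mixing_ideal"): True,
    ("phase_transition_VL_EQ", "phase_transition_VL_EQ"): False,
    ("phase_transition_VL_EQ", "phase_creation_LL_split"): False,
    ("phase_creation_LL_split", "mixing_ideal"): True,
    ("phase_creation_LL_split", "phase_transition_VL_EQ"): True,
    ("phase_creation_LL_split", "phase_creation_LL_split"): False,
}

def can_connect(first: str, second: str) -> bool:
    """Return True if 'first' may feed directly into 'second' (Table 1);
    pairs not listed in the table are conservatively rejected."""
    return CONNECTIVITY.get((first, second), False)

# e.g. the L-L split before V-L transition rule mentioned in the text:
assert can_connect("phase_creation_LL_split", "phase_transition_VL_EQ")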
2.2. Methodology
The developed methodology follows a stepwise hierarchical decomposition in which the lower-level steps employ simple and easy calculations, while the higher-level steps employ more and more rigorous and detailed calculations (see Figure 1). First, the scenario, the goal and the constraints of the synthesis/design problem and a performance metric are defined. Second, the system is analyzed with respect to pure component, mixture and reaction properties to identify a set of phenomena building blocks that may be used in the processing steps of a flowsheet. These are retrieved from a phenomena building block library. Third, the identified phenomena building blocks are connected using the general connectivity rules (see Table 1), resulting in a superstructure. The superstructure may represent a large number of alternatives, from which redundant options are removed through structural constraints. Fourth, the remaining alternatives are screened through pre-defined operational constraints and benchmarked through performance metrics. Fifth, for each of the remaining phenomena-based flowsheet alternatives, currently available or novel unit operations are identified, assisted by algorithms or a library of pre-defined units. Linking the phenomena to the actual physical unit is important, since additional constraints related to physical units, such as wall boundary conditions, need to be introduced. The performance criteria may be revised in this step for final optimization of the most promising alternatives and verification by rigorous simulation and experimentation in the last step.
Figure 1. Workflow for the phenomena-based synthesis and design to achieve PI.
3. Case Study
The key steps of the phenomena-based synthesis and design methodology are highlighted through a case study involving the continuous production of isopropyl-acetate from isopropanol and acetic acid. The liquid-phase reaction is catalyzed by Amberlyst 15 and follows the stoichiometry:

$$\mathrm{CH_3COOH + C_3H_7OH \rightleftharpoons C_5H_{10}O_2 + H_2O} \quad (1)$$

In step 1, since the reaction is limited by an unfavorable equilibrium, the objective is to increase the product yield of the reaction; therefore, the product purity is not defined. Additionally, the number of units is selected for the screening of options:

$$F_{Obj} = \text{yield} = n_{Product}/n_{Reactant} \quad (2)$$

In step 2, pure component and mixture analysis is performed using ICAS [5]. Several binary as well as ternary azeotropes have been found, as well as an L-L phase split between water and isopropyl-acetate. The operational window of the liquid-phase
reaction lies between the lowest boiling point, that is, the temperature (347.34 K) of the ternary azeotrope of isopropanol, isopropyl-acetate and water at P = 1 atm, and the highest melting point, that is, the melting point temperature (289.8 K) of acetic acid at P = 1 atm. The reaction analysis, based on kinetics from Sanz and Gmehling (2006) [6], confirmed the exothermic nature and the equilibrium limitation of the reaction (K > 1). Since Amberlyst 15 was used as a catalyst, the maximum allowable temperature to avoid catalyst degradation was set at 403 K. Based on this, the following phenomena were identified: mixing (ideal), dividing phenomena, heating/cooling (countercurrent, co-current, conductive), heterogeneous reaction and phase split. Additionally, from the analysis of ratios of pure component properties (Table 2), promising phase separations of products from reactants are identified to be based on vapor-liquid separation (boiling points) or pervaporation (radius of gyration). Both are represented by a phase creation followed by a phase separation phenomenon. The phase creation necessary for pervaporation is described by a flux equation [6], and the heat of vaporization is introduced into the energy balance. An additional constraint equation is also necessary to assure that the liquid outlet stream is not freezing. The separation phenomenon V-L is ideal.

Table 2. Normal property ratios between products and reactants.
                     Water/        Water/        Acetic acid/        Isopropanol/
                     acetic acid   isopropanol   isopropyl acetate   isopropyl acetate
Boiling point        1.05          1.05          1.08                1.02
Radius of gyration   4.24          4.56          1.41                1.31

For purposes of illustration, the phase split phenomenon is not taken into consideration and the heterogeneous reaction is simplified to a pseudo-homogeneous liquid-phase reaction. In step 3, the identified phenomena were connected to form phenomena-based flowsheets using the connectivity rules (see Table 1) and screened by additional logical and structural constraints (such as the presence of at least one reaction phenomenon and a maximum of four reaction phenomena in one flowsheet).
Figure 2. Examples of identified units from generated phenomena-based flowsheets. A: single-phase CSTR; B: CSTR with integrated heating jacket and membrane; C: isothermal reactive flash; D: integrated-membrane, thermally controlled tubular reactor. Phenomena: ideal mixing M; reaction R; phase creation: pervaporation P, evaporation E; phase separation PS; heating H; cooling C; dividing D. Utility streams for energy supply/removal are not shown.
Examples of four generated phenomena-based flowsheets from step 3 and the corresponding physical units identified in step 4 are illustrated in Figure 2. A uniform single-phase CSTR (A) was identified through a series of a mixing and a reaction phenomenon, while a single-phase CSTR divided into two compartments (B) was identified in which neighboring compartments are linked (the outlet stream of a dividing phenomenon becomes the inlet stream of a mixing phenomenon). An isothermal reactive flash (C) was identified through the simultaneous occurrence of evaporation and reaction phenomena. A tubular reactor (D) was identified as consisting of at least three ideal mixing and reaction phenomena in series. In step 4, the performance of the phenomena-based flowsheet options was compared against the objective function and the number of unit operations (Table 3). Ideally, assuming indefinite volumes and equimolar feed, options B and D were found to be equally good. The isothermal reactive flash (C) can be extended by connecting several of them in a network. This would represent a reactive distillation in which the product is purified (which is not the objective here, which is to increase the yield) until the low-boiling ternary azeotrope of isopropanol, isopropyl-acetate and water is reached as the top product. Hence, this would also result in an unavoidable loss of reactant, which would limit the yield. In step 5, the most promising alternatives were optimized with respect to the objective function and verified through simulation.

Table 3. Benchmarking results of the examples (assumption: equimolar feed).
Option   Yield (kmol product / kmol of one reactant)   Number of units
A        0.65                                          1
B        0.99                                          1
C        0.8                                           1
D        0.99                                          1
4. Conclusions
A methodology for phenomena-based synthesis and design to achieve PI has been developed and tested through a conceptual case study. The advantage of this approach is that it generates potentially novel process options (truly predictive models lead to reliable predictive solutions) as well as the simultaneous development of the necessary process models. The results are promising, and further development of this approach together with the necessary tools is the subject of current and future work.
References
[1] P. Lutze, R. Gani, J.M. Woodley, 2010, Chem Eng Process, 49, 6, 547-558.
[2] K.P. Papalexandri, E.N. Pistikopoulos, 1996, AIChE J, 42, 4, 1010-1032.
[3] H. Freund, K. Sundmacher, 2008, Chem Eng Process, 47, 12, 2051-2060.
[4] L. d'Anterroches, R. Gani, 2005, Fluid Phase Equilib, 228-9, 141-146.
[5] C.A. Jaksland, R. Gani, K.M. Lien, 1995, Chem Eng Sci, 50, 511-530.
[6] M.T. Sanz, J. Gmehling, 2006, Chem Eng J, 123, 9-14.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A Novel Process Design for the Hydroformylation of Higher Alkenes
Michael Müller,a,c Victor Alejandro Merchan,a,c Harvey Arellano-Garcia,a,c Reinhard Schomäcker,b,c Günter Wozny a,c
a Chair of Process Dynamics and Operation, Sekr. KWT-9
b Dept. of Chemistry, Sekr. TC 3
c Technische Universität Berlin, Str. des 17. Juni 135, 10623 Berlin, Germany
Abstract
Hydroformylation is one of the most important industrial applications of homogeneous catalysis. Due to the decreasing solubility of alkenes in the aqueous phase with increasing carbon chain length, the hydroformylation of higher alkenes is not carried out in a biphasic homogeneous system but in a single homogeneous organic phase with the use of a less reactive cobalt catalyst. However, this requires expensive reaction conditions such as high pressures and temperatures. Alternatively, the hydroformylation of higher alkenes with rhodium catalysts has been investigated in a batch operation mode by several groups (Haumann et al., 2002, Miyagawa et al., 2005). Moreover, their research proved that this process can be carried out in a homogeneous environment by the use of micellar systems (e.g. microemulsions). Due to the high cost of catalysts containing phosphine ligands and rare metals, their retention in continuous processes is extremely valuable. In this work, we propose a novel process design for the continuous hydroformylation of higher alkenes. This process design combines the rhodium-catalysed hydroformylation of higher alkenes in micellar systems with the catalyst recycle. Due to the high cost of rhodium, the catalyst has to be separated completely from the reaction products so as to ensure high process profitability. The recycle is accomplished in two steps, which comprise a decanter and an ultrafiltration step. Moreover, in order to show the feasibility of the proposed process concept, simulation studies and sensitivity analyses are carried out within a broad operating range. The corresponding modeling work has been executed using the web-based modeling environment MOSAIC (Kuntsche et al., 2010).
Keywords: Process design, hydroformylation, micellar catalysis, higher alkenes, MOSAIC
1. Introduction
The hydroformylation of short alkenes is a standard process in industry. However, although it is of great interest, there are only a few industrial implementations for higher alkenes. In the open literature, there is no report on the development of a continuous process for the hydroformylation of higher olefins with rhodium catalysts in multiphase systems with or without added micelles. The feasibility of higher-olefin hydroformylation in micellar solutions has been confirmed by different authors (e.g. Fell et al., 1995, Gimenez-Pedros et al., 2003, Paetzold et al., 2003). In particular, the work introduced by Haumann et al. (2002) and Miyagawa et al. (2005) demonstrates the
feasibility of the hydroformylation with high reaction rates at low surfactant concentrations near the critical micelle concentration (cmc) or in two-phase micellar systems. Since the products are dissolved in the organic phase and the catalyst in the aqueous phase (two-phase reaction), products and catalyst can be separated more easily than in a single-phase procedure (Bode et al., 2000). An approach similar to that of Bode et al. (2000), which implements a combination of reaction and catalyst separation, and thus the built-in recycle considered here for the system, has not been pursued before. In previous studies and processes, the catalyst was recycled at significantly altered process conditions; for the cobalt catalyst, for example, this causes a chemical change that strongly affects its solubility. In this work, the results of the conceptual process design for a continuous mini-plant are shown. A simulation model for the process has been developed, and preliminary kinetic data from the literature were implemented in a first step. In order to obtain an accurate description of the reactor and to enable the use of non-standard rigorous models in the future, the reactor model and the reaction kinetics were first implemented via MOSAIC. Based on this simulation work and considering safety aspects, a mini-plant was designed (see Fig. 2). With the help of the developed mini-plant system, the effects of changing operating conditions on the catalyst and its recycle will be analyzed in a comparative mode, in which the catalyst is not exposed to changing conditions. Thus, an integrated catalyst recycling is pursued.
2. Process concept
Figure 1 shows the integrated process concept. First of all, the reactants, synthesis gas (CO & H2) and dodecene (a higher olefin), as well as the rhodium catalyst dissolved in an aqueous phase, are added to the reactor. Due to the presence of a surfactant, a micellar system is formed that enables the hydroformylation within a homogeneous environment. By cooling downstream of the reactor, a separation of both (organic/aqueous) liquid phases is carried out in a decanter. Most of the hydrophilic catalyst resides in the aqueous phase, which is recycled to the reactor. The organic phase, which consists almost entirely of the product (aldehyde), is separated in an ultrafiltration step, whereby olefins as well as the residual catalyst are once more recycled to the reactor. Thus, an almost complete recycle of the valuable catalyst can be obtained. Furthermore, the remaining olefins can also be recycled, which means a lower consumption of educts for the overall process.
Figure 1: Integrated process concept: Hydroformylation in a micellar system
3. Process description
At the Chair of Process Dynamics and Operation at TU Berlin, a new mini-plant is currently being built in order to analyze the whole process (Fig. 2). The process concept can be divided into three sections: reaction, filtration, and product separation.
Figure 2: P & I flow chart of the designed mini-plant, TU Berlin
3.1. Reaction Section
The hydroformylation takes place in a high-pressure reactor. This will be realized in two units. In the first step, a mixer-settler system is established, in which, after the reaction, the two phases are separated in a decanter. The decanter can be operated independently from the reactor, and thus the separation of the two phases can be studied in detail. In the second step, a combination of reaction and phase separation is arranged in one device. The reactants will be pumped out of the storage tanks into one of the reaction units. The syngas will be dosed from gas cylinders.
3.2. Filtration Section
After the phase separation, the organic phase will be expanded in a flash unit. The gaseous phase is composed of unreacted alkenes, water, and syngas. This is followed by condensation and recycle to the reactor. The liquid phase, composed of alkanals, byproducts, surfactants and the catalyst, is delivered to an ultrafiltration membrane. The micelles, including the catalyst and the surfactants, are retained by the membrane, whereas all reaction products and unreacted alkenes permeate.
3.3. Product Separation Section
The separation of the products can be realized using hybrid processes, which are of special importance for close-boiling substances. Unit operations including distillation, melt crystallization or organophilic nanofiltration are conceivable. In a distillation column, the lower-boiling alkenes are separated from aldols and alkanes. The alkenes are then recycled to the reaction. The mixture of aldols and alkanes will be separated by an
organophilic nanofiltration: the high-boiling aldols stay behind in the retentate, while linear and branched alkanes permeate. The permeate can then be separated by melt crystallization.
3.4. Plant-Design Parameters
Tables 1 and 2 show an overview of the in- and outlet streams and of the parameters, respectively. They are the basis for the plant design.

Table 1: In- and outlet streams, calculated for the plant design
Stream    Dodecene   H2    CO    H2O   Surfactant   Tridecanal
[mol/h]   1.0        1.0   1.0   1.3   0.1          0.52
Table 2: Parameters used for the simulation studies
Temperature   Pressure   Reactor volume   Residence time   Reactant ratio
80 °C         60 bar     3.5 L            14.1 h           1.0
4. Modeling and Simulation Studies
Because of the importance of the reactor for the whole process, it is necessary to have an accurate model which reproduces the phenomena taking place in the gas-liquid-liquid reaction. Since this requires a good description of phenomena like gas solubility or mass transfer in the liquid phase, the reactor model to be used should provide detailed information on the assumptions and the thermodynamic relations considered. Unfortunately, the user does not always get enough information. Another important aspect when modeling chemical reactions is the possibility to directly implement reaction kinetics given in the literature. Most simulation tools offer the possibility to enter reaction kinetics that fit a standard scheme, but demand the creation of user subroutines, and consequently knowledge of a high-level programming language, in order to implement other kinetics. A good approach to overcome these drawbacks is the possibility to define new reactor models and kinetics based on mathematical models formulated as symbolic expressions (e.g. as LaTeX expressions). The developed modeling environment MOSAIC (Kuntsche et al., 2010) enables this approach. In MOSAIC, models are saved as XML files that can be used to generate language-specific code for different simulation environments. The generated models can be exported and embedded in other simulation tools. The mass transfer between gas and liquid phase was modeled with the film theory, and the phase equilibrium at the interface was described with Henry coefficients. The implementation of the new reactor model from MOSAIC in CHEMCAD was realized using the code generation to produce the C++ code of a user-added model (UAM), which is offered by default. This code requires the library BzzMath (Buzzi-Ferraris) in order to solve the model equations and needs to be compiled before the UAM is used. The implemented reactor model was connected with the other existing CHEMCAD units. The kinetics of the reaction are taken from Zhang et al. (2002). Different simulation studies were carried out to determine not only the mini-plant design but also possible operating points. Moreover, the main parameters for the reaction were identified through sensitivity analysis (Tab. 3).
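To illustrate the film-theory/Henry's-law description mentioned above, a minimal sketch (our own; variable names and values are illustrative, not taken from the MOSAIC model) of the gas absorption rate is:

def gas_absorption_rate(p_i, c_bulk, H_i, kL_a):
    """Volumetric absorption rate [mol/(m^3 s)] by film theory:
    driving force between the interfacial equilibrium concentration
    c* = p_i / H_i (Henry's law) and the bulk liquid concentration.
    p_i [Pa], c_bulk [mol/m^3], H_i [Pa m^3/mol], kL_a [1/s]."""
    c_star = p_i / H_i          # interfacial solubility [mol/m^3]
    return kL_a * (c_star - c_bulk)

# e.g. CO at 30 bar partial pressure with illustrative parameters:
rate = gas_absorption_rate(p_i=30e5, c_bulk=10.0, H_i=8.0e4, kL_a=0.1)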
Table 3: Results of the sensitivity analysis for the hydroformylation

                          Pressure          Residence time    Reactant ratio
                          -5%      +5%      -5%      +5%      -5%      +5%
Conversion change [%]     -10.04   +8.12    -7.62    +2.20    -7.25    +5.63
5. Conclusions
A novel process concept for the hydroformylation of higher alkenes has been designed for a mini-plant system. A first simulation model for the process has been developed. In order to attain an accurate description of the reactor and to allow the use of non-standard rigorous models in the future, the reactor model and the reaction kinetics were first implemented via MOSAIC. With the help of the developed mini-plant system, the effects of changing operating conditions on unit performance as well as on the catalyst and its recycle will be analyzed.

Acknowledgement
The authors acknowledge the support from the Collaborative Research Centre SFB/TR 63 "InPROMPT - Integrated Chemical Processes in Liquid Multiphase Systems", coordinated by the Berlin Institute of Technology - Technische Universität Berlin and funded by the German Research Foundation.
References
1. B. Fell, C. Schobben, G. Papadogianakis (1995): Hydroformylierung homologer [omega]-Alkencarbonsäureester mit wasserlöslichen Rhodiumcarbonyl/tert. Phosphan-Komplexkatalysatorsystemen, Journal of Molecular Catalysis A: Chemical, Volume 101, Issue 3, pp. 179-186.
2. M. Gimenez-Pedros, A. Aghmiz, C. Claver, A. M. Masdeu-Bulto, D. Sinou (2003): Micellar effect in hydroformylation of high olefin catalysed by water-soluble rhodium complexes associated with sulfonated diphosphines, Journal of Molecular Catalysis A: Chemical, Volume 200, Issues 1-2, pp. 157-163.
3. E. Paetzold, G. Oehme, C. Fischer, M. Frank (2003): Phosphinoethylsulfonatoalkylthioethers and diphenyl-[omega]-sulfonatoalkyl-phosphines as ligands and polyoxyethylene-polyoxypropylene-polyoxyethylene triblock co-polymers as promoters in the rhodium-catalyzed hydroformylation of 1-dodecene in aqueous two-phase systems, Journal of Molecular Catalysis A: Chemical, Volume 200, Issues 1-2, pp. 95-103.
4. Y. Zhang, Z.S. Mao, J. Chen (2002): Macro-kinetics of biphasic hydroformylation of 1-dodecene catalyzed by water-soluble rhodium complex, Catalysis Today, Volume 74, Issues 1-2, pp. 23-35.
5. M. Haumann, H. Koch, P. Hugo, R. Schomäcker (2002): Hydroformylation of 1-dodecene using Rh-TPPTS in a microemulsion, Applied Catalysis A: General, Volume 225, Issues 1-2, pp. 239-249.
6. C.C. Miyagawa, J. Kupka, A. Schumpe (2005): Rhodium-catalyzed hydroformylation of 1-octene in micro-emulsions and micellar media, Journal of Molecular Catalysis A: Chemical, Volume 234, Issues 1-2, pp. 9-17.
7. G. Bode, M. Lade, R. Schomäcker (2000): The Kinetics of an Interfacial Reaction in Micro-emulsions with Excess Phases, Chem. Eng. Technol., 23, pp. 405-409.
8. S. Kuntsche, H. Arellano-Garcia, G. Wozny (2010): A new Modeling Environment Based on Internet-Standards XML and MathML, Comp. Aided Chem. Eng., Vol. 28, pp. 673-678.
9. G. Buzzi-Ferraris, BzzMath: Numerical libraries in C++, Politecnico di Milano, www.chem.polimi.it/homes/gbuzzi
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Flowsheet Optimization by Memetic Algorithms
Maren Urselmann,a Sebastian Engell a
a Process Dynamics and Operations Group, TU Dortmund, Emil-Figge-Str. 70, 44227 Dortmund, Germany
Abstract In this contribution, the memetic algorithm (MA) proposed in [1] is extended to optimize a flowsheet problem that comprises a reactive distillation column with optional amounts of catalyst on the stages and an (optional) external reactor such that different degrees of integration can be considered. The MA consists of an evolution strategy and a mathematical programming solver. The focus of this paper is on the influence of the presence of structural decisions which are represented as discrete variables in the optimization problem on the computational efficiency of the solution method. The introduction of discrete variables may result in an exponential increase of the computational effort needed for the solution by MINLP techniques. The results of the MA are compared to those obtained using commercially available MINLP solvers. Keywords: flowsheet optimization, reactive distillation, memetic algorithm.
1. Introduction
Flowsheet optimization problems of chemical processes are characterized by the presence of a large number of discrete variables (representing e.g. the choice and the connections of equipment), continuous decision variables (e.g. equipment sizes and operating parameters), complex nonlinear models that restrict the search space, nonlinear cost functions, and the presence of many local optima. The classical approach to solve such problems is to use MINLP solvers that work on a superstructure formulation which explicitly represents all flowsheet alternatives [2]. This solution procedure is usually based on a decomposition of the MINLP problem into an IP master problem (optimization of the choice and the structure of equipment, and their interconnections) and NLP sub-problems (optimization of continuous variables for fixed discrete variables). The structural decisions lead to a large number of discrete variables and to a significant increase in the computational effort needed for the solution by such methods. The mathematical programming (MP) methods which are employed to solve the continuous sub-problems that arise by fixing the discrete variables provide only one local optimum, which depends strongly on the initialization. Thus, standard methods may not find the global optimum despite long computation times. Recently, we introduced a memetic algorithm (MA) for the global solution of mixed-integer design problems for a single unit operation, e.g. a reactive distillation column (RDC) [1]. The MA integrates an evolutionary algorithm (EA) and a mathematical NLP solver. The EA generates initial points for the local solver. It works in the space of the design variables, whereas the state variables of the designs are computed by the same solver that performs the local optimization. The approach was applied to the example of the heterogeneously catalyzed and kinetically controlled synthesis of methyl tertiary-butyl ether (MTBE) from isobutene and methanol in the presence of n-butane [3]. By the use of the MA, the computational effort needed for a global solution of the
continuous sub-problems could be reduced by 75% in comparison to the reference algorithm (OQNLP/CONOPT). The introduction of structural decisions and additional constraints leads only to a moderate increase in the computational effort, which demonstrates the potential of the MA. In this contribution, the MA is extended to a flowsheet optimization problem that comprises a reactive distillation column with optional amounts of catalyst on the stages and an (optional) external reactor, such that different degrees of integration (from a totally integrated reactive distillation column without external reactor to a distillation column with pure separation functionality combined with a pre- or side reactor) can be considered. The results of the flowsheet optimization by the MA are compared to the results of MINLP techniques that were used in previous work [4] for the optimization of such a reactor-separator configuration.
2. The memetic algorithm
Memetic algorithms are hybrid evolutionary algorithms coupled with local refinement strategies. In this work, an evolution strategy (ES), which is a special variant of an EA, is used.
2.1. Structure of the MA
The structure of the MA used here is shown in Figure 1. The optimization procedure starts with a feasible random initialization of the first population. In order to evaluate the μ individuals of the population, the corresponding model variables are computed by CONOPT by solving a simulation model. The resulting point in the space of all variables represents a possible design which is used as a starting point for the local optimization in the space of all continuous variables. This local search is also performed by CONOPT. According to the evolutionary model of Lamarck, the genes of the individuals are replaced by the values of the design variables of the corresponding local optimum. As long as no feasible design with N column stages, without or in combination with an external reactor, has been found, all model variables within the simulation model are initialized with the value 1. After a first solution has been found, the values of the model variables of the nearest feasible point found so far (measured by the Euclidean norm) are used as initial values. The generation cycle of the ES starts with a random selection of λ individuals for reproduction. These individuals are recombined and mutated by problem-specific operators and become offspring individuals, which are evaluated in the same manner as the individuals of the initial population. Then the population for the next generation cycle is selected by choosing, out of the set of offspring and parent individuals, the μ best individuals that do not exceed a maximal 'life-span' of κ generations. The generation cycle stops if a predefined termination criterion is fulfilled, e.g. a time limit or a generation limit.

Figure 1. Structure of the memetic algorithm
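The following sketch (our own schematic rendering, not the authors' implementation; all problem-specific operators are passed in as hypothetical functions) summarizes this Lamarckian ES/local-search loop:

import random

def memetic_algorithm(random_design, recombine, mutate, local_opt, profit,
                      mu=10, lam=20, kappa=5, generations=100):
    """ES + local refinement: each design is locally optimized (CONOPT in
    the paper) and its refined design variables are written back to the
    genome (Lamarckian learning). Tuples are (profit, age, design)."""
    population = []
    for _ in range(mu):
        d = local_opt(random_design())
        population.append((profit(d), 0, d))
    for _ in range(generations):
        offspring = []
        for _ in range(lam):  # random selection of lam parents
            p1 = random.choice(population)[2]
            p2 = random.choice(population)[2]
            child = local_opt(mutate(recombine(p1, p2)))
            offspring.append((profit(child), 0, child))
        # keep the mu best that have not exceeded the life-span kappa
        aged = [(f, a + 1, d) for (f, a, d) in population if a + 1 <= kappa]
        population = sorted(aged + offspring, key=lambda x: x[0],
                            reverse=True)[:mu]
    return max(population, key=lambda x: x[0])[2]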
2.2. Variants of the MA
Two variants of the MA were tested: the basic algorithm (MAbase), where no restrictions on the number of feeds or on the number of exchange streams are considered, and an extended version of the basic formulation where the number of feeds and the number of
exchange streams are restricted to a maximal number of three (MACF1). A detailed description of these variants for the optimization of an RDC without an external reactor can be found in [1]. Here, the exchange streams between the reactor and the column are handled in the same fashion as the feed streams.
3. The case study
As a case study, the optimization-based design of a reactive distillation column with an optional external reactor for the production of MTBE from isobutene and methanol (IB + MeOH ⇌ MTBE) in the presence of n-butane at a pressure of 8 bar is considered. The reaction is kinetically controlled, equilibrium-limited and heterogeneously catalyzed. The substance system exhibits 3 binary azeotropes. The desired purity of the product is 99 mole-%. The total amount of the feed streams is fixed (F1,tot = 6.375 mole/s MeOH, F2,tot = 8.625 mole/s IB/n-butane). Structural and operational parameters - e.g. the number of stages and the reflux ratio - have to be determined such that the annual profit of the column is maximized. The superstructure of the process comprises N = 60 stages, of which only a subset may be included in the optimal solution. The reboiler and the condenser are modelled as stages without reaction. The model of the reactor is similar to the model of the stages, extended by external heat exchange and without the presence of a vapour phase. While the column stages can be purely separating or can possess an integrated functionality, the reactor provides a hold-up which is purely reactive. It has two possible feed streams (F1,cstr, F2,cstr) and possible liquid exchange streams with each stage of the column (from reactor to stage (Rin) and from stage to reactor (Rout)). The objective is to maximize the annual profit, which is calculated as the annual revenues for the products minus the annualized investment cost, the annual operating cost and the annual cost for raw materials. The set of design variables comprises the amounts of both feeds i = 1, 2 on the stages k = 1, ..., N of the column, denoted by Fi(k), and in the reactor cstr, denoted by Fi,cstr; the amounts of catalyst on the stages k = 2, ..., N-1, denoted by Ecat(k); two variables αtop and αbottom ∈ (0, 1) for the reflux ratio at the top and the ratio of the evaporation rate to the product removal at the bottom of the column; the binary activation variables Mk for the stages k = 2, ..., N-1; the volume (Vcstr) of and the temperature (Tcstr) in the reactor; and variables ERin(k) ∈ [0, 0.9] and ERout(k) ∈ [0, 1.0] for the ratio of the liquid stream from stage k to the reactor to the total liquid stream that leaves stage k, and the ratio of the liquid stream that flows from the reactor to stage k to the total liquid stream that leaves the reactor. The models consist of a large number of algebraic equations that were formulated in the modelling language GAMS. Different models are used by the different algorithms. The reference algorithms SBB/CONOPT and SBB/OQNLP/CONOPT use the superstructure models MTBE_CSTRMINLP and MTBE_CSTRMINLP-CF. In the basic formulation MTBE_CSTRMINLP it is assumed that fractions of both feed streams can enter the column on each stage, including the reboiler and the condenser. It is also assumed that fractions of the liquid stream from the reactor can enter each stage of the column except the reboiler and the condenser, and that fractions of the liquid streams can leave each of these stages to enter the reactor. MTBE_CSTRMINLP-CF is an extension of the basic formulation by a restriction on the number of feed streams of each feed and on the number of exchange streams in each direction to a maximum of three each. These restrictions introduce a large number of binary variables to represent the existence of the streams, as well as additional inequality constraints, into the model.
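As stated above, the objective is the annual profit; a minimal sketch of this cost function (our own; the annualization factor is an assumed placeholder, not a value from the paper) is:

def annual_profit(product_revenue, investment_cost, operating_cost,
                  raw_material_cost, annualization_factor=0.2):
    """Annual profit = revenues - annualized investment - operating
    cost - raw material cost (all in consistent currency units/year;
    investment_cost is the total capital expenditure)."""
    return (product_revenue
            - annualization_factor * investment_cost
            - operating_cost
            - raw_material_cost)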
The MA introduced in this contribution uses the model formulations MTBE_CSTRNLP and the simulation model MTBE_CSTRSim. MTBE_CSTRNLP is the model of the continuous sub-problems which arise by fixing all discrete variables of MTBE_CSTRMINLP. The maximal number of stages N is fixed to a value between 10 and 60, and all of these stages are active. MTBE_CSTRSim is the model used to determine the values of the model variables that correspond to a certain column design. It comprises a subset of the equations and inequalities of the optimization model MTBE_CSTRNLP. The design variables are removed from the set of free variables, and the equations and inequalities that restrict the feasible values of the design variables are removed from the set of constraints as well.
4. Extension of the MA to Flowsheet Optimization
The components of the MA developed for the optimization of an RDC only (see [1]) are extended by an optional external reactor. Due to the limited space, only the most important extensions are described here.
4.1. Representation
The individuals of the MA are represented by the design variables described in Section 3. Instead of the superstructure representation, a variable-length representation is used here so that individuals of different lengths can be members of the same population. The number of stages is represented by a single integer variable that defines the number of the remaining variables. The external reactor is represented by a binary variable Mcstr that indicates the existence of the CSTR and by the continuous design variables Fi,cstr, Vcstr, Tcstr, ERin(k) and ERout(k). In the case of the formulations with restrictions on the number of streams (MACF1), this vector is extended by two integer variables nRin and nRout that represent the number of exchange streams in each direction, and by two vectors iRin and iRout that represent the locations of the exchange streams.
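To make this concrete, the following is a minimal Python sketch of how such a variable-length individual might be encoded. The class and field names are our own illustration, not code from the paper; the bounds noted in the comments follow the ranges given in Section 3.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Individual:
    n_stages: int                 # number of column stages; defines vector lengths
    m_cstr: int                   # 0/1: existence of the external reactor
    feed1: List[float] = field(default_factory=list)   # F1(k), k = 1..n_stages
    feed2: List[float] = field(default_factory=list)   # F2(k)
    e_cat: List[float] = field(default_factory=list)   # Ecat(k), k = 2..n_stages-1
    alpha_top: float = 0.5        # reflux-related variable in (0, 1)
    alpha_bottom: float = 0.5     # boil-up-related variable in (0, 1)
    # reactor variables (only meaningful if m_cstr == 1)
    f1_cstr: float = 0.0
    f2_cstr: float = 0.0
    v_cstr: float = 0.0           # reactor volume Vcstr
    t_cstr: float = 0.0           # reactor temperature Tcstr
    er_in: List[float] = field(default_factory=list)   # ERin(k) in [0, 0.9]
    er_out: List[float] = field(default_factory=list)  # ERout(k) in [0, 1.0]
```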
Figure 2. Best known solution (reflux ratio: 2.54; boil-up ratio: 1.61; pressure: 8.0 bar; external CSTR: 0.302 m³ at 327 K; profit: 1,018,830 €/a).
4.2. Initialization & Variation
All operators for the initialization, recombination and mutation are applied in a hierarchical fashion. In the first step, the number of stages of the column and the existence of the reactor are determined by these operators. The basic concepts of the operators developed for the case study without an external reactor [1] are adapted to handle the variables of the reactor in the same fashion as the variables of the column. The initialization is done randomly with a uniform distribution within the feasible range of the variables. The variation operators may cause a change in the number of stages of the column. In this case, a mapping of the stage indices of the individuals involved to the stages of the offspring is performed [1]. According to this mapping, the variables that correspond to the same stages are varied in groups. Missing values, e.g. in case of an increase of the number of stages caused by the mutation, are initialized randomly within their feasible domains. In the case of no external reactor, i.e. Mcstr = 0, all variables that correspond to the reactor are set to zero. After the application of the different operators, repair procedures are applied to restore feasibility with respect to the constraints defined on the design variables. The extension of the approach to flowsheet optimization puts the focus on the mutation of the variables that define the existence of optional operation units (here: the external reactor). If the variable Mcstr is mutated with a high probability
pmut(Mcstr), the collected information about promising values for the variables of the reactor is lost. Therefore, pmut(Mcstr) is defined as a parameter to configure the MA.
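A minimal sketch of how such a reactor-flag mutation might look, operating on the Individual sketched in Section 4.1; the re-initialization bounds for the reactor variables are assumed values for illustration only, not data from the paper.

```python
import random

def mutate_reactor_flag(ind, p_mut_mcstr, rng=random):
    # With probability p_mut_mcstr the reactor existence flag Mcstr is flipped.
    # Flipping it discards (or re-initializes) the reactor variables, which is
    # why p_mut_mcstr is kept as a tunable parameter of the MA.
    if rng.random() < p_mut_mcstr:
        ind.m_cstr = 1 - ind.m_cstr
        if ind.m_cstr == 0:
            # no external reactor: zero all reactor-related variables
            ind.f1_cstr = ind.f2_cstr = ind.v_cstr = ind.t_cstr = 0.0
            ind.er_in = [0.0] * ind.n_stages
            ind.er_out = [0.0] * ind.n_stages
        else:
            # reactor (re-)activated: random initialization within assumed bounds
            ind.v_cstr = rng.uniform(0.0, 1.0)        # assumed volume bounds [m³]
            ind.t_cstr = rng.uniform(300.0, 400.0)    # assumed temperature bounds [K]
            ind.er_in = [rng.uniform(0.0, 0.9) for _ in range(ind.n_stages)]
            ind.er_out = [rng.uniform(0.0, 1.0) for _ in range(ind.n_stages)]
    return ind
```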
5. Results
All algorithms were tested on a PC with a 3.06 GHz processor and 2 GB RAM. Algorithms with stochastic influences were tested 10 times, and the median performances of these runs are compared with the deterministic runs of SBB/CONOPT. For the case study without an external reactor, a parameter tuning was done for all algorithms [1]. These parameters were also used for the extended case study here. The termination criterion is a time limit of 4 hours (MA) and a node limit of 10,000 nodes (SBB). To find a good parameter value for pmut(Mcstr), the MA was tested ten times with pmut(Mcstr) = 0.05, 0.1, 0.2, …, 0.5. The values 0.4 in the case of MAbase and 0.2 in the case of MACF1 led to the best results. Figure 2 shows the best known solution for both the formulations with and without restrictions on the number of streams; it was found by the MA, but not by the other algorithms. MAbase found this solution in six test runs, after 194 min and 40 sec in the median case. The progress curves of the different algorithms are shown in Figure 3.
Figure 3. Progress curves of the different algorithms (profit [1000 €/a] vs. time [min]): a) MTBE_CSTRMINLP, b) MTBE_CSTRMINLP-CF.
In both formulations, the MA provided better solutions than the reference algorithms. Without restrictions on the number of streams, SBB/CONOPT is the fastest algorithm; it found a solution close (in profit) to the best known solution after 12 min and 6 sec. In the case of the formulation MTBE_CSTRMINLP-CF, SBB/CONOPT only reached a solution quality of 951,295 €/a, while the quality of the best solution found by MACF1 is 1,018,660 €/a in the median case. The median time needed to find this solution was 236 min and 33 sec. The best known solution was found by MACF1 in one of the ten test runs. SBB/OQNLP with 3 CONOPT calls could not find good solutions within several days of computation.
6. Conclusions
The concept of the MA presented in [1] was successfully extended to a flowsheet optimization problem. The introduction of a large number of discrete variables and additional constraints led to only a moderate increase in the computational effort needed for the solution, while the solution quality remained consistently high. This shows the potential of the algorithm for larger flowsheet optimization problems, which are characterized by a large number of structural decisions that are difficult to handle by MINLP techniques.
References
[1] M. Urselmann, S. Engell, Computers & Chemical Engineering (2011), in press.
[2] I. E. Grossmann, J. A. Caballero and H. Yeomans, LAAR 30, (2000), 263-284.
[3] S. Barkmann, G. Sand and S. Engell, CIT 80, (2008), 107-117.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Biomass to chemicals: Design of an extractive reaction process for the production of 5-hydroxymethylfurfural
Ana I. Torres,a Prodromos Daoutidis,a Michael Tsapatsisa
a Department of Chemical Engineering and Materials Science, University of Minnesota, Minneapolis, MN, 55454, USA
Abstract
Furanic compounds such as 5-hydroxymethylfurfural (HMF) can be obtained from sugars and have the potential to serve as substitutes for petroleum-derived building blocks in the production of fuels and chemicals. In this work, we propose a process for the production of HMF from fructose based on extractive reaction and formulate an optimization problem in order to find the operating conditions that minimize its cost of production.
Keywords: Biorefinery, HMF, process design
1. Introduction
5-hydroxymethylfurfural (HMF) has been widely recognized as a key intermediate in the production of biomass-derived fuels and polymers. Its synthesis is based on the acid-catalyzed dehydration of sugars, mainly hexoses, which is highly non-selective when taking place in aqueous media. In order to improve the HMF yield, several laboratory-scale reaction-separation methods have been reported (Kuster 1990, Lewkowski 2000, Roman-Leshkov et al. 2006, Van Dam et al. 1986); yet, studies of the feasibility of these processes from an economic and energy perspective are still scarce. In our previous work (Torres et al. 2010) we considered the concept proposed by Dumesic and coworkers (Roman-Leshkov et al. 2006) to develop and evaluate a continuous process for the production of HMF from fructose, the hexose that provides the highest HMF yield in acid aqueous media. As shown in Fig. 1 (a), this process consists of a biphasic reactor in which the HMF produced in the aqueous phase is selectively extracted by the organic phase, thus minimizing its degradation. A liquid-liquid extractor improves HMF recovery, and an evaporator is used for its purification. In that case, we found that HMF costs comparable to its oil-derived analogue were difficult to obtain and concluded that alternative processes, together with lower fructose prices and more selective kinetics, were needed in order to reduce the cost of HMF. In this work, we focus on extractive-reaction processes as an alternative for the production of HMF. These processes, which consist of a single unit combining reaction and extraction, have been broadly used in the production of metals, and the possibility of their application in other chemical industries has motivated research in this area (see for example Minotti et al. 1998). A possible scheme for the production of HMF using this approach is presented in Fig. 1 (b). Here, the dehydration of fructose and the extraction of HMF take place in a tubular biphasic reactor-extractor where the aqueous solution containing fructose and catalyst and the stream containing the organic solvent are fed countercurrently. As in the previous process, HMF is separated from the organic solvent by evaporation, and both the evaporated solvent and the aqueous solution
Figure 1: (a) CSTR-based process studied in Torres et al. 2010; (b) extractive-reaction process. Blue streams represent aqueous phase flows; green streams, organic phase flows. vk denotes the molar flow rate of stream k; FJk the molar flow of component J in stream k; J = A: fructose, BPA: byproducts from fructose, B: HMF, C: levulinic acid, D: formic acid, BPB: other decomposition products from HMF, W: water, S: solvent.
containing unreacted fructose are recycled back to the reactor. The goal of this paper is to find the operating conditions that minimize the cost of production of HMF using this extractive-reaction process. The effect of different solvents, fructose prices and kinetics is also addressed.
2. Modeling and optimization of the process
The reactor-extractor was envisioned as an RTL contactor which, as shown in Fig. 2 (a), consists of a series of circular baffles that rotate on a horizontal axis in a cylindrical stator. The two phases flow countercurrently through the gap defined by the baffles and the cylinder. Open buckets placed between the baffles mix the phases by collecting and releasing portions of one phase into the other (Lo et al. 1983). For modeling purposes, the compartment thus defined is assumed to behave as an ideal CSTR. All chemical reactions are assumed to occur only in the aqueous phase and, as shown in Fig. 3, the simplified first-order model proposed by Kuster and Temmink 1977 is considered. Transfer of HMF from the aqueous to the organic phase is modeled using the correlations for the mass transfer coefficient (K) from Godfrey and Slater 1994 and the average mass transfer area (a) reported in Alper 1988. With the additional assumption of an isothermal process, the following equations describe the steady state of the process presented in Fig. 1 (b) and represent the constraints of the optimization problem:
FJ0 + FJ2·(1−z) − FJ1 = 0;  v0 + v2·(1−z) − v1 = 0
Ri−1 − Ri + ΣJ Σm γJm·rmi − NBi = 0;  FRJi−1 − FRJi + Σm γJm·rmi = 0
Ei+1 − Ei + NBi = 0;  FEBi+1 − FEBi + NBi = 0
v4 − v5 − v6 = 0;  FB4 − FB6 = 0;  vS0 + v5 − v3 = 0
NBi = K·a·(ρaq·R·yBi − ρorg·wBi)·Acomp·Δl
Here, vi, FJi, Ri, Ei, NBi, FRJi and FEBi are molar flowrates of species J in stream i (as defined in Figs. 1 (b) and 2 (b)); z is the fraction of the purge; Acomp and Δl are the cross-sectional area and length of each compartment of the contactor; ρaq and ρorg are the molar densities of the aqueous and organic phases, respectively; R is the partition coefficient of HMF between the organic and the aqueous phases; γJm is the stoichiometric coefficient of component J in reaction m; and rmi is the rate of reaction m in compartment i, defined as follows:
r1,2i = k1,2·yAi·MAi;  r3,4i = k3,4·yBi·MAi;  MAi = ρaq·(1−X)·Acomp·Δl
xJi = FJi/vi;  yJi = FRJi/Ri;  wBi = FEBi/Ei
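As an illustration, the interphase transfer term NBi can be evaluated directly. The sketch below uses the Table 1 base-case values for K, a and R, while the compositions, molar densities and compartment geometry are assumed numbers, not data from the paper.

```python
def hmf_transfer_rate(y_B, w_B, rho_aq, rho_org, R, K, a, A_comp, dl):
    """Molar rate of HMF transfer from the aqueous to the organic phase in one
    compartment: N_Bi = K*a*(rho_aq*R*y_Bi - rho_org*w_Bi)*A_comp*dl."""
    return K * a * (rho_aq * R * y_B - rho_org * w_B) * A_comp * dl

# Example: Table 1 values K = 4.85e-2 m/s, a = 117 m2/m3, R = 1.65;
# everything else below is an assumed illustrative value.
N_B = hmf_transfer_rate(y_B=0.02, w_B=0.01, rho_aq=5.0e4, rho_org=8.0e3,
                        R=1.65, K=4.85e-2, a=117.0, A_comp=0.05, dl=0.1)
print(N_B)  # mol/s of HMF transferred in this compartment
```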
Figure 2: Schematic of an RTL contactor. (a) Side and front view. (b) Flows in the i-th compartment. Ri and Ei indicate aqueous and organic phase global molar flows; FRJi and FEJi species J aqueous and organic phase molar flows (J as defined in Fig. 1); NBi the amount of HMF transferred from the aqueous to the organic phase; MAi the molar holdup.
In addition, the following bounds were added to account for design recommendations or values reported in the literature for the operation of RTL contactors:
- ratio between the aqueous and organic volumetric flows: 1/6 ≤ (v3/ρorg)/(v1/ρaq) ≤ 6 (Lo et al. 1983);
- residence time in each compartment: 20 s ≤ τaq, τorg ≤ 35 s (Alper 1988);
- dimensions of the contactor: 1 m ≤ L ≤ 8 m, 0.1 m ≤ D ≤ 2 m (Lo et al. 1983); minimum ratio between the length and the diameter of the reactor: L/D ≥ 2 (Jarudilokkul et al. 2000).
The objective function to be minimized is the cost of HMF, which balances raw material costs (fructose and solvent), the energy cost due to evaporation of the solvent, and capital costs (RTL contactor, evaporator and condenser):
min f = ($fructose·FA0 + $solvent·FS0 + $fuel·λ·v5/η + CCF·TCI) / FB6
In this equation, TCI represents the total capital investment and CCF is the annualization factor. The procedure given by Seider et al. 2009 and the correlations for equipment costs from Seider et al. 2009 (condenser), Couper et al. 2010 (evaporator) and Lo et al. 1983 (contactor) were used to estimate the total capital investment. The cost function and constraints described above define an NLP optimization problem in which all the flowrates (except v6 and FB6, which define the production rate and purity), the fraction of the purge (z), the holdups and all the other variables needed to size the equipment (and thus to compute capital costs) are the optimization variables. The flowrates were allowed to vary freely, z was bounded between 0 and 1, and the sizes of the equipment were constrained to the ranges for which the cost correlations in Seider et al. 2009, Couper et al. 2010 or Lo et al. 1983 are valid. The values of the partition coefficient (R), the price of fructose ($fructose) and the kinetic constants (km) were used to generate different case studies and thus varied between optimization runs; all the other parameters were kept constant. The ranges of variation for R, $fructose and km, as well as the values of the parameters used in the simulations, are given and justified in the following section. Finally, the optimization problem was solved using GAMS/SNOPT.
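The objective is straightforward to evaluate once the flowrates are known; a minimal sketch follows (the parameter names are ours, and λ here denotes the latent heat of the solvent and η the heating efficiency, as in the objective above).

```python
def hmf_unit_cost(F_A0, F_S0, v5, F_B6, TCI,
                  p_fructose, p_solvent, p_fuel, lam, eta, CCF):
    """Cost of HMF per mole of product, the objective of the NLP:
    f = (p_fructose*F_A0 + p_solvent*F_S0 + p_fuel*lam*v5/eta + CCF*TCI) / F_B6."""
    raw_materials = p_fructose * F_A0 + p_solvent * F_S0
    evaporation = p_fuel * lam * v5 / eta   # heat needed to evaporate the solvent
    capital = CCF * TCI                     # annualized total capital investment
    return (raw_materials + evaporation + capital) / F_B6
```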
3. Results
The base-case optimal result was obtained considering an inlet aqueous stream containing 30% w/w fructose and 0.25 M hydrochloric acid and an organic inlet
Table 1: Parameters used in the base case simulation
Parameter                       Value
Partition coefficient           R = 1.65
Kinetic constants (T = 453 K)   k1 = 1.536·10−2 s−1; k2 = 1.536·10−2 s−1; k3 = 9.136·10−4 s−1; k4 = 1.163·10−3 s−1
Mass transfer coefficient       K = 4.85·10−2 m·s−1
Mass transfer area              a = 117 m2/m3 compartment
Figure 3: Simplified reaction model used in the simulations. F: fructose, BPA: byproducts from fructose, LA: levulinic acid, FA: formic acid, BPB: other products from HMF.
stream composed of a 7:3 w/w mixture of methyl isobutyl ketone (MIBK) and 2-butanol. The process was assumed to operate at 453 K; the value of the partition coefficient was taken from Roman-Leshkov et al. 2010 and the kinetic parameters were estimated from Kuster and Temmink 1977 (see Table 1). The cost of fructose was assumed to be 25 ¢/lb, and the desired HMF purity and the production level were fixed to 95% (molar basis) and 7000 tons/yr, respectively (between 2% and 5% of the ethanol production of a current biorefinery). Under these assumptions the minimum HMF cost, 0.24 $/mol, is achieved with a 1.96 m3 contactor with 12 compartments and represents a 22% reduction compared to the minimum cost of 0.306 $/mol reported for the CSTR-based process (Torres et al. 2010). As shown in Fig. 4, three facts explain this reduction in cost. First, as expected, this process operates at higher fructose yields due to an increase in both conversion and selectivity (~100% and 46% vs. 91% and 43%, respectively), thus reducing the main cost, fructose, which accounted for 83% of the previous cost. Second, the evaporation costs are reduced to a third, which is consistent with the fact that this process is expected to provide a more efficient utilization of the solvent, consequently reducing the amount of heat needed to reach the same HMF purity. Finally, capital costs are also largely reduced, mainly due to the absence of an expensive liquid-liquid extractor. However, as capital costs accounted for less than 5% of the total cost, their reduction has a lower impact on the HMF cost. As reported in Roman-Leshkov and Dumesic 2009, higher selectivities towards HMF can be obtained if extracting solvents providing better partition coefficients than 7:3 MIBK-2-butanol are used. Simulations using tetrahydrofuran (THF), the solvent reported to have the largest partition coefficient (R = 7.1), were performed, finding that an extra 6% reduction in cost is possible. As seen in Fig. 4 (b), for both solvents fructose dominates the cost, accounting for almost 90% of it. Motivated by this, a set of simulations using the lowest possible price for fructose, i.e. the glucose price of 15 ¢/lb, was performed. The results showed that for the base-case kinetics (Table 1) and extracting solvent (7:3 MIBK:2-butanol), the minimum achievable cost is 0.15 $/mol. Finally, as more recent experimental data published in Roman-Leshkov et al. 2006 reported HMF yields higher than those predicted by the kinetic constants presented in Table 1, the sensitivity of the HMF cost to these constants was also considered. An estimate of kinetic constants that predict the yields in that paper can be obtained by fitting the reported conversion and selectivity to the first-order model presented in Fig. 3 (more details on this can be found in our previous work, Torres et al. 2010). Simulations considering the kinetics that correspond to the 68% conversion and 70% selectivity published for run 4 in Roman-Leshkov et al. 2006 resulted in HMF costs
between 0.21 $/mol and 0.23 $/mol, representing at most a 10% improvement over the base case. Larger improvements are only obtained when considering more selective, hypothetical kinetics.
Figure 4: Optimal results under different scenarios. CSTR+LLE: results from our previous study (Torres et al. 2010); ER: BC: extractive-reactor base case; ER: THF: extractive-reactor simulation using THF as the extracting solvent; ER: LP: extractive-reactor simulation using the lowest possible price of fructose (15 ¢/lb). (a) Minimum HMF cost ($/mol). (b) Cost distribution ($/mol). $F: fructose cost; $Qev: evaporation cost; $S: solvent cost; $Cap: capital cost. (c) Conversion (C), selectivity (S) and yield (Y).
4. Conclusion
The extractive-reaction process studied in this paper represented an improvement over the one published in Torres et al. 2010, achieving HMF costs between 0.21 $/mol and 0.24 $/mol. These results correspond to simulations where either the kinetics reported by Kuster and Temmink 1977 or those estimated from Roman-Leshkov et al. 2006 were considered. Further reduction of the HMF cost could come from alternative reaction pathways or from lower prices of fructose.
Acknowledgments
The authors gratefully acknowledge financial support from the National Science Foundation (grant CBET-0855863).
References
E. Alper, 1988, Chemical Engineering Research & Design, 66, 147-151
J. Couper, W. Penney, J. Fair and S. Walas, 2010, Chemical Process Equipment - Selection and Design
J. Godfrey and M. Slater, 1994, Liquid-Liquid Extraction Equipment
S. Jarudilokkul, E. Paulsen, D. Stuckey, 2000, Biotechnology Progress, 16, 1071-1078
B. Kuster and H. Temmink, 1977, Carbohydrate Research, 54, 185-191
B. Kuster, 1990, Starch, 42, 314
J. Lewkowski, 2000, ARKIVOC, i, 17
T. Lo, M. Baird, C. Hanson, 1983, Handbook of Solvent Extraction
M. Minotti, M. Doherty, M. Malone, 1998, Industrial & Engineering Chemistry Research, 37, 4748-4755
Y. Roman-Leshkov, J. Chedda, J. Dumesic, 2006, Science, 312, 1933-1937
Y. Roman-Leshkov, J. Dumesic, 2009, Topics in Catalysis, 52, 297
W. Seider, J. Seader, D. Lewin, S. Widagdo, 2009, Product and Process Design Principles - Synthesis, Analysis and Evaluation
A. Torres, P. Daoutidis, M. Tsapatsis, 2010, Energy and Environmental Science, 3, 1560-1572
H. Van Dam, A. Kieboom and H. Van Bekkum, 1986, Starch, 38, 95-101
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A strategy to extend reactive distillation column performance under catalyst deactivation
Rui M. Filipe,a,b Henrique A. Matos,b,c Augusto Q. Novaisd
a Área Departamental de Engenharia Química, Instituto Superior de Engenharia de Lisboa, R. Conselheiro Emídio Navarro, 1, 1959-007 Lisboa, Portugal
b Centro de Processos Químicos, Av. Rovisco Pais, 1049-001 Lisboa, Portugal
c Departamento de Engenharia Química e Biológica, Instituto Superior Técnico, Av. Rovisco Pais, 1049-001 Lisboa, Portugal
d Unidade de Modelação e Optimização de Sistemas e Energia, Laboratório Nacional de Energia e Geologia, Est. do Paço do Lumiar, 1649-038 Lisboa, Portugal
Abstract
This work addresses the effects of catalyst deactivation and investigates methods to reduce their impact on reactive distillation column performance. The use of variable feed quality and reboil ratio is investigated using a rigorous dynamic model developed in gPROMS and applied to an illustrative example, the olefin metathesis system, wherein 2-pentene reacts to form 2-butene and 3-hexene. Three designs and different strategies of column energy supply to tackle catalyst deactivation are investigated and the results compared.
Keywords: reactive distillation, catalyst deactivation, feed quality, modeling, simulation.
1. Introduction
Reactive distillation (RD) is a successful case of process intensification. It combines reaction and separation in the same physical vessel, with economic and environmental gains (Taylor and Krishna, 2000), leading to systems with significantly greener engineering attributes (Malone et al., 2003). In previous work, the authors developed a framework combining feasible regions and optimization techniques for the design and multi-objective optimization of complex reactive distillation columns (RDC) (Filipe et al., 2008). This led to the consideration of RDC with distributed feeds, involving the combination of superheated and subcooled feeds that provide a source or a sink of heat at specified trays of the column, which favors reaction while reducing the total reactive holdup requirements. It was also found that higher conversions could be obtained with the same reactive holdup by using these feed qualities outside the traditional range, which led to the consideration of using this technique to overcome catalyst deactivation during column operation. Catalyst deactivation represents both an operational and a design problem. The reaction conversion achieved at each tray is reduced, which may limit column performance and product specifications. However, if catalyst deactivation is addressed at the design stage, an early assessment is possible and an operational strategy set in place to deal with the catalyst life-cycle. Little attention has been paid to catalyst deactivation in
RDC by the research community. Wang et al. (2003) address the control of RDC when the production rate changes or the catalyst deactivates and propose a control scheme able to maintain high purity and high conversion under such conditions. This work addresses the effects of catalyst deactivation and investigates methods to reduce their impact on RDC performance. In previous work (Filipe et al., 2009) the use of variable feed quality and reboil ratio was investigated, and their positive effect in dealing with catalyst deactivation assessed. This work extends the previous analysis by adding two new designs and different strategies to tackle catalyst deactivation. A rigorous dynamic model, developed in gPROMS and applied to an illustrative example, the olefin metathesis system, wherein 2-pentene reacts to form 2-butene and 3-hexene, is used to investigate how feed quality and reboil ratio changes can maintain product purity while the catalyst deactivates. A comparison of the results is also provided.
2. Dynamic model
The rigorous dynamic model, developed using gPROMS, was built modularly and allows for different numbers of trays and feeds, as well as different feed qualities. Mass and energy balances are used at each element of the column. The pressure drop over the column is considered and calculated from the vapor flow speed and liquid height at each tray. Deviations from phase equilibrium can be accounted for through the built-in Murphree stage efficiency equation, although they were neglected in this work. Physical properties are estimated using the included package IPPFO for ideal systems. The reaction is considered to occur only in the liquid phase at specified trays of the column, and the reactive holdup, rather than the catalyst amount, is specified. Three different designs taken from the Pareto front built using the previously reported design and optimization framework (Filipe et al., 2008) are used: case A represents the typical solution with one feed and low reactive holdup, while cases B and C represent designs with the same number of trays but with different reactive holdup and feed configurations (Table 1). The reactive holdup is equally distributed between the reactive trays. Catalyst deactivation is simulated through the inclusion of a negative exponential decay factor in the reaction rate constant, as sketched below. Although different decay laws could be used, the results shown here can be expected to be representative of a typical system behavior.
Table 1. Design specifications
Case                              A                  B                  C
Number of stages | Feed trays     14 | 8             23 | 8,17          23 | 9,14
Reboil | Reflux ratio             3.2 | 6.02         1.85 | 4.14        1.23 | 2.48
Feed rate (mol/s)                 5.56               2.86, 2.70         2.18, 3.38
Feed temperatures (K)             401                298, 560           298, 416
Condenser | Reboiler duty (kW)    -460.68 | 256.58   -337.58 | 148.38   -229.32 | 99.60
Purity (mol %)                    97.77              96.82              96.24
Reactive trays                    6-10               7-11               9-19
Total reactive holdup (kmol)      24.7               22.55              60.28
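A minimal sketch of this decay law follows; the decay constant shown reproduces the scenario used later in the paper, where the activity falls from 100% to 50% in about 200 hours.

```python
import math

def rate_constant(k0, decay, t):
    """Reaction rate constant under exponential catalyst deactivation:
    k(t) = k0 * exp(-decay * t); the catalyst activity is a(t) = exp(-decay * t)."""
    return k0 * math.exp(-decay * t)

# Activity 100% -> 50% in ~200 h corresponds to decay = ln(2)/200 per hour:
decay = math.log(2.0) / 200.0
print(rate_constant(1.0, decay, 200.0))   # ~0.5, i.e. 50% residual activity
```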
Figure 1 depicts the variation of product purity with catalyst activity for the three designs considered. The lower decrease rate observed for case C is explained by its larger reactive holdup.
3. Manipulation of the feed quality and reboil ratio
The use of feed qualities outside the traditional range (q < 0 and q > 1) has proved to be instrumental in reducing the total reactive holdup (Hoffmaster and Hauan, 2006; Filipe et al., 2008). It is therefore expected that the feed quality can also be effective in dealing with catalyst deactivation. The feed quality is related to the energy content of the feed and dictates how the feed is distributed between the liquid and vapor streams. Figure 2 depicts the variation of the feed temperature with the feed quality for case B. The explicit specification of the feed quality is not possible in the model developed, as it would involve the specification of the feed enthalpy, a dependent variable. Alternatively, the feed specification is made indirectly by assigning values to temperature, pressure and composition, which for a single-component stream limits the conditions to below (q > 1) and above (q < 0) the boiling point. In order to obtain a mixed feed with a quality between 0 < q < 1, different combinations of two feeds with temperatures above and below the boiling point can be used, thus achieving different energy contributions to the total flow. Streams with constant pressure, fixed composition (pure 2-pentene) and variable temperature were used to assess the effect of the feed quality outside that range. Figure 3 depicts the variation of product purity with the feed temperature. Note that in cases B and C, only the vaporized "hot" feed was subject to changes in temperature. Case A displays low sensitivity to the feed temperature. In this design the small reactive holdup, associated with the also small size of the column, contributes to this limitation, since it restricts the scope to handle any further decreases in the already low catalyst availability. A slightly more flexible behavior is found in larger columns, such as in cases B and C, where a more marked influence of the feed temperature is found. Comparing cases B and C, which have the same number of trays, it can be noted that case C, while exhibiting the lowest reboil ratio of 1.23, has a reactive holdup almost three times higher than B, displaying greater flexibility to deal with catalyst deactivation.
Figure 1. Variation of product purity with catalyst activity.
Figure 2. Case B: variation of the feed temperature with feed quality.
Figure 3. Variation of purity with feed temperature.
Figure 4. Cases B and C: activity decay and feed temperature evolution.
For cases B and C, where the feed temperature was found to be more effective, a new control loop was devised to manipulate the feed temperature: the purity is measured and the feed temperature is manipulated to maintain product purity (a minimal sketch of such a loop is given after Table 2). To this effect, a new scenario where the catalyst activity decays from 100% to 50% in about 200 hours is devised, as seen in Figure 4. Within this range, product purity is reduced by 3.4% and 2.6% for cases B and C, respectively, if no action is taken (not shown). The larger increase in temperature observed in Figure 4 for case C (80 K as against 55 K for case B) is explained by the lower feed temperature and its inherently lower heat capacity, as confirmed by the accumulated variation in the feed energy of 7.5% and 7.0% for cases B and C, respectively (Table 2).
Table 2. Cases B and C: average product purity and cumulative energy consumption when the catalyst deactivates from 100% to 50%.
Case B
  Strategy           Average purity (mol %)   Reboiler (GJ)    Feed (GJ)        Total (GJ)
  No control         95.75                    108.4            145.3            253.6
  Feed temperature   96.79 (+1.1%)            108.4 (0%)       156.2 (+7.5%)    264.6 (+4.3%)
  Reboil ratio       96.79 (+1.1%)            115.5 (+6.6%)    145.3 (0%)       260.8 (+2.8%)
Case C
  Strategy           Average purity (mol %)   Reboiler (GJ)    Feed (GJ)        Total (GJ)
  No control         94.97                    72.5             67.0             139.5
  Feed temperature   96.07 (+1.2%)            72.5 (0%)        71.7 (+7.0%)     144.2 (+3.3%)
  Reboil ratio       96.07 (+1.2%)            76.8 (+5.9%)     67.0 (0%)        143.8 (+3.0%)
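A minimal sketch of such a purity-to-feed-temperature loop; the paper does not report the control law or its tuning, so a purely proportional update with assumed gain and rate limit is shown for illustration only.

```python
def update_feed_temperature(T_feed, purity_meas, purity_sp,
                            Kc=50.0, dT_max=5.0):
    """One step of a proportional purity controller: a drop in product purity
    raises the feed temperature, carrying more energy into the column.
    Kc (K per mol%) and dT_max (K per step) are assumed tuning values."""
    error = purity_sp - purity_meas             # purity deviation (mol %)
    dT = max(-dT_max, min(dT_max, Kc * error))  # rate-limited temperature move
    return T_feed + dT
```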
A more conventional approach is the manipulation of the reboil ratio, which provides an indirect way to vary the energy supply to the column, as it is directly related to the energy supplied to the reboiler. Using the same scenario for catalyst deactivation, a control loop is used to manipulate the reboil ratio while controlling product purity. This control loop is effective in maintaining the specification, with the final value of the reboiler duty being increased by 19% and 13% for cases B and C, respectively. Again, case C demonstrates its higher flexibility, explained by the larger reactive holdup and total number of stages.
4. Comparison of alternatives
Table 2 also compares the cumulative energy consumption and the average purity obtained in each of the three scenarios previously analyzed for catalyst deactivation in cases B and C: no control, manipulation of the feed temperature and manipulation of the reboil ratio, when the catalyst deactivates from 100% to 50%. For each strategy the relative changes of the variables with respect to their reference values ("no control") are shown in parentheses. When no corrective action is taken, the average purity decays by 1.1% for case B and 1.2% for case C, while with the proposed control strategies the purity remains at its initial value. Analyzing the variations in the energy supplied at the reboiler and in the feed stream, case B appears more demanding, with higher percentage energy increases observed for both strategies. However, when considering the total energy consumption, both control strategies lead to similar values, with the one based on the feed temperature being less satisfactory in case B, certainly related to the higher temperature used in the feed.
5. Final remarks
This paper investigates the effect of catalyst deactivation in RDC. A rigorous dynamic model developed in gPROMS is used to simulate the reduction of catalyst activity and assess its effects on column performance. Besides identifying column behavior under situations of reduced reaction conversion, strategies to overcome catalyst deactivation are also addressed, namely through manipulation of the feed temperature and the reboil ratio. This procedure extends the operating time of the column without having to interrupt production and replace the catalyst load. The effectiveness of these actions is largely dependent on column design, but satisfactory results were obtained with the proposed strategies in situations where the catalyst activity decreases down to 50%, at the expense of increased energy consumption. The results clearly show that the manipulation of the feed quality can be used successfully, although at the expense of a higher increase in energy consumption when compared to the manipulation of the reboil ratio. In practice, the adoption of these strategies should be preceded by a proper economic evaluation accounting for the extra energy costs incurred and the savings associated with the extended life-cycle of the catalyst and the reduced number of column shut-down and start-up operations. While specific to the actual operating environment, this matter is being given further consideration.
References
R.M. Filipe, S. Turnberg, S. Hauan, H.A. Matos, and A.Q. Novais, 2008, Multiobjective Design of Reactive Distillation with Feasible Regions, Industrial & Engineering Chemistry Research, 47, 7284-7293.
R.M. Filipe, H.A. Matos, and A.Q. Novais, 2009, Catalyst deactivation in reactive distillation, Computer-Aided Chemical Engineering, 27, 831-836.
W.R. Hoffmaster and S. Hauan, 2006, Using feasible regions to design and optimize reactive distillation columns with ideal VLE, AIChE Journal, 52, 1744-1753.
M.F. Malone, R.S. Huss, and M.F. Doherty, 2003, Green chemical engineering aspects of reactive distillation, Environmental Science & Technology, 37, 5325-5329.
R. Taylor and R. Krishna, 2000, Modelling Reactive Distillation, Chemical Engineering Science, 55, 5183-5229.
S.J. Wang, D.S.H. Wong, and E.K. Lee, 2003, Control of a reactive distillation column in the kinetic regime for the synthesis of n-butyl acetate, Industrial & Engineering Chemistry Research, 42, 5182-5194.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Separation Circuits Analysis and Design Using Sensitivity Analysis
Freddy Lucay,a,b Mario E. Mellado,b Luis A. Cisternas,a,b Edelmira D. Gálvezb,c
a Departamento de Ingeniería Química, Universidad de Antofagasta, Chile
b Centro de Investigación Científico Tecnológico para la Minería, CICITEM, Chile
c Departamento de Ingeniería Metalúrgica, Universidad Católica del Norte, Chile
Abstract
Sometimes the recovery and/or concentration of valuable components cannot be accomplished in one operational stage, which is why separation circuits are used. Usually, the component to be separated is distributed with different concentrations over different particle sizes, showing different levels of recovery by size and concentration. At other times, more than one valuable component is to be removed selectively, taking advantage of differences in the components' floatability at different values of pH and pulp potential. In the literature, several methods for the design of these circuits have been presented. These methodologies can be classified into those that use heuristics and those that use mathematical optimization techniques. However, neither of these options is actually used in industry: the former is too simple to incorporate the complexities of the problem, and the latter requires more specialized training of the designer. In this work, we use sensitivity analysis to analyze and design separation circuits. We study the effect of each stage on the overall circuit, identifying relationships between the recovery of each stage and the global recovery of the circuit. Based on these results, we propose a novel methodology to analyze and design separation circuits. This new method can be regarded as hybrid, since it uses a mathematical analysis coupled with the experience of the designer. An example is given for flotation separation in copper mining.
Keywords: Concentration circuits, sensitivity analysis.
1. Introduction
Mineral concentration is a method to separate valuable minerals from milled mineral mixtures based on differences in the properties of the particles. Mineral concentration technologies include gravity concentration, magnetic concentration and flotation. These technologies are also used in the separation of plastic waste (Alter, 2005; Fraunholcz, 2004). Usually the process operating conditions are fixed to control the balance between a high recovery of the desired metal and a high grade of the metal in the product outflow (Mendez et al., 2009). Past experience has shown that a single separation step is inefficient. The inclusion of a greater number of complementary and supportive steps is required, which must be supplemented by auxiliary operations. Taking into account the large volume of material to be treated and its associated costs, the choices related to the configuration of the separation system are critical. Assuming the existence of prior size-reduction stages for the mineral mix (crushing and milling), the optimum concentration circuit problem can be summarized as the conversion of a quantity of valuable mineral from a mining operation into a concentrate product with maximum utility and minimum impact on the environment.
One of the first approaches a designer has for solving a synthesis problem (in any process) is the trial-and-error method. If the trial-and-error method is accepted, there are many ways to arrange a concentration circuit, although a number of them can be incorrect, ineffective or highly expensive, which becomes apparent when feedback from an existing process is obtained. That is, one may either find circuits which are different from the existing ones but produce similar metallurgical and/or economic results, so that optimization is not truly achieved, or add circuits which remediate the defects of the existing circuits but introduce their own operational conflicts. In fact, it is common to see concentration circuits change over time, solving some problems but causing new ones. The aim of this work is to show how a sensitivity analysis can be used for analyzing, designing and retrofitting concentration circuits.
2. Mathematical Framework
To describe the main elements of our procedure, let us consider the example in Figure 1. It is clear that the same procedure can be applied to other concentration circuits. The circuit of Figure 1 includes three processing stages (R, C, S). At each stage, the feed of species j, fj, is processed, producing a concentrate Tij·fj and a tail (1−Tij)·fj. Here, Tij is the transfer function of species j in stage i, and it can be determined experimentally or through models based on the separation technology used in stage i. These transfer functions can be adjusted by changing aspects of the design (e.g., the number of cells in a flotation bank) or the operational conditions (e.g., pH, particle size and pulp potential).
Figure 1. Concentration circuit.
By mass balance, the global recovery of species j, Rj, in the concentration circuit can be determined as a function of the transfer functions:

Rj(TjR, TjS, TjC) = TjR·TjC / (1 − TjR + TjR·TjC − TjS + TjR·TjS).    (1)

Setting TjS and TjC, we seek the conditions such that TjR ≤ Rj, TjR ≥ Rj, and TjR = Rj, i.e., the regions where the overall recovery is higher or lower than the recovery of the rougher stage according to the values of TjS and TjC, as shown in Figure 2. In a concentration process, the different size and composition fractions are grouped into classes j which have similar transfer functions. It is clear that the species or classes to be concentrated must belong to the region TjR ≤ Rj, and the species to be removed must belong to the region TjR ≥ Rj.
Figure 2. Regions where the overall recovery may be higher or lower than the recovery of the rougher stage.
Moreover, the sensitivity functions of the above circuit are given by the following equations:

∂Rj/∂TjR = TjC·(1 − TjS) / (TjR·TjS − TjS + TjR·TjC − TjR + 1)²    (2)

∂Rj/∂TjS = TjC·TjR·(1 − TjR) / (TjR·TjS − TjS + TjR·TjC − TjR + 1)²    (3)

∂Rj/∂TjC = TjR·(1 − TjR − TjS + TjR·TjS) / (TjR·TjS − TjS + TjR·TjC − TjR + 1)²    (4)
The above expressions show the effect of the recovery of each stage on the overall recovery of a species or class j. It is thus possible to identify which stage is important for controlling the behavior of a particular species in the process.
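Equations (1)-(4) are easy to evaluate numerically. The following sketch (function names are ours) reproduces the overall recoveries quoted in the application example below.

```python
def global_recovery(TR, TS, TC):
    """Global recovery R_j of the rougher-scavenger-cleaner circuit, Eq. (1)."""
    D = 1.0 - TR + TR * TC - TS + TR * TS
    return TR * TC / D

def sensitivities(TR, TS, TC):
    """Partial derivatives of R_j with respect to TR, TS and TC, Eqs. (2)-(4)."""
    D2 = (TR * TS - TS + TR * TC - TR + 1.0) ** 2
    dR_dTR = TC * (1.0 - TS) / D2
    dR_dTS = TC * TR * (1.0 - TR) / D2
    dR_dTC = TR * (1.0 - TR - TS + TR * TS) / D2
    return dR_dTR, dR_dTS, dR_dTC

print(global_recovery(0.75, 0.86, 0.73))   # ~0.94  (species 1)
print(global_recovery(0.25, 0.40, 0.30))   # ~0.143 (species 2)
print(sensitivities(0.25, 0.40, 0.30))     # stage-wise sensitivities, gangue
```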
3. Application example
Consider that the circuit discussed in Section 2 corresponds to a flotation circuit in which a material composed of gangue (species 2) and a valuable species (species 1) is processed. The operating conditions of this circuit are such that the transfer functions for species 1 are T1R = 0.75, T1S = 0.86, T1C = 0.73, and for species 2 are T2R = 0.25, T2S = 0.40, T2C = 0.30. Each stage corresponds to a flotation bank with 5, 8 and 7 cells in stages R, S and C, respectively. The transfer function of each stage is given by (Cisternas et al., 2006)

Tji = 1 − 1/(1 + kji·τi)^Ni,    (5)
where Ni is the number of cells, kji is the kinetic constant of species j, and τi is the retention time in one cell of bank i. Under the conditions of Figure 2, we have that R1(T1R, 0.86, 0.73) ≥ T1R and R2(T2R, 0.40, 0.30) ≤ T2R. Also, as T1R = 0.75 for species 1 and T2R = 0.25 for species 2, we have the following overall recoveries: R1(0.75, 0.86, 0.73) = 0.94 and R2(0.25, 0.40, 0.30) = 0.143 (i.e., 94% for species 1 and 14.3% for species 2). A sensitivity analysis can help us to improve the operational conditions of the stages involved in the process and, therefore, reduce the recovery of gangue without affecting too much the recovery of the valuable material. The recovery and sensitivity of the above circuit are given by equations 1 to 4. Then, fixing the transfer functions TjS and TjC in eq. 2, the behavior of the global recovery sensitivity with respect to the transfer function TjR can be studied, as shown in Figure 3. Figure 3a shows ∂Rj/∂TjR versus TjR for values of (TjS, TjC) close to those of species 2. It can be seen that the sensitivity increases as the value of TjR increases. The opposite behavior is shown in Figure 3b, which corresponds to values of (TjS, TjC) near those of species 1. This means that the sensitivity behaves inversely for the valuable species and the gangue.
Figure 3. Sensitivity analysis for the example: a) species 2; b) species 1.
Figure 4a shows ∂Rj/∂Tji versus Tji for values of (TjR, TjS, TjC) close to those of species 2. We can observe that for species 2 the highest sensitivity of the global recovery is given at (0.25, 0.4, T2C) and (T2R, 0.40, 0.3), which indicates that it is most sensitive to the transfer functions of stage C (T2C) and stage R (T2R). Looking at the graphs in Figure 4b, we can observe that for species 1 the highest sensitivity of the global recovery is given at (0.75, T1S, 0.73), which indicates that it is most sensitive to the transfer function of stage S (T1S). The same analysis can be performed for the other derivatives in equations 3 and 4. Performing the sensitivity analysis, it is possible to reach the following conclusions: 1) for species 1, the highest sensitivities of the global recovery for stages R, S and C are given at (0.75, T1S, 0.73), (0.75, T1S, 0.73)-(0.75, 0.86, T1C) and (0.75, T1S, 0.73)-(0.75, 0.86, T1C), respectively; 2) for species 2, the highest sensitivities of the global recovery for stages R, S and C are given at (0.25, 0.4, T2C)-(T2R, 0.4, 0.3), (0.25, 0.4, T2C)-(T2R, 0.4, 0.3) and (T2R, 0.4, 0.3), respectively.
Figure 4. Sensitivity analysis for the example as a function of the transfer functions.
With this information about the behavior of the global recovery and its sensitivity with respect to the transfer functions, we can change the values of the transfer functions of species 1 and 2 in the stages where they are most influential. Then, by reverse simulation with equation 5, it is possible to determine new designs (Ni) and/or operating conditions (kji and/or τi) that achieve a better system performance. Thus, by changing the number of cells to 11 and 9 for stages S and C, respectively, it is possible to obtain the following results: R1(0.75, 0.80, 0.68) = 0.91 and R2(0.25, 0.40, 0.20) = 0.077.
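A minimal sketch of this reverse simulation with equation (5); the kinetic constant and retention time below are assumed values, since the paper does not report them.

```python
import math

def transfer_function(k, tau, N):
    """Bank transfer function of eq. (5): T = 1 - 1/(1 + k*tau)**N."""
    return 1.0 - (1.0 + k * tau) ** (-N)

def cells_for_target(T_target, k, tau):
    """Reverse simulation of eq. (5): the smallest integer number of cells N
    that reaches at least the target transfer function for given k and tau."""
    return math.ceil(-math.log(1.0 - T_target) / math.log(1.0 + k * tau))

# Example with assumed kinetics (k*tau = 0.25 per cell):
print(transfer_function(k=0.05, tau=5.0, N=7))   # T of a 7-cell bank, ~0.79
print(cells_for_target(0.85, k=0.05, tau=5.0))   # N needed for T >= 0.85 -> 9
```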
4. Conclusions
In this paper, a sensitivity analysis was used as a tool to analyze and design separation circuits. The effect of each stage on the overall circuit was studied, identifying relationships between the recovery of each stage (the transfer functions) and the global recovery of the circuit. Based on these results, we propose a methodology to analyze and design separation circuits. This method can be regarded as hybrid, since it uses mathematical analysis coupled with the experience of the designer.
Acknowledgments
The authors wish to thank CONICYT for its support through Fondecyt Project 1090592.
References
H. Alter, 2005, The recovery of plastics from waste with reference to froth flotation, Resources, Conservation and Recycling, 43, 119-132.
L. Cisternas, D. Méndez, E. Gálvez, R. Jorquera, 2006, A MILP model for design of flotation circuits with bank/column and regrind/no regrind selection, Int. J. Miner. Process., 79, 253-263.
N. Fraunholcz, 2004, Separation of waste plastics by froth flotation - a review, part I, Minerals Engineering, 17, 261-268.
D. Mendez, E. Gálvez, L. Cisternas, 2009, State of the art in the conceptual design of flotation circuits, International Journal of Mineral Processing, 90(1-4), 1-15.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Feasibility of reactive pressure swing batch distillation in a double column configuration
Gabor Modla
Budapest University of Technology and Economics, Department of Building Services and Process Engineering, H-1521 Budapest, Muegyetem rkp. 3-5, [email protected]
Abstract
Ethyl-acetate is generally produced by the esterification reaction of ethanol with acetic acid. Since the reaction is equilibrium-limited, the use of reactive distillation is an attractive option. A new double column system applying reactive pressure swing batch distillation is suggested for ethyl-acetate production. The system is investigated by a feasibility study based on the analysis of reactive and non-reactive residue curve maps and by rigorous simulation using a professional dynamic simulator.
Keywords: Ethyl-Acetate Production, Reactive Residue Curve Map, Pressure Swing Batch Distillation, Reactive Batch Distillation, Dynamic Simulation
1. Introduction
Ethyl-acetate is one of the most widely used commodity chemicals. It is generally produced by the esterification reaction of ethanol with acetic acid. Since the reaction is equilibrium-limited, the use of reactive distillation, which combines the reaction and separation functions in one column, is an attractive option. The operation can be performed in continuous (e.g. Lai et al. 2007, Kloker et al. 2004), semi-continuous (e.g. Adams and Seider 2009) or batch mode (e.g. Patel et al. 2007). Pressure swing distillation is an efficient method to separate pressure-sensitive azeotropic mixtures. In reactive pressure swing distillation the benefits of both reactive and pressure swing distillation are combined. Reactive pressure swing distillation in continuous mode was investigated by Bonet et al. (2007). Pressure swing batch distillation was analyzed by a feasibility study by Modla and Lang (2008), who suggested two new column configurations operated in open mode (double column batch rectifier and double column batch stripper) for the separation of pressure-sensitive azeotropes. Modla (2010) suggested two new column configurations for binary pressure swing batch distillation operated in closed mode. The goals of this paper are:
- to present an alternative batch process for ethyl-acetate production,
- to investigate by a feasibility study the ethyl-acetate production process by reactive pressure swing batch distillation in a double column system,
- to verify the feasible process by rigorous simulation.
2. Feasibility study
The aim of this feasibility study is to find a pressure swing batch distillation method for the removal of at least one product component (ethyl-acetate or water) from the reactants during the course of the reaction, in order to shift the chemical equilibrium forward, resulting in a higher conversion. The feasibility study is based on the analysis of residue curve maps (2D maps for the ternary systems, a 3D map for the quaternary system) and reactive residue curve maps at two pressures. When making feasibility studies, we suppose that maximal (perfect) separation can be achieved. This involves the following assumptions: 1) large number of stages, 2) large reflux/reboil ratio, 3) negligible liquid plate hold-up, 4) negligible vapour hold-up. Furthermore, for the calculation of the reactive residue curve maps we suppose that: 1) the reaction is instantaneous in the reaction zone, 2) the heat of reaction is negligible.
2.1. Reaction Kinetics
Ethyl-acetate (EA) is generally produced by the liquid-phase esterification reaction of ethanol (E) with acetic acid (AA) in the presence of an acid catalyst (e.g. sulphuric acid, or a sulfonic acid ion exchange resin):
Acetic-acid (AA) + Ethanol (E) ⇌ Ethyl-acetate (EA) + Water (W)
This esterification is a typical example of reversible and equilibrium-limited reactions, which benefit from a catalytic distillation system, as the continuous removal of products can shift the chemical equilibrium forward, resulting in a higher conversion.
2.2. VLE conditions
In this work non-reactive systems are investigated by residue curve maps (2D maps for the ternary systems, a 3D map for the quaternary system) and reactive systems by reactive residue curve maps, respectively. Two types of graph are necessary, since reactive distillation is normally a sequence that involves separation with and without reaction. The non-reactive system has four pure components, which form four ternary systems. The residue curve maps of these ternary systems are presented in Figure 1 at different pressures. The classification of the residue curve maps by the extended M&N method (Modla et al. 2010) can be found in Table 1. Two minimum-boiling homogeneous azeotropes (W-E and E-EA), one minimum-boiling heterogeneous azeotrope (W-EA), and one minimum-boiling homogeneous ternary azeotrope (E-EA-W) can be found in this system. All these points are marked in Fig. 1. All binary azeotropes are slightly pressure-sensitive and the ternary azeotrope is considerably pressure-sensitive. In the quaternary system (Fig. 2) the plot shows a few 3D residue curves to indicate the existence of only one distillation region at both pressures. The residue curves start from the ternary (E-EA-W) azeotrope node and end in the acetic-acid (AA) node. The reactive residue curve maps of this system are presented in Fig. 3, where a few lines are drawn.
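Residue curves are obtained by integrating dxi/dξ = xi − yi, where yi is the vapour composition in equilibrium with the liquid xi. The sketch below integrates this equation under a constant-relative-volatility VLE, which is only an illustrative simplification: it cannot reproduce the azeotropes of the AA-E-EA-W system, for which the rigorous, pressure-dependent VLE models used in this paper are required.

```python
import numpy as np

def residue_curve(x0, alpha, steps=500, dxi=0.01):
    """Forward-Euler integration of the residue curve equation
    dx_i/dxi = x_i - y_i, with y_i = alpha_i*x_i / sum(alpha_j*x_j)
    (constant relative volatilities; an illustrative simplification)."""
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(steps):
        y = alpha * x / np.dot(alpha, x)   # vapour in equilibrium with x
        x = x + dxi * (x - y)              # still composition moves toward
        x = np.clip(x, 0.0, 1.0)           # the least volatile component
        x /= x.sum()
        path.append(x.copy())
    return np.array(path)

# Example: ternary mixture with assumed relative volatilities
curve = residue_curve([0.3, 0.4, 0.3], alpha=np.array([3.0, 2.0, 1.0]))
```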
Figure 1. Ternary residue curve maps in 2D: a) 1.01 bar; b) 10 bar.
Figure 2. Quaternary residue curve map in 3D: a) 1.01 bar; b) 10 bar.
Figure 3. Reactive residue curve maps at different pressures: a) 1.01 bar; b) 10 bar.
Table 1. Classification of the ternary systems
Ternary system   Classification
AA-E-EA          1P-0-0
AA-W-E           1P-0-0
E-W-EA           2P-2P-2P-mP
EA-W-AA          1P-0-0
For presenting the reactive residue curves, transformed composition variables are used. There is a quaternary reactive azeotrope (R-AZ), which is pressure-sensitive, and there are two non-reactive azeotropes (E-EA and E-W); the other non-reactive azeotropes are eliminated by the reaction.
2.3. Feasibility results
The initial charge composition is xAA = 0.3; xE = 0.4; xEA = 0.0; xW = 0.3 (denoted as CH). From a practical point of view, the initial charge contains water because dry (pure) acetic acid and ethanol are expensive. Based on the analysis of the different maps, we suggest the following separation method. The reactive distillation is carried out in a batch rectifier column operated at 1.01 bar. At the top of the reactive zone we can reach the reactive azeotrope node (R-AZ: 0.010; 0.444; 0.510; 0.036, Fig. 4a). If there is a non-reactive zone above the reactive one, we can reach the ternary azeotrope (E-EA-W: 0.126; 0.584; 0.290) as distillate.
Figure 4. Column profiles: a) feasible process; b) infeasible process.
Figure 5. Sketch of the feasible column configuration (first column at P = 1.01 bar with rectifying and reactive zones, ND = 40; second column at P = 10 bar, NE = 40, RE = 10, xspec,EA = 0.99).
The distillate is fed into the second column (Fig. 5), which is operated at 10 bar. In the second column, with a stripping column profile, we can reach the EA node; that is, the bottom product of the second column is EA (Fig. 4a). The process is feasible, because at least one product component (EA) can be withdrawn from the reactants during the course of the reaction in order to shift the chemical equilibrium forward, resulting in a higher conversion. During the process the still composition (sp, Fig. 4a) tends toward the W node. If there is no rectifying zone above the reactive zone in the first column, then the process is infeasible, because the bottom product of the second column is AA (Fig. 4b).
3. Rigorous simulation
The aim of this calculation is to verify the results of the feasibility study and to present the process. The charge composition is xAA = 0.3; xE = 0.4; xEA = 0.0; xW = 0.3, and the total quantity of the charge is 62.41 kmol. The composition of the reaction tank changes considerably until 255 min (Fig. 6a), when the conversion of acetic acid (AA) has reached 86 mol% (Fig. 6b). After this point the process is very slow.
Figure 6a. Evolution of the composition in the reactive tank.
Figure 6b. Evolution of the conversion of acetic-acid (AA) to ethyl-acetate (EA).
Based on these results, ethyl-acetate production by reactive pressure swing batch distillation in a double column configuration is feasible and is an attractive process option.
Acknowledgement This work was financially supported by the Hungarian Scientific Research Fund (OTKA) (No: K-82070) and by the Janos Bolyai Research Scholarship of the HAS.
References
Adams T. A. II, Warren D. Seider, (2009), Semi-continuous reactive extraction and reactive distillation, Chem. Eng. Res. & Des., 87, pp. 245-262.
Bonet J., R. Thery, X-M. Meyer, M. Meyer, J-M. Reneaume, M-I. Galan, J. Costa, (2007), Infinite/infinite analysis as a tool for an early oriented synthesis of a reactive pressure swing distillation, Comp. Chem. Eng., 31, pp. 487-495.
Kloker M., Kenig E. Y., Gorak A., Markusse A. P., Kwant G., Moritz P., (2004), Investigation of different column configurations for the ethyl acetate synthesis via reactive distillation, Chem. Eng. and Proc., 43, pp. 791-801.
Lai I.-K., Hung S.-B., Hung W.-J., Yu C.-C., Lee M.-J., Huang H.-P., (2007), Design and control of reactive distillation for ethyl and isopropyl acetates production with azeotropic feeds, Chem. Eng. Sci., 62, (3), pp. 878-898.
Modla G. and Lang P., (2008), Feasibility of new pressure swing batch distillation methods, Chem. Eng. Sci., 63, (11), pp. 2856-2874.
Modla G., P. Lang, F. Denes, (2010), Feasibility of separation of ternary mixtures by pressure swing batch distillation, Chem. Eng. Sci., 65, (2), pp. 870-881.
Modla G., (2010), Pressure swing batch distillation by double column systems in closed mode, Comp. & Chem. Eng., pp. 1640-1654.
Patel R., Singh K., Pareek V., Tadé M.O., (2007), Dynamic simulation of reactive batch distillation column for ethyl acetate synthesis, Chem. Prod. and Proc. Mod., 2, 2.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Lipid Processing Technology: Building a Multilevel Modeling Network
Carlos A. Diaz-Tovar,a Azizul A. Mustaffa,a Amol Hukkerikar,a Alberto Quaglia,a Gürkan Sin,a Georgios Kontogeorgis,b Bent Sarup,c Rafiqul Gania
a CAPEC, Department of Chemical and Biochemical Engineering, Technical University of Denmark, Soltofts Plads Building 229, Kgs. Lyngby DK-2800, Denmark.
b CERE, Department of Chemical and Biochemical Engineering, Technical University of Denmark, Soltofts Plads Building 229, Kgs. Lyngby DK-2800, Denmark.
c Vegetable Oil Technology Business Unit, Alfa Laval Copenhagen A/S, Maskinvej 5, Soborg DK-2860, Denmark
Abstract
The aim of this work is to present the development of a computer aided multilevel modeling network for the systematic design and analysis of processes employing lipid technologies. This is achieved by decomposing the problem into four levels of modeling: i) pure component property modeling and a lipid-database of experimental data collected from industry and of data generated from validated predictive property models, together with modeling tools for fast adoption-analysis of property prediction models; ii) modeling of the phase behavior of relevant lipid mixtures using the UNIFAC-CI model and development of a master parameter table; iii) development of a model library consisting of new and adopted process models of the unit operations involved in lipid processing technologies, validation of the developed models using operating data collected from existing process plants, and application of the validated models in the design and analysis of unit operations; iv) use of the information and models developed as building blocks in the development of methods and tools for computer-aided synthesis and design of process flowsheets (CAFD). The applicability of this methodology is highlighted at each level of modeling through the analysis of a lipid process with significant relevance in the edible oil and biodiesel industries, since it determines the quality of the final oil product: the physical refining of oils and fats.
Keywords: Lipid technology, multilevel modeling, property prediction models, process design, computer aided flow sheet design.
1. Introduction
Over the past few decades, the world's fats and edible oils production has been growing rapidly, far beyond the need for human nutrition (Diaz-Tovar et al., 2010). This overproduction, combined with growing consumer preferences for healthier food products and the interest in bio-fuels, has led the oleochemical industry to face major challenges in terms of the design and development of better products and more sustainable processes. However, although the oleochemical industry is mature and based on well-established processes, the complex systems that lipid compounds form, the lack of accurate predictive models for their physical properties and the lack of unit operation models for their processing have limited a wide application of computer-aided methods and tools for process synthesis, modeling and simulation within this industry. In consequence, the aim of this work is to present the development of a computer aided multilevel modeling network consisting of a collection of new and adopted models
(properties and processes), methods and tools for the systematic design and analysis of processes employing lipid technologies. This is achieved by decomposing the problem into four levels of modeling (see Figure 1): 1.) Pure component properties; 2.) Mixtures and phase behavior; 3.) Unit operations; and 4.) Process synthesis, design, and retrofit. The applicability of the proposed methodology is highlighted through a case study involving a physical refining (deodorization) process of oils and fats (Ceriani et al., 2010). This lipid process has significance in the edible oil and biodiesel industries since it determines the quality of the final oil (Maza et al., 1992). Hence, optimization of process parameters is critical for the production of any acceptable oil product.
Figure 1 Overview of the proposed multilevel modeling network
2. The Multilevel Modeling Network
2.1. Pure Component Modeling (First Level): The development of the first modeling level of the multilevel network framework for the analysis/design of any process involving lipid technology is achieved by: a) identifying the most significant and widely produced edible oils/fats, as well as their corresponding representative families of chemical species; b) describing the identified chemical species in terms of the property model (e.g. the Marrero and Gani (2001) method); c) creating a list of the physical-chemical properties needed for model-based design and analysis of edible oil and biodiesel processes; d) collecting the available experimental data from different sources for the identified lipid compounds and their corresponding properties; and, e) selecting and adopting the appropriate models to predict the necessary properties, to fill out the gaps in the lipid-database (CAPEC_Lipids_Database) and to make it suitable for applications with other computer-aided tools. Natural fats and oils are complex chemical mixtures composed of different families of chemicals. Fatty acids (C4-C24) esterified to glycerol (mono-, di-, and triglycerides) are the main constituents of these mixtures, while tocopherols, sterols, carotenes, and phospholipids are minor compounds that are considered high-value by-products of the refining processes. The CAPEC_Lipids_Database (Diaz-Tovar et al., 2010) contains a total of 235 compounds divided into three major families: glycerides
(65 triglycerides, 41 diglycerides, and 15 monoglycerides), fatty acids (29) and esters (29 methyl and 29 ethyl), and minor compounds (4 tocopherols, 4 tocotrienols, 2 phospholipids, 9 terpenes, 4 sterols, 2 sterol esters, and 2 sterol glycosides). For these compounds a total of 15 physical-chemical properties have been identified: 10 single-value (critical, basic, and heats of formation) and 5 temperature-dependent (vapor pressure, liquid heat capacity, liquid density, liquid viscosity, and surface tension). A total of 2560 experimental data points for 12 pure component properties have been collected and included in the database.
2.2. Mixtures and Phase Behavior Modeling (Second Level): Understanding the phase behavior is essential in edible oil/lipid processing, which involves separation processes such as the physical refining and deodorization of edible oils. The information obtained from the equilibrium calculations helps to design unit operations and to dimension the equipment. In the proposed modeling network, the UNIFAC-CI model (Mustaffa et al., 2010) is used to predict the activity coefficients of the multicomponent chemical systems under study. The advantage of the UNIFAC-CI model is that phase equilibria for any lipid system can be predicted, especially when the group interaction parameters (GIPs) for the reference UNIFAC model are not available.
2.3. Unit Operations Modeling (Third Level): The production of edible oils/fats (from crude oil production to the final product) involves a variety of processing steps and unit operations, such as: fluid handling, heat transfer, separation processes such as adsorption, two-phase separation (liquid-solid, liquid-liquid, and liquid-gas), crystallization, filtering, chemical reactions (interesterification, hydrogenation), and steam stripping under vacuum. In this level of the modeling network, a model library consisting of new and adopted models of the involved unit operations is developed. This modeling level is applied as a stepwise methodology. In the first step, the unit operations involved in the selected lipid process are listed and the availability of built-in models in commercial simulators is verified. In the second step, the required physical property data of the chemical species involved, equipment specifications, and operating data of existing process plants are retrieved. In the third step, a detailed computer aided modeling of selected unit operations is carried out based on the type of model (e.g. steady state/dynamic, meso scale/micro scale) and the intended goals (design/optimization/retrofit). In the fourth step, the operating data are reconciled with respect to data quality and measurement errors. The developed models are validated using reconciled operating data of the existing process plant, and the validated models are incorporated in the model library as new and/or adapted models for use in process simulation tools such as PRO/II®. Once validated, the developed model library is made available for use in CAFD studies involving lipids.
2.4. Process Synthesis and Design (Fourth Level): In this level, computer-aided synthesis and design of process flowsheets (CAFD) is carried out.
The methodology is divided into two main stages (Lutze et al., 2010), each composed of three steps (see Figure 1): i) the CAFD problem is defined and identified (steps 1-3) and solved sequentially using simplified models to screen out infeasible alternatives and reduce the search space; ii) rigorous models are used (steps 4-6) to perform the final selection of the best alternative.
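To make the Level 1 idea concrete, the sketch below evaluates a first-order group-contribution estimate in the spirit of the Marrero and Gani (2001) method, where a property function is a sum of group contributions. The group counts and contribution values are hypothetical placeholders, not the regressed Marrero-Gani parameters, and the functional form and constant should be treated as illustrative.

```python
import math

# Sketch of a first-order group-contribution (GC) estimate in the spirit of
# Marrero & Gani (2001): f(Tb) = sum_i N_i * C_i, with f(Tb) = exp(Tb/Tb0).
# All group counts and contributions below are HYPOTHETICAL placeholders.
TB0 = 222.543  # universal constant of the method (K); treat as illustrative

contributions = {"CH3": 0.89, "CH2": 0.72, "COOH": 3.10}   # hypothetical C_i
molecule = {"CH3": 1, "CH2": 2, "COOH": 1}                 # toy short-chain fatty acid

f_tb = sum(n * contributions[g] for g, n in molecule.items())
tb_est = TB0 * math.log(f_tb)   # invert exp(Tb/Tb0) = sum N_i C_i
print(f"Estimated normal boiling point: {tb_est:.1f} K")
```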
3. Application of the Multilevel Modeling Network
Level 1: For the deodorization case study, the composition of the crude oil corresponds to a typical crude palm oil (Ceriani et al., 2010). The compounds and model parameters used to predict property behavior were retrieved from the CAPEC_Lipids_Database (see section 2.1).
Level 2: The selected compounds are described according to their corresponding UNIFAC functional groups. The major constituents, the triglycerides, like most lipid compounds are described by a small set of functional groups: CH3, CH2, CH (UNIFAC main group 1), CH2COO (UNIFAC main group 11), and CH=CH (UNIFAC main group 2). Di- and mono-acylglycerides and free fatty acids are also described by these groups plus the groups OH and COOH (UNIFAC main groups 5 and 20), respectively. For the selected compounds, all GIPs for the reference UNIFAC model are available and therefore the parameters are fine-tuned using the UNIFAC-CI method to validate the applicability of this model. As this process occurs at high temperature and low pressure, the vapor phase is set as ideal and only the non-ideality of the liquid phase is considered. For each compound, activity coefficients were calculated at 250°C and 3.5 mmHg, commonly used operating conditions. The results showed that for the TAGs the activity coefficients are almost equal to unity. However, this is not the case for the FFAs, where the calculated activity coefficients are far from unity.
Level 3: The unit operations involved in the process are listed as given by Ceriani et al. (2010). The commercial simulator PRO/II® was selected to perform the simulation since it has the built-in models needed to model the process. A version of the database has been made available as a user-added database to the selected commercial simulator for the retrieval of the required physical property data. The operating data for each of the major unit operations are given in Table 1.

Table 1. Operating conditions in the deodorization process.
Property              Economizer   Final Heater   Stripper   High Temp Scrubber   Low Temp Scrubber
Temp. Feed (°C)       105          230            250        250                  180
Temperature (°C)      -            -              230-250    170-180              60-65
Pressure Top (mbar)   -            10             3.5        2.9                  2.4
ΔP (mbar)             1000         -              1.5        0.6                  0.25
NTS*                  -            -              5          2                    2
*Number of theoretical stages
Level 4: For the case study, the CAFD problem definition is given in Table 2. Simulations have been performed for the considered flowsheets. Detailed simulation results cannot be fully described in this manuscript due to lack of space; however, they can be provided upon request by contacting the corresponding author. Overall, the results showed, on the one hand, that the amount of stripping steam and the operating conditions in the deodorizer (temperature and pressure) are the main design variables that affect the quality of the final product (acidity, tocopherol retention, and neutral oil loss); on the other hand, the double scrubber system is a novel addition to the process that adds a higher commercial value to the by-product stream.
4. Conclusions and Future Work
In this work the development of a multilevel modeling network for the design/analysis of processes involved in the edible oil and biodiesel industries has been presented. A comprehensive database containing the most representative chemical species, predictive models for physical properties, and binary interaction parameters for phase behavior description (liquid-liquid and vapor-liquid) has been created (Levels 1 and 2). In the third level of the modeling network, the database was further improved by adding new and adopted models of typical unit operations present in the lipid industry, and an external version of it has been made available as a user-added database for the commercial simulator PRO/II®. In the last level, alternatives to the original configuration of the
process have been proposed and analyzed in terms of cost and energy efficiency. The applicability of the proposed network has been highlighted through the analysis of a lipid process involving the physical refining (deodorization) of palm oil. The results showed that, even though the process fulfils its aim, the process variables and the configuration of the unit operations can be modified to improve the oil yield, the retention of tocopherol, and the acidity removal. Current and future work is heading towards a better understanding of the phenomena that take place in the stripping column (e.g. hydrolysis, degradation, etc.) and towards the design/analysis of the deodorization distillate treatment process by means of a double scrubber system, as this was identified as a critical unit operation of the process.

Table 2. CAFD problem definition for the deodorization process.
Step  Task/Action
1     Synthesis/design problem definition: retrofit of the deodorization process and selection of the Net Present Value (NPV) as the objective function.
2     Establishment of the appropriate link to Levels 1-3 of the multilevel modeling network:
      - Generate the superstructure to represent different flowsheet alternatives for the selected process.
      - List the logical constraints (e.g. food safety, EHS regulations, etc.).
      - Collect performance metrics of existing and competing oil deodorization processes for benchmarking purposes.
      - Reconcile and check the consistency of the data, and systematize it into an efficient database structure accessible from different project phases.
3     Generate the corresponding superstructure:
      - Collect the needed submodels for the multiscale model formulation.
      - Complement the models contained in the CAPEC_Lipids_Database (physical properties and unit operations) with all cost/value and sustainability models needed to calculate the objective function from the design variables.
      - Apply a systematic model development framework ensuring consistency among the different scales.
4     Generate feasible flowsheet alternatives:
      - Fix sets of binary variables in the superstructure.
      - Perform a fast screening to eliminate infeasible alternatives as well as redundant options.
5     Reduce the search space:
      - Screen out options that violate operational constraints (shortcut models are used).
      - Select the most promising alternative based on the value of the maximized objective function.
6     Define the optimal process design:
      - Perform a detailed/rigorous modeling and optimization of the previously selected alternative.
      - Calculate, from the model output, indicators used as benchmarks and as inputs for project and portfolio management decisions (e.g. financial and portfolio management indicators).
      - Perform sensitivity analysis and design of experiments to reduce the uncertainty by estimating the contribution of each parameter and model to the output uncertainty.
References
R. Ceriani, A.J. Meirelles, R. Gani, 2010, Simulation of thin-film deodorizers in palm oil refining, J. Food Process Eng., 33, 208-225.
C.A. Diaz-Tovar, R. Gani, B. Sarup, 2010, Lipid technology: Property prediction and process design/analysis in the edible oil and biodiesel industries, Fluid Phase Equilibria, doi:10.1016/j.fluid.2010.09.011.
P. Lutze, R. Gani, J.M. Woodley, 2010, Process intensification: A perspective on process synthesis, Chemical Engineering and Processing, 49, 547-558.
J. Marrero, R. Gani, 2001, Group-contribution based estimation of pure component properties, Fluid Phase Equilibria, 183, 183-208.
A. Maza, R.A. Ormsbee, L.R. Strecker, 1992, Effects of deodorization and steam-refining parameters on finished oil quality, JAOCS, 69, 10, 1003-1008.
A.A. Mustaffa, G.M. Kontogeorgis, R. Gani, 2010, Analysis and application of GCPlus models for property prediction of organic chemical systems, Fluid Phase Equilibria, doi:10.1016/j.fluid.2010.09.033.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Enhancement of Productivity of Distillate Fractions by Crude Oil Hydrotreatment: Development of Kinetic Model for the Hydrotreating Process
Aysar T. Jarullah, Iqbal M. Mujtaba*, and Alastair S. Wood
School of Engineering, Design and Technology, University of Bradford, Bradford BD7 1DP, UK. *E-mail: [email protected]
Abstract
Crude oil hydrotreatment enhances the productivity of distillate fractions through chemical reactions. A trickle bed reactor (TBR) is used in this work for hydrotreating (HDT) crude oil. In order to obtain a useful model of the reactor which can be confidently applied to design, operation and control, accurate estimation of the kinetic parameters of the relevant reactions is required. A kinetic model for those chemical reactions is proposed here. An optimization technique is used to obtain the best values of the kinetic parameters based on pilot plant experiments. The predicted hydrotreated product composition shows very good agreement with the experimental data over a wide range of operating conditions, with absolute average errors of less than 5%, and clearly shows the enhancement of the productivity of distillate fractions.
Keywords: Kinetic lumping model, Hydrocracking, Trickle-bed reactor, Oil upgrading.
1. Introduction
During the last decades, the worldwide demand for transportation fuels of a quality that satisfies environmental regulations has increased. At the same time, the availability of light crude oil has undergone a decline, which has gradually been offset by heavy crudes (Alvarez and Ancheyta, 2009). These difficult tasks have led refiners to find solutions for processing larger amounts of heavy oils in order to increase the production of transportation fuels. Among all oil-upgrading technologies, catalytic hydroprocessing has the ability to increase the productivity of distillate cuts (Ancheyta et al., 2002); it is a hydrogen addition process that permits obtaining middle distillates and, in general, low boiling point products from high boiling point fractions at high operating conditions (Ancheyta et al., 2005). For performing preliminary design and optimization of hydroprocessing reactors, as for any new process, as well as for studies on the effect of operating parameters, type of feed, type of catalyst, etc. on the reaction conversion and selectivity, a suitable reactor model needs to be developed (Ancheyta, 2007). In parallel with the development of hydroprocesses for the upgrading of heavy oils, different kinetic models have also been reported. The catalytic hydroprocessing kinetics of heavy feedstocks has mainly been modeled by discrete lumping, although other models have also been utilized (Ancheyta et al., 2005). The model parameters are highly dependent on the feedstock type; thus, experimental data and kinetic parameter estimation must be carried out in order to obtain useful models that can be confidently applied to reactor design, operation and control (Ayasse et al., 1997). In the present study, a kinetic model for a hydrotreating process (TBR) which enhances oil distillate productivity is proposed, based on experimental data obtained in a pilot plant TBR at different operating conditions using the discrete kinetic
lumping approach. The kinetic model of crude oil hydrotreating is assumed to include five lumps: gases (G), naphtha (N), heavy kerosene (H.K), light gas oil (L.G.O) and reduced crude residue (R.C.R). Here, the modeling, simulation and optimization are carried out using the gPROMS software. Note that individual components of the crude oil, including aromatic compounds, have not been considered here.
2. Experimental Work
The experiments were conducted in a continuous flow trickle bed reactor (TBR) at the following operating conditions: 335 to 400 °C temperature, 0.5 to 1.5 hr-1 liquid hourly space velocity (LHSV), 250 L/L H2/oil ratio and 10 MPa pressure. The heart of the pilot plant is an isothermal reactor (reactor tube inside diameter 2 cm, length 65 cm). The temperature of the reactor is controlled using a five-zone electrical furnace that provides an isothermal temperature along the active reactor section. A commercial cobalt-molybdenum on alumina (Co-Mo/γ-Al2O3) catalyst was used for all experiments (0.67 g/cm3 bulk density, 180 m2/g surface area, 0.5 cm3/g pore volume, 1.8 mm mean pore diameter, 4 mm mean particle diameter). 90 ml of the fresh catalyst was charged to the hydrotreating reactor and activated in situ by a solution of 0.6 vol% CS2 in commercial gas oil. Iraqi crude oil was employed as the feed for the HDT process, with the following properties: 2.0 wt% sulfur, 1.2 wt% asphaltene, 0.1 wt% nitrogen, 26.5 ppm vanadium and 17 ppm nickel. A laboratory distillation unit at atmospheric and vacuum pressure was employed to determine the feedstock and hydrotreated product compositions. The feed and product compositions were defined as follows: gases, naphtha (IBP-150 °C), heavy kerosene (150-230 °C), light gas oil (230-350 °C) and reduced crude residue (350 °C+).
3. Kinetic Model
The proposed kinetic model is shown in Figure 1; it consists of 5 lumps (R.C.R, L.G.O, H.K, naphtha and gases) and 14 kinetic parameters (k1, ..., k10, n1, ..., n4). For each reaction, a kinetic reaction rate expression (ri) was formulated as a function of the product composition (yi), reaction order (ni) and kinetic constants (ki).

Figure 1. Proposed Kinetic Model for Increasing of Oil Distillates Productivity

Product compositions were estimated with pilot plant mass balances. The reaction rate expressions of the proposed model are:

R.C.R:    $r_R = -(k_1 + k_2 + k_3 + k_4)\, y_R^{n_1}$    (1)
L.G.O:    $r_{LGO} = k_1 y_R^{n_1} - (k_5 + k_6 + k_7)\, y_{LGO}^{n_2}$    (2)
H.K:      $r_{HK} = k_2 y_R^{n_1} + k_5 y_{LGO}^{n_2} - (k_8 + k_9)\, y_{HK}^{n_3}$    (3)
Naphtha:  $r_N = k_3 y_R^{n_1} + k_6 y_{LGO}^{n_2} + k_8 y_{HK}^{n_3} - k_{10}\, y_N^{n_4}$    (4)
Gases:    $r_G = k_4 y_R^{n_1} + k_7 y_{LGO}^{n_2} + k_9 y_{HK}^{n_3} + k_{10}\, y_N^{n_4}$    (5)
The kinetic model was incorporated into an isothermal reactor model (Jarullah et al., 2011). The following mass balance was utilized for evaluating the product composition from a set of kinetic parameters:

$\frac{dy_i}{d\tau} = \frac{dy_i}{d(1/LHSV)} = r_i$    (6)

To estimate the best values of the kinetic parameters for all experiments, the sum of squared errors (SSE) between the experimental product compositions ($y_i^{exp}$) and the predicted compositions ($y_i^{cal}$) is minimized using an optimization technique:

$SSE = \sum_{i=1}^{N_{Data}} \left[ y_i^{exp} - y_i^{cal} \right]^2$    (7)
4. Optimization Problem Formulation
The optimization problem can be stated as follows:
Given: crude oil composition, reaction temperature, LHSV.
Optimize: reaction orders (ni) and reaction rate constants (ki) for each reaction at different temperatures.
Minimize: the sum of squared errors (SSE).
Subject to: process constraints and linear bounds on all optimization variables.

Mathematically, the optimization problem can be presented as:

$\min_{n_i,\, k_{ij}\ (i = 1\ldots4,\ j = 1\ldots10)} SSE$

subject to
$f(z, x(z), \dot{x}(z), u(z), v) = 0$    (model, equality constraints)
$n_i^L \le n_i \le n_i^U$    (inequality constraints)
$k_{ij}^L \le k_{ij} \le k_{ij}^U$    (inequality constraints)
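For illustration, the sketch below integrates the lump balances, Eqs. (1)-(6), over the residence time and evaluates the SSE of Eq. (7) for a single experiment, using the 370 °C parameters reported later in Table 2. In this work the actual parameter estimation is performed in gPROMS; scipy is used here purely as a stand-in, and compositions are treated as mass fractions (Table 1 reports them as wt%).

```python
import numpy as np
from scipy.integrate import solve_ivp

# 5-lump kinetics of Eqs. (1)-(5); parameters at 370 C from Table 2.
k1, k2, k3, k4 = 0.03724, 0.00253, 0.01883, 0.00895
k5, k6, k7     = 0.02161, 0.00162, 0.00010
k8, k9, k10    = 0.01344, 0.0, 0.0
n1, n2, n3, n4 = 1.994, 1.300, 1.114, 1.000

def rates(tau, y):
    yR, yL, yH, yN, yG = y                                  # R.C.R, L.G.O, H.K, naphtha, gases
    rR = -(k1 + k2 + k3 + k4) * yR**n1                      # Eq. (1)
    rL = k1*yR**n1 - (k5 + k6 + k7)*yL**n2                  # Eq. (2)
    rH = k2*yR**n1 + k5*yL**n2 - (k8 + k9)*yH**n3           # Eq. (3)
    rN = k3*yR**n1 + k6*yL**n2 + k8*yH**n3 - k10*yN**n4     # Eq. (4)
    rG = k4*yR**n1 + k7*yL**n2 + k9*yH**n3 + k10*yN**n4     # Eq. (5)
    return [rR, rL, rH, rN, rG]

feed = np.array([49.93, 19.0, 12.8, 14.8, 3.47]) / 100      # feed, mass fractions
lhsv = 1.0                                                   # hr^-1
sol = solve_ivp(rates, (0.0, 1.0/lhsv), feed)                # Eq. (6): dy/d(1/LHSV) = r
y_exp = np.array([47.30, 19.50, 13.50, 15.90, 3.80]) / 100   # 370 C, LHSV = 1.0 (Table 1)
sse = float(np.sum((y_exp - sol.y[:, -1])**2))               # Eq. (7)
print(100 * sol.y[:, -1], sse)   # predicted wt%, close to the values in Table 1
```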
5. Results and Discussion
The purpose of this study is to obtain a kinetic model for the trickle bed reactor which enhances oil distillate productivity during crude oil hydrotreatment, using the discrete lumping kinetic approach. In the discrete lumping model, both the feed and the products are divided into several lumps based on their boiling temperature ranges. Table 1 shows the experimental product compositions (together with the model predictions). It is clearly observed that the conversion of high boiling point molecules (such as those contained in the residue fraction) into lighter molecules increases when the reaction temperature is increased and the LHSV is decreased. This behavior can be attributed to the severity of the reaction: large molecules are decomposed into smaller molecules at high temperature. At the low temperature of 335 °C, the product compositions remain almost unchanged at all LHSV values. It is also seen from Table 1 that the main conversion, or reaction severity, is generally oriented from R.C.R toward the production of L.G.O, H.K, naphtha and gases. This high conversion from R.C.R to oil distillates is clearly observed at high reaction temperature (400 °C) and low LHSV (0.5 hr-1). The estimated kinetic parameters, together with the activation energies for each reaction, are summarized in Table 2.
Table 1. Experimental and predicted product compositions for the 5-lump discrete model

T       Lump     Feed    LHSV = 0.5 hr-1           LHSV = 1.0 hr-1           LHSV = 1.5 hr-1
                 wt%     Exp.   Pred.   Error %    Exp.   Pred.   Error %    Exp.   Pred.   Error %
335 °C  R.C.R    49.93   49.4   49.38   0.04       49.85  49.65   0.40       49.88  49.74   0.28
        L.G.O    19.0    19.2   19.41   1.09       19.02  19.20   0.94       19.01  19.14   0.68
        H.K      12.8    12.9   12.82   0.62       12.82  12.81   0.07       12.81  12.80   0.07
        Naphtha  14.8    14.9   14.80   0.67       14.83  14.80   0.20       14.82  14.80   0.13
        Gases    3.47    3.6    3.59    0.27       3.48   3.53    1.43       3.48   3.51    0.86
370 °C  R.C.R    49.93   45.5   46.76   2.76       47.30  48.29   2.09       48.60  48.83   0.47
        L.G.O    19.0    20.0   20.18   0.90       19.50  19.62   0.61       19.20  19.42   1.14
        H.K      12.8    13.8   13.16   4.63       13.50  12.98   3.85       13.30  12.92   2.85
        Naphtha  14.8    16.7   15.99   4.24       15.90  15.41   3.08       15.20  15.21   0.06
        Gases    3.47    4.0    3.89    2.75       3.80   3.69    2.89       3.70   3.62    2.16
400 °C  R.C.R    49.93   42.3   41.85   1.06       45.00  45.54   1.20       47.10  46.92   0.38
        L.G.O    19.0    21.1   21.27   0.80       20.50  20.40   0.48       19.90  20.00   0.50
        H.K      12.8    14.8   15.17   2.50       14.00  13.97   0.21       13.60  13.58   0.14
        Naphtha  14.8    17.5   17.25   1.42       16.50  16.08   2.54       15.60  15.66   0.38
        Gases    3.47    4.3    4.46    3.72       4.00   4.01    0.25       3.80   3.84    1.05
The kinetic parameters listed in Table 2 illustrate that naphtha cracking is insignificant in the range of operating conditions studied in this work, because the values of k10 are equal to zero. Gases are formed exclusively from the conversion of R.C.R and L.G.O, since k9 and k10 are equal to zero. The kinetic parameters of R.C.R conversion indicate a high selectivity toward L.G.O, followed by naphtha, gases and H.K. The conversion selectivity changes with temperature. For example, at 335 °C there is no formation of naphtha from L.G.O or H.K, because the values of k6 and k8 are zero; at 370 °C and 400 °C, on the other hand, these parameters are different from zero. Another observation is that the rate constants rise in value as the temperature increases, indicating the increase in reactivity with increasing reaction severity.

Table 2. Kinetic parameters of the proposed model

Reaction order   Kinetic constant   335 °C     370 °C     400 °C     Activation energy
                 (wt%)^-n hr^-1                                      Ea (kJ/mole)
                 R.C.R
n1 = 1.994       k1                 0.00822    0.03724    0.12000    140.38
                 k2                 0.00034    0.00253    0.01185    185.14
                 k3                 0.0        0.01883    0.03754    82.77
                 k4                 0.00261    0.00895    0.02327    114.54
                 L.G.O
n2 = 1.300       k5                 0.0        0.02161    0.10319    187.55
                 k6                 0.0        0.00162    0.00595    155.55
                 k7                 0.0        0.00010    0.00044    171.65
                 H.K
n3 = 1.114       k8                 0.0        0.01344    0.03268    106.52
                 k9                 0.0        0.0        0.0        -
                 Naphtha
n4 = 1.000       k10                0.0        0.0        0.0        -
For each lump, the Arrhenius-based temperature dependence of the kinetic model is demonstrated in Figure 2. Because some values of the kinetic parameters were found to be zero, not all of the activation energies could be estimated; the values obtained are presented in Table 2. Table 1 and Figure 3 compare the experimental product compositions with those estimated by solving equations 1-5 together with the reactor model equations. The product compositions are quite well predicted and show an excellent fit to the experimental data, with an average absolute error of less than 5%, which means that the proposed kinetic model is adequate to represent the increase of oil distillate productivity. Note that simultaneous estimation of the activation energy and the pre-exponential factor could also be considered.
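As a quick consistency check on Table 2 (illustrative arithmetic, not part of the original regression), the activation energy of k1 can be recovered from its values at the two extreme temperatures via the two-point Arrhenius relation:

```python
import math

# Two-point Arrhenius estimate of Ea for k1 (values taken from Table 2):
# Ea = R * ln(k2/k1) / (1/T1 - 1/T2)
R = 8.314                              # J/(mol K)
T1, T2 = 335 + 273.15, 400 + 273.15    # K
k1_T1, k1_T2 = 0.00822, 0.12000
Ea = R * math.log(k1_T2 / k1_T1) / (1/T1 - 1/T2)
print(f"Ea(k1) = {Ea/1000:.1f} kJ/mol")   # about 140 kJ/mol, matching Table 2
```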
Figure 2. Arrhenius plot (ln ki vs. 1000/T) for the different kinetic parameters
Figure 3. Comparison between the experimental and predicted product compositions (wt%)
6. Conclusions
A 5-lump kinetic model (as a function of temperature) for a trickle bed reactor enhancing oil distillate productivity during crude oil hydrotreatment has been developed here. From the results presented, the following observations can be made: a) Gases cannot be produced from naphtha and heavy kerosene. b) In the range of operating conditions of this study, cracking of naphtha was not observed. c) There is no increase in product yield at low operating conditions, because some values of the kinetic parameters were found to be zero. d) The kinetic parameters of R.C.R conversion indicate a high selectivity toward L.G.O, naphtha, gases and H.K. e) It should be highlighted that the increase of oil distillates can be achieved at moderate operating conditions in order to avoid sludge and sediment formation. f) The proposed kinetic model is capable of predicting the weight fractions of R.C.R, L.G.O, H.K, naphtha and gases with an average absolute error of less than 5%.
References
Alvarez, A., Ancheyta, J. (2009). Ind. Eng. Chem. Res., 48, 1228-1236.
Ancheyta, J. (2007). Reactors for Hydroprocessing. In Hydroprocessing of Heavy Oils and Residua; Ancheyta, J., Speight, J., Eds., Taylor & Francis: New York, 345 pp.
Ancheyta, J., Sánchez, S., Rodriguez, M.A. (2005). Catal. Today, 109, 76-92.
Ancheyta, J., Betancourt, G., Marroquín, G., Centeno, G., Castañeda, L.C., Alonso, F., Muñoz, J.A., Gómez, M.T., Rayo, P. (2002). Appl. Catal. A, 233, 159-170.
Ayasse, A.R., Nagaishi, H., Chan, E.W., Gray, M.R. (1997). Fuel, 76, 1025-1033.
Jarullah, A.T., Mujtaba, I.M., Wood, A.S. (2011). Chem. Eng. Sci., 66, 859-871.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Modeling and design of reacting systems with phase transfer catalysis
Chiara Piccolo,a George Hodges,b Patrick M. Piccione,b Rafiqul Gania
a CAPEC-Department of Chemical and Biochemical Engineering, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark (e-mail: [email protected])
b Process Studies Group, Syngenta, Jealott's Hill International Research Center, Bracknell, Berkshire RG42 6EY, United Kingdom
Abstract
Issues related to the design of biphasic (liquid) catalytic reaction operations are discussed. A chemical system involving the reaction of an organic-phase soluble reactant (A) with an aqueous-phase soluble reactant (B) in the presence of a phase transfer catalyst (PTC) is modeled, and based on it, some of the design issues related to improved reaction operation are analyzed. Since the solubility of the different forms of the PTC in the organic solvent ultimately affects the catalyst partition coefficients, the organic solvent plays an important role in the design of PTC-based reacting systems. A model-based strategy for the selection of the best organic solvent/catalyst that improves the reaction operation is highlighted for the reacting system: benzyl chloride (A) and sodium bromide (B) reacting through tetrabutylammonium bromide (PTC).
Keywords: Phase transfer catalysis, modeling, partition coefficients, solvent selection, reaction operation
1. Introduction
Although the use of phase transfer catalysts (PTC) is well-known in organic synthesis, it is only in the last twenty years that engineering challenges related to the efficient operation of these biphasic (liquid) reacting systems and the better selection of the PTC and/or organic solvents have received attention from industry and academia. Much experimental work has been performed to understand the reaction mechanisms of the PTCs [1]. However, a mathematical description of PTC-based reacting systems and the corresponding operational analysis, which are necessary steps for process development and understanding, has received inadequate attention. Biphasic liquid PTC-based reacting system modeling involves the consideration of reaction in two (aqueous and organic) liquid phases, as well as the transfer of the PTC in its active and inactive forms between the two liquid phases. A large number of variables may affect the reaction, and there are no simple guidelines for the design, evaluation and optimization of these reaction systems. In a recent study [2] involving the design of PTC-based reaction operations, a formal systematic approach that can be used in conjunction with the insights gained from experimental studies and existing empirical knowledge is illustrated. Although from the conceptual design point of view the study represents an excellent attempt to address some of the design issues of PTC-based reacting systems in a systematic way, a limitation of the study is the assumption of ideal behavior of the chemical system. Even though this assumption allows a simplification of the computational issues, it is unable to provide an accurate description of the system behavior. The objective of this paper is to provide a generic model-based
framework for the conceptual design and analysis of PTC-based reacting systems, applicable to ideal as well as non-ideal systems. The model should be able to represent the biphasic reaction operation under different PTCs and organic solvents as well as different operating conditions (temperature, pressure and/or initial charge). To illustrate the model-based design strategy, the chemical system [1] involving the reaction of benzyl chloride with sodium bromide to produce benzyl bromide with the help of tetrabutylammonium bromide, in the presence of toluene as the organic solvent and water, is used.
2. Modeling the PTC-based reacting system
2.1 Fundamentals
Consider the reaction mechanism of the biphasic organic-aqueous phase system in the presence of a PTC (TBABr), as shown in Figure 1.

Organic phase: Benzyl-Cl + TBA⁺Br⁻ ⇌ Benzyl-Br + TBA⁺Cl⁻
Aqueous phase: Na⁺Br⁻ + TBA⁺Cl⁻ ⇌ Na⁺Cl⁻ + TBA⁺Br⁻

Figure 1: PTC-based reaction mechanism
The "aqueous phase" contains the salts NaBr (reactant), NaCl, TBACl (inactive form of the PTC) and TBABr (active form of the PTC), plus water (w) and the solvent (toluene, Tol). The "organic phase" contains the solvent Tol, the reactant (benzyl chloride, RCl) and the product (benzyl bromide, RBr), which should be (by design) insoluble in water. The phase transfer catalytic mechanism brings the anion Br- from the aqueous to the organic phase in the form of the active catalyst. The active and the inactive forms of the catalyst distribute between the two phases according to their affinity towards each phase. Therefore, it is the ratio between active and inactive PTC in the organic phase that determines the extent of conversion or product yield. In Table 1 the chemical system information is summarized for the benzyl chloride-sodium bromide reaction with the tetrabutylammonium bromide PTC and toluene as the organic solvent.

Table 1: The chemical system information for the benzyl chloride-sodium bromide reaction.
Species  Role                IN   OUT   Phase 1 (Org)   Phase 2 (Aq)   NOT leaving
w        Solvent media       *    *     *               *
Tol      Solvent media       *    *     *               *
RCl      Reactant            *          *
RBr      Product                  *     *
TBA+     Active catalyst                *(ion-pair)     *(ion)         *
Br-      Active catalyst                *(ion-pair)     *(ion)         *
Cl-      Inactive catalyst              *(ion-pair)     *(ion)
Na+      Reactant/Product    *    *                     *
As pointed out in Table 1, the catalyst exists as ion-pairs in the organic phase and as ions in the aqueous phase. For a consistent mathematical representation, apparent mole fractions are defined for the organic phase, following the approach of Samant et al. [2].
For the given reaction kinetic model (Eq. 1), combining the rate-limiting organic-phase reaction, the ionic equilibria in the aqueous phase and the PTC partition between the phases [3], the model for the PTC-based reacting system (a steady state CSTR) consists of eight conservation equations, one for each species in Table 1 (Eqs. 2-9); four equilibrium conditions for the species present in both phases (Eqs. 10-13); one electroneutrality condition (Eq. 14); and two normalization conditions for the mole fractions (Eqs. 15a-b). The system is completely specified if values for the feed variables $F^{org}$, $F^{aq}$, $z_i^{org}$ and $z_i^{aq}$, where i = w, Tol, RCl, RBr, TBA+, Br-, Cl- and Na+, and the molar holdup ($H^{org}$) are given.

$r = k_1\, \gamma_{RCl} x_{RCl}\, \gamma_{TBABr} K_{TBABr}\, x_{TBA^+} x_{Br^-} - k_{-1}\, \gamma_{RBr} x_{RBr}\, \gamma_{TBACl} K_{TBACl}\, x_{TBA^+} x_{Cl^-}$    (1)

$x_i^{org} n_{org}^{TOT} + x_i^{aq} n_{aq}^{TOT} = z_i^{org} F^{org} + z_i^{aq} F^{aq}$,  i = w, Tol    (2, 3)

$x_{RCl}^{org} n_{org}^{TOT} = z_{RCl}^{org} F^{org} - H^{org} r$    (4)

$x_{RBr}^{org} n_{org}^{TOT} = z_{RBr}^{org} F^{org} + H^{org} r$    (5)

$x_{Br^-}^{org} n_{org}^{TOT} + x_{Br^-}^{aq} n_{aq}^{TOT} = z_{Br^-}^{org} F^{org} + z_{Br^-}^{aq} F^{aq} - H^{org} r$    (6)

$x_{Cl^-}^{org} n_{org}^{TOT} + x_{Cl^-}^{aq} n_{aq}^{TOT} = z_{Cl^-}^{org} F^{org} + z_{Cl^-}^{aq} F^{aq} + H^{org} r$    (7)

$x_{TBA^+}^{org} n_{org}^{TOT} + x_{TBA^+}^{aq} n_{aq}^{TOT} = z_{TBA^+}^{org} F^{org} + z_{TBA^+}^{aq} F^{aq}$    (8)

$x_{Na^+}^{aq} n_{aq}^{TOT} = z_{Na^+}^{aq} F^{aq}$    (9)

$\gamma_i^{org} x_i^{org} = \gamma_i^{aq} x_i^{aq} \;\Rightarrow\; x_i^{org} = K_i\, x_i^{aq}$,  i = w, Tol    (10, 11)

$\gamma_{TBAi}^{org} x_{TBAi}^{org} = (\gamma_{\pm}^{aq})^2\, x_{TBA^+}^{aq} x_{i^-}^{aq} \;\Rightarrow\; x_{TBAi}^{org} = K_{TBAi}\, x_{TBA^+}^{aq} x_{i^-}^{aq}$,  i = Br, Cl    (12, 13)

$x_{TBA^+}^{aq} + x_{Na^+}^{aq} = x_{Br^-}^{aq} + x_{Cl^-}^{aq}$    (14)

$\sum_i x_i^{org} = 1, \qquad \sum_i x_i^{aq} = 1$    (15a, b)
In order to compute the partition coefficients of the species that distribute between the two phases (Kw, KBz, KTBABr, KTBACl), three different liquid-liquid phase equilibria (LLE) are considered: water-organic solvent, PTC-water, PTC-organic solvent. The LLE calculations for each system are briefly described below. 2.2 Solvent-Water Mutual Solubility To calculate the solvent-water mutual solubility at the system temperature, it is necessary to compute the liquid phase activity coefficients. These can be calculated with models like the NRTL [4] if the molecular binary interaction parameters are available. Otherwise, predictive group contribution based models may be used. For the reactive system considered in this paper, the NRTL model is used. 2.3 Catalyst Partitioning Typically, PTCs are completely dissociated in the aqueous phase, but show a more complex aggregated state behavior in the organic phase where they are present as free ions (since partial dissociation may occur) and/or as undissociated species like ion pairs
and quadrupoles [5]. In this work we assume that only ion pairs exist in the organic phase [6]. Two different approaches are investigated to model the PTC behavior in each phase.
PTC activity coefficient in the aqueous phase: The electrolyte-NRTL model [7], with parameters correlated from binary data of PTC in aqueous solution [8], is selected to calculate the necessary mean activity coefficients of the dissociated forms of the catalyst (TBA+, Br-, Cl-) in the aqueous phase. The effect of the other ions in solution (Na+) has been neglected.
PTC activity coefficient in the organic phase: Hansen solubility parameters are used to predict the activity coefficients of the binary PTC (TBABr and TBACl)-organic solvent systems, within the framework of a Flory-Huggins model. A group contribution approach is used to calculate the parameters of the catalysts [9], while the solvent parameters are obtained from ICAS [10]. The Flory-Huggins model is used for the estimation of phase equilibria, and the model equation for the activity coefficients (for a binary system) is written as:
థభ ௫భ
ͳെ
థభ ௫భ
߯ଵଶ ߶ଶଶ
(16)
Where Ԅ and x are the volume fractions and mole fractions, respectively, and ɖͳʹ
Ǧ Ǧ
ǣ ߯ଵଶ ൌ
௩భ ோ்
ሺሺߜଵǡௗ െ ߜଶǡௗ ሻଶ ͲǤʹͷሺߜଵǡ െ ߜଶǡ ሻଶ ͲǤʹͷሺߜଵǡ െ ߜଶǡ ሻଶ ሻ
(17)
Where v1 is the molar volume of the solute, įd, įp, įh [J/cm3]0.5are the dispersion, polar and hydrogen bonding Hansen solubility parameters [J/cm3]0.5, respectively. 2.4 Simulation of PTC-based CSTR Operation The specifications and the simulation results for the reacting system and model described above are reported in Table 2. Table 2: Specifications (feed composition and kinetic constants) and results (product composition and yield X) of the example system simulation (only the non-zero variable values are given) Aqueous Feed Organic Feed Product Aqueous Phase Product Organic Phase ܨ =1.25 mol/min ܨ =0.102 mol/min ݊ ்ை் =1.256 mol/min ݊ ்ை் =0.0958 mol/min ݖ௪ =0.8753 ݖ௪ =0.0016 ݔ௪ =0.871 ݔ௪ =1.684e-3 ்ݖ =4.73e-5 ்ݖ =0.9292 ்ݔ =4.707e-5 ்ݔ =0.989 ݖି =0.06235 ݖோ =0.00843 ݔି =6.418e-2 ݔோ =4.717e-3 ݖேା =0.06235 ݔି =3.244e-4 ݖି =0.0304 ݔோ =4.254e-3 ்ݖା =0.0304 ்ݔା =2.465e-3 ݔି =3.362e-5 ݔேା =6.204e-2 ݔି =2.623e-7 ்ݔା =3.388e-5 k1=212.7 mol-1min-1* Xexp=0.53 k-1=0 Xpred=0.474 Horg=12.6 mol * Regressed from experimental values [1].
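The sketch below evaluates Eqs. (16)-(17) for a single solute-solvent pair. The solubility parameters, molar volumes and composition are round illustrative numbers, not the regressed values for TBABr or toluene.

```python
import numpy as np

# Eqs. (16)-(17): Flory-Huggins activity coefficient of a solute (1) in a
# solvent (2), with chi12 from Hansen solubility parameters. All delta and
# volume values are illustrative placeholders.
R, T = 8.314, 298.15                # J/(mol K), K
v1, v2 = 300.0, 106.0               # assumed molar volumes, cm3/mol
d1 = np.array([17.0, 10.0, 8.0])    # solute  (delta_d, delta_p, delta_h), (J/cm3)^0.5
d2 = np.array([18.0,  1.4, 2.0])    # solvent (delta_d, delta_p, delta_h)

chi12 = v1 / (R * T) * ((d1[0]-d2[0])**2 + 0.25*(d1[1]-d2[1])**2
                        + 0.25*(d1[2]-d2[2])**2)           # Eq. (17)
x1 = 0.05                                                  # solute mole fraction
phi1 = x1*v1 / (x1*v1 + (1 - x1)*v2)                       # volume fraction of solute
phi2 = 1.0 - phi1
ln_g1 = np.log(phi1/x1) + 1.0 - phi1/x1 + chi12*phi2**2    # Eq. (16)
print(f"chi12 = {chi12:.3f}, ln gamma1 = {ln_g1:.3f}")
```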
3. Solvent Selection Guideline
As discussed above, the partition coefficients of the PTC are affected by the choice of the solvent. The higher the solubility of the PTC as well as of reactant A and the product, the greater is the yield of the product (and/or the conversion of the reactant). Good solvents for a given PTC are located within a sphere of radius R2 (the so-called
solubility radius) in a (2δd, δp, δh) coordinate system. Thus, complete miscibility is obtained when the solubility distance D12 is lower than the solubility radius R2 [11]:

$D_{12} = \sqrt{4(\delta_{1,d} - \delta_{2,d})^2 + (\delta_{1,p} - \delta_{2,p})^2 + (\delta_{1,h} - \delta_{2,h})^2} < R_2$    (18)
In Figure 2, three combinations of the Hansen solubility parameters are plotted; in each plot, optimal solvents are located within a circle drawn around the catalyst. Based on this solubility criterion, a solvent screening is performed and a new solvent (pentyl acetate), which is acceptable from the environmental, health and safety points of view, is identified as a possible replacement for toluene. Benzene could be another alternative to toluene, if we consider only the process performance point of view and disregard its toxicity issues.
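A sketch of this screening step using Eq. (18) is given below: each candidate solvent's Hansen distance to the catalyst is compared with the solubility radius. The solubility parameters and the radius R2 are illustrative placeholders, not the values used in the paper.

```python
import math

# Eq. (18): Hansen solubility distance used to screen candidate solvents
# against the catalyst's solubility sphere. HSPs and R2 are placeholders.
catalyst = (17.0, 10.0, 8.0)   # assumed (delta_d, delta_p, delta_h), (J/cm3)^0.5
R2 = 9.0                       # assumed solubility radius
solvents = {"toluene":        (18.0,  1.4,  2.0),
            "pentyl acetate": (15.8,  3.3,  6.1),
            "methanol":       (14.7, 12.3, 22.3)}

def D12(a, b):
    return math.sqrt(4*(a[0]-b[0])**2 + (a[1]-b[1])**2 + (a[2]-b[2])**2)

for name, hsp in solvents.items():
    d = D12(catalyst, hsp)
    verdict = "inside" if d < R2 else "outside"
    print(f"{name:15s} D12 = {d:5.2f}  {verdict} the solubility sphere")
```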
Figure 2: Hansen solubility parameters (δd vs. δh, δp vs. δh, δd vs. δp) of the catalyst (TBAB) and different solvents: toluene, benzene, methanol, acetone, pentyl acetate, chloroform and ethanol.
References
[1] H.S. Weng, W.C. Huang, 1987, J. Chin. Inst. Chem. Eng., 18, 109.
[2] K.D. Samant, D.J. Dingh, K.M. Ng, 2001, AIChE J., 47, 8, 1832.
[3] C.M. Starks, R.M. Owens, 1973, AIChE J., 95, 11, 3613.
[4] H. Renon, J.M. Prausnitz, 1968, AIChE J., 14, 1, 135.
[5] H.-S. Wu, M.S. Tseng, 2002, AIChE J., 48, 4, 867.
[6] E.V. Dehmlow, S.S. Dehmlow, 1993, Phase transfer catalysis, VCH, Germany.
[7] C-C Chen, H.I. Britt, J.F. Boston, L.B. Evans, 1982, AIChE J., 28, 4, 588.
[8] S. Lindebaum, G.E. Boyd, 1964, J. Phys. Chem., 68, 911.
[9] B. Roughton, CAPEC Internal Report, Technical University of Denmark, 2010.
[10] CAPEC Database, Department of Chemical Engineering, DTU, Denmark, 2004.
[11] C.M. Hansen, Hansen solubility parameters, 2000, CRC Press, USA.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A systematic methodology for the design of continuous active pharmaceutical ingredient production processes
Albert E. Cervera,a Rafiqul Gani,b Søren Kiil,c Tommy Skovby,d Krist V. Gernaeya
a Center for Process Engineering and Technology (PROCESS), Department of Chemical and Biochemical Engineering, Technical University of Denmark, Building 229, 2800 Kgs. Lyngby, Denmark
b CAPEC, Department of Chemical and Biochemical Engineering, Technical University of Denmark, Building 229, 2800 Kgs. Lyngby, Denmark
c CHEC, Department of Chemical and Biochemical Engineering, Technical University of Denmark, Building 229, 2800 Kgs. Lyngby, Denmark
d H. Lundbeck A/S, Oddenvej 182, Lumsås, 4500 Nykøbing Sj, Denmark
Abstract Continuous pharmaceutical manufacturing (CPM) has emerged as a powerful technology to obtain higher reaction yields and improved separation efficiencies, potentially leading to simplified process flowsheets, reduced total costs, lower environmental impacts, and safer and more flexible production. However, the change from batch-wise production towards continuous operation and the definition of flexible design spaces requires a high degree of process knowledge. Process Systems Engineering (PSE) offers multiple methods and tools which can assist in efficient knowledge acquisition, structuring and representation, as well as on how to employ this knowledge for process (re-)design. The aim of this paper is to introduce a methodology that systematically identifies already existing PSE methods and tools which can assist in the design of CPM processes. This methodology has been applied to a process for the production of an API developed by H. Lundbeck A/S, demonstrating the mentioned potential benefits that CPM can offer. Keywords: continuous pharmaceutical production, PAT, QbD, process design.
1. Introduction The pharmaceutical industry has undertaken a profound conceptual change in the way drug products are developed, manufactured and distributed [1]. Pharmaceutical manufacturers have understood that long-term viability requires increased flexibility and lower manufacturing costs [2]. Through the PAT initiative [3], regulatory authorities have encouraged the introduction of new technologies as a means to reduce production costs and improve safety and product quality. Manufacturers have also gained flexibility to optimize their processes provided that they can demonstrate scientific understanding and sound engineering judgement [4]. The concept of design space [4] has been interpreted as an opportunity to further promote the adoption of process systems engineering (PSE) methods and tools for the design and operation of pharmaceutical processes. Many of these have been widely used by other industries, and the challenge is to adapt them to the inherent complex nature of pharmaceutical products and processes, with very strict quality and regulatory requirements [5].
One of the key strategies for improving safety and product quality while decreasing waste generation and manufacturing costs has been the transition from batch-wise production towards continuous pharmaceutical manufacturing (CPM) [1,6,7]. The objective of this paper is to discuss some of the challenges and possibilities that are encountered when designing CPM processes, and to introduce a methodology that systematically explores opportunities for improvement. The methodology identifies and selects relevant PSE methods and tools which can be used for a particular design task. The application of this methodology is demonstrated using as a case study a process for the production of an active pharmaceutical ingredient (API) developed by H. Lundbeck A/S - zuclopenthixol.
2. Methodology
2.1. Scope
The scope of this methodology is limited to the production of APIs or their intermediates (i.e. excluding product formulation operations), and it may be employed for the design of new process flowsheets or for the modification of existing (batch operating) processes (retrofit problem). Figure 1 shows a conceptual representation of the multiscale nature of the CPM design problem. Pharmaceutical products are typically synthesized by multiple synthetic steps, each one optionally employing one or more solvents (S1, S2, etc.). Consequently, in order to have a continuous flow from raw materials to end-product (i.e. to avoid intermediate isolation and storage), solvent exchange operations are required. Within one synthetic step there are several reaction tasks (1..r), separation tasks (1..s), washing tasks (1..w) and others. The design of each task poses different design subproblems, such as ensuring that a chemical reaction takes place, offering sufficient mass and heat transfer and keeping the system under control. Finally, each subproblem to be solved requires gathering relevant experimental data and building models relating raw materials and process parameters with outputs.

Figure 1. A conceptual representation of the multiscale nature of the CPM design problem.
Figure 2. Structure of the methodology for the design of continuous pharmaceutical manufacturing processes, indicating the methods used (left column), the work flow (central column) and the data flow (right column).
2.2. Methodology structure
The structure of the proposed methodology is shown in Figure 2 and consists of 6 main steps. In step 1, the design problem is defined. There are two possibilities: designing a new process flowsheet for a new product (in which case previous data and/or experience may not be available), or modifying an existing process (retrofit problem). In step 2, design specifications are gathered. These data will form the basis for the definition of the critical quality attributes (CQA) to be met by the process. In step 3, one or more near-optimal flowsheets will be generated. Different methods may be employed for flowsheet generation [8,9], solvent selection [10], solvent integration [11] and the definition of monitoring and control tools [12]. This step requires a large amount of data in addition to property prediction tools. The data may be missing and will have to be obtained in the next steps. In step 4, for each unit operation obtained from step 3, a rigorous analysis must be performed and a detailed design must be proposed. A model-based systems approach [13] should be followed, with optimal use of experimental data. A basic decision is whether the unit operation should be run in continuous mode or in batch mode. In general, it is more advantageous to use a continuous operation when extreme operating conditions are to be accessed (e.g. high temperature, cryogenic temperature or high pressure); when high heat and mass transfer rates are required [7];
when high volumes of potentially volatile, flammable and/or toxic compounds are used [6]; and when efficient on-line monitoring and real-time control are needed. In step 5, experiments are optimally combined with simulation in order to evaluate the feasibility of the proposed unit operations. In step 6, feasible flowsheets are evaluated under multiple performance evaluation criteria, including a combined economic evaluation and environmental footprint assessment [9], risk assessment, and flexibility analysis. If the result of this evaluation is satisfactory, a continuous process is obtained as well as a design space with the underlying inputs-process parameters-CQA relationships.
Figure 3. Original (a) and simplified (b) process flowsheets for the production of zuclopenthixol. CTX, Allylcarbinol, Butadiene and N746EZ are short names for intermediate products.
3. Case study
The proposed methodology has been applied to re-design a process employed by Lundbeck A/S for the production of zuclopenthixol. Step 1: The objective was to re-design the existing batch-wise API production process, including continuous operations wherever relevant, in order to reduce the footprint (e.g. reactor volumes), decrease the flowsheet size, increase safety and automation, and decrease the manufacturing costs and the environmental impact. Step 2: Information on the original process (Figure 3a) was gathered from standard operating procedures, quality control documentation, databases, etc. Step 3: A simplified process flowsheet was proposed (Figure 3b), based on the hypothesis that the allylcarbinol intermediate could be obtained with a lower impurity profile compared to the original process, thereby making the crystallization and separation of allylcarbinol unnecessary. This could be realized by performing the alkylation reaction in continuous mode, controlling the reaction with
low impurity formation using on-line near-infrared spectroscopy data. Furthermore, the use of an alternative solvent to THF (e.g. Me-THF or CPME) was proposed. Step 4: Experimental data were used to design an alkylation reactor with low impurity formation and solvent use. Kinetic data were obtained from the dehydration reaction to propose a continuous reactor with low impurity formation and low residence time, leading to a small volume. The other unit operations were studied in a similar manner. Step 5: Experiments were performed using the proposed continuous reactors, identifying practical problems related to, for example, pumps, materials of construction, air tightness, etc. The intermediate products were analyzed with standard quality control methods as used by Lundbeck A/S. Step 6: A detailed flowsheet validation is currently in progress; however, solvent exchange operations and solvent use have already been minimized as a result of using efficient continuous operations with on-line monitoring and control. Minimization of the number of unit operations implies a reduction in fixed and operating cost as well as waste generated [14].
4. Conclusions and future work The pharmaceutical industry is making efforts in different directions to reduce manufacturing costs. A key strategy is to move certain batch processes to continuous operation. A methodology for the design of CPM processes, in particular for the production of organic synthesis based APIs, has been introduced. The methodology identifies already existing PSE methods and tools which can assist in the design of CPM processes. The application of the methodology has been exemplified with an API production process developed by H. Lundbeck A/S, showing a reduction in flowsheet size and complexity as well as a potential reduction in total costs, environmental footprint and production-associated risks. Future work will focus on the extension of the proposed methodology as well as a fully detailed demonstration of its use.
Acknowledgments The authors would like to acknowledge the Technical University of Denmark and H. Lundbeck A/S for financial support and fruitful discussions.
References 1. K. Plumb, 2005, Chem. Eng. Res. Des., 83, A6, 730-738. 2. A. Behr, V.A. Brehme, C.L.J. Ewers, H. Grön, T. Kimmel, S. Küppers, I. Symietz, 2004, Eng. Life Sci., 4, 1, 15-24. 3. U.S. Food and Drug Administration (FDA), 2004, PAT. 4. International Conference on Harmonisation, Quality Guidelines. 5. G.V.R. Reklaitis, 2007, ESCAPE17, Comput. Aided Chem. Eng., 24, 35-38. 6. T.L. LaPorte and C. Wang, 2007, Curr. Opin. Drug Disc. Dev., 10, 6, 738-745. 7. P. Pollet, E.D. Cope, M.K. Kassner, R. Charney, S.H. Terett, K.W. Richman, W. Dubay, J. Stringer, C.A. Eckert, C.L. Liotta, 2009, Ind. Eng. Chem. Res., 48, 15, 7032-7036. 8. C.A. Jaksland, R. Gani, K.M. Lien, 1995, Chem. Eng. Sci., 50, 3, 511-530. 9. A. Chakraborty and A.A. Linninger, 2002, Ind. Eng. Chem. Res., 41, 4591-4604. 10. R. Gani, P. Arenas Gómez, M. Folić, C. Jiménez-González, D.J.C. Constable, 2008, Comput. Chem. Eng., 32, 2420-2444. 11. B.S. Ahmad and P.I. Barton, 1999, Comput. Chem. Eng., 23, 1365-1380. 12. R. Singh, K.V. Gernaey, R. Gani, 2010, Comput. Chem. Eng., 34, 1108-1136. 13. K.V. Gernaey and R. Gani, 2010, Chem. Eng. Sci., 65, 5757-5769. 14. D.I. Gerogiorgis and P.I. Barton, 2009, PSE2009 - Comput. Aided Chem. Eng., 27, 927-932.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Synthesis tool for separation processes in the pharmaceutical industry Ana I. C. Morão(a,b,*), Edwin Zondervan(b), Gerard Krooshof(c), Rob Geertman(d), André B. de Haan(b)
(a) Institute for Sustainable Process Technology, P.O. Box 247, 3800 AE Amersfoort, The Netherlands; (b) Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands; (c) DSM Research, P.O. Box 18, 6160 MD Geleen, The Netherlands; (d) MSD, Merck Manufacturing Division, API-PD, P.O. Box 20, 5340 BH Oss, The Netherlands; * Corresponding author: [email protected]
Abstract A synthesis tool to select promising separation routes to purify active pharmaceutical ingredients is introduced in this paper. It combines heuristics with flow sheet simulations to systematically generate and compare alternatives. The application of this framework to the purification of progesterone led to new insights that must be confirmed by experiments. Keywords: Process synthesis, pharmaceutical industry, process design.
1. Introduction The key factor for the success of the pharmaceutical industry is a short time to market for new products, while meeting strict quality regulations at a low production cost. We aim for a methodology to quickly screen alternative process routes for multi-component separation/purification that fulfills the specific demands of active pharmaceutical ingredient (API) production. The usual methodologies for the synthesis of bulk chemical processes are based either on heuristics, mathematical approaches [1] or thermodynamic insights [2]. Although regarded as very powerful, superstructure optimization-based techniques rely heavily on the availability of accurate unit operation models and a comprehensive characterization of the feed, which do not exist in the early stages of process design of APIs. Typically the pharmaceutical industry uses multipurpose plants operated in batch, and deals with the separation of similar species and with relatively complex molecules whose physical and chemical properties are often not known. Purification is carried out at mild conditions using technologies that exploit the differences in physical properties between drug and impurities, namely crystallization, extraction, chromatography and membranes. In order to achieve the required product quality, several purification stages must be used, depending on the purity target. A very similar situation is found in the purification of bio-products, for which an experiment-based process synthesis method was recently proposed [3]. However, such an approach relies on the realization of comprehensive experiments at the beginning of the design process, based on pre-selected routes. We have started the development of a new systematic approach for selection and sequencing of purification processes, based on a combination of heuristics with
mathematical modeling and experiments, to address the challenges faced by the purification of APIs produced by organic synthesis. Simple models are used to identify promising separation routes, which will have to be further explored through more detailed modeling or, in most cases, by experiments. Though our goal is to develop a general framework, in this article the tool is applied to one case study: the purification of progesterone.
2. Framework for the selection of separation routes The framework proposed follows the steps depicted in Figure 1. The selection of separation sequences goes through several levels of detail, where the initial assumptions and constraints are refined based on the results from the experimental evaluation of the initially selected separation routes.
[Figure 1 outline: 1. Problem definition: characterization of the feed (composition and properties); selection of purification technologies; maximum number of purification steps; process constraints; target product quality and production rate. 2. Generation of separation routes (Excel/VBA tool): systematic generation of separation routes; rules of thumb and/or thermodynamic properties are applied to eliminate candidates. 3. Simulation of the promising flowsheets using short-cut models based on thermodynamic equilibrium (Aspen Custom Modeler). 4. The separation routes are ranked according to yield, cost and batch time. 5. Experimental evaluation of product quality for 5-10 selected sequences; if the target yield and purity are not achieved, the number of separation steps is increased and the procedure is repeated.]
Figure 1 – Systematic synthesis framework for APIs purification.
It starts with a clear problem definition (step 1) including: feed characterization, separation technologies that should be considered and the target purification factor or yield. The number of unit operations per flow sheet is first set by the user and depends on the problem itself. Typically, several purification steps ending with a final crystallization are required. From the list of unit operations available, those exploiting differences in the physical properties of the drug product and impurity are selected. The possible combinations of unit operations are generated and forbidden sequences (based on heuristics) eliminated
(step 2). At this stage only models for extraction, cooling, anti-solvent and evaporative crystallization are included in the framework. These simple models are based on the thermodynamic equilibria, accounting for non-ideality of solutions (NRTL-SAC model [4]). To incorporate other unit operations such as chromatography, adsorption or membrane processes, the separation factors must be given. In the next step (3) the mass balances and energy demands of all possible flow sheets are calculated. In the earliest phase of process development, operating conditions within the typical industrial ranges are used. Because the solvents used in the pharmaceutical industry are restricted by health and safety guidelines, only ICH class 2 and 3 solvents are considered [5]. The selection of solvents can be made based on the maximum theoretical yield:

Y (%) = (1 - S2/S1) × 100    (1)
For cooling crystallization, S1 and S2 are the solubility (mass fraction) values at the high and low temperatures; for evaporative crystallization, S1 and S2 are the solubility values at the high concentration (after solvent evaporation) and the low concentration (before solvent evaporation); and for anti-solvent crystallization, S1 and S2 are the solubility values before and after the addition of the anti-solvent. A good crystallization solvent shows a high theoretical yield and high solubility at the highest temperature. A good anti-solvent, on the other hand, exhibits a high theoretical yield, but the solubility should be low over the whole temperature range. The anti-solvent must be miscible with the solvent at all process conditions. To avoid crystallization of impurities, a solvent that solubilizes the impurities over the whole range of conditions is sought [6-7]. This level of modeling detail does not involve prediction of particle size and polymorphism, which are critical factors for efficient solid/liquid separation. In a later phase of process design the operating conditions of the most promising routes must be optimized, and the steps downstream of crystallization (filtration and drying) must also be included. Ultimately, the purity of crystals will always have to be determined experimentally (step 5), because undesired impurities might be incorporated into the crystals during growth, trapped inside agglomerates of crystals, adsorbed on crystal surfaces or incompletely removed during filtration and washing of the crystals, and it is not yet possible to predict these phenomena from first principles [8]. If the purity target is not achieved, the number of purification steps included must be increased and the procedure should be repeated from step 2.
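Equation (1) is straightforward to evaluate and to use for solvent ranking. The sketch below is illustrative only: the solubility values are placeholders, not measured data, and the function name is not from the tool.

```python
def theoretical_yield(S1, S2):
    """Maximum theoretical crystallization yield, Eq. (1).

    S1, S2 are solubilities (mass fractions): at the high/low temperature
    for cooling crystallization, after/before solvent evaporation for
    evaporative crystallization, and before/after anti-solvent addition
    for anti-solvent crystallization.
    """
    return (1.0 - S2 / S1) * 100.0

# Rank candidate solvents by theoretical cooling-crystallization yield
# (the solubility pairs below are placeholder numbers, not measured data).
solvents = {"acetone": (0.20, 0.01), "isopropanol": (0.10, 0.02)}
ranking = sorted(solvents, key=lambda s: theoretical_yield(*solvents[s]),
                 reverse=True)
print(ranking)  # ['acetone', 'isopropanol']
```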
3. Application example The framework described here (Figure 1) is applied to select alternative process routes in the final stage of progesterone purification. After a sequence of reaction steps, solid progesterone containing four identified impurities plus two unspecified impurities is obtained. The structures and phase transition temperatures of progesterone and the main impurity (pregnanedione) are shown in Table 1. Given the similarity between the drug molecule and the impurity, along with the large difference in the melting points of these compounds, crystallization is the obvious choice for purification of progesterone. However, various types of crystallization can be considered. Based on experimental solubility data of progesterone in six solvents (temperature range 13-78 °C), the NRTL-SAC parameters were regressed. It is assumed that the entropy of
melting and the NRTL-SAC parameters of both molecules are the same. From the list of 53 class 2 and class 3 solvents, the following solvents were tested: acetone (the solvent leading to the highest theoretical yield) and isopropanol. Water was the anti-solvent selected.
Table 1 – Properties of the main components of the feed (chemical structures omitted).
                                  Progesterone   Pregnanedione
Molecular weight (g mol-1)        314.5          316.5
Melting point (°C)                130±           200**
Boiling point (°C)                447*           429**
Enthalpy of melting (J mol-1)     25900±         30420
± Experimental values; * ChemSpider SyntheticPages, 2011, http://www.chemspider.com/5773; ** ChemSpider SyntheticPages, 2011, http://www.chemspider.com/83782.
3.1. Problem definition Given a multi-component feed with known composition of progesterone and pregnanedione (0.5%), select the flow sheets which match the following target: reduce the impurity concentration to meet the final product requirements (99.9% purity). Taking into account that various types of crystallization may be applied, several flow sheet alternatives may be envisaged that combine different methods of crystallization. An additional process constraint is that the solids fraction must be lower than 25% (m/m), to avoid slurry handling problems downstream. The goal is to select the best routes (5-10) based on the maximum theoretical yield evaluation, so that further work can be concentrated on process optimization. 3.2. Results and discussion The eight separation routes which lead to the highest theoretical yields are listed in Table 2. For each solvent tested, 39 routes were evaluated. As shown, with three crystallization steps 98% and 95% theoretical yield can be achieved with acetone and isopropanol, respectively. Eliminating one crystallization step, yields of 93% are calculated with acetone. Because isopropanol/water is reported in the literature as a suitable solvent combination to crystallize progesterone [9], routes 4 to 7 are good candidates for the process, even though it is with acetone/water that the highest yield is achieved. The current industrial process uses a different solvent and achieves a lower yield. Thus, replacing the current solvent by either acetone or isopropanol and the anti-solvent by water might improve the process, as the product yield is expected to increase. Moreover, the process is simplified by the elimination of one organic solvent (the anti-solvent).
4. Conclusions Through the application of our methodology to the purification of progesterone, eight potential alternative separation flow sheets are identified, which can lead to significant
process improvement. The next step is experimental verification to determine crystal purity and filterability of the slurry. Using this framework, a better understanding of the separation of the different compounds by a number of unit operations is achieved with limited data, making it suitable for a preliminary selection of separation processes. It should be pointed out that the analysis presented here can easily be done in one day. Thus, it may facilitate the implementation of innovative process options, as they are compared side by side with more traditional processes and technologies. Table 2 – Ranking of the best separation routes to purify progesterone. 1 – Cooling crystallization, 2 – Anti-solvent crystallization, 3 – Evaporative crystallization.
Route                            Step 1    Step 2    Step 3    Yield
Route 1 (acetone)                1         3         2         98%
Routes 2 and 3 (acetone)         1         3 or 2    2 or 3    98%
Routes 4 and 5 (isopropanol)     1         3         2 or 3    95%
Routes 6 and 7 (isopropanol)     1 or 2    1 or 2    3         93%
Route 8 (acetone)                2         1         -         93%
Acknowledgements This is an Institute for Sustainable Process Technology (ISPT) project.
References [1] L.T. Biegler, I.E. Grossmann, A.W. Westerberg, Systematic methods of chemical process design, Prentice Hall PTR, 1997. [2] C.A. Jaksland, R. Gani, K.M. Lien, Chem. Eng. Sci. 50, 3 (1995) 511. [3] T. Winkelnkemper, G. Schembecker, Separation and Purification Technology 71 (2010) 356. [4] C.-C. Chen, Y. Song, Industrial & Engineering Chemistry Research 43 (2004) 8354. [5] U.S. Food and Drug Administration, Guidance for Industry, Q3C Impurities: Residual Solvents, December 1997. [6] K.K. Nass, Industrial & Engineering Chemistry Research 33 (1994) 1580. [7] T.C. Frank, J.R. Downey, S.K. Gupta, Chem. Eng. Prog. 95 (1999) 41. [8] Y.S. Cheng, K.W. Lam, K.M. Ng, C. Wibowo, AIChE Journal 56 (2010) 633. [9] D. Ragab, S. Rohani, M.W. Samaha, F.M. El-Khawas, H.A. El-Maradny, J. Pharm. Sci. 99 (2010) 1123.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
New algorithm for the determination of product sequences in azeotropic batch distillation Laszlo Hegely, Peter Lang Budapest University of Technology and Economics, Dept. of Building Services and Process Engineering, H-1521 Budapest, Muegyetem rkp. 3-5, Hungary
Abstract In multicomponent azeotropic mixtures (e.g. waste solvents), the products obtainable by batch distillation and their maximal amounts are highly dependent on the charge composition. A method is presented for the determination of product sequences for any number of components, based only on the boiling points of pure components and azeotropes, and the azeotropic compositions, without the knowledge of further VLE data. The method is suitable for heteroazeotropes as well. The stability of fixed points is determined with the assumption that the Serafimov type of the mixture occurs in Reshetov's statistics. On the basis of the stabilities, we enumerate all feasible product sequences using the algorithm of Ahmad et al. (1998). Finally, the amounts of the products are determined assuming maximal separation for the given charge composition. The results are presented for the system acetone-chloroform-methanol-ethanol-benzene. Keywords: process synthesis, batch distillation regions, product sequence.
1. Introduction Batch distillation is a common solvent recovery technology in the pharmaceutical and specialty chemical industries. The components of waste solvents usually form multicomponent azeotropic mixtures; therefore, the sequence of cuts and the maximum recovery in each cut are highly dependent on the charge composition. For successful recovery it is essential to predict the feasibility of recovering the components in pure form. For ternary mixtures, the distillation boundaries were determined on the basis of only the boiling points (BP) of pure components and azeotropes, and the azeotropic compositions, by Foucher et al. (1991). For mixtures with an arbitrary number of components, methods were suggested by Safrit and Westerberg (1997) and Ahmad et al. (1998). All these latter methods require knowledge of the type of all stationary points, demanding the application of a VLE model (based on VLE data). The method of Safrit and Westerberg (1997) may give false results for globally undetermined systems, which, however, have not yet been encountered in practice. Ahmad et al. (1998) assumed that the maximum number of unstable and stable nodes (UN, SN) is two. Pöpken and Gmehling (2004) proposed a simple method for determining the location of distillation boundaries in quaternary systems. This article presents a rapid and automatic method • for generating product sequences for batch distillation of a mixture containing any number of components, • based only on the BPs of pure components and azeotropes, and the azeotropic compositions (including the compositions of the liquid phases in the case of heteroazeotropes), • making unnecessary the knowledge of further VLE data (e.g. binary interaction parameters), test runs in pilot columns and detailed dynamic simulations,
• which is suitable also for mixtures containing heteroazeotropes. The algorithm was implemented in Visual Basic for Applications. Microsoft Excel was used as a data interface. Calculations were performed, among others, for the system acetone-chloroform-methanol-ethanol-benzene (Ahmad et al., 1998).
2. Description of the algorithm The following simplifying assumptions are applied: • Maximal separation. • The boundaries are straight lines. • The mixture does not contain a quinary azeotrope. • The quaternary azeotrope cannot be a SN and it is the common point of all distillation regions in a quaternary mixture. • In an n-component mixture there is at most one n-component azeotrope. • The ternary submixtures occur in practice (i.e. are present in Reshetov's statistics).
Figure 1. The overall structure of the algorithm
The overall structure of the algorithm is presented in Fig. 1. The main steps are the following: 1. Determination of the type of all stationary points. 2. Completion of the adjacency matrix of fixed points, which represents the topological structure of the composition simplex, by the method of Ahmad et al. (1998). 3. Determination of all product sequences and the relative amount of cuts for the given charge. Step 1, which is the new part of the algorithm, consists of the following steps: a. Determination of the type of stationary points in ternary submixtures. b. Unification of ternary submixtures into quaternary ones. c. Unification of quaternary submixtures into quinary ones, etc. The main steps of the algorithm are described in more detail below. In the case of a heteroazeotrope, the azeotropic composition is replaced with that of the product phase before Step 1.
2.1. Step 1: determination of the type of all stationary points In this step, the stability of fixed points in all ternary submixtures is determined; then these stabilities are revised as the ternary submixtures are joined into quaternary ones, quaternary submixtures into quinary ones, etc. 2.1.1. Step 1a: ternary submixtures From the pure components, all ternary submixtures are created, and the azeotropes are assigned to the corresponding ternary submixtures. After that, the stability of all fixed points is determined within the ternary submixtures. The stability of pure component vertices is obtained by comparing their BPs with those of the neighbouring fixed points on the edges, which can be pure components or binary azeotropes. If, after the identification of the fixed points with minimum and maximum BP, there is no point with unknown stability, this step is ended for the given ternary submixture. The next step depends on the presence and the stability of a ternary azeotrope. If there is a ternary azeotrope and it is a UN, the binary azeotropes are saddles. If the ternary azeotrope is a S, the type of the binary azeotropes depends on their number: • 1: if any of the neighbouring vertices is a UN (SN), the binary azeotrope will be a SN (UN); • 2: if there are two vertices which are UNs (SNs), the binary azeotropes will be SNs (UNs); in the opposite case, one of the azeotropes is a UN, the other one is a SN; • 3: in this case there is only one vertex which is a node, whose stability determines those of the binary azeotropes. Two of the azeotropes will have the same stability as the vertex; the third one will be a node of opposite stability. If there is no ternary azeotrope, the stability of the binary azeotropes depends on their number: • 1: it is a S; • 2: the stability of the two azeotropes depends on the stability of the common (pure) component. If it is a UN, the azeotrope with the lower BP is a S, the other one a SN. If it is a SN, the azeotrope with the lower BP is a UN, the other one a S. If it is a S (Serafimov type: 2.0-2a), the stability of the binary azeotropes cannot be determined on the basis of temperatures only; • 3: the stability depends on that of the pure components. If all vertices are SNs, the azeotrope with the lowest BP is a UN and the other ones are saddles; otherwise, the binary azeotropes form a UN, a S and a SN, respectively. 2.1.2. Step 1b: quaternary submixtures From the ternary mixtures, all quaternary submixtures are created. The stabilities of all pure components and binary azeotropes are updated. If a point is a S in any ternary mixture, it will be a S; otherwise it will remain a node. The next step depends on the presence of the quaternary azeotrope. If there is a quaternary azeotrope and it has the lowest BP of the submixture, it is the single unstable node; otherwise it is a S. If there is no quaternary azeotrope, but there is a ternary UN and still another UN, Step 1b is continued; otherwise it is ended for this submixture. We investigate whether this other UN is a pure component, and whether there are boundaries on all three ternary submixtures not containing the ternary UN. If both conditions are satisfied, the stabilities are not modified; otherwise the UN with the higher BP becomes a S. 2.1.3. Step 1c: submixtures of higher dimension The n-component mixture is decomposed into submixtures of n-1 components by omitting one component for each submixture. This procedure is continued until we have a set of five-component submixtures; then, to all submixtures of each level, the stationary points are assigned.
The stability of the stationary points in the quinary
submixtures are determined by comparing their stabilities in the quaternary submixtures. If a point is a S in any quaternary mixture, it will be a S; otherwise it will remain a node. If the quaternary submixtures to be united contain more than one quaternary azeotrope that is a UN, the azeotrope with the lowest BP remains a UN, while the other(s) become(s) S. This comparison is repeated at every higher level until the stabilities in the n-component mixture are determined. 2.2. Step 2: Completion of adjacency matrix (Ahmad et al., 1998) In this step we implement the procedures OmegaAll and Omega by Ahmad et al. (1998). Since we study only the mixtures occurring in practice, the number of topological structures to be considered decreases from the 13 distinguished by Ahmad et al. (1998) to 9. A further difference is that, to evaluate the stabilities of the fixed points in the common unstable boundary limit set, we have to use the same method as in Step 1 (calling the function "stability"), because the numbers of positive and negative eigenvalues are not known. 2.3. Step 3: Determination of product sequences All possible sequences of cuts must be enumerated satisfying criteria 1 and 2 of Ahmad et al. (1998), that is, a sequence consists of n fixed points, and each subsequent product cut has to be an element of the unstable boundary limit sets of all the preceding product cuts. The list of possible sequences of cuts then has to be checked against criterion 3, that is, whether the fixed points form an n-1 simplex. If no fixed point is a linear combination of the others, they form an n-1 simplex. This check is performed by the function "eqsolv". In the case of ternary and quaternary mixtures, it may occur that some of the batch distillation (BD) regions overlap. In this case it must be checked whether the azeotrope containing all components is located in the interior of a BD region. If so, the region which is a subset of another one is not a real BD region, and it must be omitted. By this omission, the region which contains the non-real region becomes concave; therefore, if the charge happens to lie in the omitted, non-real region, it will actually lie in a third, convex region. The composition of the charge is a linear combination of the compositions of the fixed points of its corresponding BD region. If this set of equations is solved for all regions (by the function "eqsolv") and the coefficients obtained are between 0 and 1, the charge is located in the corresponding region, and the coefficients give the relative amounts of the cuts.
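Returning to Step 1a, the boiling-point comparison for pure-component vertices admits a compact sketch. The rule below is a simplified reading of the vertex classification described above; the authors' VBA implementation is not given in the paper.

```python
def vertex_stability(bp_vertex, bp_edge_neighbours):
    """Classify a pure-component vertex of a ternary submixture from boiling
    points alone: it is an unstable node (UN) if it boils below every
    neighbouring fixed point on its edges, a stable node (SN) if it boils
    above all of them, and a saddle (S) otherwise."""
    if all(bp_vertex < bp for bp in bp_edge_neighbours):
        return "UN"
    if all(bp_vertex > bp for bp in bp_edge_neighbours):
        return "SN"
    return "S"

# Acetone (328.90 K) in the A-C-M submixture, with the AC (337.85 K) and
# AM (328.01 K) azeotropes as edge neighbours (data from Table 1):
print(vertex_stability(328.90, [337.85, 328.01]))  # 'S' (saddle)
```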
3. Results Calculations are presented for the five-component system acetone (A) – chloroform (C) – methanol (M) – ethanol (E) – benzene (B), studied by Ahmad et al. (1998). We have used the BPs and compositions (Table 1) published by Ahmad et al. (1998), with the exception of the BP of the acetone-chloroform (AC) azeotrope, which was taken from the CRC Handbook of Chemistry and Physics. The stabilities calculated without using a VLE model agree with the ones obtained by the method of Ahmad et al. (1998), which requires knowledge of the VLE parameters. For testing Steps 2 and 3, we determined all feasible product sequences and the relative amounts of the cuts for four different charge compositions located in different BD regions (Table 2). The 25 product sequences calculated agree with the ones obtained by the method of Ahmad et al. (1998). The above results verify that our algorithm is suitable for the determination of the sequence of the cuts without using a VLE model.
Table 1. Boiling points and compositions of the test system
Fixed point    A        C        M        E        B        BP (K)
CM             0        0.6579   0.3421   0        0        326.51
AM             0.7780   0        0.2220   0        0        328.01
A              1        0        0        0        0        328.90
ACMB           0.3164   0.4363   0.4363   0        0.0195   329.9
ACM            0.3396   0.2322   0.4282   0        0        329.91
MB             0        0        0.6070   0        0.3930   330.76
CE             0        0.8450   0        0.1550   0        332.06
C              0        1        0        0        0        333.85
ACE            0.3570   0.4637   0        0.1793   0        335.48
M              0        0        1        0        0        337.30
AC             0.3455   0.6545   0        0        0        337.85
EB             0        0        0        0.4442   0.5558   340.50
E              0        0        0        1        0        351.12
B              0        0        0        0        1        352.86

Table 2. The sequence and relative amounts (%) of products for different charge compositions
Charge composition (A, C, M, E, B)   Cuts
0.2, 0.2, 0.2, 0.2, 0.2              CM: 3.0, ACMB: 43.5, ACE: 17.5, EB: 34.5, E: 1.6
0.4, 0.15, 0.15, 0.15, 0.15          AM: 33.4, ACMB: 17.4, ACE: 23.8, EB: 24.2, B: 1.2
0.15, 0.15, 0.4, 0.15, 0.15          AM: 6.4, ACMB: 47.4, MB: 28.2, EB: 5.4, E: 12.6
0.15, 0.15, 0.15, 0.15, 0.4          AM: 2.3, ACMB: 32.6, ACE: 13.1, EB: 28.5, B: 23.5
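The region-location test behind Table 2 (the role of the "eqsolv" function in Step 3) reduces to one small linear solve: the charge composition is written as a combination of the fixed-point compositions of a candidate region, with the coefficients summing to one. The NumPy sketch below is illustrative, not the authors' Visual Basic implementation.

```python
import numpy as np

def cut_amounts(fixed_points, charge):
    """Solve charge = sum_k c_k * x_k with sum_k c_k = 1. If every c_k lies
    in [0, 1], the charge is inside the region spanned by the fixed points
    and the c_k are the relative amounts of the cuts."""
    A = np.vstack([np.array(fixed_points, dtype=float).T,  # component balances
                   np.ones(len(fixed_points))])            # coefficients sum to 1
    b = np.append(np.array(charge, dtype=float), 1.0)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    inside = np.all((c > -1e-9) & (c < 1 + 1e-9))
    return c, inside

# Toy ternary check: a charge inside the region spanned by the pure vertices.
region = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
c, inside = cut_amounts(region, (0.5, 0.3, 0.2))
print(np.round(c, 3), inside)  # [0.5 0.3 0.2] True
```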
4. Conclusion A method is presented for the determination of product sequences of batch distillation for any number of components, based only on the BPs of pure components and azeotropes, and the azeotropic compositions. The method is suitable for heteroazeotropes as well. The stability of fixed points is determined in every ternary submixture, assuming they occur in Reshetov's statistics. The stabilities are updated as the ternary submixtures are joined into quaternary ones, quaternary submixtures into quinary ones, etc. On the basis of the stabilities, we enumerate all feasible product sequences using the algorithm of Ahmad et al. (1998). Finally, the relative amount of cuts is determined assuming maximal separation for the given charge composition. The results are presented for the system acetone-chloroform-methanol-ethanol-benzene. The stabilities and the set of product sequences calculated without using a VLE model agree with the ones obtained by Ahmad et al. (1998). The relative amounts of cuts were calculated for four charge compositions belonging to different batch distillation regions. These results verify that our algorithm is suitable for the determination of the sequence of the cuts without using a VLE model.
Acknowledgement This work was supported by the OTKA (project No.: T-049184) and by the KMOP (project No.: 1.1.1-07/1-2008-0031).
References B. S. Ahmad, Y. Zhang, P. I. Barton, 1998, AIChE Journal, 44, 5, 1051-1070. E. R. Foucher, M. F. Doherty, M. F. Malone, 1991, Ind. Eng. Chem. Res., 30, 760. T. Pöpken, J. Gmehling, 2004, Ind. Eng. Chem. Res., 43, 777-783. B. T. Safrit, A. W. Westerberg, 1997, Ind. Eng. Chem. Res., 36, 1827-1840.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Designing multi-product biopharmaceutical facilities using evolutionary algorithms Ana S. Simaria(a), Ying Gao(b), Richard Turner(b) and Suzanne S. Farid(a)
(a) The Advanced Centre for Biochemical Engineering, Dept. of Biochemical Engineering, University College London, Torrington Place, London WC1E 7JE, UK
(b) MedImmune Limited, Milstein Building, Granta Park, Cambridge, CB1 6GH, UK
Abstract An evolutionary algorithm based approach is presented to address the problem of designing flexible and cost-effective multi-product biopharmaceutical facilities. For a given portfolio of products with different demands, upstream yields and impurity levels, the proposed approach is able to tackle multiple decisions simultaneously so as to minimise cost of goods, namely: the optimal ratio of upstream to downstream trains, the sequence of purification operations to be used for each product and the equipment sizing strategy of each operation. The evolutionary algorithm is linked to a detailed process economics model to evaluate the multiple financial and operational outputs of each string. An industrially-relevant case study is presented that focuses on the design of manufacturing facilities for the production of monoclonal antibodies at different phases of clinical development. The evolutionary algorithm was found to search the decision space efficiently, identify the most promising solutions and provide novel insights on competing sequences. Keywords: bioprocess systems engineering, combinatorial optimisation, evolutionary algorithms, multi-product facility design, business decision-support.
1. Introduction Increasing pressures exist in the biopharmaceutical sector for the design of flexible and cost-effective multi-product facilities [1] that can cope with diverse drug candidate characteristics and process variations. This is due to the high risks of clinical failure of biopharmaceuticals, pressures to contain costs and enhance capacity utilisation as well as the greater variability of biopharmaceuticals in comparison to chemical drugs. In the specific case of monoclonal antibodies (mAbs), platform processes relying on 3 chromatography operations for purification have become widely established enabling efficiencies in terms of development time and resource consumption [2]. While radical changes from the current platforms are unlikely to occur in the near future, innovations that fit in the current facility paradigm, e.g. high capacity ion exchangers, mixed-mode resins and membrane adsorbers, are being extensively researched by the biopharmaceutical industry [1-3] in an attempt to overcome existing purification
bottlenecks due to increasing upstream yields. In order to address the current challenges, this paper presents a decision-support approach in which multiple decisions, trade-offs and constraints are handled simultaneously so as to determine how best to design multi-product facilities accounting for different optimal purification sequences per product.
2. Methodology 2.1. Decision levels The design of multi-product biopharmaceutical facilities is defined as a combinatorial optimisation problem where the decision variables represent choices at different levels (facility, product and unit operation) (Fig. 1). For a given portfolio of products with different demands, upstream yields and impurity levels, the problem consists of determining the best ratio of upstream (USP) to downstream (DSP) trains, the sequence of purification operations to be used for each product and the equipment sizing strategy of each operation, such that the cost of goods is minimised whilst demand and purity targets are met. The large decision space is explored using a prototype model comprising an evolutionary algorithm linked to a detailed process economics model for evaluation of the multiple financial and operational outputs of each string.
Fig. 1. Decision levels of the multiproduct facility design problem and steps of the proposed procedure. USP=upstream, DSP=downstream, SEQ=sequence, H=bed height, DIAM=diameter, NrCYC=number of cycles, NrCOL=number of columns.
2.2. Proposed evolutionary algorithm (EA) The right hand side of Fig. 1 illustrates how the proposed approach handled each of the decision levels. At the unit operation level, the procedure used EAs [4] to search for the best equipment sizing strategies for each sequence and product. The main steps of the EA involved: (a) generation of a random initial population of chromosome solutions, representing the decision variables of each chromatography operation (H, DIAM, NrCYC, NrCOL), (b) evaluation of each chromosome using as objective function the cost of goods per gram (COG/g) [5] which comprised both direct costs based on resource utilisation (e.g. resin costs) and indirect costs (e.g. maintenance costs), (c) crossover of a set of chromosomes to form new ones, (d) application of a replacement strategy to select which chromosomes would remain in the population, (e) loop back to step (c) until the stopping criteria were met. The output of the EA at this decision level was not one single ‘optimal’ equipment sizing strategy, but a set of alternative strategies
with similar values of COG/g, to improve the decision-making process. At the product level, each possible 3-step purification sequence was generated, given a set of available unit operations to choose from. Combining this with the unit operation level resulted in a very large decision space, comprising different permutations of product, sequence, and equipment sizing strategy with the corresponding performance measures. Finally, at the facility level, the previous steps were repeated for different ratios of USP to DSP trains. 2.3. Case study setup An industrially-relevant case study is presented that addresses the design of a late stage manufacturing facility for the production of 3 mAbs (mAb1, mAb2, mAb3) at different phases of development (mAb1 - Phase III clinical trials, mAb2 and mAb3 - Market), with different titres (3, 4, and 5 g/L) and demands (20, 80 and 400 kg/year). The set of chromatography-based operations available to build the purification sequences and the corresponding characteristics (e.g. binding capacity, yield, lifetime, price, host cell protein (HCP) removal capability) was generated using information from commercially available resins/membranes as well as advice sought from industrial experts. In total there were 17 available resins in the database, able to form 208 different purification sequences. The range of variation of the packed-bed chromatography equipment sizing decision variables was the same for all available steps: 15-25 cm bed height, 50-200 cm diameter, 1-4 columns, 1-10 cycles.
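As a rough illustration of steps (a)-(e) of section 2.2, the sketch below evolves equipment sizing chromosomes over the case study's variable ranges. The COG/g function here is a made-up surrogate standing in for the detailed process economics model, and all cost coefficients are assumptions.

```python
import random

# Decision-variable ranges from the case study: bed height H (cm),
# diameter DIAM (cm), number of cycles and number of columns.
RANGES = {"H": (15, 25), "DIAM": (50, 200), "NrCYC": (1, 10), "NrCOL": (1, 4)}

def cog_per_g(ch):
    """Placeholder objective: an invented COG/g surrogate combining a
    'direct' resin-like term and an 'indirect' overhead term."""
    resin = ch["H"] * ch["DIAM"] ** 2 * ch["NrCOL"] * 1e-4
    overhead = 50.0 / ch["NrCYC"] + 2.0 * ch["NrCOL"]
    return resin + overhead

def random_chromosome():
    return {k: random.randint(lo, hi) for k, (lo, hi) in RANGES.items()}

def crossover(p1, p2):
    return {k: random.choice((p1[k], p2[k])) for k in RANGES}

def evolve(pop_size=100, generations=20):
    # (a) random initial population, (b) evaluation, (c) crossover,
    # (d) replacement of the worst, (e) loop until the stopping criterion.
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        children = [crossover(*random.sample(pop, 2))
                    for _ in range(pop_size // 2)]
        pop = sorted(pop + children, key=cog_per_g)[:pop_size]
    return pop[:20]   # a set of near-equivalent strategies, not one optimum

best = evolve()
print(best[0], round(cog_per_g(best[0]), 1))
```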
3. Results and discussion The discussion focuses on the characteristics of the best solutions generated by the EA, whose convergence was proved by the results of multiple runs using a population size of 100 chromosomes and a stopping criterion of 20 generations. Fig. 2 presents a comparison of the different sequences generated by the procedure in terms of % change in COG/g relative to the platform and purity measured in terms of HCP log reduction value (LRV). The platform is a standard process for mAb purification which consists of 3 chromatography steps: affinity (AFF) for capture, cation-exchange (CEX) for intermediate purification and anion-exchange (AEX) for polishing. Fig. 2 presents the results for the products with the lowest (mAb1) and highest demand (mAb3) and for two different ratios of USP:DSP trains. Overall, the sequences with no AFF step are cheaper, representing COG/g savings of up to 7% for mAb3 and 6USP:1DSP. However, not all these sequences are effective at removing HCPs when the impurity loads are high. Changing the AFF step from capture to intermediate also results in cost savings, due to the lower volume required of this expensive resin. Similar trends are observed in the different graphs, with an 'amplification' effect caused by the increased number of batches. An exception, however, is seen in a particular type of non-AFF sequence that offers cost savings for mAb1 but becomes expensive for mAb3. These are sequences containing hydroxyapatite resins with low lifetimes, which require frequent replacement when the number of batches is high.
Fig. 2d highlights the sequences that satisfy an HCP LRV requirement of 3 logs and represent a trade-off between cost savings and development effort. The sequence without the AFF step is the one with the highest cost saving but also the one that would require the longest development time to achieve the process targets. The other extreme is the sequence with AFF as capture, which is closer to the current platform, with low development time but also low cost savings. Having the AFF step as intermediate represents a compromise between cost savings and development effort.
Fig. 2. Comparison of % change in COG/g relative to the platform process and HCP LRV for the different sequences (with AFF as capture, with AFF as intermediate, and without AFF), for different scenarios of demand and USP:DSP ratio. COG/g is based on the average COG/g of the 20 cheapest equipment sizing strategies provided by the EA. The dotted area represents insignificant differences in COG/g relative to the platform. CEX=cation-exchange, AEX=anion-exchange, AFF=affinity, MM=mixed-mode.
Fig. 3 presents further results for these sequences with a more detailed analysis of the equipment sizing strategy decision level and its impact on cost and time. Two objective functions were used by the EA (COG/g and DSP time) and the 20 best strategies found for each objective function and sequence are shown in Fig. 3a. For a 6USP:1DSP ratio the time available to perform the purification operations is only 2 days, meaning that some of the cheapest strategies could not be implemented due to their excessive time. Fig. 3b provides more insights into the equipment sizing strategies that meet the DSP window and offer at least 2% cost savings for sequence CEX-AFF-AEX. The decision-
maker can choose between having single large columns running for a few cycles or multiple smaller columns running for a larger number of cycles, as all the presented options have similar COG/g and meet the DSP window. Hence individual preferences can be used to select the strategy to be implemented.
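The near-equivalence between "one large column, few cycles" and "several smaller columns, more cycles" can be illustrated with a back-of-the-envelope capacity calculation. This is a deliberate simplification (the loading figure is a placeholder); the paper's comparison rests on the full economics model.

```python
import math

def capacity_per_batch(bed_height_cm, diameter_cm, n_columns, n_cycles,
                       loading_l_per_l_resin):
    """Illustrative load handled per batch: it scales with column volume x
    number of columns x number of cycles (a gross simplification of the
    detailed process economics model)."""
    column_volume_l = math.pi * (diameter_cm / 2) ** 2 * bed_height_cm / 1000.0
    return column_volume_l * n_columns * n_cycles * loading_l_per_l_resin

# One 200 cm column for 2 cycles vs. two 100 cm columns for 4 cycles:
big = capacity_per_batch(20, 200, 1, 2, 10)
small = capacity_per_batch(20, 100, 2, 4, 10)
print(round(big), round(small))  # identical capacity -> similar COG/g
```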
Fig. 3. a) Comparison of % change in COG/g relative to the platform process and DSP time of the 20 best equipment sizing strategies identified by the EA when using (filled symbols) COG/g or (hollow symbols) DSP time as objective function for each of the selected sequences AFF-AEX-CEX, CEX-AFF-AEX and CEX-AEX-MM (mixed-mode), for mAb3 and 6USP:1DSP. b) Characteristics of the equipment sizing strategies for the capture step of sequence CEX-AFF-AEX that meet the DSP window of 2 days and represent cost savings of more than 2%. The bubble size is proportional to the column diameter; the number of cycles is less than 5 (open circles) or greater than or equal to 5 (filled circles).
4. Conclusions The most cost-effective purification sequences and equipment sizing strategies that meet demand and purity targets were identified for each product in the facility. The evolutionary algorithm was found to efficiently search the decision space, identifying the most promising solutions and providing novel insights into the design of multi-product biopharmaceutical facilities, particularly with respect to the use of alternative purification sequences.
5. Acknowledgements Financial support from the EPSRC (UK), the TSB (UK) and MedImmune is gratefully acknowledged. Industrial input from MedImmune is gratefully acknowledged.
References
[1] B. Kelley, 2009, mAbs, 1, 5, 443-452.
[2] A. Shukla and J. Thommes, 2010, Trends Biotechnol., 28, 5, 253-261.
[3] D. Low, R. O'Leary and N.S. Pujar, 2007, J. Chrom. B, 848, 48-63.
[4] A.S. Simaria and S.S. Farid, 2010, ESCAPE-20, S. Pierucci, G. Buzzi Ferraris (eds.), Amsterdam: Elsevier B.V., 27, pp 1557-1562.
[5] S.S. Farid, J. Washbrook and N.J. Titchener-Hooker, 2007, Comput. Chem. Eng., 31, 1141-1158.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A retrofit strategy to achieve "Fast, Flexible, Future (F3)" pharmaceutical production processes Ravendra Singh(a), Raquel Rozada-Sanchez(b), Tim Wrate(b), Frans Muller(b), Krist V. Gernaey(a), Rafiqul Gani(a), John M. Woodley(a)
(a) Department of Chemical and Biochemical Engineering, Technical University of Denmark, DK-2800 Lyngby, Denmark
(b) AstraZeneca Limited, Charter Way, Silk Road Business Park, Macclesfield, Cheshire SK10 2NA, UK
Abstract In the work reported here, a substrates adoption methodology for a series of similar substrates has been developed as part of a retrofit strategy. The objective is to achieve “fast, flexible and future” pharmaceutical production processes by adapting a generic modular process-plant template. Application of the methodology is illustrated through a case study from the pharmaceutical industry. Use of computer-aided models, methods and tools as part of the methodology is also highlighted. Keywords: retrofit strategy, substrates adoption methodology, fast, flexible.
1. Introduction In order to be efficient, the future pharmaceutical industry will need to change. For example, one option is to develop manufacturing processes based on continuous modular systems. In order to minimize the process development cost, a flexible generic continuous modular plant (which by definition should be capable of being adapted for operation with a series of related substrates and products) will be particularly attractive. However, the design of a new continuous modular plant is a challenging task. Therefore, systematic model-based computer-aided methods and tools are needed through which a generic plant for a given chemistry can be adapted for different substrates in the most efficient way. The proposed substrates adoption methodology identifies the necessary changes to a process-plant template in order to adapt it for a given substrate. The changes can be related to reagents (e.g. reducing agent, solvent, catalyst), process operational conditions (e.g. operating temperature, flow rates), as well as to the physical arrangement (configuration) of the process equipment. The overall goals are to (i) achieve a shorter time-to-market, (ii) reduce the cost of process development in terms of resources and time, (iii) increase the throughput of potential products and (iv) improve process robustness while maintaining product quality. The key idea is that a process and/or plant can be adapted to accommodate the chemical transformation of at least 80% of available substrates with common molecular functionality. In this manuscript a systematic framework, including the methods and tools through which a generic process/plant can be adapted to a specific substrate, is presented.
2. Systematic framework An overview of the proposed framework for substrates adoption is shown in Figure 1. The starting point for substrates adoption is the problem definition in terms of substrate
specifications and generic process and plant characteristics. This information will be used as the input to the developed system for substrates adoption. A model library and a knowledge base act as supporting tools for substrates adoption. As shown in Figure 1, the developed methodology relates the input information to the available supporting tools, and subsequently generates a proposal for an adapted plant. If the obtained plant is experimentally validated then it is selected as the final adapted plant. The substrates adoption methodology, a knowledge base and a model library are three important parts of the substrates adoption framework.
Figure 1. Framework overview 2.1. Supporting tools 2.1.1. Ontological knowledge base The ontological knowledge base contains useful information/data needed for substrates adoption. A methodology reported in Singh et al. [1] has been used to design and develop the ontology. According to the architecture, there is a main section of the knowledge base which is connected with different, more specific sections. The classes of the main section of the knowledge base are: main reactions (generic), products, reactants, reducing agents, catalysts, and solvents. Specific sections of the knowledge base consist of specific knowledge/data domains, including properties of substances (reactants, products, reducing agents, solvents, and catalysts), reaction characteristics and the characteristics of the unit operations. Each specific section (e.g. the reactant properties section) also consists of different classes (e.g. quantitative properties, qualitative properties), subclasses (e.g. fixed quantitative properties, dependent quantitative properties) and corresponding instances (e.g. density). 2.1.2. Model library The model library is used to generate the missing/additional data needed for substrates adoption. It contains thermodynamic models as well as process operational models. Thermodynamic models are used for the generation of missing or dependent properties. Process operational models are employed to provide process understanding. The importance of models in pharmaceutical processes is highlighted in the literature [2, 3].
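As a rough illustration of this class/subclass/instance organisation, the knowledge base can be pictured as nested mappings with a main section linked to specific sections. The entries below are examples drawn from the case study later in the paper, not the project's actual ontology.

```python
# Illustrative fragment of the ontological knowledge base (example entries
# only): a main section connected to specific sections with classes,
# subclasses and instances.
knowledge_base = {
    "main": {
        "main_reactions": ["nitro reduction"],
        "reactants": ["2-Nitro-4-chlorodiphenylamine"],
        "reducing_agents": ["H2"],
        "catalysts": ["Pt/C"],
        "solvents": ["THF"],
    },
    "reactant_properties": {                       # a specific section
        "quantitative": {
            "fixed": {"2-Nitro-4-chlorodiphenylamine": {"NMP_C": 114.47}},
            "dependent": {"2-Nitro-4-chlorodiphenylamine": {"solubility": "UNIFAC"}},
        },
        "qualitative": {"2-Nitro-4-chlorodiphenylamine": {"hazardous": False}},
    },
    "unit_operations": {
        "jacketed stirred tank": {"mixing": True, "heating_cooling": True},
    },
}

# Querying the unit-operation section for desired characteristics:
needed = {"mixing": True, "heating_cooling": True}
matches = [u for u, props in knowledge_base["unit_operations"].items()
           if all(props.get(k) == v for k, v in needed.items())]
print(matches)  # ['jacketed stirred tank']
```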
2.2. Substrates adoption methodology The substrates adoption methodology is shown in Figure 2. There are 6 hierarchical steps through which any substrate is adopted in a generic process-plant template. The first step is to check whether the substances involved in the process are hazardous or not. The necessary reagents (reducing agent, solvent, catalyst) are identified in steps 2-3. The solvent selection methodology is reported in Gani et al. [4]. In step 4.1, the desired process operational scenario is generated. Step 4.2 generates a process flowsheet that satisfies the desired process operational scenario. The process conditions are identified in step 4.3. The feasibility of the generated process and/or plant is analysed in step 5 and, if it is feasible, the reagents, equipment configuration and process operational conditions necessary for new product synthesis are proposed in step 6. These can then be further validated through experiments.
Figure 2. Substrates adoption methodology
3. Conceptual case study: Nitro reduction process A generic flowsheet (conceptual) for a nitro reduction process is shown in Figure 3.
Figure 3. Generic flowsheet (conceptual) for a nitro reduction process [α, β, γ, ψ = 1, 0 (exist, not exist); ζ = 1, 2, 3 (feed to mixing, reaction, completion section)]
As shown in Figure 3, the nitro compound is dissolved in solvent in a stirred tank. The solution is then passed through the polishing filtration step to remove undissolved nitro compound before being fed to the reactor. Simultaneously, the reducing agent is also fed to the reactor. A suspension of catalyst is prepared in a stirred tank and the catalyst is continuously fed to the reactor. The outlet stream from the reactor contains product (amino compound), catalyst, unreacted reducing agent and byproduct. The subsequent steps are to separate catalyst, reducing agent, products and solvent. Problem definition: Adapt a "generic nitro reduction plant template (Figure 3)" for the synthesis of "2-Amino-4-chlorodiphenylamine" through the nitro reduction of "2-Nitro-4-chlorodiphenylamine".
Generic case: R-NO2 + reducing agent → R-NH2 + byproduct (catalyst, solvent)
Specific case: 2-Nitro-4-chlorodiphenylamine + reducing agent → 2-Amino-4-chlorodiphenylamine + byproduct (catalyst, solvent)
Substrates specifications: Substances involved in the process are: product (2-Amino-4-chlorodiphenylamine), reactants (2-Nitro-4-chlorodiphenylamine, reducing agent), catalyst, solvent. Generic process and plant characterization: The generic process-plant template (see Figure 3) has been characterized as shown in Figure 4.
Figure 4. Generic process and plant characterization 3.1. Hazard assessment (Step 1): 2-Nitro-4-chlorodiphenylamine is safe for adoption. 3.2. Initial solubility screen (Step 2): THF and N,N-dimethylformamide are obtained as solvents from the initial solubility screening. 3.3. Reactivity & selectivity assessment (Step 3): In this step the reducing agent (H2), catalyst (Pt/C) and solvent (THF) are obtained from available knowledge and model-based analysis coupled with experimental data. 3.4. F3 process conditions (Step 4): The following sub-steps are used. 3.4.1. Generate desired process operational scenario (Step 4.1): The procedure for this step is shown in Figure 5. Generic process operations are: substrate dissolution, filtration, heating, reaction etc. 'Substrate dissolution' is considered for analysis. The substrate (2-Nitro-4-chlorodiphenylamine) is in the solid phase (NMP = 114.47 oC) while the reaction needs to be in the liquid phase, and therefore this operation is necessary. Key elements involved in dissolution are: the mixing tank, 2-Nitro-4-chlorodiphenylamine, and THF. One element, 'mixing tank', is selected and the corresponding characteristics (stirred tank, cooling/heating required etc.) and limitations (e.g. desired temperature range) are identified. Steps 4.1.2-4.1.6 are repeated for each operation.
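The necessity check used for 'substrate dissolution' in Step 4.1 reduces to comparing the substrate's melting point against the intended reaction temperature. The sketch below is illustrative; the function name and the example reaction temperature are assumptions, not from the paper.

```python
def dissolution_required(melting_point_c, reaction_temp_c):
    """Step 4.1 necessity check (illustrative): the reaction runs in the
    liquid phase, so a substrate that is solid at the reaction temperature
    must first be dissolved in the selected solvent."""
    substrate_is_solid = reaction_temp_c < melting_point_c
    return substrate_is_solid

# 2-Nitro-4-chlorodiphenylamine melts at 114.47 degC; at a (hypothetical)
# reaction temperature below that, the dissolution operation is necessary.
print(dissolution_required(114.47, 60.0))  # True
```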
Figure 5. Generate desired process operational scenario (q: no. of process operations) 3.4.2. Generate process flowsheet (Step 4.2): In this step the desired process operational scenario is matched with the unit operation knowledge base and the desired process equipment is identified. For example, for the process operation 'substrate dissolution', the corresponding process equipment (jacketed stirred tank) which satisfies the desired characteristics and limitations is identified from the knowledge base. Similarly, the other process equipment is identified and the flowsheet is developed (see Figure 3, highlighted part). 3.4.3. New process analysis (Step 4.3): In this step all the information/data shown in Figure 4 are generated. Thermodynamic models (e.g. a solubility model (UNIFAC)) are used for property calculations and process operational models are used for calculation of process operational variables (e.g. temperature, flow rate, yields etc.). 3.5. Feasibility analysis (Step 5): In this step plant feasibility, operational feasibility and reagent feasibility are checked. The generated process flowsheet is a subset of the generic process flowsheet, meaning that the plant is feasible. The calculated process operational variables are within the specified operational limits, meaning that the process operations are also feasible. The identified reagents satisfy the operational (e.g. substrate solubility in the solvent), environmental and economic constraints (e.g. cost). 3.6. Proposed adaptation (Step 6): On the basis of the outcomes of Steps 1-5, the necessary reagents, process flowsheet and operational conditions are proposed. The proposed synthesis process should then be further validated through experimentation. If the proposed process satisfies the desired process performance then it can be considered as the final adapted synthesis process.
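Step 5's three checks are naturally expressed as predicates. The sketch below is schematic: the unit-operation names follow the generic template of Figure 3, but the limit values are placeholders, not from the paper.

```python
def feasible(flowsheet, generic_template, variables, limits, reagents_ok):
    """Step 5 feasibility analysis (schematic): the adapted plant must be a
    subset of the generic template, every operational variable must lie
    within its specified limits, and the reagents must have passed the
    operational/environmental/economic screens of Steps 2-3."""
    plant_ok = set(flowsheet) <= set(generic_template)
    operation_ok = all(lo <= variables[v] <= hi
                       for v, (lo, hi) in limits.items())
    return plant_ok and operation_ok and reagents_ok

print(feasible(
    flowsheet=["dissolution", "polishing filtration", "reaction"],
    generic_template=["dissolution", "polishing filtration", "reaction",
                      "catalyst separation", "solvent recovery"],
    variables={"T_C": 60.0},
    limits={"T_C": (20.0, 114.0)},   # placeholder operating window
    reagents_ok=True,
))  # True
```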
4. Conclusions In this work, a systematic framework for process and plant retrofitting has been presented. Process and plant retrofitting provides the means to produce a range of chemicals efficiently with less cost, time, space and resources. The application of the framework has been demonstrated through a conceptual nitro reduction case study.
Acknowledgements: The work presented in this manuscript is funded by the European Community's 7th Framework Programme under grant agreement n° 228867, F3 Factory.
References
1. R. Singh, K. V. Gernaey, R. Gani, 2010, Comput. & Chem. Engng., 34, 7, 1137-1154.
2. K. V. Gernaey, R. Gani, 2010, Chemical Engineering Science, 65, 5757-5769.
3. C. Jimenez-Gonzalez, J. M. Woodley, 2010, Comput. & Chem. Engng., 34, 1009-1017.
4. R. Gani, P. A. Gomez, M. Folic, C. Jimenez-Gonzalez, D. J. C. Constable, 2008, Comput. & Chem. Engng., 32, 2420-2444.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Modified Case Based Reasoning cycle for Expert Knowledge Acquisition during Process design
Eduardo Roldán(a), Stéphane Negny(a), Jean Marc Le Lann(a), Guillermo Cortés(b)
(a) Laboratoire de Génie Chimique, UMR, INPT-ENSIACET, allée Emile Monso, BP, Toulouse, France
(b) Postgrade Department, ITO, Orizaba, Ver., Mexico
Abstract
Chemical engineering researchers currently have to propose tools to support design in order to be consistent with today's economic cycles and industrial constraints. One way to accelerate design is to use past experiences as a starting point, because some choices or decisions do not have to be made (or questioned) again, and then to adapt and enhance them further. For this adaptation step, new expert knowledge is often mandatory to modify the past solution. In this paper we propose a coupling between two knowledge based methods, Case Based Reasoning and Constraint Satisfaction Problem, to accelerate preliminary design by online knowledge acquisition during problem solving episodes.
Keywords: Design, Case Based Reasoning, Constraint Satisfaction Problem
1. Introduction
In the current world market context, the design activity in chemical engineering, as in other engineering domains, must be shortened on the one hand and must deliver innovative products on the other hand in order to maintain a competitive position. One way to deal with these issues is to manage the growing amount of knowledge created during the design activity. The goal of knowledge management is to stress the growing importance of knowledge reuse in order to assist the designer in the design process. Conventionally, in chemical engineering, modelling is the widespread method for knowledge capitalization. But modelling reaches its limits when we try to acquire, represent and reuse the expert tacit knowledge that leads to choices of process or unit operation technological configuration.
To deal with this issue, various Artificial Intelligence methods try to take into account tacit and explicit expert knowledge. Among them, Case Based Reasoning (CBR) is commonly used in process engineering design (Avramenko and Kraslawski). But Constraint Satisfaction Problem (CSP) methods, although less common, are also an appropriate approach for knowledge management during the design activity. In a previous paper (Roldan et al.), we presented a new method based on the coupling between Case Based Reasoning and Constraint Satisfaction Problem. The originality of our work lies in the fact that we add a new loop in the CBR adaptation step to acquire expert knowledge "online". To go further, we exploit the expert's interaction loop more deeply by proposing a method for automatic expert knowledge acquisition.
2. Knowledge Based Methods
2.1. Case Based Reasoning
The CBR approach tries to propose a solution to a current problem by establishing similarities with problems previously solved (i.e. cases) and stored in a memory (case base), instead of starting from scratch. The main principle of CBR is: similar problems have similar solutions. Compared to the CSP approach detailed below, it
requires significantly less knowledge extraction: the principal relevant characteristics of the problem and its associated solution are sufficient. Even if this approach has a learning step to extend the number of cases in the memory, it needs to gather a minimum number of cases in order to be effective and to have significant results, especially during the start-up phase. Because of its many advantages, this approach is retained for this work. Among them, we can underline its reduced knowledge acquisition task, its flexibility in knowledge modelling, its ability to learn, its possibility for reasoning with incomplete and/or imprecise data, its vicinity to human reasoning, and its rapidity to create and maintain a computer decision support tool for the designer. There have been various models to represent the CBR method, but currently there is a general acceptance of the 5R model with the following steps: Represent, Retrieve, Reuse, Repair and Retain, illustrated in figure 1.
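As a minimal sketch of the Retrieve step (our illustration, not the authors' implementation), cases can be stored as feature vectors with attached solutions and ranked by a normalised similarity; all features, weights and solutions below are assumed.

def similarity(query, case, weights):
    """Weighted similarity in [0, 1]; features are assumed pre-normalised."""
    d = sum(w * abs(query[k] - case["features"][k]) for k, w in weights.items())
    return 1.0 / (1.0 + d)

def retrieve(query, case_base, weights):
    """Return the most similar stored case (the Retrieve step of the 5R cycle)."""
    return max(case_base, key=lambda c: similarity(query, c, weights))

case_base = [
    {"features": {"viscosity": 0.2, "volume": 0.5}, "solution": "propeller, top entry"},
    {"features": {"viscosity": 0.9, "volume": 0.3}, "solution": "helical ribbon"},
]
best = retrieve({"viscosity": 0.8, "volume": 0.4}, case_base,
                {"viscosity": 0.7, "volume": 0.3})
print(best["solution"])  # helical ribbon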
Figure 1. CBR process with the interactive loop
2.2. Constraint Satisfaction Problem
In CSP, the knowledge related to a problem is segmented into elementary pieces modelled by constraints (logical expressions, mathematical equalities or inequalities, ranges of validity…). Based on these pieces, a knowledge model and a reasoning (inference engine) are built. When a new problem is faced, it is submitted to the knowledge model and the reasoning is then driven through the constraints in order to reduce the range of the possible values for the variables. This approach has two principal strengths: its ability to reach new, innovative solutions and to establish that a problem has no solution (an over-constrained problem, for example). Its major drawback is the huge work dedicated to the extraction, interpretation and formulation of the domain knowledge.
Constraint Programming is a set of mathematical methods based on a declarative description of a problem as a set of decision variables with their domains and a set of constraints restricting the combinations of values. It is defined by a tuple (V, D, C):
• V, a finite set of variables V = {X1, …, Xn};
• D, the domains of validity D = {D1, …, Dn}. Each variable Xi of V has an associated domain Di in D;
• C, the network of constraints C = {C1, …, Cm}. Each constraint Cj is a relation between a subset of variables of V. The type of constraints determines the class of the CSP and therefore the solving strategies.
The question to be answered for an instance of the method is whether there exists an assignment of values to variables such that all the constraints are satisfied. Compared to optimization techniques, CSP is characterized by a reduction of the domains of the variables during resolution. The key idea is to use constraints actively to reduce the computational effort needed to solve highly combinatorial problems. Constraints are not only used to test the validity of an assignment but also, in an active mode, to remove
values from domains and detect inconsistencies. This process of actively used constraints is called constraint propagation or filtering. Several filtering techniques and algorithms are available, capable of removing more or fewer values in more or less computing time; here we use the AC3 algorithm (not detailed here; a simplified sketch is given at the end of this section). As general constraint propagation is usually incomplete, it cannot detect all the inconsistencies. Consequently, constraint propagation must be coupled with search techniques to determine whether a problem has one solution, several, or none. The search is commonly performed with a tree search algorithm. The goal of the search is to go through the tree until a solution is found, while the filtering consists in pruning this tree by eliminating local inconsistencies. All these aspects are out of the scope of this paper, but more details can be found in Apt.
CSP methods are very useful for the conceptual phase of the design process due to:
• Its formalism for the attainment, representation and structuring of knowledge, which contributes to perpetuate the design activity in a firm;
• Its capability to take imprecision into account;
• Its ability to consider miscellaneous and heterogeneous requirements on the system, which permits considering them simultaneously, avoiding iterations during the design;
• Its possibility to state that a problem has no solution, or to find all the possible solutions (i.e. all the design alternatives in our case);
• Its ability to preserve the initial problem structure instead of translating it into another form (for instance an arithmetic form), avoiding loss of information.
2.3. Coupling for Adaptation
In our CBR system, the problem is stated in the form of a CSP model. CBR and CSP methods can be coupled because they are based on different knowledge. In CSP, the problem must be formalized and the elements (V, D, C) clearly identified; CBR is preferentially used when the formalization is not obvious. Although the resolution process and the necessary knowledge are different, there are various advantages to integrating these two methods in the adaptation step: CSP leads to a coherent solution even when the problem is complex; CBR is widely used when we have empirical data. Coupling both methods allows taking advantage of these two strengths. Besides, in the adaptation stage, the use of constraints permits introducing the expert point of view easily and taking into account the interaction between various cases when they need to be combined (for process intensification purposes, for example).
Because of discrepancies between the initial and the retrieved problem, the latter must be adapted. Adaptation is considered a crucial part of CBR systems because it confers on them their quality as problem solvers. Moreover, our goal is to propose a tool for the preliminary design stage, where the number of past experiences is limited and adaptation is therefore decisive. Adaptation is the most important CBR step since it adds intelligence to what would otherwise be simple pattern matchers, or would even make CBR a simple storage and retrieval system. To be satisfactory and effective, the adaptation phase needs some additional adaptation knowledge. This new knowledge can be extracted from expert experiences or from the cases stored in the base, and then coded in the CBR system. But another method consists in acquiring expert knowledge during a problem solving episode (figure 1). Consequently, we have added an expert interaction loop in our CBR system (detailed explanation in Roldan et al.). After the adaptation phase, the solution is proposed to the expert; two situations are then possible. If the solution is satisfactory, the new case is stored. If the solution is unsatisfactory, the loop with the expert is activated. The advantage of this approach is a reduced knowledge engineering effort, through expert-specific requests.
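The following is a simplified sketch of the AC3 filtering idea mentioned above, applied to a toy version of the mixer model; the allowed (vessel, pressure) pairs are taken from the reconstructed table 2, and everything else is assumed.

from collections import deque

def ac3(domains, constraints):
    """domains: {var: set(values)}; constraints: {(x, y): predicate(vx, vy)}.
    Removes values of x that have no support in y; returns False if a domain
    is wiped out (inconsistency detected)."""
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        unsupported = {vx for vx in domains[x]
                       if not any(pred(vx, vy) for vy in domains[y])}
        if unsupported:
            domains[x] -= unsupported
            if not domains[x]:
                return False
            # re-examine arcs whose consistency may depend on x
            queue.extend(arc for arc in constraints if arc[1] == x)
    return True

allowed = {("hemispherical", "high"), ("elliptical", "high"),
           ("cylindrical", "low"), ("cylindrical", "high")}
domains = {"vessel": {"cylindrical", "elliptical", "hemispherical"},
           "pressure": {"low"}}
constraints = {("vessel", "pressure"): lambda v, p: (v, p) in allowed,
               ("pressure", "vessel"): lambda p, v: (v, p) in allowed}
print(ac3(domains, constraints), domains["vessel"])  # True {'cylindrical'}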
3. Adaptation Method
3.1. Presentation
Let us assume that the adaptation can be decomposed into a succession of single steps corresponding to elementary adaptation operations (EAO), as presented by Cordier et al. An adaptation method (AM) is a sorted set of EAO that allows transforming the source solution stepwise into the target one:
AM = {EAOi}, i = 1 to n
where n is the number of necessary steps to reach the target solution. For each case we associate one or more adaptation methods, because there can be several ways to adapt a case depending on the initial problem and aims. The EAO available to modify the retrieved CSP model are listed in table 1; for the domain operator, one can add or delete one value or a range of values.
Table 1. Elementary Adaptation Operators
On Variables: Add_var, Delete_var
On Domain: Add_value, Delete_value
On Constraints: Add_const, Delete_const, Modify_const
3.2. Knowledge Acquisition
The idea of online knowledge acquisition is to exploit the corrections driven by the expert through an interactive process. The most similar case proposed by the CBR system is accompanied by its AM. The expert chooses an adaptation method, then the adapted CSP model is solved and the solution presented to him. If the solution is not suitable, the failure of the adaptation method may come from one or several EAO. The implemented process is to treat each EAO separately:
1. Select one EAO not yet tested;
2. Construct the new AM;
3. Solve the adapted CSP model;
4. The solution is presented to the expert;
5. Solution validated: go to step 1 to test a new EAO;
6. Solution not validated: the expert modifies the EAO (she or he can delete it if necessary) and the AM is updated.
Once all the EAOi are tested, if an acceptable solution is still not reached, the expert can add new ones in order to match the target solution to his expectations. Then the new AM is stored in the CBR system and linked with the source case.
4. Case Study
The example has been adapted from the configuration of an industrial mixer described in Van Velzen.
4.1. CSP Model
The description of the mixer is subdivided into two main components: the mixer and the mixing task. First, the mixer was decomposed into a hierarchical structure; then the variables, domains and constraints are identified (table 2). Input data of the model, i.e. Viscosity (Vis), Pressure (Pres), Heat Transfer (Htrans) and Volume (Vol), are necessary to solve the CSP, and they come from the problem description in the CBR. In preliminary design we do not have a precise idea of some parameters; we often only know a magnitude. We can also take this specificity into account by giving a fuzzy number, a range of values or a linguistic value for the input data. For example, the feature pressure
can be described by numerical values or linguistic ones (low, medium or high). This imprecision will be clarified in the next step of the design activity. In table 2 we only detail the set of constraints C, i.e. the relationships it encompasses (for example between Vessel and Pressure).
Table 2. Description of the CSP model
Variables | Domain | Constraints
Mixing Task (MT) | Suspension, dispersion, blending… | C(V, Pres) = {(hemispherical, high), (elliptical, high), (cylindrical, low), (cylindrical, high)}
Mixer (M) | Reactor, mixer, storage tank… | C(M, Vol), C(M, Htrans)
Vessel (V) | Cylindrical, elliptical, hemispherical | C(MT, Vis, Vol, P), C(Htrans, MT, TD)
Impeller (I) | Axial (turbine, helical ribbon, propeller), dented disk, radial (turbine, anchor, stirrer)… | C(MT, Vis, Vol, I), C(MT, Vis, Vol, P)
Position (P) | Top, Side, Off-center |
Thermal Device (TD) | Cooler, Condenser, Heater… |
4.2. Adaptation Method
When this CSP model is retrieved, if the CSP solution with its input data does not match the expert's expectations, he can complete the model by adding some elements to the vessel design, with more geometrical considerations for example. In this case the AM can be: EAO1 = Add_var(Vhigh); EAO2 = Create_Dom(Vhigh); EAO3 = Add_var(Vdiam); EAO4 = Create_Dom(Vdiam); EAO5 = Add_const(C(Vol, Vdiam, Vhigh)), a geometrical constraint on the vessel proportions. When this case is retrieved again, the AM with the new EAO will be presented to the expert; he can then complete or modify it, or propose a new one with the EAOi of table 1. A sketch of these operators in action is given below.
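As an assumed illustration (not the authors' code) of how the elementary adaptation operators of table 1 could act on a stored CSP model, the sketch below replays the adaptation method of this example; the domains and the geometrical constraint are placeholders.

def add_var(model, var):
    model["V"].add(var)
    model["D"].setdefault(var, set())

def create_dom(model, var, values):
    model["D"][var] = set(values)

def add_const(model, name, variables, predicate):
    model["C"][name] = (variables, predicate)

# CSP model stored as {"V": variables, "D": domains, "C": constraints}
model = {"V": {"Vol"}, "D": {"Vol": {1.0, 2.0, 5.0}}, "C": {}}

# AM = {EAO_i}: the sequence proposed for completing the vessel geometry
add_var(model, "Vhigh"); create_dom(model, "Vhigh", [0.5, 1.0, 1.5])
add_var(model, "Vdiam"); create_dom(model, "Vdiam", [0.5, 1.0, 1.5])
add_const(model, "C_geom", ("Vol", "Vdiam", "Vhigh"),
          # assumed geometrical constraint on the vessel proportions
          lambda vol, d, h: abs(vol - 3.14159 / 4.0 * d * d * h) < 0.5)
print(sorted(model["V"]), list(model["C"]))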
5. Conclusion
In this paper we present a method to accelerate preliminary design in chemical engineering. This method is based on the coupling of two artificial intelligence methods dedicated to knowledge management, i.e. Case Based Reasoning and Constraint Satisfaction Problem. The latter allows representing a case without deteriorating its initial structure; furthermore, it can consider miscellaneous and heterogeneous requirements, and it can state that a problem has no solution or find all the possible solutions. The former is extended by adding an external loop with an expert for the adaptation step. This crucial step often needs new knowledge to eliminate discrepancies between the retrieved and the initial problems. With this loop we can acquire expert knowledge "online" with the help of Elementary Adaptation Operators, and then construct an adaptation method, stored with the case, that can be proposed and modified in a further design problem.
References
K.R. Apt, Principles of Constraint Programming, Cambridge University Press, Cambridge.
Y. Avramenko, A. Kraslawski, Case Based Design, Springer-Verlag.
A. Cordier, B. Fuchs, J. Lieber and A. Mille, Acquisition interactive des connaissances d'adaptation intégrées aux sessions de RàPC, CBR Workshop.
E. Roldan, S. Negny, J.M. Le Lann, G. Cortes Robles, Constraint Satisfaction Problem for Case Based Reasoning Adaptation: Application in Process Design, Computer Aided Chemical Engineering.
M. Van Velzen, A piece of CAKE: Computer Aided Knowledge Engineering on KDSified Configuration tasks, Master Thesis, Amsterdam University.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Integrating process simulation and MINLP methods for the optimal design of absorption cooling systems
Juan A. Reyes-Labarta,a Robert Brunet,b José A. Caballero,a Dieter Boer,c Laureano Jiménezb
a Department of Chemical Engineering, University of Alicante, Ap. Correos, Alicante, Spain
b Departament d'Enginyeria Quimica, Universitat Rovira i Virgili, Av. Països Catalans, Tarragona, Spain
c Departament d'Enginyeria Mecánica, Universitat Rovira i Virgili, Av. Països Catalans, Tarragona, Spain
Abstract
This work introduces a systematic method for the optimization of absorption cycles by combining the capabilities of simulation packages and optimization tools. The case presented is a mixed-integer nonlinear programming (MINLP) problem that is decomposed following the outer-approximation scheme. The primal level entails the solution of the nonlinear programming (NLP) subproblem, where the binary variables are fixed. The master is a specially tailored mixed-integer linear programming (MILP) problem. The NLP subproblems are solved by combining gradient-based NLP solvers (i.e. fmincon) linked with rigorous process simulation (Aspen Plus). The methodology is tested using an absorption cooling system. The results obtained show that the objective function can be significantly reduced with the presented methodology.
Keywords: absorption cooling system, hybrid simulation-optimization, optimal process design
1. Introduction
The growing increase in cooling demand has recently led to major energy problems, as the peak consumption has shifted from winter to summer time. In this context, absorption refrigeration has gained wider acceptance in the air conditioning sector, currently dominated by compression refrigeration. Absorption systems may use low-grade heat sources as energy input in order to produce cooling, thereby leading to lower global warming emissions. Furthermore, absorption systems do not contribute to ozone layer depletion and have a high reliability, low maintainability and a silent and vibration-free operation [1].
Absorption cooling system designs are traditionally performed by heuristic methods. Optimization approaches have only recently emerged as a clear alternative to these strategies. Most of the optimization methods introduced so far in this area rely on
simplified models of the desorber column, a key unit of the cycle, which in some cases might provide poor predictions. To overcome this limitation, we present a systematic tool for optimal design based on the combined use of process simulation and optimization tools. One of the features of our approach is the use of a detailed representation of the cycle that includes a rigorous tray-by-tray model of the distillation column, implemented in a process simulator (Aspen Plus). This hybrid approach takes advantage of the capabilities of the process simulator, which includes specific algorithms for the estimation of thermodynamic properties and models of a wide range of unit operations, and of the optimization capabilities of the external solvers.
2. Methodology
2.1. Mathematical formulation
The objective of the problem is to minimize the total annualized cost of the absorption cooling system with a fixed cooling capacity. The mathematical model derived for this problem is an MINLP, as shown in Eq. (1):

min f(x, u, x_D)
s.t.  h_I(x, u, x_D) = 0
      h_E(x, u, x_D) = 0
      g_E(x, u, x_D) ≤ 0        (1)

where x_D are the design variables; x are process variables computed by the simulator; u corresponds to a set of fixed parameters that are not modified during the calculation, and it includes all discrete decisions (number of trays and feed stage of the distillation column); h_I are all the equations solved by the process simulator; equations h_E and g_E are explicit external constraints added to the problem. Finally, f is the objective function, in this case the total annualized cost.
2.2. Solution procedure
The proposed algorithm iterates between the master and primal subproblems until a termination criterion is satisfied. A stopping criterion that tends to work very well in practice consists of stopping as soon as the primal subproblems start worsening. The primal level entails the solution of a nonlinear programming subproblem in which the integer variables are fixed. The solution of this subproblem requires that the process simulator performs the calculations. On the other hand, the task of the customized master problem is to decide the values of the integer variables.
The NLPs are solved by integrating the process simulator (Aspen Plus) with gradient-based solvers (i.e. fmincon) implemented in Matlab. The master MILPs are derived in each iteration by linearizing the objective function and constraints of the NLPs at their optimal point, and are solved by GAMS using CPLEX. Matlab and Aspen Plus are linked by a COM interface, and the communications between GAMS and Matlab were developed using text files.
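The iteration can be pictured with the toy sketch below (our schematic, with simple stand-ins for the Aspen Plus/fmincon primal NLP and the GAMS/CPLEX master MILP); it is not the authors' implementation, and the one-dimensional cost model is invented.

def outer_approximation(y0, simulate_nlp, solve_master, max_iter=20):
    y, y_best, best, cuts = y0, y0, float("inf"), []
    for _ in range(max_iter):
        cost, cut = simulate_nlp(y)        # primal: integer decisions fixed
        if cost >= best:                   # stop when the primals start worsening
            break
        best, y_best = cost, y
        cuts.append(cut)                   # linearization at the NLP optimum
        y = solve_master(cuts)             # master picks the next integers
    return y_best, best

def simulate_nlp(y):                       # toy stand-in for the flowsheet NLP
    cost = (y - 3) ** 2 + 1.0
    return cost, (y, cost, 2 * (y - 3))    # (point, value, gradient)

def solve_master(cuts):                    # toy MILP: minimise the max of the cuts
    eta = lambda y: max(c + g * (y - yk) for yk, c, g in cuts)
    return min(range(1, 11), key=eta)      # integer design alternatives

print(outer_approximation(8, simulate_nlp, solve_master))   # (3, 1.0)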
3. Case study: Ammonia-water absorption cooling system
The capabilities of our method are illustrated through a case study that addresses the design of an ammonia-water absorption cooling cycle. The system provides chilled water for cooling applications and operates as follows: the refrigerant in vapor phase coming from the subcooler (SC) is mixed (M) with the expanded stream coming from the solution heat exchanger (SHX). The resulting stream is absorbed by the diluted liquid solution (W) in the absorber (A). The concentrated solution leaving the absorber is preheated in the solution heat exchanger (SHX) and enters the distillation column (D). Vapor refrigerant from the top of the distillation column condenses completely in the condenser (C). The liquid refrigerant from the condenser is then subcooled in the subcooler (SC) by the superheating stream that comes from the evaporator (E). The refrigerant then passes through the refrigerant expansion valve (RV) and enters the evaporator (E). The weak liquid solution from the bottom of the distillation column (D) is cooled in the heat exchanger (SHX) and sent to the expansion valve (SV).
Fig. 1. Ammonia-water absorption cycle with internal heat recovery
It is assumed that the system works in steady-state operation. Heat and pressure losses are not considered. Valves are adiabatic, and the refrigerant leaves the condenser, the absorber and the bottom of the distillation column as saturated liquid.
4. Results and discussion
The design problem addressed consists in determining the optimal operating conditions (external fluid flows, equipment pressures, distillation column characteristics and equipment dimensions) in order to minimize the total annualized cost of the absorption cooling system with a fixed cooling capacity (in kW). The resulting MINLP optimization problem contains continuous and discrete decision variables plus nonlinear inequality constraints. Additionally, the model has to deal with all variables and equations describing the flowsheet in Aspen Plus. The algorithm presented converges after a few major iterations and a few CPU seconds on a computer with an AMD Phenom™ triple-core processor.
In Tables 1-3 the comparison of the base case [3] with the optimal solution is shown. The information includes the thermodynamic properties of each state point of the cycle, the flow rates, the design and operating variables of the cooling system and the value of the objective function. As observed, the total annualized cost of the system is significantly reduced compared to the base case. This is accomplished by reducing the number of stages (Nstages) in the distillation column and feeding the column with a feed tray (Fstage) closer to the reboiler. Also, the reflux ratio (RR) and the distillate-to-feed ratio (D/F) are reduced. These changes lead to a smaller column, which is adequate for the relatively high evaporator temperatures of our case study. The temperature (T) and pressures (PHigh, PLow) of the system adopt the same values as in the base case. The mass flow of the absorbent-refrigerant solution (m) is significantly increased, while the streams of the external fluids (mw1, mw2, mw3) remain very similar to the base case. With the resulting set of decision variables, the total area of the heat exchangers is significantly increased, and the coefficient of performance (COP) is also increased.
Table 1. Comparison between the decision variables in the base case and the optimal solution (base case vs. optimal case): Nstages, Fstage, D/F, RR, Vfrac, T [°C], m [kg/s], ω, PHigh [bar], PLow [bar], mw1 [kg/s], mw2 [kg/s], mw3 [kg/s], ΔTSHX [°C], ΔTSC [°C].
Table 2. Results of the optimal absorption cooling system obtained from the presented approach (base case vs. optimal case): Total Annualized cost [M€/yr], Operational cost [M€/yr], Fixed cost [M€/yr], AT [m2], Steam [kg/h], COP.
Table 3. Thermodynamic properties and mass flow rates of the cycle (base case/optimal case): state point, P [bar], T [°C], x, m [kg/s].
5. Conclusions
This work introduces a systematic strategy for the optimal design of absorption cooling systems. The presented method relies on an MINLP algorithm that integrates commercial process simulators and optimization tools. The proposed algorithm iterates between two types of subproblems: a nonlinear programming (NLP) subproblem and a specially tailored master mixed-integer linear programming (MILP) problem. From the numerical results we concluded that it is possible to significantly improve the economic performance of cooling systems, with a reduction of the total annualized cost and an increase of the coefficient of performance. In particular, the larger profitability of this work is attained by properly adjusting the operating conditions of all the units and streams embedded in the flowsheet.
References
[1] J.T. McMullan, Refrigeration and the environment – future, Int. J. Refrig.
[2] G.A. Florides, S.A. Kalogirou, S.A. Tassou, L.C. Wrobel, Modelling, simulation and warming impact assessment of a domestic system, Appl. Therm. Eng.
[3] B.H. Gebreslassie, G. Guillén-Gosálbez, L. Jiménez, D. Boer, Design of environmentally conscious absorption cooling systems via multiobjective optimization and life cycle assessment, Applied Energy.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A method for the design and planning operations of heap leaching circuits
Jorcy Y. Trujillo,a Mario E. Mellado,b Edelmira D. Gálvez,b,c Luis A. Cisternasa,b
a Departamento de Ingeniería Química, Universidad de Antofagasta, Chile
b Centro de Investigación Científico Tecnológico para la Minería, CICITEM, Chile
c Departamento de Ingeniería Metalúrgica, Universidad Católica del Norte, Chile
Abstract Heap leaching is a widely used extraction method for low-grade minerals, including copper, gold, silver, and uranium. This method has new applications in non-metallic minerals such as saltpeter and in soil remediation. Although a number of experimental and modeling studies have been carried out, which allow a better understanding of the phenomena and the operation, few studies have had the objective of optimizing the process. Most of the studies which consider optimization, either experimentally or through the use of models, have been done from a technical perspective. The aim of this work is to develop a methodology for the design and planning of heap leaching circuits. A superstructure which includes a number of alternative circuits is proposed. Then a mathematical model is developed that represents the constraints of the system and maximizes the profits. An example is considered to validate the proposed methodology. The results show that the current mode of operation of these systems can be improved using this methodology. Keywords: Heap leaching, optimization, hydrometallurgy, process design, planning.
1. Introduction Heap leaching (HL) is a process to extract low-grade minerals from ore, including copper, precious metals, nickel and uranium. This method has new applications including non-metallic minerals such as saltpeter (Valencia et al., 2008) and soil remediation (Hanson et al., 1993). In HL, mined ore is crushed into small chunks, or ground and agglomerated into quasi-uniform particles, and heaped on an impermeable plastic and/or clay lined leach pad, where it is irrigated with a leach solution which contains an appropriate leaching agent. The solution then percolates through the heap and leaches out the valuable metal. The leach solution containing the dissolved metals is collected in a storage pond. Later, the leached solution is sent to a solution purification process (e.g. solvent extraction) and a metal recovery process (e.g. electrowinning). Although a number of experimental, modeling and optimization studies have been carried out, which certainly allow a better understanding of the phenomena and the operation, only a few studies have addressed optimizing the process with respect to its design and planning. Most of the studies which have considered optimization, either experimentally or through the use of models, have been done from a technical perspective. For example, Mellado et al. (2010a) studied the optimization of the irrigation flow rate. Padilla et al. (2008) analyzed the economics of the heap, which represents a balance between the recovery and plant capacity. The results of the study have shown that the design (height of the heap) and planning of the
operation (operational time) were interactive factors, and that maximum recovery was not necessarily the best measure of operational efficiency based on economic considerations. The aim of the present work is to develop a methodology for the design and planning of the heap leaching circuit.
2. Mathematical Model In this section, a MINLP model to optimize the planning and design of the heap leaching system is presented. The problem consists of designing (usually the heap height) and planning (leaching time) a heap leaching system for the maximization of economic benefits. The solution strategy consists of using mathematical programming based on a superstructure of the heap leaching system. The heap system consists of heap leach units and solution purification/metal extraction (SPME) units. The superstructure of the heap leaching system is built including a mixer at the input and a divider at the output of each heap leach unit and each SPME unit, as shown in Fig. 1, allowing the transfer of solutions between all the units or those that the designer wants to consider. Then, mass balances for each component were developed for each process unit.
Figure 1. Superstructure for heap leach and SPME units.
The mass balance in each heap leaching unit j is given by

L_{j,k}^{out} = L_{j,k}^{in} + M_{j,k} \left( R_{j,k} - R_{j-1,k} \right)        (1)

where L_{j,k} = x_{j,k} q_j is the mass flow rate per cycle, x_{j,k} is the concentration of k, and q_j the volumetric flow rate per cycle in heap j. Now, M_{j,k} is the mass of metal of the valuable species k in each heap j, and is given by M_{j,k} = Z_j A_j \rho g_k. Here Z_j, A_j, \rho and g_k are the heap height, heap area, mineral density and mineral grade, respectively. Also, R_{j,k} is the recovery of metal k in heap j, and it can be calculated using the analytical model (Mellado et al., 2010b)
R_j = \frac{\alpha}{\beta Z_j^{\gamma}} \left[ 1 - \lambda\, e^{-k_T \left( \frac{u_s}{\varepsilon_b Z_j} t - \omega \right)} - \Lambda\, e^{-k_\tau \left( \frac{D_{Ae}}{\varepsilon_0 r^2} t - \omega \right)} \right]        (2)

where \alpha, \beta, \gamma, k_T, k_\tau, \lambda and \Lambda are constants to be computed, u_s is the superficial bulk flow velocity, \varepsilon_b is the bulk solution volume fraction, t is time, D_{Ae} is the effective pore diffusivity of the reagent, \varepsilon_0 is the ore porosity, and r is the particle radius. Note that eq. (2) assumes that the heap leaching units are operating in series. From the solution viewpoint the heap can operate in countercurrent, concurrent and/or cascade flow. In the event that the process unit corresponds to a solution purification/metal extraction (SPME) unit, the material balance is given by
L_{j,k}^{out} = L_{j,k}^{in} - P_{j,k}        (3)

where P_{j,k} is the production of metal k in the SPME unit j. The cycle time, t, is given by the following equation, where N is the number of cycles in the time horizon H:
H = N t        (4)

The cycle time corresponds to t = \max_j (t_j - t_{j-1}), where t_j corresponds to the leaching time of heap j. The objective function corresponds to the maximization of the following function, where I is the income from sales and C the design and operation costs; w_I and w_C represent the weights of the revenue and cost functions:

\max U = w_I I - w_C C        (5)
The model is completed with the mass balances in the mixers and splitters, upper and lower variable bounds, assignments of concentrations and flow rates at the outlet of the SPME units, cost and income functions, and McCormick relaxations for the bilinear expressions. The resulting model corresponds to an MINLP problem.
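For the bilinear terms, the McCormick relaxation mentioned above replaces each product w = x·y by four linear envelopes; a minimal sketch (our illustration) of those envelopes and a tightness check follows.

def mccormick_envelopes(xL, xU, yL, yU):
    """Return under/over-estimator coefficients (a, b, c), meaning
    w >= (resp. <=) a*x + b*y + c, for w = x*y on the box [xL,xU]x[yL,yU]."""
    under = [(yL, xL, -xL * yL),   # w >= yL*x + xL*y - xL*yL
             (yU, xU, -xU * yU)]   # w >= yU*x + xU*y - xU*yU
    over  = [(yU, xL, -xL * yU),   # w <= yU*x + xL*y - xL*yU
             (yL, xU, -xU * yL)]   # w <= yL*x + xU*y - xU*yL
    return under, over

under, over = mccormick_envelopes(0.0, 2.0, 1.0, 3.0)
x, y = 1.0, 2.0                       # check the bounds at an interior point
w_lo = max(a * x + b * y + c for a, b, c in under)
w_hi = min(a * x + b * y + c for a, b, c in over)
print(w_lo, x * y, w_hi)              # 1.0 <= 2.0 <= 3.0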
3. Case Study This section presents the application of the model to an example which corresponds to a heap leaching system with one heap and one SPME unit. The heap leaching unit corresponds to a copper ore heap of 456,000 m2 with a grade of 1.5% copper. The time horizon corresponds to 360 days, and the copper price was considered to vary from 2,000 to 12,000 $/ton. The recovery function is represented by the following disjunction:
\left[ \begin{array}{c} y_i \\ R_j = a_i + b_i\, t \\ t_i^{LO} \le t \le t_i^{UP} \\ Z_j = Z_i \end{array} \right]        (6)
A total of eight disjunctions were considered, including two levels of heap height (6 and 9 m) and four time ranges (from 6 to 180 days) (see Figure 2). The problem was solved using GAMS-BARON on an Intel Core i7 CPU at 2.67 GHz in 0.16 s. The results are shown in Figure 3. Although the profit increases as the selling price of copper increases, as expected, the leaching cycle times (and therefore the number of cycles in the time horizon) and the heights of the heaps vary in a non-regular way. A brute-force illustration of this search over the disjuncts is sketched below.
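Because eq. (6) enumerates a finite set of (height, time-window) disjuncts, the optimisation can be mimicked by the brute-force sketch below; the recovery slopes and the economic data are invented placeholders, not the case-study values.

# Each disjunct: (Z [m], t_lo [d], t_up [d], a_i, b_i) -- placeholder numbers.
disjuncts = [
    (6, 6, 45, 0.10, 0.015), (6, 45, 90, 0.55, 0.004),
    (6, 90, 135, 0.79, 0.001), (6, 135, 180, 0.83, 0.0002),
    (9, 6, 45, 0.08, 0.012), (9, 45, 90, 0.45, 0.005),
    (9, 90, 135, 0.70, 0.0015), (9, 135, 180, 0.77, 0.0004),
]

def profit(Z, t, R, price, H=360.0):
    cycles = H / t                                # eq. (4): N = H / t
    metal_per_cycle = R * Z * 1.0 * 2.0 * 0.015   # R * Z*A*rho*g (toy units)
    return cycles * (price * metal_per_cycle - 40.0 * Z)  # income - cost

def best_plan(price):
    best = None
    for Z, t_lo, t_up, a, b in disjuncts:
        for t in range(t_lo, t_up + 1):           # leaching time in the window
            p = profit(Z, t, min(a + b * t, 1.0), price)
            if best is None or p > best[0]:
                best = (p, Z, t)
    return best

for price in (2000, 6000, 12000):                 # $/ton, the case-study range
    print(price, best_plan(price))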
Moreover, copper recoveries were between 83% and 55%. If the copper grade in the ore is reduced to 1%, the heap height is 9 m for all copper prices, but the leaching times decrease (and also the recoveries) as the metal price increases. This simple example shows that there are clear ways to improve the design and planning in order to maximize profits in heap leaching systems.
Figure 2. Copper recovery disjunctions of equation 6: copper recovery (%) versus time (days) for Z = 6 m and Z = 9 m.
Figure 3. Results of the case study: a) heap height, b) cycle numbers, c) cycle time and d) profit.
4. Conclusions A MINLP model has been developed for the process design and planning of heap leaching systems. The results show that heap leaching design and planning are sensitive to the metal price. The results also show that the design (height of the heap) and the operation (leaching time) are coupled problems from the economic perspective, and thus the search for economically optimal conditions must include both, as well as other variables. This coupling is due to the fact that these variables affect both the recovery and the capacity of the heap leaching operation. It can be observed that the optimum, from the economic perspective, does not necessarily represent the maximum recovery. The application to more complex systems, such as systems with several heaps and SPME units and the production of various metals, relates to future work. Moreover, as used in this work, the disjunctions derived from a nonlinear recovery equation can introduce some error in the computation of the profits. This motivates studying estimates of the error involved in this approximation, mainly because there are other operational variables that can also introduce errors in the computations. Finally, as the price of copper is not completely predictable, and as large mining operations are constantly building new heap systems, this approach can be used online with the economic variables that affect the profits.
Acknowledgments The authors wish to thank CONICYT for its support through Fondecyt Project 1090406.
References
A. Hanson, B. Dwyer, Z. Samani, D. York, 1993, Remediation of chromium-containing soils by heap leaching: column study, Journal of Environmental Engineering, 119 (5), 825–841.
M. Mellado, E. Gálvez, L. Cisternas, 2010a, On the optimization of flow rates on copper heap leaching operations, International Journal of Mineral Processing, submitted.
M. Mellado, M. Casanova, L. Cisternas, E. Gálvez, 2010b, On scalable analytical models for heap leaching, Computers and Chemical Engineering, in press.
G. Padilla, L. Cisternas, J. Cueto, 2008, On the optimization of heap leaching, Minerals Engineering, 21, 673-678.
W. Pennstrom, J. Arnold, 1999, Optimizing heap leach solution balances for enhanced performance, Minerals and Metallurgical Processing, 16 (1), 12–17.
J. Valencia, D. Mendez, J. Cueto, L. Cisternas, 2008, Saltpeter extraction and modelling of caliche mineral heap leaching, Hydrometallurgy, 90, 103-114.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A data mining approach for efficient systems optimization under uncertainty using stochastic search methods
Garyfallos Giannakoudis,a Athanasios I. Papadopoulos,a Panos Seferlis,a,b Spyros Voutetakisa
a Chemical Process Engineering Research Institute, Centre for Research and Technology-Hellas, 6th km Harilaou Thermi Road, 57001, Thessaloniki, Greece
b Department of Mechanical Engineering, Aristotle University of Thessaloniki, 54124, Thessaloniki, Greece
Abstract This work presents a novel approach for efficient systems design under uncertainty that uses data mining and model fitting methods during optimization to significantly reduce the associated computational effort. The proposed approach is implemented as part of a modified stochastic annealing algorithm, but remains independent of the employed optimization method. A numerical example and a case study on a stand-alone system for power generation from renewable energy sources are used to illustrate the merits of the developments. The obtained results indicate robustness and efficiency in terms of solution quality and computational performance, respectively. Keywords: Systems optimization, uncertainty, data mining, Stochastic Annealing, renewable energy.
1. Introduction Stochastic search methods such as Stochastic Annealing (StA) and Stochastic Genetic Algorithms (SGA) [1-4] have been proposed in recent years to address the optimization under uncertainty of process systems. The underlying algorithmic philosophy employed to treat uncertainty involves the use of probability distributions to generate samples which are introduced individually into the simulation of system models. This enables the emulation of effects caused by the uncertain parameters in the addressed optimization problem. Apparently, the number of utilized samples is of crucial importance. Large numbers of samples are required to maintain a realistic representation of the uncertain parameter distribution but at the expense of reduced computational efficiency. This is due to the increased computational effort required to simulate the effects of each sample through the employed system model during optimization. This major issue has been previously addressed [2, 3] by employing efficient sampling techniques and strategies that allow a variable sampling schedule throughout the optimization procedure. Fewer samples are allowed at initial optimization iterations, which are then increased significantly as the algorithm gradually proceeds to
termination. However, their utilization often requires significant computational effort, as the random selection of a large number of samples is not prevented, even at initial optimization iterations. Furthermore, high numbers of samples towards termination still result in an increased computational burden for large-scale problems involving detailed system models and combinatorial complexities.
2. Proposed method This work proposes the combined use of data mining and model fitting in the course of optimization to enable efficient management of the sampling procedure, employed to treat the considered uncertain parameters. Figure 1 illustrates the proposed approach as an extensive modification to the StA algorithm [1-3]. The novel algorithmic sequence is highlighted within the dashed frame. The implemented modifications are independent of the employed optimization algorithm, as they do not intervene with decision making operations that are distinctive of particular algorithms. While Hammersley sampling is employed in this work, any other sampling technique can also be utilized.
Figure 1: Proposed data mining method as part of a modified Stochastic Annealing algorithm
2.1. Description Initially, a clustering method is used to generate k = 1, …, Nclust coherent groups (clusters) of similar points out of the entire set of the selected sampling points i = 1, …, Nsamp, used for the representation of the uncertain parameters vector (u). Statistical cluster centers (uk) are then calculated for each group, which lie in close proximity to the entire data contained in each cluster. As a result, each cluster center can be considered a valid representative for all the data (sampling points) contained in the cluster. Subsequently all cluster centers, instead of all available sampling points, are introduced to simulations using a system model to calculate the objective function value OF(xm,uk) that corresponds to
each center (uk) (where xm represents the vector of decision variables). In this respect, the available cluster center points (independent parameters) are then used in conjunction with their corresponding objective function values (dependent parameters) to calculate the regression coefficients of a continuous model. This model represents the employed OF(xm,uk) as a mathematical function of the cluster centers [OF(xm,uk) = f(uk)]. The objective function values OF(xm,ui) that correspond to the remaining sampling points (ui), contained in each cluster, can now be calculated using the developed predictive model, hence avoiding the time-consuming simulations based on the system model.
2.2. Implementation details The proposed approach enables the use of constantly large numbers of sampling points regardless of the size of the optimization problem addressed or the stage of the performed optimization search. The number of generated clusters is an important parameter that affects the performance of the method. A large number of clusters results in fewer points within each cluster. This enables an improved representation by the derived center of all the cluster points and results in accurate predictions from the regression model. However, increasing the number of clusters also results in further time-consuming simulations. An automated statistical method is used [5] to maintain the number of clusters considerably lower compared to the sampled set of uncertain parameter values, while facilitating accurate model predictions. The fitted model provides objective function value predictions that are either identical or lie within very close proximity to the values calculated through simulations. This is verified by use of the R2 coefficient of multiple determination, which is calculated in three steps. Firstly, predictions are obtained through the regression model for OF(xm,ui) values. Subsequently, the predictions are used to replace their corresponding sampling points (ui) that exist within each one of the original clusters. Finally, a new cluster center is derived for each cluster based on the objective function values (and not the sampled points as previously). This center represents the predicted objective function values that lie within each cluster. If it is similar to the objective function values obtained through model simulations for each corresponding cluster center, then the regression model provides accurate predictions. This similarity is measured through R2. The number of regression terms employed in the model is derived through statistical F-tests for model adequacy, also used to evaluate the correctness of R2.
2.3. Numerical example The proposed method is illustrated through a numerical example that employs the following cost model (details available in [1]):
OF_1(y_1, y_2, y_3, u_1, u_2) = \sum_{i=1}^{y_1} \left[ (y_1 - 3)^2 + (u_1 y_{2i} - 3)^2 + (u_2 y_{3i} - 3)^2 \right]        (1)
Terms y1, y2, y3 represent the decision variables. The uncertain parameters u1 and u2 follow the probability distributions shown in Table 1, which also shows the clustering ranges considered and the employed regression model. The regression coefficients a_i (i = 1, …, 6) are recalculated in each algorithmic iteration. In all cases the performance of the StACMF algorithm (StA with clustering and model fitting) is compared with an
adaptation of StA developed in this work. Their comparative performance is measured based on the ratio of the number of simulations performed by the two algorithms (NStACMF/NStA) to achieve optimality. The number of allowed samples is constantly 150 for StACMF, while sampling for StA is allowed to vary in the range [20, 150].
Table 1: Data and optimization-computational performance results for the numerical example
Case | u1 | u2 | Clustering range | Optimum values for (y1),(y2),(y3) | R2 | Performance ratio
1 | N(0,2) | N(0,2) | 25-35 | (3),(3,3,3),(3,3,3) | >0.999 | 0.39
2 | N(0,2) | N(0,2) | 15-25 | (3),(3,3,3),(3,3,3) | >0.999 | 0.26
3 | N(0,2) | U(1.5,3) | 20-30 | (3),(3,3,3),(1,1,1) | >0.996 | 0.28
Regression model: OF(u1,u2) = a1 + a2 u1 + a3 u2 + a4 u1 u2 + a5 u2^2 + a6 u1 u2^2
In all three cases the two algorithms found the same optimum solution. The obtained results indicate that StACMF is significantly faster, as the number of required simulations is only a small fraction of those required by StA. The value of R2 is very high in all cases, indicating that the employed model provides accurate predictions. The minor inaccuracies in the predictions (R2 < 1) do not prevent the identification of the optimum solution by StACMF. The use of a lower clustering range (fewer clusters) in case 2 results in improved performance compared to case 1, while the optimum solution is still obtained. The simultaneous use of different distributions in case 3 does not affect the optimization and computational performance of StACMF compared to StA.
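A compact sketch of the StACMF kernel on a cost model of this type is given below (our illustration with a from-scratch k-means and a least-squares quadratic fit; `simulate` stands in for the full system model, and the sample sizes and seeds are assumptions):

import numpy as np

def kmeans(points, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = ((points[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([points[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return centers

def simulate(u):                    # stand-in for the system model simulation
    return (u[0] - 3.0) ** 2 + 2.0 * u[1]

def basis(U):                       # quadratic regression terms in u1, u2
    return np.column_stack([np.ones(len(U)), U, U ** 2, U[:, :1] * U[:, 1:]])

samples = np.random.default_rng(1).normal(0.0, 2.0, size=(150, 2))  # Nsamp = 150
centers = kmeans(samples, k=30)                                      # Nclust = 30

of_centers = np.array([simulate(u) for u in centers])   # only 30 simulations
coef, *_ = np.linalg.lstsq(basis(centers), of_centers, rcond=None)
of_all = basis(samples) @ coef      # predicted OF for all 150 sampling points
print(of_all.mean())                # aggregate objective, 120 simulations avoided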
3. Case study 3.1. Background The proposed approach is applied to the design optimization of a hybrid system for power generation from renewable energy sources, with medium- to long-term energy storage capabilities in the form of hydrogen. It consists of photovoltaic panels, wind generators, chemical accumulators, an electrolyser, a fuel cell, a compressor, hydrogen storage tanks and a diesel generator. Details can be found in [6]. The design of such a system involves increased uncertainty due to unpredictable weather variability and equipment efficiency changes. The optimization aims to minimize the net present value (NPV) of investment for a ten-year operating period. The considered decision variables are 8, namely the number of PV panels (npv), the number of wind generators (nwg), the nominal capacity of the accumulators (nacc), the maximum operating power of the electrolyzer (Pmax,e), the capacity of the intermediate hydrogen storage tanks (Vb), the nominal power of the fuel cell (Pop,fc) and the upper (SOCmax) and lower (SOCmin) limits of the state of charge of the accumulators. The considered uncertain parameters are 4 and involve the solar radiation (u1) and wind speed (u2) as well as the efficiencies of the electrolyzer (u3) and of the fuel cell (u4). The number of allowed samples for the two algorithms is similar to that used in the numerical example, whereas the range of allowed clusters is [25, 35].
3.2. Results and discussion The optimum OF value identified by StA is slightly better than StACMF (Table 2). This is a reasonably small deviation considering the high combinatorial complexity of the
problem. It is also characteristic of stochastic search methods that often converge to a narrow distribution of similar solutions. The small difference corresponds to slightly fewer PV panels identified in StA and slightly different SOC limits (up to 4%), while the optimum values for the remaining decision parameters are identical. Even though the design problem involves multiple decision variables and uncertain parameters, the regression model consists of only eight terms. Only uncertain parameters u1 and u2 are necessary in the model, based on the statistical tests performed for the determination of the required terms, which implies that only u1 and u2 have a significant effect on the objective function value under the operating conditions assumed in the case. Design of the system under different weather data (e.g., another location) would have resulted in fitting models with significant contributions from the equipment efficiencies u3 and u4. In the vast majority of the optimization iterations R2 is maintained over 0.98. This indicates reasonably good fitting using a simple model, compared to the much more complex system models that are avoided. The time performance per iteration is almost three times better with StACMF, indicating significant gains for time-consuming design problems. Such benefits are combined with the constantly used 150 samples throughout the optimization in StACMF.
Table 2: Data and performance results of StA and StACMF algorithms in the case study
Method | Average total number of simulations per iteration | Average CPU time (sec) per simulation | Average CPU time (sec) for clustering + model fitting calculations per iteration | Average CPU time (sec) per iteration | OF [k€]
StA | 79 | 0.0642 | – | 5.0726 | -46.205
StACMF | 28 | 0.0642 | 0.0143 | 1.8122 | -46.268
Regression model: OF(u1,u2) = b0 + b1 u1 + b2 u2 + b3 u1^2 + b4 u2^2 + b5 u1 u2 + b6 u2 u1^2 + b7 u1 u2^2
4. Concluding remarks The proposed approach increases the computational efficiency in systems optimization problems under uncertainty, while constantly maintaining an inclusive representation of the uncertain parameters throughout the optimization search. The implementation of clustering and model fitting are fast and computationally insignificant compared to numerous system model simulations required in existing StA implementations. The approach can handle increased numbers of decision variables and uncertain parameters without making the optimization computational effort prohibitive.
References
[1] Painton L., Diwekar U., (1995), Europ. J. Oper. Res., 83, 489-502.
[2] Chaudhuri P., Diwekar U., (1996), AIChE J., 42, 3, 742-752.
[3] Chaudhuri P., Diwekar U., (1999), AIChE J., 45, 8, 1671-1687.
[4] Xu W., Diwekar U., (2005), Ind. Eng. Chem. Res., 44, 7132-7137.
[5] Papadopoulos A.I., Linke P., (2006), Chem. Eng. Sci., 61(19), 6316-6336.
[6] Giannakoudis G., Papadopoulos A.I., Seferlis P., Voutetakis S., (2010), Int. J. Hyd. Ener., 35, 872-891.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Integrated Design of a Reactor and a Gas-Expanded Solvent
Eirini Siougkrou, Amparo Galindo and Claire S. Adjiman
Department of Chemical Engineering, Centre for Process Systems Engineering, Imperial College London, South Kensington Campus, London SW7 2AZ, UK
Abstract Gas-expanded liquids have recently generated great interest because of their distinct behaviour, which combines gas and liquid characteristics, and because they are more environmentally benign than traditional organic solvents. In this work, a methodology for the integrated design of a CO2-expanded solvent and a reactor for a Diels-Alder reaction is presented. The co-solvents studied are acetonitrile, methanol and acetone. The solvatochromic equation, combined with a preferential solvation model, is used for the prediction of the rate constant, while the group-contribution volume-translated Peng-Robinson equation of state is used for the phase equilibria. Finally, the most appropriate co-solvent and the optimum operating conditions are proposed for the particular process. Keywords: integrated and process design; gas-expanded solvent; solvatochromic equation; mixture design; reactor design
1. Introduction The role of solvents in chemical reactions is vital, especially in the control of reaction rates and temperature. Liquid-phase chemical reactions take place in many industrial processes, so it is important to be able to choose the best solvent for each reaction, depending on the process context. There have been a few recent attempts to tackle this problem in the context of pure organic solvents [1-4]. Mixed solvents, however, are often beneficial, as better performance than any of the individual solvents in the mixture can sometimes be observed, but they have rarely been studied in the context of design. Gas-expanded liquids (GXLs) [5], which are mixed solvents composed of an organic solvent and a compressible gas (usually carbon dioxide (CO2) due to its safety and economic advantages) have generated great interest. Their advantages include recovery and recycle of the organic solvent and CO2 through simple depressurisation, moderate operating pressures, enhanced transport rates and reaction rates and reduced environmental impact. In choosing a GXL, two key questions arise: what is the best organic solvent to be combined with CO2, and what should the composition of the GXL be? The objective of this paper is to present a computational methodology for making these choices on the basis of process performance, by building upon recent work on the computer-aided design of solvents for reactive systems [3-4].
2. General Problem Formulation The aim of this work is the integrated design of a CO2-expanded solvent for a given reaction and the design of a reactor for that particular reaction. This problem can be expressed as an optimization problem:
min_x Cost(x)
subject to property/process model constraints: g(x) = 0, h(x) ≤ 0
where x is a vector of the degrees of freedom of the system. In our case, the objective function, Cost(x), is the total cost of the process, which needs to be minimized for a specific amount of product. The property constraints include the reaction rate constant and the phase equilibria relationships, while the mass balances of the process constitute the process model constraints.
A specific case study is considered throughout this paper: the design of a continuous stirred tank reactor (CSTR) and of a CO2-expanded solvent for the Diels-Alder reaction of anthracene with 4-phenyl-1,2,4-triazoline-3,5-dione (PTAD) to form the adduct (8,9,10,11-dibenzo-4-phenyl-2,4,6-triaza[5,2,2,0]tricycloundeca-8,10-diene-3,5-dione). This reaction has been studied in acetonitrile + CO2 in [6]. The organic co-solvents considered are acetonitrile, methanol and acetone.
3. The Model The process consists of a CSTR, a separator (e.g. crystalliser) and a pump (Figure 1). The separator is not included in the calculation of the cost, as its cost is much lower compared to the reactor's. In the cost of the process, the capital cost is taken into account, which includes the cost of the reactor and the cost of the organic co-solvent. The model can be divided into five submodels: one that relates solvent properties to the reaction rate, the preferential solvation model that relates solvent composition to solvent properties, the phase equilibrium model that relates reactor volume and solvent composition to pressure, the CSTR mass balances and a cost model. The first three are described in more detail in the following sections. A temperature T = 40 °C is assumed throughout. The model is implemented in gPROMS [7].
Figure 1. The flow sheet of the process
3.1. The Reaction Rate Constant The reaction is considered to be first order with respect to anthracene (concentration [A] in mol m-3) as the reactor is run with an excess of PTAD. The reaction rate constant k (in s-1) is given by a solvatochromic equation [8], a linear free-energy relationship. The equation proposed by Ford et al. [6] for the particular reaction of interest is used:

ln k = 1.9 - 2.2 π* - 4.8 α + 1.58 β        (1)
where π* is the polarity/polarizability, α corresponds to the acidity and β to the basicity of the solvent. The solvent solvatochromic parameters π*, α and β can be measured via UV/vis absorption or chromatography. In the case of mixed solvents, there have been some attempts to predict these parameters using preferential solvation models [9,10].
3.2. The Preferential Solvation Model In this work, the preferential solvation model of Ràfols et al. [10] has been applied. Investigating different preferential solvation models, they proposed a general model that
is based on two-solvent exchange processes. A solvatochromic property, Y (π*, α, or β), can then be obtained as:

Y = \frac{Y_1 (1 - x_2^0)^2 + Y_2 f_{2/1} (x_2^0)^2 + Y_{12} f_{12/1} (1 - x_2^0) x_2^0}{(1 - x_2^0)^2 + f_{2/1} (x_2^0)^2 + f_{12/1} (1 - x_2^0) x_2^0}        (2)
where Y1 and Y2 are the solvatochromic properties in the pure solvents S1 and S2, and Y12 is that in a solvent S12, introduced by the model, that is formed by the interaction of the two solvents S1 and S2 in the microsphere of solvation; f2/1 and f12/1 are preferential solvation parameters that measure the tendency of the indicator (solute) to be solvated by solvents 2 and 12, respectively, rather than solvent 1; and, finally, x_2^0 is the mole fraction of solvent S2 in the mixed solvent. Here, the parameters f2/1, f12/1 and Y12 have been fitted to experimental data [6, 11] for the solvatochromic parameters of the mixed solvents.
3.3. Phase Equilibrium The reactor is assumed to operate in the liquid phase, at conditions of solid-liquid-vapour equilibrium. Phase equilibrium calculations thus yield the solubility of anthracene for the chosen solvent/operating pressure. The solubility of PTAD in the organic solvents and CO2 is 10 orders of magnitude higher than that of anthracene. Thus, the solid phase is assumed to consist of anthracene, the vapour phase of the organic co-solvent and carbon dioxide, and the liquid phase of the organic co-solvent, carbon dioxide and anthracene. The vapour-liquid and solid-liquid equilibrium calculations have been performed based on the isofugacity condition for the relevant components:

y_i φ_i^v P = x_i φ_i^l P    and
φ^s P^{sub} E(P) = x_i φ_i^l P        (3)
respectively, where superscripts v, l and s correspond to the vapour, liquid and solid phases, respectively, φ_i is the fugacity coefficient of component i, and y_i, x_i are the mole fractions of component i in the vapour and liquid phase, respectively. P^sub is the sublimation pressure of the solute (anthracene), which is calculated from the Antoine equation [2], and E(P) is the Poynting factor. The fugacity coefficient of the pure solute, φ^s, is considered to be unity at the very low sublimation pressure. The fugacity coefficients of both the vapour and the liquid phases are calculated with the group contribution volume-translated Peng-Robinson (GC-VTPR) equation of state [12].
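Chaining Eqs. (2) and (1) gives the rate constant for a given solvent composition. A minimal sketch follows; the preferential solvation parameters passed in are placeholders, not the values fitted in the paper.

```python
# Illustrative chain: Eq. (2) (Rafols et al. preferential solvation model)
# followed by Eq. (1) (Ford et al. solvatochromic correlation). Any f2/1,
# f12/1 or Y12 values supplied here are placeholders for the fitted
# parameters of Section 3.2.
import math

def mixed_solvent_property(Y1, Y2, Y12, f2_1, f12_1, x2):
    """Eq. (2): solvatochromic property Y of the binary mixed solvent."""
    w1, w2, w12 = (1 - x2), f2_1 * x2**2, f12_1 * (1 - x2) * x2
    return (Y1 * w1 + Y2 * w2 + Y12 * w12) / (w1 + w2 + w12)

def rate_constant(pi_star, alpha, beta):
    """Eq. (1): ln k = 1.9 - 2.2 pi* - 4.8 alpha + 1.58 beta (k in 1/s)."""
    return math.exp(1.9 - 2.2 * pi_star - 4.8 * alpha + 1.58 * beta)
```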
4. Results
In figure 2, the reaction rate constant is given as a function of the mole fraction of CO2 in the mixed solvent. An increase in the amount of CO2 causes an increase in the reaction rate constant, reaching a maximum for a CO2 mole fraction close to 0.95, except for the case of methanol, where, although the rate is increasing, the reaction is so slow that the maximum rate constant is at pure supercritical CO2. In figure 3, vapour-liquid equilibrium (VLE) data and predictions using GC-VTPR for the binary solvent systems are shown. It is evident that the higher the CO2 content, the higher the pressure of the system. The predictions of the GC-VTPR EoS are satisfactorily accurate. Another issue that needs to be taken into consideration is the solubility of anthracene in the mixed solvent and how this is influenced by the change in pressure.
Figure 2. Calculated reaction rate constants k (s-1) vs. x_CO2 at T = 40 °C. Continuous line: acetonitrile; dashed line: acetone; dash-dot line: methanol. Symbols: experimental data for acetonitrile + CO2 [6].
Figure 3. VLE (P in MPa vs. x_CO2) at T = 40 °C with the GC-VTPR EoS. Symbols are experimental data; diamonds: acetonitrile [14]; circles: acetone [15]; triangles: methanol [16]. Lines as in figure 2.
Our phase equilibrium calculations, shown in figure 4, indicate that the solubility decreases when the mole fraction of CO2 (or, equivalently, the pressure) increases. GC-VTPR successfully captures the solubility of anthracene in the pure organic solvents.
Figure 4. Solubility of anthracene (x_anthr) at T = 40 °C as a function of pressure in the mixed solvents, with GC-VTPR. Symbols indicate experimental data [17,18]; lines as in figure 2.
In order to design a continuous stirred-tank reactor for the Diels-Alder reaction, one must find a trade-off between increasing the amount of CO2, which increases the reaction rate constant but decreases the solubility of anthracene and results in higher pressures, which increase the cost considerably. For a fixed production rate (42 kg/min), the dependence of the reactor volume on the mole fraction of CO2 is shown in figure 5. The volume of the reactor when methanol is the co-solvent is not given, as it turns out to be very large (around 40 m3), since the reaction in methanol + CO2 is very slow compared to the other solvents (figure 2). The total cost of the reactor [13], including the cost of the organic solvents, follows the same trend as the volume of the reactor and is given in figure 6. When the co-solvent is acetonitrile a minimum cost occurs for a mole fraction of CO2 (x_CO2) equal to 0.47, while, when the co-solvent is acetone, the cost always increases with increasing x_CO2.
Figure 5. Reactor volume V_reactor (m3) as a function of the mole fraction of CO2. Continuous curve: acetonitrile; dashed curve: acetone.
Figure 6. Reactor cost C_reactor ($) as a function of the mole fraction of CO2. Continuous curve: acetonitrile; dashed curve: acetone.
5. Conclusions
An integrated approach to the design of a reactor and a CO2-expanded solvent has been proposed and applied to a Diels-Alder reaction. Taking into consideration the influence of CO2 on the rate constant and the phase equilibrium, the
optimal operating conditions, including concentration, pressure and volume, that minimise the cost have been found. The solvatochromic equation and a preferential solvation model have been used for the calculation of the reaction rate constant, and GC-VTPR has been used for the phase equilibrium calculations. These calculations compare well with available data. Both acetonitrile + CO2 and acetone + CO2 seem to be appropriate for the reaction, giving reasonable values for the volume of the reactor. On the basis of initial cost estimates, acetone seems to be the optimal co-solvent among those studied for the particular process, as the resulting cost is low for a wide range of mole fractions of CO2. Nevertheless, if environmental criteria are to be taken into consideration, the optimal operation would be for mole fractions of CO2 close to 0.8, where the least possible amount of organic co-solvent is used for a reasonable cost. Under these conditions, both acetonitrile and acetone appear to be suitable. Methanol appears not to be a good co-solvent as the reaction rate is predicted to be very low.
References
1. R. Gani, C. Jiménez-González, D.J.C. Constable, Comp. Chem. Eng., 2005, 1661
2. S.I. Stanescu, L.E.K. Achenie, Chem. Eng. Science, 2006, 6199
3. M. Folić, C.S. Adjiman, E.N. Pistikopoulos, AIChE J., 2007, 53, 1240
4. M. Folić, C.S. Adjiman, E.N. Pistikopoulos, Ind. Eng. Chem. Res., 2008, 47, 5190
5. P.G. Jessop, B. Subramanian, Chem. Rev., 2007, 107, 2666
6. J.W. Ford, J. Lu, C.L. Liotta, C.A. Eckert, Ind. Eng. Chem. Res., 2008, 47, 632
7. Process Systems Enterprise, gPROMS, www.psenterprise.com/gproms, 1997-2009
8. M.J. Kamlet, J.M. Abboud, M.H. Abraham, R.W. Taft, J. Org. Chem., 1983, 48, 2877
9. A.R. Harifi-Mood, A. Habibi-Yangjeh, M.R. Gholami, J. Phys. Chem. B, 2006, 110, 7073
10. C. Ràfols, M. Rosés, E. Bosch, J. Chem. Soc., Perkin Trans. 2, 1997, 243
11. V.T. Wyatt, D. Bush, J. Lu, J.P. Hallett, C.L. Liotta, C.A. Eckert, J. of Sup. Fluids, 2005, 36, 16
12. J. Ahlers, T. Yamaguchi, J. Gmehling, Ind. Eng. Chem. Res., 2004, 43, 6569
13. J.M. Douglas, Conceptual Design of Chemical Processes, McGraw-Hill, 1988, p. 574
14. A. Kordikowski, A.P. Schenk, R.M. Van Nielen, C.J. Peters, J. of Sup. Fluids, 1995, 8, 205
15. T. Adrian, G. Maurer, J. Chem. Eng. Data, 1997, 42, 668
16. D. Kodama, N. Kubota, Y. Yamaki, H. Tamaka, M. Kato, Netsu Bussei, 1996, 10, 16
17. E.A. Cepeda, M. Diaz, Fluid Phase Equilibria, 1996, 121, 267
18. L.N. Petrova, Vestn. Khark. Politekh. Inst., 1974, 92, 9
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Computer Aided Flowsheet Design using Group Contribution Methods
Susilpa Bommareddy(a), Mario R. Eden(a), Rafiqul Gani(b)
(a) Department of Chemical Engineering, Auburn University, AL 36849, USA
(b) CAPEC, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark
Abstract In this paper, a systematic group contribution based framework is presented for synthesis of process flowsheets from a given set of input and output specifications. Analogous to the group contribution methods developed for molecular design, the framework employs process groups to represent different unit operations in the system. Feasible flowsheet configurations are generated using efficient combinatorial algorithms and the performance of each candidate flowsheet is evaluated using a set of flowsheet properties. A systematic notation system called SFILES is used to store the structural information of each flowsheet to minimize the computational load and information storage. The design variables for the selected flowsheet(s) are identified through a reverse simulation approach and are used as initial estimates for rigorous simulation to verify the feasibility and performance of the design. Keywords: Flowsheet synthesis/design, process group contribution, reverse simulation.
1. Introduction
There are many different approaches to process synthesis, including expert systems, optimization or algorithmic methods, and conceptual methods based on physical insights. This paper highlights a novel hybrid method for Computer Aided Flowsheet Design (CAFD) that combines physical insights with algorithmic reverse design approaches to enable systematic identification of feasible flowsheets at significantly reduced computational expense. The framework presented in this paper is based on the process group (PG) contribution approach developed by d'Anterroches and Gani (2005) [1]. The CAFD approach was inspired by the group contribution based methods for Computer Aided Molecular Design (CAMD), which include building blocks (atoms and functional groups) to generate and represent molecules; group contribution (GC) based property models to predict target properties; a standard molecular structure notation system (such as SMILES) to store and visualize the molecular structure information; and a synthesis method to generate and screen molecules that match the target (design) properties. Analogous to CAMD, in the CAFD approach, flowsheets are generated and represented by functional process groups; process group contribution based property models are employed to predict flowsheet properties; a notation system (called SFILES) is used for storing the flowsheet structural information; and a synthesis method is used to generate and identify the feasible flowsheets. Like functional groups in molecules, which are characterized by atoms and their molecular weight, each process group is characterized by the type of unit operation/process and its corresponding driving force. Each PG contributes to the flowsheet (performance) properties, which can be calculated once a feasible configuration has been identified. The candidate flowsheets are ranked based on performance criteria like energy consumption, amount (mass) of external agents used and/or cost/profit. Once a set of near optimal flowsheet
alternatives has been identified, rigorous simulation is used to verify the predicted performance and select the best option.
2. The CAFD Framework
2.1. Problem Definition
Given the raw materials and the desired product specifications, the problem is to identify the optimal process flowsheet that transforms the raw materials into the desired products in the most efficient manner. This means that a flowsheet needs to be synthesized and the design of each equipment/operation needs to be determined, matching a set of performance criteria. The CAFD framework, based on the GC concept, consists of the following steps: a) Problem Definition and Analysis (analyze the synthesis/design problem to define the performance criteria in terms of a set of target properties and to establish the desired target property values; define the initial search space in terms of a superstructure of alternatives); b) Process Synthesis (generate and screen process flowsheet alternatives); c) Process Design (determine the equipment/operation designs for the selected feasible process flowsheets); and d) Process Verification (verify the design through rigorous simulation and/or experiments).
2.2. Problem Analysis
The objective here is to define the type of process that needs to be synthesized, e.g. processes with or without reaction; to identify the phases (vapor, liquid, and/or solids) that may be involved; to determine the number of operations/tasks that need to be performed; to select the types of process groups needed to generate the flowsheet alternatives; to select the performance criteria by which to evaluate the process flowsheet alternatives; and to select the properties (together with their target values) that affect the selected performance criteria. To achieve the above objectives, the raw materials and product specifications are analyzed to establish whether or not reactions are involved. This information, together with the chemical system involved, is analyzed to determine the minimum number of processing steps and to select the process groups. Having the set of process groups and the minimum number of processing steps (operations/tasks), a superstructure of flowsheet alternatives is determined. Finally, the performance criteria (such as lowest cost, efficient use of energy, minimal waste and environmental impact) are selected together with the properties that affect them.
2.3. Flowsheet Synthesis
The objective here is to generate and evaluate feasible flowsheet alternatives based on the selected process groups, the minimum number of processing steps and the product specifications. A number of methods and tools are needed here. For example, the selected process groups need to be initialized with respect to the chemical systems handled; combination rules that use the process groups as building blocks to generate process flowsheets; screening methods to verify that the desired processing steps and the product specifications are met; a method to determine the SFILES notation for each feasible flowsheet; and, finally, flowsheet property models to calculate the target properties of the flowsheets and rank them according to a desired property function.
2.3.1. Initialization of Process Groups
The PGs are initialized with respect to the number of chemicals and their identities. For example, in the case of separation PGs, among the groups selected to perform a desired separation task, only the PGs with the highest property ratios (corresponding to the separation tasks – see Jaksland et al. (1995) [2]) are selected. This constraint reduces the number of initialized PGs. Based on the remaining larger size (in terms of number of
chemicals handled) PGs, redundant smaller size PGs are eliminated to prevent the formation of structurally infeasible flowsheets. Performing the above for all sets of groups with different sizes gives a final set of PGs that could form structurally and practically feasible flowsheets.
2.3.2. Generation of Flowsheet Alternatives
The objective of this step is to generate the feasible flowsheet alternatives by combining the initialized PGs. The following procedure is employed to generate a list of feasible flowsheet alternatives (a code sketch of the cardinality rules in step 2 is given at the end of Section 2):
1. For each reactor-PG and Np sized-PG (Np is the number of components in the desired product), the matching smaller size groups are identified.
2. For each list of PGs, combinations of PGs are generated subject to:
2.1. Number of PGs in the combination = Np – 1 + (reaction PG)
2.2. If s indicates the separation PG size, and ns indicates the number of separation PGs of size s, the following constraints must be satisfied: If s = Np, then ns = 1; if s = 3, 4, ..., Np – 1, then 0 ≤ ns ≤ int(Np/s); if s = 2, then 1 ≤ ns ≤ int(Np/2).
3. Within the reduced solution space identified by the steps above, combinations not leading to targeted product components are eliminated.
2.3.3. SFILES Representation of Flowsheets
All feasible flowsheets generated from the previous step are given their corresponding SFILES notation according to the method developed by d'Anterroches et al. (2005) [1] and stored in terms of this notation.
2.3.4. Ranking of Flowsheet Alternatives
The target properties for each generated feasible flowsheet alternative are calculated using the flowsheet property models combined with tables of the property contributions of each PG. As in CAMD, the target property models may be only structure dependent (primary properties), or they may be structure as well as other property dependent (secondary properties), or they may be dependent on multiple phenomena (effect of energy and mass separating agents). Based on these target properties, the flowsheet alternatives that are structurally feasible and satisfy the property targets are identified.
2.4. Process Design by Reverse Simulation
The objective here is to determine the optimal values of the minimum number of design variables corresponding to each process equipment and/or operation in the selected feasible flowsheet. For counter-current staged separation processes, this includes variables such as number of stages, feed location, product specifications and reflux ratio. For reactors, examples of the design variables are reactor volume, residence time, reactor effluent composition, and temperature. With the reverse approach, separation related PGs such as distillation, extractive distillation, and flash drums are characterized in terms of their driving force [3], while reaction related PGs are characterized in terms of their highest attainable reaction point [4,5]. The design method back-calculates the design variables from the highest driving force or highest attainable reaction point.
2.5. Final Verification
The objective here is to verify the synthesis/design results from the previous steps as well as to determine the remaining unknown variables of the process. This is done through rigorous simulation, provided the necessary models are available. To perform these simulations, the design variables fixed in the previous steps are used. In this way, the matching of the calculated variables with the process specifications confirms the design.
Since the design corresponds to the maximum driving force (summing all the individual driving forces gives the total driving force) and/or the attainable region, it is
expected that it will correspond to the minimum energy consumption design. Alternatively, the design could also be verified by carefully designed experiments.
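As referenced in Section 2.3.2, the cardinality rules of step 2 can be encoded directly. The sketch below is only a literal illustration of those rules; the function name and data layout are ours, not the paper's.

```python
# Illustrative check of the separation-PG cardinality rules (step 2.2).
# sep_pg_sizes holds the size s (number of chemicals handled) of every
# separation PG in one candidate combination; Np is the number of
# components in the desired product.
from collections import Counter

def combination_is_feasible(sep_pg_sizes, Np):
    # Rule 2.1: Np - 1 separation PGs (a reaction PG, if present, adds one
    # more unit to the combination but is counted separately here).
    if len(sep_pg_sizes) != Np - 1:
        return False
    ns = Counter(sep_pg_sizes)
    if ns[Np] != 1:                       # s = Np: exactly one full-size PG
        return False
    for s in range(3, Np):                # s = 3, 4, ..., Np - 1
        if not 0 <= ns[s] <= Np // s:
            return False
    return 1 <= ns[2] <= Np // 2          # s = 2: at least one binary split
```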
3. Case Study – Production of Diethyl Succinate
Succinic acid is a potential co-product from bioethanol manufacture, which can be further reacted with ethanol to produce diethyl succinate, a useful solvent for cleaning metal surfaces and paint stripping. In this case study, the production of diethyl succinate from ethanol and succinic acid is investigated. The objective is to identify the optimal process configuration by minimizing the energy requirements. The potential reaction pathways to form diethyl succinate are identified as shown in Eqs. (1) and (2):

Succinic Acid (E) + Ethanol (A) → Monoethyl Succinate (D) + Water (B)   (1)
Monoethyl Succinate (D) + Ethanol (A) → Diethyl Succinate (C) + Water (B)   (2)
Since the conversion of succinic acid and ethanol is incomplete, the product (diethyl succinate) needs to be recovered and then purified from the reactor effluent while the reactants are recycled to the reactor. The minimum number of processing steps is 4 (mixing, reaction, product recovery and product purification). The initial mixture analysis identifies the feasible separation techniques shown in Table 1.

Table 1. Separation tasks and potential techniques.
Separation Technique | Property Ratio Threshold Values | Separation Tasks
Crystallization | Melting Point > 1.27 | A/C, C/E, D/E
Liquid Membrane | Radius of Gyration > 1.03; Molar Volume > 1.08; Solubility Parameter > 1.28 | A/B, A/D, A/E, B/C, B/D, C/D, C/E, D/E
Pervaporation | Molar Volume > 1.08 | A/B, A/D, A/E, B/C, B/D, C/D, C/E, D/E
Distillation | Vapor Pressure > 15 | A/C, B/C
Flash | Vapor Pressure > 15 | A/C, B/C

Table 2. Initialized process groups.
Unit Operation | Process Group
Kinetic Model Based Reactor | rAE/pABCDE, rpervAE/pB/ACDE
Crystallization | crsE/DBCA, crsE/DCA, crsDBC/A, crsE/DC, crsDC/A
Liquid Membrane | lmemCDEA/B, lmemCDE/A, lmemCDA/B, lmemCD/E, lmemCD/A, lmemCD/B, lmemC/D, lmemA/B
Pervaporation | pervCDEA/B, pervCDE/A, pervCDA/B, pervCD/E, pervCD/A, pervCD/B, pervC/D, pervA/B
Distillation | AB/CDE, AB/CD, A/CDE, A/CD, B/CD
Flash | fAB/CDE, fAB/CD, fA/CDE, fA/CD, fB/CD
The mixture analysis also reveals the existence of two binary azeotropes (water/diethyl succinate and water/ethanol). Therefore, azeotropic distillation, extractive distillation, and liquid-liquid extraction might also be potential separation techniques to be considered in the synthesis problem. However, pervaporation and the liquid membrane selectively remove water from the mixture, thus alleviating the need for azeotropic separation. A pervaporation assisted reactor is found to be an efficient configuration for esterification reactions. After the initial analysis, 103 process groups were initialized and, by applying the rules to remove structurally and practically infeasible flowsheets, the number of PGs was reduced to the 33 shown in Table 2. A total of 176 feasible flowsheets were identified from the candidate process groups and represented by the corresponding SFILES notation. The energy index flowsheet property was calculated for all candidate configurations and the two SFILES strings with the lowest value of the energy index (0.051) are shown below. The first configuration consists of a reactor and four separation units: pervaporation, distillation, crystallization and liquid membrane. The second configuration involves a pervaporation reactor and three separation tasks: distillation, crystallization and liquid membrane.
1. (iAE)(rAE/ABCDE)(pervCDEA/B)[(A/CDE)[(crsE/DC)[(oE)](lmemC/D)[(oC)] (oD)](oA)](oB)
2. (iAE)(rAE/pB/ACDE)[(A/CDE)[(crsE/DC)[(oE)](lmemC/D)[(oC)] (oD)](oA)](oB)
It is assumed that the membranes exhibit very high selectivity, thus leading to a near perfect separation and recovery. The reverse simulation of the distillation column using the driving force approach yielded a design operating at a maximum driving force of 0.85, corresponding to a column with 15 stages (feed location 13.5) and a reflux ratio of 0.552 (minimum 0.368). Of the two feasible flowsheets selected for final verification, one has already been verified by Alvarado-Morales et al. (2010) [4] while the other, with the pervaporation assisted reactor, is currently being verified.
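For orientation only: for a binary separation with constant relative volatility α, the driving force of [3] takes the textbook form F(x) = y − x = αx/(1 + (α − 1)x) − x, whose maximum lies at x = 1/(1 + √α). A minimal numerical sketch follows; α = 150 is an arbitrary placeholder, chosen only because it happens to give a maximum near the 0.85 quoted above, and it does not represent the actual multicomponent system of the case study.

```python
# Locating the maximum binary driving force F(x) = y - x for constant
# relative volatility a; a = 150 is an arbitrary placeholder value.
import numpy as np

a = 150.0
x = np.linspace(0.0, 1.0, 1001)          # liquid mole fraction
F = a * x / (1.0 + (a - 1.0) * x) - x    # driving force y - x
i = int(F.argmax())
print(f"max F = {F[i]:.3f} at x = {x[i]:.3f}")  # analytic: x = 1/(1+sqrt(a))
```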
4. Conclusions In this paper a novel systematic framework for synthesis of flowsheets based on a process group contribution method has been presented. Representing each process configuration using these unique process groups significantly reduces the computational load as no detailed calculations are required during the synthesis step. Once identified, the candidate flowsheets are ranked using overall performance indicators (flowsheet properties) like energy consumption, cost/profit etc. Reverse simulation techniques are then employed to identify the design parameters of the optimal flowsheets, which are used as initial estimates for rigorous simulation of the design.
References
[1] L. d'Anterroches, R. Gani (2005), Fluid Phase Equilibria, 228-229, 141-148.
[2] C.A. Jaksland, R. Gani, K.M. Lien (1995), Chemical Engineering Science, 50(3), 511-530.
[3] E. Bek-Pedersen, R. Gani (2004), Chemical Engineering and Processing, 43, 251-262.
[4] M. Alvarado-Morales, M.K.A. Hamid, G. Sin, K.V. Gernaey, J.M. Woodley, R. Gani (2010), Computers and Chemical Engineering, 34, 2043-2061.
[5] D. Glasser, C.M. Crowe, D. Hildebrandt (1987), Ind. Eng. Chem. Res., 26, 1803-1810.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A Business Process Model for Process Design that Incorporates Independent Protection Layer Considerations
Tetsuo Fuchino(a), Yukiyasu Shimada(b), Teiji Kitajima(c), Kazuhiro Takeda(d), Rafael Batres(e), Yuji Naka(f)
(a) Chemical Engineering Department, Tokyo Institute of Technology, 2-12-1, O-okayama, Meguro-ku, Tokyo, 152-8552, Japan
(b) Chemical Safety Research Group, National Institute of Occupational Safety and Health, 1-4-6, Umezono, Kiyose, Tokyo, 204-0024, Japan
(c) Institute of Technology, Tokyo University of Agriculture and Technology, 2-24-16, Naka-cho, Koganei, Tokyo, 184-8588, Japan
(d) Department of Materials Science and Chemical Engineering, Shizuoka University, 3-5-1, Johoku, Naka-ku, Hamamatsu, 432-8011, Japan
(e) Industrial Systems Engineering Group, Toyohashi University of Technology, Hibarigaoka 1-1, Tempaku-cho, Toyohashi, 441-8580, Japan
(f) Chemical Resources Laboratory, Tokyo Institute of Technology, 4259, Nagatsuda, Midori-ku, Yokohama, 226-8503, Japan
Abstract
The purpose of independent protection layers (IPLs) is to prevent the occurrence of hazardous events by designing protective systems against any failure sequences that might lead to a significant hazard. Therefore, it becomes necessary to make sure that the process hazard analysis identifies all potential failure sequences so that the IPLs can be designed in a robust way consistent with the identified failure sequences. However, engineers are not always conscious of the whole process involving the process and plant design. Furthermore, the analysis that precedes the IPL design and the IPL design itself are not incorporated in a systematic way with the process and plant design. This is also related to the lack of design rationale in the design of safety systems, resulting in alarm floods and even more serious problems. This paper presents a business process model for process design that incorporates the notion of independent protection layer design.
Keywords: independent protection layer, process hazard analysis, business process model, process safety design, IDEF0
1. Introduction
Chemical processes have potential hazards, and are designed to prevent a given potential hazard from evolving into an incident. In general, the potential hazard is controlled within a safety region of the normal operating modes. However, some initiating events which exceed the control capabilities of normal operating limits are the cause of abnormal process deviations and hazardous events that can lead to human and physical damage. An independent protection layer (CCPS, 2001) is aimed at preventing incidents by protecting against a particular type of hazardous event. The most commonly encountered independent protection layers that should be considered during process and plant design are: (1) inherently safer process design; (2) basic control system, process alarm and operator supervision; (3) critical alarms, operator supervision, and manual intervention; (4) automatic Safety Interlock System (SIS); (5) physical protection (relief devices); (6) physical protection (containment dikes); (7) facility emergency response; (8) community emergency response. In a typical plant engineering project, the analysis that precedes the IPL design and the IPL design itself are not incorporated in a systematic way with the process and plant design. This is also related to the lack of design rationale in the design of safety systems, resulting in alarm floods (ISA, 2007), which occur when alarm rates exceed 10 alarms in 10 minutes (often reaching the hundreds) and in which important alarms are likely to be missed. Basically, alarm floods can be avoided if the process hazard analysis identifies all potential hazard scenarios and the independent protection layers are designed in a systematic way that is consistent with the identified hazard scenarios. A variety of operations take place during the life-cycle of the plant. These include initial startup, partial startup, restart, startup after turnaround, normal shutdown, emergency shutdown, and partial shutdown. During process and plant design, engineers must address the requirements imposed by these operations in an integrated way. Otherwise, the design and specifications that work well with a certain kind of operation may negatively affect the efficacy and efficiency of another kind of operation. Similarly, the design of an alarm that neglects the limits at which the SIS operates risks defeating the original intention of the alarm, which is to "attract the attention of the plant operator to significant changes that require an assessment or action" (EEMUA, 2007). On the other hand, if an alarm is required by the plant but it lacks an automatic SIS, then the design would have to be modified in order to give more time to the operator to respond. However, it is often the case that the process designer designs the basic control system, process alarms, critical alarms, SIS and relief devices without an integrated approach to the design and specification of the independent protection layers. Furthermore, when a process hazard analysis performed after completion of the process design requires additional process alarms, the lack of rationalization and documentation of the independent protection layers can result in operational problems such as duplicated alarms or inconsistent alarm settings that potentially lead to alarm flood situations. In this paper, a business process model for process design that is conscious of the independent protection layers is developed.
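The alarm-flood criterion quoted above (more than 10 alarms in 10 minutes) is straightforward to operationalize. The sketch below is only an illustration; the function name is ours and alarm timestamps are assumed to be in seconds.

```python
# Flag the start times of alarm floods: windows of length window_s that
# contain more than `threshold` alarms (10 in 10 minutes per the text).
from bisect import bisect_right

def flood_starts(alarm_times, window_s=600, threshold=10):
    times = sorted(alarm_times)
    starts = []
    for j, t in enumerate(times):
        # number of alarms falling in the window [t, t + window_s]
        n = bisect_right(times, t + window_s) - j
        if n > threshold:
            starts.append(t)
    return starts
```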
The IDEF0 (Integration Definition for Function Modeling) method (NIST, 1993) is adopted for the business process modeling. In this model, the process hazard analysis and the independent protection layer design are explicitly integrated, so that the IPLs can be designed in a robust way consistent with the identified hazard scenarios. Consequently, documentation and rationalization of the protection layer design become possible, resulting in better alarm management.
2. Previous IDEF0 Model for Process and Plant Design
There are several reports of business process models for representing process design (Fuchino et al., 2004; Sugiyama et al., 2006). However, current efforts focus on the development phase of process design, and the design of safety protection systems is not considered. PIEBASE (Process Industry Executive for achieving Business Advantage using Standards for data Exchange) was an international consortium to achieve a common strategy and vision for the delivery and use of internationally accepted standards for information sharing and exchange (ISO-STEP), and developed a business process model to represent the core business activity of the chemical process industry (PIEBASE, 1998). The PIEBASE activity model uses a template approach across all principal activities. This template consists of three steps: (1) manage, (2) do and (3) provide resources. Fig. 1 (Node Tree of PIEBASE Activity Model) shows the node tree which pulls out the part concerning safety design and process hazard analysis from the PIEBASE activity model. In this model, "A-0: Conduct Core Business" is developed into "A1: Manage the Business", "A2: Acquire Input", "A3: Create Product", "A4: Sell Output" and "Provide Supporting Resources". The process plant is one of the physical assets to be provided for the "A3: Create Product" activity. Designing and engineering the process plant is divided into two phases, i.e. "in concept" and "in detail", and the safety design and engineering are performed through these two phases of "A554224: Produce Conceptual Safety Engineering Designs" and "A5542326: Design Infrastructural and Safety, Health, and Environmental Protection Systems". The information on the designed and constructed process plant becomes the mechanism information for the A3 activity, and the result of safety design and engineering is evaluated in the "A315: Assess Safety, Health, and Environmental Protection for Performing Production" activity. The purpose of the PIEBASE activity model is to provide a common understanding of engineering and information requirements during the different activities that occur during the life-cycle of a plant. However, the activities in the model were defined so as to reflect current existing practices. Therefore, the activity model fails to address the integration between the IPL design and the different process hazard analyses. The process hazard analysis is applied only as a crosscheck of the resulting design.
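Purely as an illustration of how such a node tree can be captured for tooling purposes (this encoding is ours, not part of IDEF0 or PIEBASE), the safety-related nodes quoted above can be stored as a flat mapping whose hierarchical codes imply the decomposition:

```python
# Illustrative encoding of the safety-related slice of the PIEBASE node
# tree quoted in the text; the mapping and helper are a convenience for
# tooling, not part of the IDEF0 standard.
piebase_safety_nodes = {
    "A-0": "Conduct Core Business",
    "A3": "Create Product",
    "A315": "Assess Safety, Health, and Environmental Protection "
            "for Performing Production",
    "A554224": "Produce Conceptual Safety Engineering Designs",
    "A5542326": "Design Infrastructural and Safety, Health, and "
                "Environmental Protection Systems",
}

def parent_code(code):
    """Drop the last digit: e.g. node 'A5542326' decomposes 'A554232'."""
    return code[:-1] if len(code) > 2 else "A-0"
```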
3. Improved IDEF0 Model
This is an IDEF0 activity model for process design that incorporates independent protection layer design. Similar to the PIEBASE activity model, the improved IDEF0 model is based on a template. However, the template has been extended to five types of sub-activities, i.e. "Manage", "Plan", "Do", "Evaluate" and "Provide Resources" (Fuchino et al., 2010). The proposed template is applied across all principal activities, and the lifecycle engineering viewpoint is adopted. Fig. 2 (Node Tree from A-0 Activity) shows a part of the node tree from "A-0: Perform LCE" as the top activity. This is a lifecycle model that follows the systems engineering organization of definition, development, and deployment. The process design activity consists of three phases: conceptual, preliminary and final, and the plant design is composed of two phases: preliminary and final. The conceptual process design phase (activity A33) corresponds to the inherently safer process design in IPL, including hazard elimination and substitution, inventory considerations, and plant location; the preliminary process design phase is related to the design of IPLs (2) to (6). In the "A34: Develop Preliminary Process Design" activity, the process is designed according to the operational requirements of normal, abnormal and emergency operations. In designing the process for normal steady-state operation (A343), the basic process control is designed, so the safety operating ranges should be assessed in A3432 before activity A3433. Develop Preliminary Process Design for Startup and Shutdown (A344) evaluates the current plant design to verify that all the necessary equipment is available to perform the startup and shutdown. As a result, preliminary operating procedures are obtained along with information on operating limits and time-related data, which can be used to configure state-based alarm algorithms that detect when the plant changes operating state and dynamically modify the alarm settings to conform to the proper settings for each state (Hollifield, 2007). The synthesis of startup and shutdown operations takes place in activity A3442. Fig. 3 (Development of Node A3442) shows further development from activity A3442. To specify initial conditions and safety constraints in A34423, the hazardous conditions should be assessed in A34422. Fig. 4 shows further development from "A345: Develop Preliminary Process Design for Abnormal Situation". In order to determine the operation category (fallback, partial shutdown or total shutdown), process hazard analysis is necessary (A34522). This
is because hazard analysis is used to identify possible hazard scenarios and to produce recommendations for additional sensors, alarms or other IPLs, some of which are addressed in activity A34523. Furthermore, because hazard scenarios contain information about causes, consequences, and corrective actions, they can also be used to justify the design rationale for a given alarm. In addition, operational responsibility should be estimated in A34523, and the operation category is to be decided in A34524. The activities to perform process hazard analysis are depicted in Figs. 2, 3 and 4. It becomes clear that the process hazard analysis is necessary for the protection layer design.
4. Conclusion
This paper describes an IDEF0 activity model for process design that takes into account the independent protection layers. It is clear that process hazard analyses and independent protection layer design should be performed concurrently to generate a rationalized process safety design, including process and critical alarm design with operator responsibilities.
Fig. 4. Development of Node A345
For example, the proposed model ensures that every alarm or other protection layer is consistent with the operator actions and time to respond. The model can also be used to provide specific information (trip points, causes, consequences, corrective actions) that justifies a given alarm.
References
CCPS, 2001, "Layer of Protection Analysis," New York: American Institute of Chemical Engineers, Center for Chemical Process Safety.
EEMUA, 2007, Alarm Systems - A Guide to Design, Management and Procurement, Engineering Equipment and Materials Users Association.
Fuchino, T., T. Wada and M. Hirao, 2004, "Acquisition of Engineering Knowledge on Design of Industrial Cleaning System through IDEF0 Activity Model," Proceedings of Knowledge-Based Intelligent Information and Engineering Systems, pp 418-424.
Fuchino, T., Y. Shimada, T. Kitajima and Y. Naka, 2010, "Management of Engineering Standards for Plant Maintenance based on Business Process Model," Proceedings of 20th European Symposium on Computer Aided Process Engineering, pp 1363-1368.
Hollifield, B.R., E. Habibi, 2007, Alarm Management - Seven Effective Methods for Optimum Performance, ISA.
ISA, 2007, "Alarm Management: Seven Effective Methods for Optimum Performance," North Carolina: Instrumentation, Systems and Automation Society.
NIST, 1993, "Integration Definition for Function Modeling," Federal Information Processing Standards Publication, 183, http://www.itl.nist.gov/fipspubs/idef02.doc, National Institute of Standards and Technology.
PIEBASE, 1998, "PIEBASE Activity Model", http://www.posc.org/piebase/, Process Industries Executive for Achieving Business Advantage using Standards for Data Exchange.
Sugiyama, H., M. Hirao, R. Mendivil, U. Fischer and K. Hungerbuhler, 2006, "A Hierarchical Activity Model of Chemical Process Design based on Life Cycle Assessment", Trans IChemE, Part B, Proc. Saf. Environ. Prot., 84(B1), pp 63-74.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Conceptual design of glycerol etherification processes
Elena Vlad, Costin Sorin Bildea, Elena Zaharia, Grigore Bozga
University Politehnica Bucharest, Department of Chemical Engineering, Polizu 1-7, 011061-Bucharest, Romania.
Abstract
The feasibility of an industrial-scale, acid-catalyzed process for the etherification of glycerol with i-butene is analyzed. A simplified mass balance of the process is derived using a kinetic model for the reactor and a black-box model for the separation section. Sensitivity analysis of the steady state model shows that the system exhibits both state multiplicity and regions where no solution exists. The nominal operating point is chosen to avoid high sensitivity to disturbances and to guarantee feasibility when operation and design parameters are uncertain. The stability and robustness in operation are checked by rigorous dynamic simulation.
Keywords: Glycerol etherification, design, control, dynamic simulation.
1. Introduction
Glycerol is obtained as a by-product of biodiesel production in amounts equivalent to approximately 10 wt% of the total product. Di- and tri-ethers of glycerol are compounds soluble in diesel and biodiesel, improving the quality of the fuel and therefore being interesting alternatives to commercial oxygenate additives. The reaction of i-butene with glycerol in the presence of homogeneous [1] or heterogeneous [2,3] acid catalysts yields a mixture of mono-, di-, and tri-tert-butyl glycerol ethers. Conceptual processes which could perform this reaction are described in references [1,4,5,6]. Here, we present the design of a glycerol etherification plant processing a nominal flow rate of 2 kmol/h of glycerol, assumed to be the by-product of a 15000 tonne/year biodiesel plant. We focus on operating conditions leading to high selectivity in di-ether, 0.9 being a typical value. The robustness in operation is also considered.
2. Conceptual design
2.1. Reactor-separation-recycle model
A simplified model of the plant is used to choose a nominal operating point together with the plantwide control structure. Fig. 1 (left) presents the Reactor–Separation–Recycle structure of the plant [7, 8].
Fig. 1 – Reactor-Separation-Recycle structure of the glycerol etherification plant (left) and the principle of two different plantwide control structures
332
E. Vlat et al.
To investigate the steady state behaviour of the etherification plant, two control structures are considered (Fig. 1). In both control structures the flow rate of fresh glycerol is set to the value FG,0. Control structures CS1 and CS2 differ by the second flow specification: fresh i-butene (FI,0) and the ratio r0 = FI,0/FG,0, respectively. For each control structure, the mathematical model of the Reactor–Separation–Recycle system is solved. The conversions of glycerol and i-butene (XG and XI, respectively) are plotted versus the flow rate of fresh glycerol (FG,0). For a fixed value of the fresh i-butene flow rate FI,0 (CS1), the model has a feasible solution (positive flow rates) only for a certain range of the fresh glycerol flow rate FG,0. Fig. 2 displays the results obtained for an etherification plant employing a reactor of 1 m3. When the flow rate FG,0 is set to the upper limit, only di-ether is obtained as product. However, both glycerol and i-butene conversions approach zero and the recycle rates become infinite. This limit is independent of the reactor volume.
Fig. 2 – Control structure CS1: glycerol and i-butene conversions vs. fresh glycerol flow rate, for different values of the fresh i-butene flow rate.
Fig. 2 shows that, for given values of the model parameters, either zero or two steady states are possible. When they exist, the two states are characterized by similar values of the glycerol conversion, but very different values of the conversion of i-butene. Moreover, a very high sensitivity of the i-butene conversion with respect to the glycerol flow rate is observed when high selectivity in di-ether is required. This implies that small disturbances will lead to large changes of the i-butene recycle rate, known as the "snowball effect". In conclusion, control structure CS1 offers the advantage of easily setting the ratio between the di- and tri-ethers by manipulating the reactant flow rates, but the operating points of high di-ether selectivity exhibit high sensitivity to disturbances and are dangerously close to the feasibility limit. Fig. 3 presents the conversions versus the fresh glycerol flow rate, for different reactor volumes, when control structure CS2 is used. The system shows multiple steady states and a region of infeasibility. On the two solution branches, the values of glycerol conversion are very close to each other.
1
FI,0/FG,0=2.1
0.8
0.46
0.6
FI,0/FG,0=2.1
XG
XI
0.48
V / [m3] = 1
0.4
0.44
2
3
V / [m ] = 1
0.42
4
2
4
0.2 0
0.4 0
2
4
6
8
F G,0 / [kmol/h]
10
12
14
0
2
4
6
8
10
12
14
F G,0 / [kmol/h]
Fig. 3 – Control structure CS2: glycerol and i-butene conversions vs. fresh glycerol flow rate, for different values of the reactor volume.
Compared to control structure CS1, much larger glycerol flow rates can be processed. The same selectivity in di-ether, namely σD/G = 3 − FI,0/FG,0 = 0.9, is obtained at all operating points depicted in Fig. 3. It appears that the 1 m3 reactor allows increasing the amount of processed glycerol by 50% from the nominal value, up to 3 kmol/h.
2.2. Separation section
Depending on the temperature and composition, a mixture of glycerol, i-butene and glycerol ethers can exist in a single or in two different liquid phases. Fig. 4 analyzes the liquid-liquid (L-L) equilibrium at 25 °C. Assuming that i-butene can be easily separated, the composition of the reactor-outlet stream (Fig. 4a, point M+D+G) falls in the single-phase region. This happens because the large amount of mono-ether increases the miscibility of glycerol and di-ether. However, the immiscibility can be exploited for separating the reactants from the products by mixing, in the L-L separator, the fresh glycerol with the reactor outlet. The mixture (point L) separates into a glycerol-rich phase (L1) and a DTBG-rich phase (L2). However, the DTBG-rich phase L2 contains significant amounts of MTBG.
L1
(a)
1
L
0.8
0.6 0.4
G / G0
XG
0.8
M+D+G
0.2
L2
0
D 0
M 0.2 0.4 0.6 0.8 XMTBG
1
0.6
IB0/G0=0
1
(L1)
(b) 2
0.4 (L ) 2 0.2 IB0/G0=0 0 0 0.4 0.8
2 1 1.2
1.6
M0 / G0
Fig. 4 – Liquid-liquid equilibrium. (a) Liquid-liquid immiscibility occurs when the column bottom stream is mixed with fresh glycerol. (b) Addition of i-butene improves the separation of glycerol.
Fig. 4(b) shows glycerol distribution between the two liquid phases, starting from an equimolar glycerol - DTBG mixture with various amounts of MTBG and i-butene. It can be observed that i-butene has a favorable effect on the L-L equilibrium because it decreases the solubility of glycerol in the DTBG-rich phase. In conclusion, the separation of i-butene from the reaction mixture should be done after the L-L split.
3. Detailed design
This section presents details of the glycerol etherification process (Fig. 5). The flowsheeting software AspenPlus was used as a CAPE tool. The physical properties of glycerol, i-butene and water are available in the AspenPlus databank. The properties of the ethers were calculated using group contribution methods. The behaviour of the liquid phase was described by the NRTL activity model. The interaction parameters of pairs involving ethers and glycerol or i-butene were taken from [1]. The other unknown interaction parameters were estimated using UNIFAC. Ideal mixing was assumed. Glycerol etherification. The etherification of glycerol with i-butene takes place in a CSTR of 1 m3. The reaction temperature and pressure are set to 90 ºC and 14 bar, respectively, at which the reaction mixture is liquid. When the same reactor-inlet flow rates were specified to the simplified (Reactor-Separation-Recycle) and Aspen models, identical results for the reactor-outlet stream were obtained.
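For orientation, the standard binary form of the NRTL model used for the liquid phase is sketched below; tau12, tau21 and alpha are placeholders, not the interaction parameters taken from [1] or estimated with UNIFAC.

```python
# A minimal sketch of the standard binary NRTL activity-coefficient model;
# all parameter values supplied to this function are placeholders.
import math

def nrtl_binary(x1, tau12, tau21, alpha=0.3):
    """Return (gamma1, gamma2) at liquid mole fraction x1 of component 1."""
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return math.exp(ln_g1), math.exp(ln_g2)
```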
Fig. 5 – Flowsheet of the glycerol etherification plant
G-L-L separation. The composition of the reactor-outlet stream falls in the single-phase region, but two liquid phases are formed when fresh glycerol is added, as previously discussed. Therefore, G-L-L separation is possible. The temperature is reduced to 50 ºC and the pressure is set to 1 bar. The cooling duty is 27.8 kW. 10% of the i-butene is found in the vapour stream "I3a" and recycled. The stream "G1+M1a" contains glycerol and mono-ether and is recycled. The liquid stream "2a" contains i-butene and ethers. Column C1 separates the i-butene (stream I3b). It has 9 theoretical stages with the feed on stage 3. The column has a partial condenser and is operated at atmospheric pressure. The column diameter is 0.2 m. The reflux ratio is set to 2. The reboiler duty is 79.6 kW and the condenser duty is 25 kW. Column C2 separates the di- and tri-ether from the mono-ether. It has 50 theoretical stages with the feed on stage 25. The column has a total condenser and is operated under vacuum to avoid high temperature in the bottom of the column. The column diameter is 0.8 m. The reflux ratio is set to 4. The reboiler duty is 191 kW and the condenser duty is 209 kW. Stream "M1b" contains 97% mole fraction mono-ether and some di-ether. Stream "4" contains 83% di-ether and 10.2% tri-ether (mole fractions).
4. Dynamics and control
From the viewpoint of steady state behaviour, the design performed in the previous sections, together with control structure CS2, allows processing the nominal flow rate of glycerol and tolerates rather large disturbances. However, the analysis showed two coexisting steady states, which cannot be simultaneously stable. Moreover, the simplified model used in the previous section assumed perfect separation of the products from the unconsumed reactants, which is certainly not the case. Therefore, the dynamics of the plant must be considered in order to prove the stability of the operating point and the resiliency with respect to disturbances. To reach this goal, a dynamic model of the plant was built in AspenDynamics. Besides the control loops of CS2, standard control of the G-L-L separator and distillation columns was used. The controllers were tuned by a simple version of the direct synthesis method. Fig. 6 presents a sample of results obtained by dynamic simulation. Starting from the steady state, at time = 1 h, the glycerol flow rate was changed from 2 kmol/h to 2.2 kmol/h and 1.8 kmol/h, respectively.
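For reference, a common direct-synthesis PI tuning rule for a first-order-plus-dead-time model G(s) = K e^(−θs)/(τs + 1) is sketched below; the paper does not state which variant of the method it used, so both the formula choice and the numbers are purely illustrative.

```python
# A representative direct-synthesis PI tuning rule for a FOPDT model;
# K, tau, theta and the closed-loop time constant tau_c are placeholders.
def direct_synthesis_pi(K, tau, theta, tau_c):
    """Return (Kc, tau_I) for a desired closed-loop time constant tau_c."""
    Kc = tau / (K * (tau_c + theta))   # controller gain
    tau_I = tau                        # integral time = process time constant
    return Kc, tau_I

# e.g. an illustrative loop with K = 2, tau = 5 min, theta = 0.5 min,
# tuned for tau_c = 2.5 min:
Kc, tau_I = direct_synthesis_pi(2.0, 5.0, 0.5, 2.5)
```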
Fig. 6 – Dynamic simulation results.
It can be seen that the nominal operating point is stable, and the plant achieves stable operation when disturbances are introduced.
5. Conclusions
Production of glycerol ethers by etherification of glycerol with i-butene catalyzed by homogeneous acid catalysts is feasible. For a typical glycerol flow rate of 2 kmol/h, the reaction can be carried out in a CSTR of 1 m3. The reactant conversion is high and small recycles are needed. The separation of the products from the unconsumed reactants can be achieved by a combination of a 3-phase flash and two distillation columns. When one of the control structures considered in this work is applied, multiple steady states are possible and the flow rate of glycerol that can be processed is limited. For this reason, the behaviour of the plant was investigated by steady state sensitivity analysis. This allowed selecting a robust control structure.
Acknowledgement The work has been funded by the Sectoral Operational Programme Human Resources Development 2007-2013 of the Romanian Ministry of Labour, Family and Social Protection through the Financial Agreement POSDRU/88/1.5/S/61178 and by CNCSIS – UEFISCSU, projects IDEI 1545/2008 – “Advanced modeling and simulation of catalytic distillation for biodiesel synthesis and glycerol transformation” and IDEI 1543/2008 – “A nonlinear approach to conceptual design and safe operation of chemical processes”.
References
1. Behr, A. and L. Obendorf, Development of a process for the acid-catalyzed etherification of glycerine and isobutene forming glycerine tertiary butyl ethers, Eng. Life. Sci. Comm., 2, 185, 2003.
2. Klepáčová, K., D. Mravec, M. Bajus, Tert-Butylation of glycerol catalysed by ion-exchange resins, Appl. Catal. A: General, 294, 141, 2005.
3. Klepáčová, K., D. Mravec, A. Kaszonyi, M. Bajus, Etherification of glycerol and ethylene glycol by isobutylene, Appl. Catal. A: General, 328, 1, 2007.
4. Versteeg, W.N., O. Ijben, W.N. Wernink, K. Klepacova, S. Van Loo, Method of preparing GTBE, WO 2009/147541 A1, 2009.
5. Gupta, V.P., Glycerine ditertiary butyl ether preparation, US 5476971, 1995.
6. Noureddini, H., Process for producing biodiesel fuel with reduced viscosity and a cloud point below 32 degrees Fahrenheit, US 6015440, 2000.
7. A.C. Dimian, C.S. Bildea, Chemical Process Design: Computer-Aided Case Studies, Wiley-VCH, 2008.
8. Bildea, C.S. and A.C. Dimian, Fixing flow rates in recycle systems: Luyben's rule revisited, Ind. Eng. Chem. Res., 42, 4578, 2003.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Dynamic Conceptual Design under Market Uncertainty and Price Volatility
Davide Manca, Andrea Fini, Mirko Oliosi
CMIC Department, Politecnico di Milano, 20133 MILANO, ITALY, [email protected]
Abstract
The paper proposes and discusses a methodology to quantify the economic potential of a plant subject to market variability. Price fluctuations and market uncertainty are analyzed and modeled. The manuscript assesses the variability of raw and product prices (i.e. hydrocarbons in the HDA process) related to a reference indicator (i.e. crude oil). Afterwards, it proposes a methodology to forecast a series of economic scenarios to quantify the feasibility of installing and running the HDA process (according to a dynamic interpretation of the economic potentials proposed by Douglas, 1988). Finally, the paper evaluates the statistical distribution of a dynamic economic potential to quantify the financial risk of investment in such a plant.
Keywords: Conceptual design; Price fluctuations; Market uncertainty; Econometric model.
1. Introduction
Douglas (1988) proposed, formalized and discussed the conceptual design of chemical plants based on a hierarchical methodology for the sequential optimization of economic potentials. Such potentials depend on investment and working costs as well as on revenues from selling the main product(s) and possible byproduct(s). Nevertheless, the work of Douglas did not take into account the price variability of raw materials, products, and utilities that are subject to market demand. Usually, the conventional approach to conceptual design finds either a suboptimal or even an unrealistic solution because it neglects the dependency of the economic terms on the time-varying market oscillations. Ullmann's encyclopedia (2002) reports that running a hydrodealkylation (HDA) plant (Douglas, 1988), which produces benzene from toluene, can be either economically profitable or unprofitable according to the price variability of such compounds. Milmo (2004) reports that the mean running time of a HDA plant is about 40% of the theoretical operating time due to frequent periods when the toluene price is higher than the benzene one. Nonetheless, HDA plants are run all over the world.
2. Modeling the functional dependence of hydrocarbon prices
The commodity market dictates the price oscillations of both raw materials and products according to the law of supply and demand. With reference to hydrocarbons (e.g., toluene and benzene in the HDA process), it is reasonable to define crude oil as the reference indicator to model their price dynamics (i.e. market demand and economic fluctuations; see also Figure 1). A covariance analysis allows assessing the high correlation existing between the crude oil price and the benzene (toluene) price (i.e. absence of significant time delay). In addition, a time series analysis of commodity prices shows the absence
of any seasonal nature whilst the plot of correlograms provides further information about the lack of time delays between benzene (toluene) price and crude oil quotation.
Figure 1 - Monthly economic fluctuations of benzene, toluene and crude oil prices in the 2005-2010 period.
In addition, the autocorrelograms are monotonically decreasing. This point shows the manifest self-dependency of both benzene and toluene prices on the corresponding previous quotations (on a monthly basis). These bits of information allow proposing an autoregressive model with exogenous input (ARX), also known as an "autoregressive with distributed lag" model, according to the econometrics terminology (Stock and Watson, 2003), whose structure is:

P_x,i = a_x + b_x P_co,i + c_x P_x,i-1   (1)
Figure 2 - ARX model of the benzene price and comparison with the real quotation. The dashed vertical line divides the left portion of data used to identify the econometric model from the right portion used for the cross-validation. A quite similar trend is also shown by toluene.
where P_x,i is the price of hydrocarbon x at time i (in our model, at the i-th month) and P_co is the price of crude oil. By minimizing the sum of the squared differences between the real quotation and the model price of equation (1) over a given time interval, it is possible to evaluate the model parameters (a_x, b_x, c_x) of the HDA process for both benzene and toluene (see also Figure 2).
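A minimal sketch of this least-squares identification follows, assuming monthly price vectors are available as NumPy arrays; the regression tool shown is our stand-in, not necessarily the one used by the authors.

```python
# Fit Eq. (1) by ordinary least squares: P_x[i] = a + b*P_co[i] + c*P_x[i-1].
# p_x and p_co are assumed to be aligned monthly price arrays.
import numpy as np

def fit_arx(p_x, p_co):
    """Return (a, b, c) for the ARX model of Eq. (1)."""
    y = p_x[1:]                                  # left-hand side, months 2..N
    X = np.column_stack([np.ones(len(y)),        # intercept a
                         p_co[1:],               # exogenous input b*P_co,i
                         p_x[:-1]])              # lagged term c*P_x,i-1
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs                                # a, b, c
```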
3. Evolutionary scenarios of hydrocarbon prices
Once the dependency of hydrocarbon prices on the crude oil indicator has been established, it is time to propose and discuss a model to forecast the possible and consistent scenarios of the future hydrocarbon quotations. Figure 3 shows the weekly relative variations between the market quotations of crude oil in the five-year period 2005-2010.
Figure 3 - Weekly relative variations of crude oil price in the 2005-2010 period.
A statistical analysis of the relative variations shown in Figure 3 allows determining the stochastic nature of the phenomenon, represented by a Gaussian distribution with a slightly positive mean value, which indicates a bullish trend of the crude oil quotations. The limited values of the autocorrelogram of the variations of the crude oil price prove the absence of periodic phenomena and show the stochastic nature of the weekly fluctuations. These remarks allow defining the following dynamic model of the crude oil price:

$P_{co,i} = P_{co,i-1} \left( 1 + RANDN \cdot \sigma_{co} + \mu_{co} \right)$   (2)

where RANDN is a random number chosen from a normal distribution with mean zero and standard deviation one, and $\mu_{co}$ and $\sigma_{co}$ are the mean and the standard deviation of the relative variations of the crude oil price. Equation (2) describes a typical Markov process, i.e. a stochastic discrete-time process where the new status of the process depends only on the previous one (Häggström, 2002). Once the scenarios of the future prices of crude oil have been modeled, it is possible to determine from equation (1) the future scenarios of hydrocarbon derivates (e.g., benzene and toluene for the HDA process). The difference (i.e., error) between the ARX model and the real data (see also Figure 2) of hydrocarbon prices can be ascribed to the stochastic fluctuations of market quotations. This is supported by the Gaussian distribution of the errors, which has a zero mean. It is possible to observe that most of the absolute errors are below the 20% threshold (with the exception of the time period corresponding to the world financial crisis at the end of 2008, where the error is as high as 120% and, consequently, is rejected as an outlier). Given these considerations, the future prices of crude oil derivates can be modeled by means of the following equation:

$P_{x,i} = \left( a_x + b_x P_{co,i} + c_x P_{x,i-1} \right) \left( 1 + RANDN \cdot \sigma_x \right)$   (3)
where the stochasticity of future market quotations is accounted for by the $RANDN \cdot \sigma_x$ term. Given a crude-oil scenario of future market quotations determined by equation
(2), it is possible to determine the corresponding scenarios of toluene and benzene prices from equation (3). Figure 4 shows one of the possible future scenarios for crude oil, toluene and benzene.
Figure 4 - One of the possible future scenarios of commodity market quotations in the five-year period 2010-2014.
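A minimal Monte Carlo sketch of equations (2) and (3) is given below; the drift, volatility and ARX parameter values are illustrative assumptions only, not the identified ones.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def crude_scenario(p0, mu, sigma, n_steps):
    """Markov price path, eq. (2): P_i = P_{i-1} * (1 + RANDN*sigma + mu)."""
    p = np.empty(n_steps)
    p[0] = p0
    for i in range(1, n_steps):
        p[i] = p[i - 1] * (1.0 + rng.standard_normal() * sigma + mu)
    return p

def derivate_scenario(p_co, p0, a, b, c, sigma_x):
    """Hydrocarbon price path driven by the crude oil scenario, eq. (3)."""
    p = np.empty_like(p_co)
    p[0] = p0
    for i in range(1, len(p_co)):
        p[i] = (a + b * p_co[i] + c * p[i - 1]) * (1.0 + rng.standard_normal() * sigma_x)
    return p

crude = crude_scenario(p0=75.0, mu=0.001, sigma=0.03, n_steps=260)  # 5 years, weekly
benzene = derivate_scenario(crude, p0=90.0, a=2.0, b=0.9, c=0.25, sigma_x=0.05)
```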
4. Dynamic economic potentials
The opportunity of retrofitting the HDA process by installing an energy production section, which can be run alternately with the feed-effluent heat exchanger to maximize a dynamic economic potential, was discussed by Manca and Grana (2010) as a function of the hourly fluctuations of the electric energy price within a given day. This paper focuses on an extension of the classic economic potentials (as theorized by Douglas, 1988) to take into account the variability of the market prices of raw materials, utilities, products, and byproducts in terms of a suitable distribution of future scenarios. It is then possible to define a new set of dynamic economic potentials, $DEP2$, $DEP3$, $DEP4$, in accordance with Douglas' notation $EP2$, $EP3$, $EP4$:

$Rev4_{i,k} \; [\$/h] = \max\left( 0, \; \sum_{p=1}^{NP} C_{p,i,k} F_p - \sum_{r=1}^{NR} C_{r,i,k} F_r - C_{el,i,k} W_{el} - C_{st} F_{st} - C_{H_2O} F_{H_2O} - C_{fuel} F_{fuel} \right)$   (4)

$DEP4_k \; [\$/y] = \dfrac{\sum_{i=1}^{nMonths} Rev4_{i,k} \cdot nHoursMonth - \sum_{e=1}^{nEquip} IC_e}{nMonths/12}$

where $i$ is the month index; $k$ the scenario index over a period of $nMonths$ months; $C$ are costs; $F$ flowrates; $W$ electric power; the subscripts $p, r, el, st, H_2O, fuel$ refer respectively to products, reactants, electric energy (pumps and compressor), steam and water (condensers and reboilers), and fuel (furnace); $nHoursMonth$ is the number of operating hours in a month; $IC$ are the investment costs of the $nEquip$ process units. The
max function allows accounting only for positive $Rev4$ revenues (i.e., when the toluene and benzene prices make the process economically viable); otherwise the plant is switched off (along the $i$-th period) and only the negative term of the equipment investment is charged in the computation of $DEP4$. By simulating a series of possible scenarios for the future market prices of commodities and utilities, it is possible to obtain a forecast of the distribution of economic potentials over a given period (e.g., a five-year period, as shown in the left panel of Figure 5).
Figure 5 – Left panel: distribution of a set of 3,000 possible future scenarios for the DEP 4 indicator (five-year period). Right panel: cumulative distribution of the DEP 4 scenarios.
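The switch-off logic of equation (4) and the resulting $DEP4$ estimate can be sketched as follows for a single scenario; the flowrates, utility cost and investment figure are hypothetical, and a distribution such as that of Figure 5 would be obtained by repeating the call over many scenarios.

```python
import numpy as np

def dep4(prices_prod, prices_raw, F_prod, F_raw, util_cost, IC_tot,
         n_hours_month=720):
    """DEP4 [$/y] over one price scenario of n months (cf. eq. 4).
    prices_* are (n_months, n_streams) arrays of scenario prices."""
    revenue = prices_prod @ F_prod - prices_raw @ F_raw - util_cost
    rev4 = np.maximum(0.0, revenue)          # plant idled when unprofitable
    years = prices_prod.shape[0] / 12.0
    return (rev4.sum() * n_hours_month - IC_tot) / years

# Hypothetical single-product, single-reactant example over 60 months
rng = np.random.default_rng(1)
p_benzene = rng.normal(95.0, 10.0, size=(60, 1))   # [$/kmol]
p_toluene = rng.normal(80.0, 10.0, size=(60, 1))   # [$/kmol]
print(dep4(p_benzene, p_toluene, F_prod=np.array([120.0]),
           F_raw=np.array([130.0]), util_cost=500.0, IC_tot=8.0e6))
```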
The cumulative distribution curve of dynamic economic potentials (see also right panel of Figure 5) allows comparing the HDA investment with other alternative investments (e.g., financial assets). This allows also quantifying the corresponding risk of investment under uncertainty. Finally, it is possible to compare the economic potential distribution of different plant layouts and the profitability of alternative solutions in the field of energy allocation and exploitation (Manca and Grana, 2010).
References
Milmo S., "Benzene prices in Europe escalate to tight supply-demand", Chemical Market Reporter, 5, 1-2, (2004)
Douglas J.M., Conceptual Design of Chemical Processes, McGraw-Hill, New York, (1988)
Manca D., Grana R., "Dynamic Conceptual Design of Industrial Processes", Computers and Chemical Engineering, 34, 5, 656-667, (2010)
Häggström O., Finite Markov Chains and Algorithmic Applications, Cambridge University Press, Cambridge, (2002)
Stock J.H., Watson M.W., Introduction to Econometrics, Pearson Education, London, (2003)
Ullmann's Encyclopedia of Industrial Chemistry, Vol. 4, 6th edition, Wiley-VCH, (2002)
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Analysis of separation possibilities of multicomponent mixtures
Laszlo Szabo, Sandor Nemeth, Ferenc Szeifert
University of Pannonia, Department of Process Engineering, Egyetem Str. 10, H-8200 Veszprém, Hungary
Abstract
Rectification is the most often used fluid separation technology in the chemical industry. The improvement of distillation equipment and processes is important because of their prevalence and their huge energy needs. Recently, the systematic synthesis of separation sequences has developed significantly. This paper presents a case study of the separation of a multicomponent cracked gas. Those rules of separation sequence synthesis that are based on the actual example and can be generalized are emphasized. The generalized rules, and the separation structure developed by applying them, depend unambiguously on the thermodynamic properties of the multicomponent mixture and the specification of the products.
Keywords: separation of multicomponent mixtures, boiling point order (BPO), difference of boiling points (DBP), cracked gas
1. Introduction
Conventional distillation columns divide the feed stream into two products. Complex column configurations (side product, side stripper, etc.) can be decomposed into a sequence of conventional columns. Therefore, the separation of a multicomponent mixture can be realized as a sequence, or sequences, of separation steps with two products each. A separation with two products is unambiguously determined by the thermodynamic properties of the feed stream. Several methods are known for the determination of the best separation sequence [3, 7], for example:
• algorithmic approaches involving established optimization principles,
• heuristic methods based on rules of thumb,
• evolutionary strategies wherein improvements are systematically made to an initially created separation sequence, and
• thermodynamic methods involving applications of heat cascade principles.
In some cases two or more methods are combined in the process synthesis of a distillation system. A disadvantage of the algorithmic and evolutionary methods is that their application requires a special mathematical background and computational skill from the user. Although heuristic rules can be applied easily to determine the order of a separation sequence, unfortunately several heuristic rules contradict each other [8]. Table 1 shows the evolution of the heuristic rules [1, 2, 4]. When applying heuristic methods, it is often a problem to define the easiest and hardest split. Through the analysis of binary mixtures we collected those important thermodynamic parameters and models which describe the separation best and can be applied in the case of multicomponent mixtures. The separability of a binary mixture is determined by the vapor-liquid equilibrium (VLE), the bubble and dew point curves (TXY), and the difference of boiling points (DBP) of the pure components. The VLE shows the thermodynamic
limits of the separation. The DBP of the pure components also determines the difficulty of the separation. The relative volatility depends on the concentration of the mixture; therefore it cannot be used as a characteristic parameter for the description of the separation. Special coefficients are used to describe the difficulty of separation in the heuristic rule systems. The most commonly used parameters are the coefficient of difficulty of separation (CDS) and the coefficient of ease of separation (CES) [5, 7]; a small computational sketch of the CES rule follows Table 1. The structure of this paper is the following: the applied rules are illustrated by the design of the separation structure of a cracked gas from an olefin plant, and then we discuss the heuristic rules which can be generalized. Aspen Plus™ software was applied for the calculations.

Table 1. Historical overview of the development of main heuristic rules
Heuristic rule: Author, year
Perform the easiest separation first: Harbert, 1957; Douglas, 1988
Perform equimolar splits (50/50): Harbert, 1957; Heaven, 1969; King, 1971; Douglas, 1988
The heaviest separation last: Rudd, 1973; Douglas, 1988
First remove the most plentiful component: Nishimura in Hiraizumi, 1971; King, 1971; Rudd, 1971; Douglas, 1988
The cheapest separation first: Harbert, 1957; Rudd, 1973; Douglas, 1988
Perform separation with the lowest CDS: Nath in Mothard, 1981
Perform separation with the highest CES: Nadgir in Liu, 1983
Perform separation with the lowest energy index: Lien, 1983
Perform direct sequence: King, 1971
Perform sequence without non-key components: King, 1971; Gomez in Seader, 1976
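As a rough illustration of the coefficient-based rules in Table 1, the sketch below scores candidate splits with a coefficient of ease of separation. The form used here (CES = f times the boiling point difference of the keys, with f the smaller-to-larger ratio of distillate and bottoms molar flows) is one common reading of the Nadgir-Liu rule [7], and the feed data are hypothetical.

```python
def ces(feed_moles, t_boil, split):
    """Coefficient of ease of separation for sending components
    [0..split] overhead: CES = f * DBP, with f the smaller-to-larger
    ratio of distillate and bottoms molar flows (one common form)."""
    D = sum(feed_moles[: split + 1])
    B = sum(feed_moles[split + 1:])
    f = min(D, B) / max(D, B)
    dbp = t_boil[split + 1] - t_boil[split]   # boiling point difference [K]
    return f * dbp

# Hypothetical 4-component feed already in boiling point order (BPO)
moles = [30.0, 25.0, 25.0, 20.0]
tb = [111.7, 184.6, 231.1, 272.7]             # [K]
best = max(range(len(moles) - 1), key=lambda s: ces(moles, tb, s))
print("perform split after component", best + 1)
```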
2. Separation of the cracked gas
A cracked gas from an olefin plant was chosen for the investigation of the separation system of multicomponent mixtures. The cleaned and cooled cracked gas consists of 36 components; the main compounds are methane (No. 3), ethylene (No. 4), propylene (No. 6) and n-octane (No. 12) [6].
Figure 1: Properties of the cracked gas
In order to determine the structure of the separation system, in the first step the components of the cracked gas were ordered according to the boiling points of the pure components. This order is called the boiling point order (BPO). In this case the BPO does not change with the pressure. The components are numbered; the first compound is the lightest (hydrogen), while the last compound is the heaviest (1,3-butadiene). In the
second step the concentrations and the DBP of the neighboring components were plotted (Fig. 1). In the third step the products were defined step by step using Figure 1. A product can be one component or a mixture of compounds neighboring in the BPO. Economic and thermodynamic considerations have to be taken into account during the definition of the products; for example, the DBP of the components to be separated should be high enough, and the products should be valuable from the market viewpoint. In our example both pure and mixture products were defined. The pure products are ethylene (No. 4), ethane (No. 5) and propylene (No. 6). The mixture products are the light gas fraction (hydrogen No. 1, CO No. 2, methane No. 3), the light C3 fraction (No. 6-8), the heavy C3 fraction (No. 6-9), the C4 fraction (No. 10-16), the C5 fraction (No. 17-29) and the heavy aromatic fraction (No. 30-36). These products are recovered in high purity (99%). The light and heavy C3 fractions contain the rest of the light and heavy components, respectively. The ethane and the light and heavy C3 products are not pure; therefore these mixtures can be fed back into the steam cracker. The fourth step of this method is the determination of the place of the first split and the specification of the separation. The components were divided into two groups by the place of the split (head and bottom products). Based on Figure 2, the first separation step was defined between the A = [1, 8] (light product) and B = [9, 36] (heavy product) groups, because:
• The components with high concentration are in product A. After the separation, a large product stream (which consists of a few components) and a small product stream (which consists of many components) arise.
• The feed stream is in the gas or vapor phase, from which it is necessary to condense product B. Product B is present in the smaller quantity in the feed stream.
• At the split the DBP is high enough, therefore the split is easy.
• There are components with low concentration on the border of the split (the overlap will expectedly be small).
• The numbers of products will be comparable on the two sides (4 and 5).
The separation was calculated by a shortcut method (Aspen Plus DSTWU unit), hence the key components had to be defined. The key component of product A was propylene (6), while the key of product B was 1,3-butadiene (12), and 99% recovery of both key components was specified, because:
• These components are close to the place of the split.
• The concentrations of these components are high enough.
• The DBP of these components is significant.
In the next separation step the head product of the first separation was separated further. No. 3 and No. 4 were defined as key components (Figure 3), because:
• This is a limit of the product specifications.
• The DBP of these components is significant.
• The concentrations of these components are high enough.
This separation was also performed by the shortcut method (Aspen Plus DSTWU unit), specifying 99% recovery of the key components. In the other separation steps we also applied the above principles (Figures 4-9). The place of the split was defined at a small concentration and small DBP only when it lay on the limit of the product specification (5th separation step). The recovery of the key components was 99% in all separation steps. Figure 11 shows the structure of the separation system, while Figure 10 shows the purity of the products.
Figure 2: 1st separation step
Figure 3: 2nd separation step
Figure 4: 3rd separation step
Figure 5: 4th separation step
Figure 6: 5th separation step
Figure 7: 6th separation step
Figure 8: 7th separation step
Figure 9: 8th separation step
(Figures 2-9 plot the difference of boiling point (DBP) and the concentration of the feed against the component number, marking the place of the split and the key components of each separation step.)
Figure 10: Purity of the products (mass fractions of the product fractions 1-3, 4, 4-6, 6, 7-8, 6-9, 10-16, 17-29 and 30-36)
Figure 11: Structure of the separation system
3. Rules for the separation of multicomponent mixtures
The experience gained in the course of this case study yields the following rules, which can be generalized:
• Besides applying different indicators measuring the difficulty of the separation, it is expedient to order the components of the mixture to be separated according to boiling point and to plot the concentration and DBP of the components (before each separation step).
• With the help of this figure, taking market viewpoints into consideration, N product classes can be defined.
• With the help of this figure, the place of the split and the key components can be determined.
• The place of the split is at the limit of a product, where the DBP is large enough and the concentrations of the components adjacent to the border between the products are small (a code sketch of this rule is given after this list).
• The key components are close to the split place and their concentration is relatively large.
It is necessary to allow compromises during the design of the separation system of a multicomponent mixture, since mutually contradicting requirements are frequent. The fewer compromises are needed, the better the resulting separation system is.
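A compact sketch of this split-selection logic follows; the scoring function (rewarding a large DBP and penalizing high concentrations at the border of the split) and the component data are illustrative assumptions.

```python
def choose_split(components):
    """components: iterable of (name, boiling_point_K, mass_fraction).
    Orders them by boiling point (BPO) and returns the index after which
    to split, favouring a large DBP and low border concentrations."""
    bpo = sorted(components, key=lambda c: c[1])
    def score(s):
        dbp = bpo[s + 1][1] - bpo[s][1]               # large DBP preferred
        border_conc = bpo[s][2] + bpo[s + 1][2]       # small border concentration
        return dbp / (border_conc + 1e-6)
    split = max(range(len(bpo) - 1), key=score)
    return split, bpo

# Hypothetical cracked-gas-like mixture
feed = [("methane", 111.7, 0.25), ("ethylene", 169.4, 0.30),
        ("ethane", 184.6, 0.05), ("propylene", 225.5, 0.20),
        ("1,3-butadiene", 268.7, 0.20)]
split, order = choose_split(feed)
print("split between", order[split][0], "and", order[split + 1][0])
```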
4. Acknowledgment
László Szabó is grateful for the support of the PhD Fellowship of MOL Plc. The financial support from the TAMOP-4.2.2-08/1/2008-0018 (Livable environment and healthier people - Bioinnovation and Green Technology research at the University of Pannonia) project is gratefully acknowledged.
Reference
1. A.K. Modi, A.W. Westerberg, 1992, Distillation column sequencing using marginal price, Ind. Eng. Chem. Res., 31 (3), pp 839-848
2. J.M. Douglas, 1988, Conceptual Design of Chemical Processes, ISBN: 0-07-017762-7
3. M.K. Kattan, P.L. Douglas, 1986, A New Approach to Thermal Integration of Distillation Sequences, Can. J. of Chem. Eng., 64/February, pp 162-170
4. N. Nishida, G. Stephanopoulos, A.W. Westerberg, 1981, A Review of Process Synthesis, AIChE J., 27/3, pp 321-351
5. R. Nath, R.L. Motard, 1981, Evolutionary Synthesis of Separation Processes, AIChE Journal, vol. 27, no. 4, July, pp 578-587
6. T. Gál, B.G. Lakatos, 2006, Pirolizáló kemence matematikai modellezése és számítógépes szimulációja (Modelling and simulation of a cracking furnace), PhD Thesis, University of Pannonia, Veszprém, Hungary
7. V.M. Nadgir, Y.A. Liu, 1983, Studies in Process Design and Synthesis: Part V: A simple heuristic method for systematic synthesis of initial sequences for multicomponent separations, AIChE J., 29, pp 926-934
8. Zs. Fonyó, Gy. Fábry, 2004, Vegyipari művelettani alapismeretek (Unit operations), ISBN: 963-19-5315-7
21st European Symposium on Computer Aided Process Engineering - ESCAPE 21
E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors)
© 2011 Elsevier B.V. All rights reserved.
A computer tool for the development of poly(lactic acid) synthesis process from renewable feedstock for biomanufacturing
Laboratory of Optimization, Design and Advanced Control (LOPCA), School of Chemical Engineering, State University of Campinas (UNICAMP), Campinas (SP), Brazil; Institute of Biofabrication, State University of Campinas (UNICAMP), Campinas (SP), Brazil
Abstract
Advances in tissue engineering to treat the loss or malfunction of organs or tissues have motivated the development of biomaterials and techniques for biomanufacturing. Among these biomaterials, poly(lactic acid), PLA, has received significant attention. PLA is produced from lactic acid, a product that can be obtained by fermentation of sugars from renewable sources such as sugarcane and corn. PLA is a thermoplastic, high-strength material that is degraded in the human body at a rate that can be controlled, and PLA and its copolymers are used in the medical field in the form of plates or devices due to their excellent biocompatibility and biodegradability. PLA is a versatile polymer which can be produced with a wide spectrum of properties, and it is extremely difficult to find the optimal values of the main process variables experimentally, since this is time consuming and expensive. Bearing this in mind, the aim of this work is to simulate the PLA synthesis process from renewable feedstock in a polymer process simulator and to identify the relations of the polymer properties with the operating conditions so as to obtain a final product with the desired characteristics.
Keywords: biocompatible polymer, bone devices, simulation, polymerization
1. Introduction
Tissue engineering is one of the most important areas of materials science, in which multidisciplinary scientists contribute to human health care by producing tissue substitutes that can restore the structural features and physiological functions of natural tissues in vivo. Biodegradable and bioabsorbable polymers are capable of promoting tissue formation and regeneration, and they prevent the patient from undergoing a second surgery to remove the device, reducing cost and trauma. Among the biodegradable polymers, poly(lactic acid), PLA, stands out for its property profile and its attractive cost structure.
2. Methodology
The work was developed in two parts: the first one concerns the simulation of lactic acid production from sucrose, and the second one the simulation of the PLA polymerization from the lactic acid obtained.
3. PLA polymerization
The lactic acid polymerization was divided in three stages: oligomer formation, lactide obtaining and PLA generation. Initially, the lactic acid solution from the fermentation section is fed to the oligomerization reactor, where the lactic acid solution is concentrated by removing water. The resulting product contains a mixture of lactic acid and predominantly linear oligomers in an aqueous solution. It is carried out by means of a CSTR reactor and a distillation column to recycle lactic acid. The lactide formation is the next stage of PLA polymerization; it uses a stannous catalyst (tin octoate) and the principal reagent is the PLA from the oligomer reactor. A CSTR reactor was used to decompose the oligomers to form lactide. The stream product of the depolymerization reactor includes lactide, water, lactic acid, and some linear oligomers. This stream is connected to the lactide purification system, which includes a flash evaporator and distillation columns in sequence that separate the dimer from the other products. Lactic acid and water are recycled to the oligomer formation stage. The catalyst is recovered and a purge stream is used to prevent accumulation in the reactor. Lactide from the second stage of the plant is mixed with an appropriate polymerization catalyst in a CSTR reactor. The lactide polymerizes to polylactide, resulting in a mixture which is passed on to a multistage evaporation where two separator blocks are used to remove the catalyst (simulating the catalyst neutralization) and to obtain the purified PLA. Studies were carried out to analyze the behavior of the molecular weight number (MWN), varying the reagent and catalyst ratios and the operation conditions. With that information it is possible to predict the variable values required to obtain a PLA that can be used in bone engineering applications (Cheng, 2009).
4. Results
Figure 1 shows the simulation flowsheet of the lactic acid production process from sucrose, as well as the purification of the acid using esterification and hydrolysis reactions, respectively, in reactive distillation systems.
Figure 1. Flowsheet of the lactic acid production
The full process of PLA polymerization is depicted in Figure 2. To find the best operating conditions of each reactor, the sensitivity analysis tool of the simulator was utilized.
Figure 2. Flowsheet of the PLA polymerization
Figure 3. PLA molecular weight number as a function of the mass flow variations of reagents and catalyst
5. Conclusions
It was shown that it is possible to produce a high molecular weight PLA from lactic acid obtained through a fermentation process. In order to achieve a high molecular weight polymer it is necessary to produce first a low molecular weight PLA and then the cyclic dimer (lactide), which is polymerized to obtain the desired PLA. In spite of the purification sections being complicated, they are very important in this polymerization, because the concentration of reagents highly affects the conditions needed to achieve better PLA final properties, such as a high molecular weight number. The simulation process developed is a suitable tool to guide the production of tailor-made PLA polymers for biomanufacturing.
References
Cheng, 2009, Polylactic Acid (PLA) Synthesis and Modifications: A Review, Frontiers of Chemistry in China, 4(3)
Jing, 2010, Chemical Structure of Poly(Lactic Acid), in: Poly(Lactic Acid): Synthesis, Structures, Properties, Processing, and Applications, Wiley
Seavey, 2008, Step-Growth Polymerization Process Modeling and Product Design, John Wiley & Sons, Inc.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Robust optimisation methodology for the process synthesis of continuous technologies
Mayank P. Patel,a Nilay Shah,a Robert Ashe b
a CPSE, Dept. Chemical Engineering, Imperial College, London, SW7 2AZ, UK
b AM Technology, The Heath Bus. & Tech. Park, Runcorn, Cheshire, WA7 4QX, UK
Abstract
Small-scale continuous technologies also fall under the process intensification umbrella, which offers new opportunities through energy-intensive processes or devices, especially pertinent in the production of pharmaceutical molecules. The focus of this work is to develop a systematic method for the application of a modular tubular reactor with heat exchange to a number of reaction types. A dynamic optimisation is performed around the entire system with the objective to match the process conditions to the chemistry. A degree of freedom in the internal geometry is exploited as a manipulated variable to control the progression of the reaction. The novel 'green' three-step synthetic route to ibuprofen is highlighted. A crude yield of 90% at a residence time of 33 seconds is found at the optimum. Crucially, this efficient three-step process requires no purification of intermediates, thus saving time and energy as well as reducing the total organic waste emission in the subsequent downstream processing.
Keywords: Process intensification, continuous processing, optimization, modelling
1. Introduction
The synthesis of fine chemicals and pharmaceuticals is generally carried out in batch or semi-batch processes due to the flexibility and versatility of the equipment over the continuous counterpart [Roberge et al., 2005]. However, poor transfer of the heat generated by reactions limits the application of batch equipment to products of low volumes and short life-times. Microstructured reactors are well known in the literature for very efficient cooling and very good control; however, the idea of localised cooling within microstructured devices is not customary due to heat dissipation through a reactor of such scale. On the contrary, a device that bridges the micro- and meso-scale divide would be hugely advantageous in providing localised cooling. This is achievable by considering the heat exchange capability within the system at any given point.
2. Reactor description
The reactor of focus here is a tubular reactor of modular construction, an adaptation of the tube-and-shell heat exchanger design. The process stream flows through a serpentine arrangement of rectangular process channels where reactants mix and react, and exchanges heat with a number of heat transfer zones along the flow path (see Figure 1). The modular construction allows the reactor to match the necessary conditions to the reaction, e.g. by adjusting the height of each section, so that the resources utilised are kept to a minimum for any given process. The concept pursues a well-rooted technique in that the temperature is to be controlled throughout the flow path. To further increase the flexibility of operation, there are separate utility flows (with different temperatures) to cool or heat selected parts of the reactor.
Figure 1. Left: Reactor concept emphasises that the surface area to volume ratio must match the reaction rate curve. Right: A schematic of the reactor, which is shown here to be made up of 8 vertical and 7 horizontal channels and 8 individual utility streams.
3. Mathematical description
The reactor may, from a modelling point of view, be approximated as a 1-D continuous tubular reactor with cooling jackets around the tube. Initial experiments indicate perfect mixing conditions after a few channels, implying that the Arrhenius law can be used for modelling the reaction kinetics. From first principles, partial differential equations (PDEs) of the distributed system are discretised into elements which characterise the states of concentration $C_i$ of species $i$ and the temperatures of the process ($T$) and cooling ($T^c$) streams. The spatial derivatives are approximated with a first-order backward finite difference method (BFDM), giving a system of ordinary differential equations (ODEs):

Material balance:
$\dfrac{\partial C_i}{\partial t} = -v \dfrac{1}{L} \dfrac{\partial C_i}{\partial \tilde{z}} + D \dfrac{1}{L^2} \dfrac{\partial^2 C_i}{\partial \tilde{z}^2} + \sum_{j=1}^{NoReac} \nu_{i,j} r_j$,   $i = 1, \ldots, NoComp$

Energy balances:
$\rho C_P \dfrac{\partial T}{\partial t} = -\rho C_P v \dfrac{1}{L} \dfrac{\partial T}{\partial \tilde{z}} + k \dfrac{1}{L^2} \dfrac{\partial^2 T}{\partial \tilde{z}^2} + Q - Q_{rxn}$

$\rho_c C_P^c \dfrac{\partial T^c}{\partial t} = -\rho_c C_P^c v \dfrac{1}{L} \dfrac{\partial T^c}{\partial \tilde{z}} - Q$

where
$Q = \dfrac{S U (T^c - T)}{H}$;   $Q_{rxn} = \sum_{j=1}^{NoReac} \Delta H_j r_j$

Reaction rate:
$r_j = A_j \exp\!\left( \dfrac{-Ea_j}{R T} \right) \prod_{i=1}^{NoComp} C_i^{a_{i,j}}$,   $j = 1, \ldots, NoReac$
The terms Q and Qrxn refer to the heat removal rate of the reactor element and the heat generation rate from the reaction respectively.
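A minimal method-of-lines sketch of the material balance for a single species with a first-order reaction is shown below; the BFDM is used for the convective term, and all parameter values are illustrative rather than taken from the paper.

```python
import numpy as np

def rhs(C, v, D, L, k1):
    """dC/dt on a uniform grid z~ in [0, 1]: BFDM for convection,
    central differences for dispersion, first-order reaction r = -k1*C."""
    n = len(C)
    dz = 1.0 / (n - 1)
    dCdt = np.zeros(n)
    for i in range(1, n):
        conv = -(v / L) * (C[i] - C[i - 1]) / dz           # backward difference
        right = C[i + 1] if i < n - 1 else C[i]            # zero-gradient outlet
        disp = (D / L**2) * (right - 2.0 * C[i] + C[i - 1]) / dz**2
        dCdt[i] = conv + disp - k1 * C[i]
    return dCdt                                            # dCdt[0] = 0: fixed inlet

# Explicit Euler time marching over 52 elements (the count used in section 4.3)
C = np.zeros(52)
C[0] = 1.0                                                 # inlet concentration
for _ in range(20000):
    C = C + 1e-4 * rhs(C, v=0.05, D=1e-4, L=1.688, k1=0.5)
```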
4. Optimisation formulation
4.1. Objective function
Given a specific reaction, it is a non-trivial task to design the optimal conditions for a process in terms of physical size, operating points and choice of control signals. A suitable performance measure and objective function to accurately capture the heat removal and generation rates along the entire flow path takes the form:
Integral square error:
$\min \; ISE_{tot} = \sum_{m=1}^{NC} \left( \alpha \int_{k-1}^{k} L \left( \hat{Q} - \hat{Q}_{rxn} \right)^2 d\tilde{z} \right)$,   $m = 1, \ldots, NC$ (No. Channels)

where
$\hat{Q} = \dfrac{Q}{\sum_{NC} \sum_{k} Q}$;   $\hat{Q}_{rxn} = \dfrac{Q_{rxn}}{\sum_{NC} \sum_{k} Q_{rxn}}$
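Numerically, on the discretised grid this objective can be evaluated as in the following sketch; the uniform-grid rectangle-rule quadrature and the normalisation by the total absolute duty are simplifying assumptions, and the profiles are hypothetical.

```python
import numpy as np

def ise(Q, Q_rxn, alpha=1.0):
    """ISE between normalised heat removal and generation profiles,
    evaluated with a simple rectangle rule on a uniform axial grid."""
    Q_hat = Q / np.abs(Q).sum()
    Q_rxn_hat = Q_rxn / np.abs(Q_rxn).sum()
    dz = 1.0 / (len(Q) - 1)
    return alpha * float(((Q_hat - Q_rxn_hat) ** 2).sum() * dz)

# Hypothetical profiles over the 52 axial elements used in the paper
z = np.linspace(0.0, 1.0, 52)
Q_rxn = np.exp(-5.0 * z)            # fast initial heat release
Q = np.full(52, Q_rxn.mean())       # uniform heat removal
print(ise(Q, Q_rxn))
```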
This error term is primarily a gauge of system performance against a desired response, in this case minimising the difference between the heat removal rate of the reactor and the rate of heat generation within the axial element k, for all channels NC that make up the serpentine.
4.2. Control variables
A sensitivity analysis determined that the following input variables have the most influence on the system: the process temperature ($T_{in}$) and the utility temperatures for each cooling zone ($T^c_{in,n}$; $n = 1, \ldots, NU$ (No. Utilities)). Variations in the inlet flow rate and the heights of the channels also play a non-trivial role, as they dictate the residence time experienced by the process. However, variations in individual channel heights are of greater consequence than the flow rate when localising the desired conditions. Therefore, the inlet flow rate is fixed and the heights of each individual section of the serpentine ($H_m$; $m = 1, \ldots, NC$) are used as degrees of freedom. These are the focus of the optimisation to minimise the overall value of ISE, thus harmonising the reactor's heat removal rate with the rate of heat release from the reaction.
4.3. Degrees of freedom and initialisation
A mesh independency test indicates the dynamic behaviour may be adequately captured by discretising the system into 52 elements, which is computationally less expensive. Nevertheless, this optimisation formulation is independent of the number of discretisations. The first-principles model contains 1970 equations (model and initial conditions) with 2037 variables (unknown and known). Therefore, we have 67 degrees of freedom. The initialisation procedure assigns values to all the differential states within the system. In addition, we also define initial values for the flow rate ($F_{in}$) and the control variables mentioned above.
4.4. Final process conditions
The process must end up at steady state as it transitions to the desired phase. To ensure this, deviations in product concentration and temperature at the outlet are held close to zero by enforcing inequality end-point constraints, i.e. at transition time $t_f$:
$-\varepsilon \le \left\{ \dfrac{dC(L)}{dt} ; \dfrac{dT(L)}{dt} \right\}_{t=t_f} \le \varepsilon$

The final product quality must also lie within acceptable bounds. Thus, product quality indicators are introduced along the flow path:

$\max C_{product_p} - 5\% \le C_{product_p}(L)\big|_{t=t_f} \le \max C_{product_p} + 5\%$,   $p = 4, 3, 2, 1$

Finally, the reactor must operate around a nominal residence time:

$\min \tau \le \tau\big|_{t=t_f} \le \max \tau$,   where, for each channel:  $\tau = \sum_{m=1}^{NC} \left( \dfrac{L}{v} \right)_m$,   $m = 1, \ldots, NC$
4.5. Operational constraints
Intensive devices operate at the micrometer and millimetre length scale, and inherently are managed in close proximity to personnel within a manufacturing setup. For many applications, path constraints tend to be 'soft' and minor violations can be tolerated. This is not the case for this system, and so the constraint is enforced by defining a violation variable on the process temperature:

$\dfrac{d(T_{violation})}{dt} = \left[ \max\!\left( 0, \sum_{i=1}^{NC} T_{ave} \right) \right]^2$,   where   $T_{ave} = L \int_{k-1}^{k} \left( T^{ref} - T \right)^2 d\tilde{z}$

This path constraint yields an additional end-point inequality constraint:

$-\infty \le T_{violation}\big|_{t=t_f} \le \beta$
In the interest of robustness, the constraint is relaxed with a liberal value assigned to the upper bound β. It may, however, be tuned towards a small positive tolerance, thus enforcing a 'hard' constraint. The control variables are also restricted with lower and upper bounds.
5. Case study: Ibuprofen
The synthesis of the ibuprofen compound is a popular case study in green chemistry. The original Boots synthesis consisted of six steps, which was subsequently reduced to three steps by BHC. To achieve a continuous-flow synthesis, a careful retrosynthetic analysis of ibuprofen needs to be performed, considering the synthesis of ibuprofen as an entity, as opposed to a series of independent reaction steps. Bogdan et al. [2009] have thus developed an efficient three-step process (Friedel-Crafts acylation, 1,2-aryl migration, and hydrolysis or saponification) that requires no purification of intermediates (Figure 2), with a crude yield of 68% after acidic workup.
Figure 2. Continuous synthetic route to ibuprofen [Bogdan et al., 2009].
This chemistry was applied to the modular reactor comprising 21 channels, giving a total length of 1.688 m. The process model is optimised using the gPROMS dynamic optimisation tool [PSE Ltd., 2009]. The CVP_SS mathematical solver was chosen as it implements a control vector parameterisation algorithm based on single-shooting, meaning a single integration of the dynamic model over the entire horizon. The optimisation problem took 1190 s to solve, with a total CPU time of 1166 s. The results highlight a configuration that considers the temperature profile along the reaction axis, and thus operates close to its optimum. The reaction initially takes place within the microdomain until the rate is slowed down by approaching the equilibrium concentration. At this point the intensity is lowered until more favourable conditions are met, i.e. at the millimetre length scale. As the stages of 1,2-aryl migration and saponification commence, each time the internal configuration transitions from the microdomain to the millimetre length scale (see Figure 3). The configuration clearly considers the exponential nature of the second-order reactions, which is mimicked by exploiting the height profile as the flow path progresses. The optimal solution reduces
the spatial randomness and provides inherent control by process design, as seen by the closely matched gradients of the heat curves in the stability analysis. The reaction power supersedes the performance of the reactor nearer the outlet. The intermediates and final product of ibuprofen steadily converge to an eventual 90% within a residence time of 33 s. A total ISE value of 0.046 within this setup is determined. The localised temperature control is harmonised to the level of heat released along the flow path.
Figure 3. Top left: Optimal reactor channel height configuration. Top right: Stability analysis of the normalised heat fluxes. Bottom left: Simulated process temperature. Bottom right: Simulated conversion of intermediates and ibuprofen.
6. Conclusions
A robust optimisation methodology for the design of continuous technologies has been presented: an optimisation formulation that utilises a first-principles model of the process to evaluate the conditions required for maximum conversion. The internal geometry was exploited as an extra degree of freedom to control the progression of the reaction. Overall, the results indicate an important consideration for process synthesis. Initial evaluation of a process requires choosing the right structure for the right chemistry. This chosen structure should then be fully utilised, i.e. the energy dissipated into the bulk media should be regulated equally from inlet to outlet, thus guaranteeing stability through robustness in the design.
References
D. Roberge, L. Ducry, N. Bieler, P. Cretton, B. Zimmermann, 2005, Chemical Engineering & Technology, 28(3), 318-323.
A. Bogdan, S. Poe, D. Kubis, S. Broadwater, D. McQuade, 2009, The Continuous-Flow Synthesis of Ibuprofen, Angew. Chem. Int. Ed., 48, 8547-8550.
Process Systems Enterprise Ltd., 2009, gPROMS Optimisation Guide, Release 3.2.0, Bridge Studios, 107a Hammersmith Bridge Road, London, W6 9DA, UK.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A Shortcut Design for Kaibel Columns Based on Minimum Energy Diagrams
Maryam Ghadrdan,a Ivar J. Halvorsen,b Sigurd Skogestad a
a Norwegian University of Science and Technology, Chemical Engineering Department, 7491 Trondheim, Norway; e-mail: [email protected], [email protected]
b SINTEF ICT, Applied Cybernetics, N-7465 Trondheim, Norway; e-mail: [email protected]
Abstract
In this paper, a shortcut procedure is proposed to design a 4-product dividing-wall column. It is based on information derived from the Vmin diagram. This has the advantage of providing more meaningful guesses for the energy requirements and impurity flows in the column. An example is used for illustration.
Keywords: Shortcut design, Kaibel Column, Minimum Energy Diagrams
1. Introduction
The dividing wall column is a single-shell column, divided into two parts with a prefractionator and a main section with a sidestream product, which is capable of separating mixtures into three high-purity products. Compared to conventional schemes with two columns in sequence, it needs less energy, capital and space. In this paper we study the Kaibel column, which has been modified to have two sidestream products and can separate the feed into four high-purity products using a single shell. In terms of design, there are 12 degrees of freedom for the Kaibel column. These are the numbers of theoretical stages in each of the 6 sections plus the 6 operational DOFs. This is for a given feed rate (e.g. F = 1 mol/s); the column diameter will depend on the chosen feed rate. Some shortcut methods have been proposed for the design of 3-product columns (Triantafyllou and Smith 1992; Sotudeh and Hashemi Shahraki 2007). One approach is to extend the existing methods for conventional columns to dividing wall columns by representing the Petlyuk column by three conventional columns. Another approach is to use more direct insight into the properties of the Petlyuk column and make use of the Vmin diagram. We use this approach. The method consists of the following steps: First the Vmin diagram is sketched. The advantages of using the Vmin diagram in design are discussed in detail in section 3. In section 4, the minimum flowrates in all parts of the column are calculated. Assuming that the actual vapour flow is somewhat higher (around 10%) than the minimum value, the actual flows are calculated. Nmin is calculated based on the Underwood equation, except for the section between the two side streams, for which the Fenske equation is used.
2. Vmin diagrams
Figure 1 shows the Vmin diagram for the Methanol/Ethanol/1-Propanol/1-Butanol system ($c_1 \ldots c_4$), which is the example considered in this paper. The peaks $P_{AB}$, $P_{BC}$ and
$P_{CD}$ represent the minimum energy for sharp product splits of the original mixture in the extended Petlyuk configuration. Each peak is related to one of the common Underwood roots ($\theta_A$, $\theta_B$, $\theta_C$). For a Petlyuk arrangement, the prefractionator performs the "easy" split between components A and D ($P_{AD}$). However, in a Kaibel arrangement the prefractionator performs the more difficult split between components B and C. For the Kaibel column we must compute the new peaks $P'_{AB}$ and $P'_{CD}$ (the detailed procedure on how to get the peaks $P'_{AB}$, $P'_{CD}$ is found in (Halvorsen and Skogestad 2006)). The minimum energy in the Kaibel arrangement is given by the highest of the new peaks (here $P'_{AB}$). It is obvious from this diagram that the Kaibel arrangement always consumes more energy than the full Petlyuk arrangement, since $P'_{AB} > P_{AB}$, $P'_{CD} > P_{CD}$ and, trivially, $P'_{AB} > P_{BC}$ and $P'_{CD} > P_{BC}$.
Figure 1. (a) Vmin diagram for an equimolar feed of the first 4 simple alcohols, α = [6.616, 4.343, 2.256, 1]; (b) Schematic of the column
In the case of unequal peaks in the Petlyuk configuration, there will be an optimality region, which is a line from the preferred split point to the point where the two peaks become equal (Halvorsen 2001). The optimality region will be like a square below the B/C peak (as shown in Figure 1), which is the impurity allowance in the prefractionator. We assume that the recoveries of $c_1$ and $c_4$ in the top of the prefractionator are 1 and 0 respectively ($r_{c_1,T} = 1$, $r_{c_2,T} = \beta_1$, $r_{c_3,T} = \beta_2$, $r_{c_4,T} = 0$). The net
flow rates which enter the main column at the top and bottom are calculated from $\sum_i z_i F \beta_i$ and $\sum_i z_i F (1 - \beta_i)$ respectively. The common Underwood roots in the prefractionator are calculated from equation (1). The solution obeys $\alpha_1 \ge \theta_1 \ge \alpha_2 \ge \theta_2 \ge \ldots \ge \alpha_N$.

$1 - q = \sum_i \dfrac{\alpha_i z_i}{\alpha_i - \theta}$   (1)

$V_{min,p} = \sum_i \dfrac{\alpha_i z_i F}{\alpha_i - \theta} \, \beta_i$   (2)
The vapour flow rate which corresponds to $\theta_2$ is the minimum requirement for the prefractionator, because it characterizes the B/C split.
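Equation (1) can be solved numerically for the common Underwood roots, which lie interlaced between the relative volatilities. The sketch below brackets each root with SciPy's brentq for the equimolar feed of Figure 1; the saturated-liquid feed and the intermediate recoveries used for equation (2) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

alpha = np.array([6.616, 4.343, 2.256, 1.0])   # relative volatilities (Figure 1)
z = np.array([0.25, 0.25, 0.25, 0.25])         # equimolar feed
q = 1.0                                         # saturated-liquid feed assumed

def feed_eq(theta):
    """Residual of eq. (1): 1 - q = sum(alpha_i z_i / (alpha_i - theta))."""
    return np.sum(alpha * z / (alpha - theta)) - (1.0 - q)

eps = 1e-6   # keep the brackets away from the poles at alpha_i
roots = [brentq(feed_eq, alpha[i + 1] + eps, alpha[i] - eps)
         for i in range(len(alpha) - 1)]        # theta_A, theta_B, theta_C

# Prefractionator minimum vapour for the B/C split via eq. (2);
# beta1 and beta2 below are illustrative recoveries only
F, beta = 1.0, np.array([1.0, 0.6, 0.4, 0.0])
theta_B = roots[1]
V_min_p = np.sum(alpha * z * F / (alpha - theta_B) * beta)
print(roots, V_min_p)
```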
3. Select product purities
The selection of product purities is based on economic analysis and customer needs. Note that the minimum vapour flow for the Kaibel column is the same as the maximum of the minimum energies required for any pair of product splits, and the highest peak shows the most difficult split. It is clear that we can think of extra energy in one section and then talk about either increasing the product recovery or designing with a lower number of trays. It has been shown that overfractionating one of the products makes it possible to bypass some of the feed and mix it into the product while retaining the constraints on the products (Alstad, Halvorsen et al. 2004). In addition, the impurities in the products can be guessed from the Vmin diagram. For example, the highest peak in the Vmin diagram determines the component that may appear as impurity in the side stream during optimal operation. So, care should be taken in specifying the product impurities. Figure 2 shows the trends of changes in the side stream impurity ratios as functions of the splits and of the impurities coming from the prefractionator for the example studied in this paper. This confirms the observation about which impurity flows go to the side streams and also helps to put feasible values into the mass balance equations. Writing the total and component mass balances for the whole column to get the minimum allowable flows inside each section gives 8 equations (component balances) and 20 unknowns, which means that 12 variables should be set in order to solve the mass balance equations:

$F z_{c_i} = D x_{c_i,D} + S_1 x_{c_i,S_1} + S_2 x_{c_i,S_2} + B x_{c_i,B}$   and   $\sum_i x_{i,str_j} = 1$

where $x_{m,N}$ means the mole fraction of component $m$ in product $N$. We assume that the composition of a component is nearly zero in the two sections away from the one in which it is the main product, e.g. the compositions of the lightest component in side stream 2 and in the bottom stream. By doing so, and by also specifying the composition of the main component in each product stream, there remain two DOFs to be specified. It has been shown that specifying two composition specifications in one product stream may lead to problems (Wolff and Skogestad 1995). This means that the impurity cannot be chosen as an arbitrary value. Figure 3 shows the contours of the ratios of impurities in the side streams around the optimum as functions of the vapour and liquid split. It can be read from the figures that specifying two ratios arbitrarily may be infeasible. So, one important issue is the set of allowable values which can be fixed for the product impurities so that the mass balance equations lead to a feasible solution.
4. Minimum allowable and actual internal flows
The other internal flow rates for the prefractionator section and the main column are calculated easily from balances around the different junctions. The common roots in the prefractionator section will be the active roots in the main section. The minimum vapour flow rate for each section in the main column can be calculated from equation (2), by simply substituting the proper feed flow, feed composition and recovery values for each section (for example $z_{i,2} = (F/D_1) \beta_i z_{i,F}$, $q_2 = -L_{min,p}/D_1$, $\beta_{i(sec\,2)} = D z_D / (D_1 z_{D_1})$ for the top section of the main column).
Now, we can continue by assuming the actual vapour flow needed for the whole column to be somewhat (we assume 10%) higher than the minimum value and then calculating the actual internal flows.
Figure 2. Objective value and side stream impurities as functions of the impurities of C2 and C3 from the bottom and top of the prefractionator, respectively
Figure 3. Contours of the impurity ratios in side streams as functions of liquid and vapour split
The liquid and vapour splits are defined as the ratios of the streams going to the prefractionator to the amounts coming to the joint: $r_L = L_1/L_2$ and $r_V = V_1/V_3$. The other internal flows on the two sides of the wall are calculated based on the splits. Since the internal flows should be greater than the minimum flows, there are some constraints which should be met; otherwise the equations will not have proper roots related to the relative volatilities:
$r_L > \max\!\left( \dfrac{L_{min,1}}{L_2}, \dfrac{L_{min,1} - qF}{L_2} \right)$;   $r_L < \dfrac{L_2 - L_{min,2}}{L_2}$

$r_V > \max\!\left( \dfrac{V_{1,min}}{V_3}, \dfrac{V_{1,min} - (1-q)F}{V_3} \right)$;   $r_V < \dfrac{V_3 - V_{3,min}}{V_3}$   (3)
Section four is the section between the two side-streams, and it is considered to operate at total reflux, so its number of trays could be calculated directly from the Fenske equation. Since the Fenske equation is based on assuming equal compositions of the liquid and vapour streams at the top and bottom of the prefractionator, which is not the case for a DWC, we instead derive the minimum number of trays from the Underwood equation. A few iterations are done to reach a desired value for the number of trays and the energy requirement. The equation below is used for calculating the number of trays in each section; $x_{i,L}$ is the composition of the stream entering the prefractionator, which is calculated from the pinch point equations (Halvorsen 2001).

$N = \log\!\left[ \left( \dfrac{\sum_i \frac{\alpha_i x_{i,D}}{\alpha_i - \phi_2}}{\sum_i \frac{\alpha_i x_{i,D}}{\alpha_i - \phi_1}} \right) \Big/ \left( \dfrac{\sum_i \frac{\alpha_i x_{i,L}}{\alpha_i - \phi_2}}{\sum_i \frac{\alpha_i x_{i,L}}{\alpha_i - \phi_1}} \right) \right] \Big/ \log\!\left( \dfrac{\phi_2}{\phi_1} \right)$   (4)
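A direct transcription of equation (4) as reconstructed above is sketched below; $\phi_1$ and $\phi_2$ are the two active common Underwood roots of the section, and the compositions and roots supplied must make the argument of the outer logarithm positive.

```python
import numpy as np

def section_trays(alpha, x_D, x_L, phi1, phi2):
    """Stage requirement of a column section from eq. (4); alpha, x_D and
    x_L are the relative volatilities and the end compositions, while
    phi1 and phi2 are the two active common Underwood roots."""
    S = lambda x, phi: np.sum(alpha * x / (alpha - phi))
    ratio = (S(x_D, phi2) / S(x_D, phi1)) / (S(x_L, phi2) / S(x_L, phi1))
    return np.log(ratio) / np.log(phi2 / phi1)
```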
5. Conclusion
Designing complex columns is not as straightforward as designing conventional columns. In this paper we have presented a method for the shortcut design of the Kaibel column based on the Vmin diagram. By plotting the contours of the objective value as a function of the two operational DOFs, we can get more information about the behaviour of the column close to the optimum and carry out the optimal design based on the rigorous model.
References
Alstad, V., I.J. Halvorsen, et al. (2004). "Optimal operation of Petlyuk Distillation Column: Energy Savings by Over-fractionating." Computer Aided Chemical Engineering 18: 547-552.
Halvorsen, I.J. (2001). Minimum Energy Requirements in Complex Distillation Arrangements. PhD thesis, Norwegian University of Science and Technology, Department of Chemical Engineering (available from the home page of S. Skogestad).
Halvorsen, I.J. and S. Skogestad (2006). Minimum Energy for the four-product Kaibel column. AIChE Annual Meeting 2006, San Francisco, 216d.
Sotudeh, N. and B. Hashemi Shahraki (2007). "A Method for the Design of Divided Wall Columns." Chem. Eng. Technol. 30(9): 1-9.
Triantafyllou, C. and R. Smith (1992). "The design and Optimisation of Fully Thermally Coupled Distillation Columns." Trans. Inst. Chem. Eng. 70: 118-132.
Wolff, E.A. and S. Skogestad (1995). "Operation of integrated three-product (Petlyuk) distillation columns." Ind. Eng. Chem. Res. 34: 2094-2103.
VW(XURSHDQ6\PSRVLXPRQ&RPSXWHU$LGHG3URFHVV(QJLQHHULQJ±(6&$3( (13LVWLNRSRXORV0&*HRUJLDGLVDQG$&.RNRVVLV(GLWRUV (OVHYLHU%9$OOULJKWVUHVHUYHG
$VXSHUVWUXFWXUHRSWLPL]DWLRQDSSURDFKIRU RSWLPDOUHILQHU\ZDWHUQHWZRUNV\VWHPVV\QWKHVLV ZLWKPHPEUDQHEDVHGUHJHQHUDWRUV &KHQJ6HRQJ.KRU1LOD\6KDK &HQWUHIRU3URFHVV6\VWHPV(QJLQHHULQJ'HSDUWPHQWRI&KHPLFDO(QJLQHHULQJ ,PSHULDO&ROOHJH/RQGRQ6RXWK.HQVLQJWRQ&DPSXV/RQGRQ6:$=8QLWHG .LQJGRP
$EVWUDFW :DWHULVDNH\HOHPHQWLQWKHRSHUDWLRQRISHWUROHXPUHILQHULHV$VVXFKWKHUHDUHJUHDW LQWHUHVWV WR LQFRUSRUDWH ZDWHU UHXVH UHJHQHUDWLRQ WUHDWPHQW DQG UHF\FOH :5 DSSURDFKHVLQWKHGHVLJQRIUHILQHU\ZDWHUQHWZRUNV\VWHPVZLWKWKHDLPRIPLQLPL]LQJ IUHVKZDWHU FRQVXPSWLRQ DQG ZDVWHZDWHU JHQHUDWLRQ +HQFH WKLV ZRUN FRQFHUQV WKH RSWLPL]DWLRQ RI UHILQHU\ ZDWHU QHWZRUN V\VWHPV V\QWKHVLV FRPSULVLQJ ZDWHUSURGXFLQJ VWUHDPV VRXUFHV ZDWHUXVLQJ XQLWV VLQNV RU GHPDQGV DQG ZDWHUWUHDWPHQW WHFKQRORJLHV UHJHQHUDWRUV :H GHYHORS D VRXUFH±LQWHUFHSWRU±VLQN VXSHUVWUXFWXUH UHSUHVHQWDWLRQ WKDW HPEHGV DV PDQ\ IHDVLEOH DOWHUQDWLYHV DV SRVVLEOH IRU LPSOHPHQWLQJ :5 ZKLOH SUHVHUYLQJ DWWUDFWLYH FRQYH[LW\ SURSHUW\ DQG EHLQJ DPHQDEOH WR WLJKWHU PRGHO IRUPXODWLRQ $ PL[HGLQWHJHU QRQOLQHDU SURJUDP 0,1/3 LV IRUPXODWHG EDVHG RQ WKH VXSHUVWUXFWXUH WR GHWHUPLQH WKH RSWLPDO UHWURILW RI D ZDWHU QHWZRUN VWUXFWXUH LQ WHUPV RI WKH FRQWLQXRXV YDULDEOHV RI WRWDO VWUHDP IORZUDWHV DQG FRQWDPLQDQW FRQFHQWUDWLRQVDQGWKH±YDULDEOHVRIVWUHDPSLSLQJFRQQHFWLRQV7KHVXSHUVWUXFWXUH DQGWKH0,1/3H[SOLFLWO\PRGHOVSDUWLWLRQLQJUHJHQHUDWRUVSDUWLFXODUO\WKHPHPEUDQH EDVHGWUHDWPHQWWHFKQRORJLHVRIXOWUDILOWUDWLRQDQGUHYHUVHRVPRVLVZLWKWKHREMHFWLYHRI PLQLPL]LQJWKHIL[HGFDSLWDOFRVWVRILQVWDOOLQJSLSLQJFRQQHFWLRQVDQGWKHYDULDEOHFRVW RI RSHUDWLQJ DOO VWUHDP FRQQHFWLRQV ZKLOH UHGXFLQJ WKH SROOXWDQWV OHYHO WR ZLWKLQ UHJXODWRU\OLPLWV7KHSURSRVHGPRGHOLQJDSSURDFKLVLPSOHPHQWHGRQDQLQGXVWULDOFDVH VWXG\ XVLQJ WKH *$06%$521 SODWIRUP WR REWDLQ D JOREDOO\ RSWLPDO ZDWHU QHWZRUN WRSRORJ\ .H\ZRUGV2SWLPL]DWLRQ ZDWHUUHXVHUHF\FOHV\QWKHVLVVXSHUVWUXFWXUH PL[HGLQWHJHU QRQOLQHDUSURJUDPPLQJ0,1/3
1. Introduction
In this work, we investigate the application of the mathematical optimization approach of mixed-integer nonlinear programming (MINLP) to the retrofit of an oil refinery water network system. The seminal paper applying an optimization approach to such problems is by Takama et al., which addresses the optimal water allocation in a refinery, while more recent work applying various optimization techniques to tackle such a class of pooling problems can be found in Gounaris et al., Misener et al., Wicaksono and Karimi, Karuppiah and Grossmann, and Meyer and Floudas. We are motivated by two reasons to undertake this work. First, high demand of water in the future may result in a refinery becoming vulnerable to water supply interruptions. Second, the work is in support of sustainable development, as exemplified by its objectives of minimizing freshwater use and wastewater generation in a refinery.
2. Problem Statement and Research Objectives
The aim of this work is to determine an optimal refinery water network systems structure, comprising sets of water-producing streams of process sources with known water flowrates and contaminant concentrations, water-using operations of process sinks with known water requirements and maximum allowable inlet contaminant concentrations, and water-treatment technologies, that meets the criteria of minimum freshwater use and minimum wastewater generation with contaminant concentrations within the allowable operating limits. This is achieved through the formulation and solution of an optimization model based on a superstructure of possibly all feasible alternative configurations of such integrated refinery water network systems with the incorporation of water reuse, regeneration, and recycle (WR) strategies.
3. Superstructure Representation
We develop a superstructure that explicitly models the material balances for membrane-based partitioning regenerators, particularly the treatment technologies of ultrafiltration (UF) and reverse osmosis (RO), as shown in Figure 1. The permeate and reject streams are modeled as imaginary standalone individual regenerators.
4. Optimization Model Formulation
Based on the superstructure, we formulate a mixed-integer nonlinear program (MINLP) that is largely based on the models of Meyer and Floudas and of Gabriel and El-Halwagi, as presented in the following.
Water flow balances for the sources:

F(i) = \sum_{k \in K} F_d(i,k) + \sum_{j \in J} F_a(i,j), \quad \forall i \in I
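These balances map one-to-one onto an algebraic modeling language. The paper solves the model with GAMS/BARON; purely as an illustration (with hypothetical set members and flowrates, not data from the case study), the source balance could be stated in Pyomo as follows:

```python
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.I = pyo.Set(initialize=["desalter", "sour_water"])   # sources (hypothetical)
m.J = pyo.Set(initialize=["cooling", "discharge"])     # sinks (hypothetical)
m.K = pyo.Set(initialize=["UF", "RO_perm", "RO_rej"])  # interceptors (hypothetical)
F_src = {"desalter": 40.0, "sour_water": 25.0}         # assumed flowrates, t/h

m.Fd = pyo.Var(m.I, m.K, domain=pyo.NonNegativeReals)  # source -> interceptor
m.Fa = pyo.Var(m.I, m.J, domain=pyo.NonNegativeReals)  # source -> sink

def source_balance(m, i):
    # F(i) = sum_k Fd(i,k) + sum_j Fa(i,j)
    return sum(m.Fd[i, k] for k in m.K) + sum(m.Fa[i, j] for j in m.J) == F_src[i]
m.src_bal = pyo.Constraint(m.I, rule=source_balance)
```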
Water flow balances for the general (non-membrane-based) interceptors:

\sum_{i \in I} F_d(i,k_G) + \sum_{k_G' \in K_G,\, k_G' \neq k_G} F_{ccG}(k_G',k_G) + \sum_{k_P \in K_P} F_{ccG}(k_P,k_G) + \sum_{k_R \in K_R} F_{ccG}(k_R,k_G)
= \sum_{j \in J} F_{bG}(k_G,j) + \sum_{k_G' \in K_G,\, k_G' \neq k_G} F_{cG}(k_G,k_G') + \sum_{k_P \in K_P} F_{cG}(k_G,k_P) + \sum_{k_R \in K_R} F_{cG}(k_G,k_R), \quad \forall k_G \in K_G
Contaminant concentration balances for the general (non-membrane-based) interceptors:

\big(1 - RR(q,k_G)\big) \Big[ \sum_{i \in I} F_d(i,k_G)\,C_{SO}(q,i) + \sum_{k_G' \neq k_G} F_{ccG}(k_G',k_G)\,C_G(q,k_G') + \sum_{k_P \in K_P} F_{ccG}(k_P,k_G)\,C_P(q,k_P) + \sum_{k_R \in K_R} F_{ccG}(k_R,k_G)\,C_R(q,k_R) \Big]
= C_G(q,k_G) \Big[ \sum_{j \in J} F_{bG}(k_G,j) + \sum_{k_G' \neq k_G} F_{cG}(k_G,k_G') + \sum_{k_P \in K_P} F_{cG}(k_G,k_P) + \sum_{k_R \in K_R} F_{cG}(k_G,k_R) \Big], \quad \forall k_G \in K_G,\; q \in Q

Water flow balances for the permeate stream of a membrane-based interceptor:

\sum_{i \in I} F_d(i,k_P) + \sum_{k_G \in K_G} F_{ccP}(k_G,k_P) + \sum_{k_P' \neq k_P} F_{ccP}(k_P',k_P) + \sum_{k_R \in K_R} F_{ccP}(k_R,k_P)
= \sum_{j \in J} F_{bP}(k_P,j) + \sum_{k_G \in K_G} F_{cP}(k_P,k_G) + \sum_{k_P' \neq k_P} F_{cP}(k_P,k_P') + \sum_{k_R \in K_R} F_{cP}(k_P,k_R), \quad \forall k_P \in K_P
Figure 1. Simplified superstructure representation for the refinery water network synthesis problem
Concentration balances for the permeate stream of a membrane-based interceptor:

\big(1 - RR(q,k_P)\big) \Big[ \sum_{i \in I} F_d(i,k_P)\,C_{SO}(q,i) + \sum_{k_G \in K_G} F_{ccP}(k_G,k_P)\,C_G(q,k_G) + \sum_{k_P' \neq k_P} F_{ccP}(k_P',k_P)\,C_P(q,k_P') + \sum_{k_R \in K_R} F_{ccP}(k_R,k_P)\,C_R(q,k_R) \Big]
= C_P(q,k_P) \Big[ \sum_{j \in J} F_{bP}(k_P,j) + \sum_{k_G \in K_G} F_{cP}(k_P,k_G) + \sum_{k_P' \neq k_P} F_{cP}(k_P,k_P') + \sum_{k_R \in K_R} F_{cP}(k_P,k_R) \Big], \quad \forall k_P \in K_P,\; q \in Q

Split ratio on flow, based on the liquid-phase recovery \alpha, for the permeate stream:

\alpha \Big[ \sum_{i \in I} F_d(i,k_P) + \sum_{k_G} F_{ccP}(k_G,k_P) + \sum_{k_P' \neq k_P} F_{ccP}(k_P',k_P) + \sum_{k_R'} F_{ccP}(k_R',k_P) + \sum_{i \in I} F_d(i,k_R) + \sum_{k_G} F_{ccR}(k_G,k_R) + \sum_{k_P'} F_{ccR}(k_P',k_R) + \sum_{k_R' \neq k_R} F_{ccR}(k_R',k_R) \Big]
= \sum_{j \in J} F_{bP}(k_P,j) + \sum_{k_G} F_{cP}(k_P,k_G) + \sum_{k_P' \neq k_P} F_{cP}(k_P,k_P') + \sum_{k_R'} F_{cP}(k_P,k_R'), \quad \forall (k_P,k_R) \in K_P \times K_R

Water flow balances for the reject stream of a membrane-based interceptor:

\sum_{i \in I} F_d(i,k_R) + \sum_{k_G \in K_G} F_{ccR}(k_G,k_R) + \sum_{k_P \in K_P} F_{ccR}(k_P,k_R) + \sum_{k_R' \neq k_R} F_{ccR}(k_R',k_R)
= \sum_{j \in J} F_{bR}(k_R,j) + \sum_{k_G \in K_G} F_{cR}(k_R,k_G) + \sum_{k_P \in K_P} F_{cR}(k_R,k_P) + \sum_{k_R' \neq k_R} F_{cR}(k_R,k_R'), \quad \forall k_R \in K_R
Concentration balances for the reject stream of a membrane-based interceptor:

\frac{\alpha}{1-\alpha}\, RR(q,k_R) \Big[ \sum_{i \in I} F_d(i,k_R)\,C_{SO}(q,i) + \sum_{k_G \in K_G} F_{ccR}(k_G,k_R)\,C_G(q,k_G) + \sum_{k_P \in K_P} F_{ccR}(k_P,k_R)\,C_P(q,k_P) + \sum_{k_R' \neq k_R} F_{ccR}(k_R',k_R)\,C_R(q,k_R') \Big]
= C_R(q,k_R) \Big[ \sum_{j \in J} F_{bR}(k_R,j) + \sum_{k_G \in K_G} F_{cR}(k_R,k_G) + \sum_{k_P \in K_P} F_{cR}(k_R,k_P) + \sum_{k_R' \neq k_R} F_{cR}(k_R,k_R') \Big], \quad \forall k_R \in K_R,\; q \in Q

Water flow balances for the sinks:

\sum_{i \in I} F_a(i,j) + \sum_{k_G \in K_G} F_{bG}(k_G,j) + \sum_{k_P \in K_P} F_{bP}(k_P,j) + \sum_{k_R \in K_R} F_{bR}(k_R,j) = F(j), \quad \forall j \in J

Contaminant concentration balances for the sinks:

\sum_{i \in I} F_a(i,j)\,C_{SO}(q,i) + \sum_{k_G \in K_G} F_{bG}(k_G,j)\,C_G(q,k_G) + \sum_{k_P \in K_P} F_{bP}(k_P,j)\,C_P(q,k_P) + \sum_{k_R \in K_R} F_{bR}(k_R,j)\,C_R(q,k_R) \le F(j)\,C^{max}(q,j), \quad \forall j \in J,\; q \in Q

Big-M logical constraints; an example is illustrated as follows for F_a:

F_a(i,j) \le F_a^U(i,j)\, y_a(i,j), \qquad F_a(i,j) \ge F_a^L(i,j)\, y_a(i,j)

Forbidden mixing of the permeate and reject streams of an interceptor in a sink, in another interceptor, and from another interceptor:

F_{bP}(k_P,j)\,F_{bR}(k_R,j) = 0, \quad \forall j \in J
F_{bP}(k_P,k')\,F_{bR}(k_R,k') = 0, \quad \forall k' \in K,\; k' \neq k
F_{bP}(k,k_P)\,F_{bR}(k,k_R) = 0, \quad \forall k \in K
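Continuing the illustrative Pyomo sketch from the source balances above (the bounds are again assumptions, not values from the paper), the big-M pair that activates a piping connection reads:

```python
m.ya = pyo.Var(m.I, m.J, domain=pyo.Binary)  # does the pipe i -> j exist?
FaU, FaL = 50.0, 0.5                          # assumed upper/lower flow bounds, t/h

def bigM_upper(m, i, j):
    return m.Fa[i, j] <= FaU * m.ya[i, j]    # zero flow unless the pipe is built
m.bigM_up = pyo.Constraint(m.I, m.J, rule=bigM_upper)

def bigM_lower(m, i, j):
    return m.Fa[i, j] >= FaL * m.ya[i, j]    # enforce a minimum practical flow
m.bigM_lo = pyo.Constraint(m.I, m.J, rule=bigM_lower)
```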
5. Computational Results
We apply the proposed MINLP formulation on an industrial-scale case study of a refinery water network structure problem involving a number of sources, potential treatment technologies, and sinks, with the contaminant oil and grease. The optimization is executed using the global optimization solver GAMS/BARON with specified absolute and relative optimality tolerances. The optimal water network structure computed is shown in Figure 2 and registers a significant reduction in freshwater use.
6. Concluding Remarks
This work proposes a superstructure and an MINLP formulation that explicitly model membrane-based treatment technologies by treating the permeate and reject streams as individual interceptors. The numerical examples demonstrate the capability of the proposed approach to evaluate WR alternatives to determine an optimal refinery water network system with reduced freshwater consumption.
Acknowledgments
The main author is grateful to Dominic Foo for initial discussions on the problem and Ngai
Figure 2. Optimal water network structure for the industrial case study
References
C.A. Meyer and C.A. Floudas, Global Optimization of a Combinatorially Complex Generalized Pooling Problem, AIChE Journal.
C.E. Gounaris, R. Misener and C.A. Floudas, Computational Comparison of Piecewise-Linear Relaxations for Pooling Problems, Industrial & Engineering Chemistry Research.
D.S. Wicaksono and I.A. Karimi, Piecewise MILP Under- and Overestimators for Global Optimization of Bilinear Programs, AIChE Journal.
N. Takama, T. Kuriyama, K. Shiroko and T. Umeda, Optimal Water Allocation in a Petroleum Refinery, Computers & Chemical Engineering.
R. Karuppiah and I.E. Grossmann, Global Optimization of Multiscenario Mixed Integer Nonlinear Programming Models Arising in the Synthesis of Integrated Water Networks under Uncertainty, Computers & Chemical Engineering.
R. Misener, C.E. Gounaris and C.A. Floudas, Advances in Global Optimization for Standard, Generalized, and Extended Pooling Problems with the (EPA) Complex Emissions Model Constraints, in Design for Energy and the Environment: Proceedings of FOCAPD.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
New generalised double-column system for batch heteroazeotropic distillation
Ferenc Denesa,b, Peter Langa, Xavier Jouliab
a BME Dept. of Process Engineering, Muegyetem rkp. 3-5, H-1521 Budapest, Hungary
b University of Toulouse, INP-ENSIACET-LGC, 4, allée Emile Monso, 31432 Toulouse Cedex 4, France
Abstract
We study the separation of the minimum homoazeotrope isopropanol - water by applying cyclohexane as entrainer in a new Generalised Double-Column batch heteroazeotropic distillation System (GDCS). First its feasibility is studied by a simplified method; then a sensitivity analysis is performed by rigorous simulation. The minimum duration is determined for the GDCS and for the original DCS, and their performances are compared. Due to its higher number of degrees of freedom, the GDCS provides a more flexible operation than the original DCS. In each case we obtained shorter process durations and lower specific energy demands with the GDCS than with the original DCS.
Keywords: heteroazeotrope, batch distillation, closed system
1. Introduction
In the pharmaceutical and fine chemical industries, Batch Heteroazeotropic Distillation (BHD) is widely applied. So far BHD has been performed in industry only in Batch Rectifiers (BR) equipped with a decanter (in open mode, with continuous distillate withdrawal). The BR was investigated with variable decanter holdup by Rodriguez-Donis et al. (2002), and with continuous entrainer feeding by Modla et al. (2003) and Rodriguez-Donis et al. (2003), respectively. BHD in BRs and also in multivessel columns was extensively studied by Skouras et al. (2005). For BHD we suggested a new Double-Column System (DCS, Fig. 1, Denes et al., 2009), which was experimentally verified for the mixture water - 1-butanol (Denes et al., 2010). In the DCS the two products are accumulated in the reboilers (closed operation mode). The goals of our paper are:
- to extend the DCS and suggest a new Generalised DCS (GDCS),
- to study the feasibility of the separation of a homoazeotropic mixture in the GDCS,
- to study the effect of the new operational parameters on the duration of the process,
- to compare the performance of the new GDCS with that of the original DCS by rigorous simulation.
For this study we chose the mixture isopropanol (IPA, A) - water (B) + cyclohexane as entrainer (E). For the rigorous simulation the dynamic simulator of ChemCAD (CCDColumn) is used.
Fig. 1. Scheme of the original DCS
Fig. 2. Scheme of the GDCS
2. Feasibility study
2.1. Description of the new configuration
We extended the original DCS. In the new generalised configuration (Fig. 2):
- the distillate of Column β can be fed onto any (fα) plate (or into the decanter) of Column α (not only into the decanter as in the DCS),
- the aqueous phase of the decanter can be fed onto any (fβ) plate of Column β (not only onto the top of the column),
- Column β can be operated with homogeneous reflux (reflux ratio: Rβ).
By the GDCS the decanter is fed only by the ternary heteroazeotrope and not by its mixture with the binary A-B homoazeotrope. Hence a sharper liquid-liquid separation can be reached (longer tie line). The condensate of Column β, whose A-content is higher than that of the B-rich phase coming from the decanter, is partially refluxed. Consequently the A-content of the top vapour β can be higher than by the original DCS.
2.2. Feasibility calculation
Method: We apply a simplified model for the description of the distillation of the mixture A-B: the integral and differential total and partial material balances are solved.
Input data: molar quantity of the charge: Uch = 100 kmol, composition of the charge [A, B, E]: x̄ch = [0.662, 0.338, 0] = x̄BAZ, prescribed purities: xA/prodA = xB/prodB = 0.99, total vapour flow rate: V = 20 kmol/h, division of the charge: uα = 0.6, reflux ratios: Rα = 3.79 (determined by the liquid-liquid split), Rβ = 1/2.
Results: quantity of Product A: Uαe = 66.53 kmol, division of the vapour flow rate: vα = Vα/V = 0.740, duration: t = 1052 min. The separation proved to be feasible.
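The reported quantity of Product A already follows from the overall and component-A balances alone. A minimal numerical check (assuming, as in the input data, that each product contains only A and B at the prescribed 0.99 purity):

```python
import numpy as np

# U_A + U_B = U_ch   and   0.99*U_A + 0.01*U_B = U_ch * x_ch,A
U_ch, x_ch_A = 100.0, 0.662
A = np.array([[1.0, 1.0], [0.99, 0.01]])
b = np.array([U_ch, U_ch * x_ch_A])
U_A, U_B = np.linalg.solve(A, b)
print(round(U_A, 2))  # 66.53 kmol of Product A, matching the text
```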
3. Rigorous simulation
The study of the influence of the new operational parameters and the comparison of the configurations are performed by rigorous simulation. The simplifying assumptions are: theoretical trays, constant molar liquid holdup on the trays, constant volumetric liquid holdup in the decanter, negligible vapour holdup.
3.1. Influence of the operational parameters
We studied the effect of the supplementary operational parameters (fα, fβ, Rβ) on the duration of the process.
3.1.1. Input data
Fixed parameters: molar quantity of the charge: Uch = 100 kmol, composition of the charge in mole fraction: x̄ch = x̄BAZ = [0.662, 0.338, 0], division of the charge: uα = 0.6, initial molar quantity of E in Reboiler α: Uαb,E = 0.5 kmol, prescribed purity of the products: xA/prodA = xB/prodB = 0.99, number of plates: Nα = Nβ = 8, plate holdups: UαHU = UβHU = 0.5 kmol/plate, decanter holdup: UdecHU = 0.106 m3, total heat duty of the reboilers: Q = 200 kW, division of the heat duty: qα = 0.792. Basic values of the parameters varied: feeding location in Column α: fα = 3, feeding location in Column β: fβ = 6, reflux ratio of Column β: Rβ = 1/2.
3.1.2. Results
Feed plate location in Column α (Fig. 3): This parameter fα has influence mainly on the duration of the production of A (tα). We get the shortest tα when the distillate of Column β is fed into the decanter. When fα = 1 (top plate), the composition of the top vapour of Column α is outside the heterogeneous region; therefore there is no liquid-liquid split in the decanter, which makes the separation infeasible. From the 2nd plate the separation is feasible again, but a further increase of fα results in an increase of tα. If fα > 4, the prescribed purity cannot be reached. If the distillate of Column β is fed into the decanter, the duration of the production of B (tβ) is much higher than in the cases when it is fed into Column α, because the distillate of Column β significantly changes the liquid composition in the decanter and the difference between the compositions of the two liquid phases is smaller (the tie line is shorter). The increase of fα results in a slight decrease of tβ. We can state that the distillate of Column β must be fed onto one of the upper plates.
Fig. 3. Effect of fα on tα and tβ (t [min] vs. fα, from the decanter to the bottom plate; curves: time for A of 99 mol%, time for B of 99 mol%)
Fig. 4. Effect of fβ on tα and tβ (t [min] vs. fβ, from the top to the bottom plate; curves: time for A of 99 mol%, time for B of 99 mol%)
Feed plate location in Column β (Fig. 4): This parameter fβ has influence mainly on tβ. The increase of fβ results in a decrease of tβ and a slight increase of tα. We can state that the aqueous phase of the decanter must be fed onto one of the lower plates of Column β.
Reflux ratio of Column β (Fig. 5): This parameter has a strong influence on the duration of the production of both products, especially on tβ. The increase of Rβ results in an increase of tβ and a decrease of tα. The purification of B needs less energy; therefore the heat duty of Column β can be much lower than that of Column α. Hence the flow rate of top vapour β is much lower than that of top vapour α, which results in a weaker effect of the flow rate of Distillate β (and of Rβ) on tα. We can state that a low reflux ratio (Rβ < 1) must be applied.
Fig. 5. Effect of Rβ on tα and tβ (t [min] vs. Rβ in the range 0-2.5; curves: time for A of 99 mol%, time for B of 99 mol%)
We performed the above study also for two other charge compositions (20 and 40 mol% of A) and obtained similar results.
3.2. Comparison of the configurations
3.2.1. Method of the study
We compare the GDCS with the original DCS having the same fixed parameters (Uch, x̄ch, xA/prodA, xB/prodB, Nα, Nβ, UαHU, UβHU, Q) as in the study of the influence of the new operational parameters, except for uα, qα and Uαb,E, which are variable like fα, fβ and Rβ. Before the comparison, the optimum values of the variable parameters are determined by the downhill simplex method in each case (a sketch of such a search is given after Table 1). The objective function is the minimum duration.

xch,A         Config.   fα         fβ   Rβ     uα [%]   Uαb,E [kmol]   qα [%]
0.2           GDCS      3          8    0.05   5        0.5            70.7
0.2           DCS       decanter   1    0      5        0.5            67.7
0.4           GDCS      3          8    0.16   7        0.5            66.8
0.4           DCS       decanter   1    0      30       0.5            72.0
0.662 (BAZ)   GDCS      2          8    0.36   10       0.5            60.4
0.662 (BAZ)   DCS       decanter   1    0      18       0.5            54.4

Table 1. Optimum values of the variable operational parameters
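A sketch of such a downhill simplex (Nelder-Mead) search is given below. The rigorous dynamic simulation is replaced by a smooth dummy surrogate so that the snippet runs, and the continuous treatment of the discrete feed-plate locations is our simplification (in practice they would be rounded):

```python
from scipy.optimize import minimize

def duration(p):
    """Stand-in for the rigorous dynamic simulation: returns a batch
    duration [min] for p = (f_alpha, f_beta, R_beta, u_alpha, q_alpha).
    Dummy quadratic surrogate, for illustration only."""
    fa, fb, Rb, ua, qa = p
    return (1500 + 50 * (fa - 2) ** 2 + 30 * (fb - 8) ** 2
            + 400 * (Rb - 0.3) ** 2 + 800 * (ua - 0.10) ** 2
            + 600 * (qa - 0.60) ** 2)

x0 = [3, 6, 0.5, 0.6, 0.792]  # basic parameter values of Section 3.1.1
res = minimize(duration, x0, method="Nelder-Mead")
print(res.x, res.fun)
```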
3.2.2. Results
The optimum values of the operational parameters for both configurations are presented in Table 1. In the cases of the GDCS the feed plate location of Column α is always one of the upper plates, but not the first. The feed plate location of Column β is always the lowest plate, and Rβ is always low. In the cases studied the duration of the GDCS is always shorter (Fig. 6) and the specific energy demands of the products are also lower (Fig. 7) than by the original DCS.
Fig. 6. Minimum duration for each configuration (t [min] vs. xch,A for the GDCS and the DCS)
Fig. 7. Specific energy demand for each product and configuration (Q/Uprod [kJ/mol] vs. xch,A for products A and B, GDCS and DCS)
4. Conclusion
We studied the separation of isopropanol - water by batch heteroazeotropic distillation, applying cyclohexane as entrainer in a new Generalised Double-Column System (GDCS). First its feasibility was studied by a simplified method; then a sensitivity analysis was performed by rigorous simulation. Finally, the minimum process duration was determined by applying the downhill simplex optimisation method for the GDCS and for the original DCS, and their performances were compared. With the GDCS we obtained shorter durations and lower specific energy demands in each case.
Acknowledgement
This project is supported by the Hungarian Research Funds (OTKA; project number: K82070), by the New Hungary Development Plan (project number: KMOP-1.1.1-07/1-2008-0031) and by the Embassy of France in Hungary.
References
F. Denes, P. Lang, G. Modla, X. Joulia, 2009, Computers and Chemical Engineering, 33, 1631.
F. Denes, P. Lang, X. Joulia, 2010, Distillation & Absorption 2010, 289-294.
G. Modla, P. Lang, B. Kotai and K. Molnar, 2003, AIChE Journal, 49, 2533.
I. Rodriguez-Donis, V. Gerbaud and X. Joulia, 2002, AIChE Journal, 48, 1168-1178.
I. Rodriguez-Donis, J. A. Equijarosa, V. Gerbaud and X. Joulia, 2003, AIChE Journal, 49, 3074.
S. Skouras, V. Kiva and S. Skogestad, 2005, Chemical Engineering Science, 60, 2895-2909.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21
E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors)
© 2011 Elsevier B.V. All rights reserved.
Design of an Optimal Biorefinery
Mehboob Nawaza, Edwin Zondervanb, John Woodleya and Rafiqul Gania
a Department of Chemical and Biological Engineering, Technical University of Denmark, Lyngby, 2800, Denmark
b Department of Chemistry and Chemical Engineering, Eindhoven University of Technology, P.O. Box 513, 5600 MB, Eindhoven, the Netherlands
Abstract
In this paper we propose a biorefinery optimization model that can be used to find the optimal processing route for the production of ethanol, butanol, succinic acid and blends of these chemicals with fossil-fuel-based gasoline. The approach unites transshipment models with a superstructure, resulting in a Mixed-Integer Non-Linear Program (MINLP). We consider a specific problem based on a network of 72 processing steps (including different pretreatment steps, hydrolysis, fermentation, different separations and fuel blending steps) that can be used to process two different types of feedstock. Numerical results are presented for four different optimization objectives (maximize yield, minimize costs, minimize waste and minimize fixed cost), while evaluating different cases (single product and multi-product).
Keywords: biorefinery, optimization, Mixed Integer Non-Linear Programming
1. Introduction
Our current energy-related industry is strongly dependent on fossil feedstocks. The future availability of these fossil feedstocks and increased environmental concerns force us to think about other alternatives. A significant amount of work has been done to find these alternatives, and the identification of the best processing techniques incorporating them has now become a challenging issue. A sustainable alternative can be the production of fuels and chemicals from renewable biomass, especially lignocellulosic biomass, as it has desirable environmental and price characteristics. This concept of production of chemicals and fuels from biomass is called a biorefinery. In the last ten years the biorefinery concept has captured the attention of many researchers, and several methods have been reported for the production of fuels and chemicals from biomass. The time is now ripe to grow the industry from its present base to commercialization scale. A possible way forward is systematic biorefinery plant design. Biomass is available in many different forms, and many processing paths may be applied to produce a plethora of end products. However, a systematic way to evaluate the different routes is currently not at hand. A method that can be used to generate and evaluate the large number of possible processing routes and to identify new production routes is therefore very useful. In this contribution a biorefinery model is developed that can be used to compute the optimal production routes for ethanol, butanol, succinic acid and their blends with gasoline, where the objective function can be maximization of the production of these chemicals, minimization of the waste produced, and minimization of the cost. In our model we formulate
the different processing steps as intervals, and each interval is divided into further stages that contain different operations like separation, reaction and mixing.
2. Proposed Model for the Biorefinery Problem
Given a network of different processing steps (reaction steps, mixing steps as well as separation steps), the biorefinery optimization problem is concerned with deciding which route to take through the network and determining which flows enter and leave the intervals, while maximizing the yield of the desired component and/or minimizing the costs, which can be expressed as costs for the feedstock, the chemicals and the waste. In the following, the biorefinery optimization model is formulated as an MINLP (Mixed-Integer Nonlinear Problem). The objective function may be given as:

Z = \sum_{n} w_n\, f_n\big(y_{kk}, \ddot{f}_i, \sigma\big)    (1)
Where y is a decision variable concerned with the connection between two intervals, \ddot{f} is the flow of the desired component i leaving the plant, and σ are the known parameters of the system. wn are the weights of importance given to each of the objectives, which can be 0 or 1. The following constraints apply. Logical constraints determine which intervals may or may not be connected at each stage (pretreatment, hydrolysis, etc.):

\sum_{kk} y_{kk} \le 1    (2)
The process models consist of the mixing, separation and reaction operations. The mixing operation is given as:

f_{i,kk} = \sum_{kk'} F_{i,kk'}\,\alpha_{i,kk}\,R_{i,kk}    (3)
Where f_{i,kk} represents the flow of the stream after mixing, F_{i,kk} is the flow entering the interval, \alpha_{i,kk} is the fraction of the chemical consumed and R_{i,kk} is the amount of the chemical added with respect to the entering stream. The reaction operation is given as:
\bar{f}_{i,kk} = f_{i,kk}\left(1 + \sum_{rr} \gamma_{i,rr,kk}\,\theta_{i,rr,kk}\,\frac{MW_i}{MW_{rr}}\right)    (4)
Where \bar{f} is the flow rate after the reaction, f is the flow rate of the reactant, γ represents the stoichiometry, θ is the fraction of converted reactant, and MW contains the molecular weights of the components. The separation operation is given as:
\ddot{f}_{i,kk} = \bar{f}_{i,kk}\,(1 - SW_{i,kk})    (5)
SW contains the produced quantities of waste and also represents the separation factor, and \ddot{f} is the flow rate after the separation. Finally, there are so-called structural constraints that set the allowed flow rates for the selected processing routes:

g_1\big(y_{kk}, \ddot{f}_i, \sigma\big) = 0    (6)

g_2\big(y_{kk}, \ddot{f}_i, \sigma\big) \le 0    (7)
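To make the chaining of Eqs. (3)-(5) within one interval concrete, a small sketch follows; the grouping of the reaction term reflects our reconstruction of Eq. (4) above, so it is indicative only:

```python
def reaction(f, gamma_theta_mw):
    """Reaction step in the spirit of Eq. (4): gamma_theta_mw is the
    pre-assembled sum of gamma*theta*(MW_i/MW_rr) terms of the interval."""
    return f * (1.0 + gamma_theta_mw)

def separation(f_bar, sw):
    """Separation step, Eq. (5): remove the waste/split fraction SW."""
    return f_bar * (1.0 - sw)

# One interval applied to an incoming flow of 100 units (illustrative numbers):
f_out = separation(reaction(100.0, -0.45), sw=0.10)  # -> 49.5
```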
3. Case Study
Ethanol, succinic acid, butanol, ethanol-gasoline blends and butanol-gasoline blends are selected for the current case study. Figure 1 shows the superstructure of the
biorefinery model, where 'X' denotes the number of alternatives and Yi,j the connections between the different alternatives.
Figure 1: Superstructures approach for Biorefinery model
3.1. Production of Ethanol
The production of bioethanol has increased all over the world in the last decade through the expansion of existing plants and the construction of new facilities. The economic competitiveness of bioethanol as a liquid fuel strongly depends on the energy resources used during its production. This implies the need for the generation of sustainable alternatives to determine the optimal values of the conditions of operation and design. Bioethanol can be produced from lignocellulosic biomass, as documented by NREL (Wooley et al., 1999). Zymomonas mobilis can be used as the bio-catalyst for the fermentation process for ethanol production. After fermentation, several separation steps are needed for the purification of the ethanol.
3.2. Production of Succinic Acid
Succinic acid is an important chemical which is used in various industries. Currently it is produced from crude-oil-based feedstock, but it can be produced by microbial conversion of lignocellulosic biomass. The most promising method reported is the production of succinic acid with Escherichia coli AFP184 (Berglund et al., 2007), as it is able to utilize both xylose and glucose. Numerous separation techniques may be applied for the purification of succinic acid, and most of them are related to three separation processes: electro-dialysis, reactive extraction and filtration.
3.3. Production of Butanol
Like ethanol, butanol may be used as a biofuel, especially in vehicles whose engines require no modification. Butanol can be produced from fermentation of biomass using Clostridium beijerinckii (Qureshi et al., 2007). As butanol, acetone and ethanol are produced together, purification methods are needed to separate them. Several types of separation techniques may be applied; for our case study, we include gas stripping, adsorption, extraction, membrane separation and membrane extraction. All these methods separate acetone-butanol-ethanol from the other chemicals in the fermentation broth.
We have investigated 16 scenarios, divided into four classes. For each class we have investigated different objective functions and also identified the optimal flow sheets. The first class concerns the production of ethanol; the second concerns the production of ethanol and succinic acid; the third concerns the production of ethanol, butanol and
succinic acid; and the fourth concerns the production of ethanol, succinic acid, butanol and the blends of these chemicals, except succinic acid, with gasoline. We have optimized the flow sheets for each scenario by optimizing different objectives: scenario 1 (maximization of product yields); scenario 2 (scenario 1 plus minimization of the costs of chemicals used in the processing routes); scenario 3 (scenario 2 plus minimization of wastes); scenario 4 (scenario 3 plus fixed costs of equipment used in the processing routes). Class four contains four further cases: the E5 (5% ethanol) blend case, the E10 (10% ethanol) blend case, the B5 (5% butanol) blend case and the B10 (10% butanol) blend case. For this class it is considered that amounts of ethanol and butanol are blended with gasoline according to the heating-value demand, while the remaining amounts are treated as separate products. In this paper only one scenario is presented, that is, class 4-scenario 4. Detailed results for all other scenarios can be found in (Zondervan et al., 2010). Equation (8) is used as the objective function for class 4-scenario 4:

\max Z = w_1\,\ddot{f} - w_2\,\mathrm{Cost} - w_3\,\mathrm{Waste} - w_4\,\mathrm{FixedCost}    (8)
Where Cost is the cost of the chemicals used, and w1, w2, w3 and w4 are weights having values equal to 0 or 1.
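The 0/1 weights simply switch objective terms on and off across the scenarios defined above; for example (a sketch, not the authors' implementation):

```python
def scenario_objective(f_prod, cost, waste, fixed_cost, w=(1, 1, 1, 1)):
    """Eq. (8): scenario 1 uses w=(1,0,0,0), scenario 2 w=(1,1,0,0),
    scenario 3 w=(1,1,1,0) and scenario 4 w=(1,1,1,1)."""
    w1, w2, w3, w4 = w
    return w1 * f_prod - w2 * cost - w3 * waste - w4 * fixed_cost
```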
4. Results and Discussion
For the case of class 4-scenario 4, the model is formulated as an MINLP containing 452,179 constraints, 438,693 continuous variables and 68 binary decisions. The core model is linear, and we have reformulated all nonlinear multiplications between binary and continuous variables into linear constraints. The nonlinearity of the model originates in the nonlinear equations for the fixed costs and the demand-consideration ratios. Details of the model can be found in (Zondervan et al., 2010). The model was formulated in GAMS and solved with DICOPT/MINOS using a 2.66 GHz Intel Core 2 Duo processor with 3 GB RAM. Figure 2 shows the optimized flow sheet for the class 4-scenario 4-B10 blend case.
Figure 2: Optimized flow sheet for Ethanol, Butanol, Succinic Acid and B10 Blend
The sequence of the processing steps followed and the amounts of the chemicals produced are shown in Figure 2. The sugars flow after the hydrolysis is divided according to the chemical demands, which are for this case 15.7% for butanol production, 8.4% for succinic acid production and 75.9% for ethanol production. The fixed cost of the equipment is 90,605 $/100 kg of biomass, the amount of waste water produced is 267 kg, and the cost of the chemicals used is 6.761 $/100 kg of biomass. The amount of gasoline saved for the B10 case is 0.059 kg/kg of blend, e.g. if 1 kg of B10 blend is used, 0.059 kg of
gasoline can be saved. For all these cases the profit is also calculated, taken as the difference between the price of all chemicals produced and the cost of the chemicals used for the process. The profit for the B10 blend case is 35.322 $. The calculation of the profit also includes the amount of gasoline that is saved, that is, additional fuel is now being sold.
Case   Ethanol (kg)   Succinic Acid (kg)   Butanol (kg)   Blend (kg)   Cost ($)   Gasoline Saving (kg/kg of Blend)   Profit ($)
E5     19.694         1.625                2.266          20.176       6.761      0.015                              35.253
E10    19.181         1.625                2.266          20.366       6.761      0.0306                             35.278
B5     20.171         1.625                1.551          20.168       6.761      0.0292                             35.269
B10    20.171         1.625                0.721          20.343       6.761      0.059                              35.322

Table 1: Numerical results for ethanol-gasoline and butanol-gasoline blend production for class 4-scenario 4
Table 1 gives a summary of the numerical results for class 4-scenario 4. The table shows results corresponding to different cases involving different gasoline blends (E5, E10, B5 and B10) and the corresponding gasoline saved in each case. In addition, the corresponding (optimal) ethanol, succinic acid and butanol productions are also given, together with the costs of the chemicals and the profit. All the amounts listed are on the basis of 100 kg of biomass material. Note that the highest profit is obtained when the B10 blend is produced. This model converged to optimality in 86 minutes and needed 34,186 iterations.
5. Conclusion
We have developed an MINLP model for the computation of optimal processing routes in a biorefinery. Numerical results have been reported for a biorefinery network consisting of 2 types of feedstock, 8 final products and 72 processing steps, including reaction, separation, mixing and fuel blending. Another unique aspect of this model is that it integrates the production of biofuels with the classical production of fuels from crude oil. Different optimization objectives were formulated and tested. As for the solver, DICOPT performs adequately and the results converge to optimality in less than two hours; the relaxation (integrality) gap found for the different cases is small (between 0.02% and 3.91%) and suggests convergence to global optimality.
References
R. Wooley, M. Ruth, J. Sheehan, K. Ibsen, H. Majdeski, A. Galvez, (1999), Lignocellulosic biomass to ethanol process design and economics utilizing co-current dilute acid prehydrolysis and enzymatic hydrolysis - current and futuristic scenarios, National Renewable Energy Laboratory (NREL)/TP-580-26157, Golden, Colorado, USA.
K. Berglund, C. Anderson and U. Rova, (2007), Process for production of succinic acid, Patent Number WO/2007/046767.
N. Qureshi, B.C. Saha, M.A. Cotta, (2007), Butanol production from wheat straw hydrolysate using Clostridium beijerinckii, Bioprocess and Biosystems Engineering, 30(6), 419.
E. Zondervan, M. Nawaz, A.B. de Haan, J. Woodley and R. Gani, (2010), Optimal design of a multi-product biorefinery system, submitted to Computers and Chemical Engineering.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A Novel Design Concept for the Oxidative Coupling of Methane Using Hybrid Reactors
Stanislav Jašoa, Harvey Arellano-Garciaa, Günter Woznya
Chair of Process Dynamics and Operation, Berlin Institute of Technology, Str. des 17. Juni 135, Sekr. KWT-9, D-10618 Berlin, Germany
Abstract
In this work, the relevance of hydrodynamics, reactor geometry, and feeding policy for the performance of a hybrid fluidized-bed membrane reactor for the Oxidative Coupling of Methane (OCM) is shown. For this purpose, several case studies are simulated in full 3D geometry under the same reaction conditions, but with different reactor geometries and feeding policies. These studies show the significance of the hydrodynamic parameters for the reactor performance and, moreover, how the reactor performance can be improved by a careful study of the coupled momentum - mass transport - reaction phenomena. Furthermore, it can be demonstrated that a suitable distributed feeding policy for oxygen provides a significant improvement in yield compared to traditional reactor concepts.
Keywords: Oxidative Coupling of Methane, Membrane reactors, Fluidized bed reactors, Natural gas processing
1. Introduction
Oxidative coupling of methane (OCM) is suggested to be a promising process for the conversion of abundant natural gas into useful chemicals. However, this reaction faces many drawbacks such as low yields of higher hydrocarbons, fast catalyst deactivation, and huge heat effects of the reaction (Mleczko et al., 1995). Only fluidized bed reactors have been able to remove the huge heat of reaction and provide isothermal operation (Dautzenberg et al., 1992). Membrane reactors are known to provide an increase in selectivity, but their technical drawbacks, like high pressure drop, hot spot formation, and the requirement to work with diluted feeds, limit their applicability (Jašo et al., 2010a,b). A hybrid fluidized-bed membrane reactor represents an interesting alternative because of its potential to control the heat and to recirculate the catalyst (reactor-regenerator system for deactivating catalysts) like a fluidized bed reactor and, on the other hand, to provide higher yields of the desired products by fine oxygen distribution as in a membrane reactor. Design approaches for fluidized bed reactors are still based on models developed during the 70's and 80's, which cannot take into account various hydrodynamic effects on the reactor performance. Thus, a reactor designer usually has to rely on extensive experiments in order to improve the classical fluidized bed reactor design (Levenspiel, 1992). This design approach is useful and very successful for traditional fluidized bed reactor design; however, a successful design of more complex reactor geometries including redistributors, baffles and internals, and membranes is highly questionable. Computational fluid dynamics has been used previously for designing different types of equipment in the process industry. In the last 15 years, there has been a significant breakthrough in modeling fluidized beds using CFD tools (Gidaspow, 1994). Although modeling the hydrodynamics of Geldart B and C particles has been reported to be
successful, it still is challenging to simulate and design a whole reactor using CFD because of the computational demand. In this work, we report a successful implementation of a detailed CFD model for a fluidized bed and a fluidized bed membrane reactor for the OCM using the most appropriate kinetic model available in the literature. This study shows the capabilities of CFD for modeling coupled hydrodynamic - mass transfer - reaction phenomena in multiphase systems. Moreover, the main focus of this work is to prove the applicability of a novel reactor concept to be used for the OCM. The fluidized bed membrane reactor has been proposed for the oxidative dehydrogenation of ethane (Ahchieva et al., 2005) and for butane oxidation (Marin et al., 2010), but so far never for the relevant OCM reaction. According to the simulation results, it potentially represents the best alternative among the many reactor concepts proposed so far, considering reaction heat management, high yields, and catalyst recirculation (Jašo et al., 2010c).
2. Simulation
The simulation of the fluidized bed and the fluidized bed membrane reactor has been performed using the kinetic theory of granular flow developed by the group of Prof. Gidaspow. This model has been shown to predict reasonably the behavior of fluidized beds of Geldart A/B, B, and C particles. The commercial CFD tool Ansys Fluent 12® has been used for the simulations. Prior to the simulation, it was necessary to design a mesh of considerable size in order to resolve the complex geometries. In most cases, hexahedral cells have been used for mesh generation; however, where a hexahedral mesh was impossible to develop (in the case of horizontal membranes), a polyhedral mesh was used. In all investigated cases, a mesh size of 200,000-500,000 cells was used for the simulation. Each simulation was performed on a 32-core computing cluster; a simulation of approximately 3 seconds of real time requires 5-10 days of calculation. Based on the applied flows and geometry, a pseudo steady state was observed already after approximately 0.5 s of real time (Fig. 1). The kinetic model used for the simulation is the formal kinetic model suggested by Stansch (1997). This kinetic model consists of 10 reaction steps, of which 3 are primary and 7 are consecutive. The model includes gas-phase ethane dehydrogenation as well as ethylene steam reforming and the water gas shift reaction, which makes it the most comprehensive model available in the literature. The use of microkinetic models for the OCM in CFD is limited considering their complexity and the number of species included in the model (>50), which increases the computational time considerably.
Fig. 1. Transient behavior of the fluidized bed membrane reactor: isosurfaces of the volume fraction of 0.2 (bubble boundaries) at t = 0.05, 0.12, 0.22, 0.34, 0.52 and 0.75 s.
3. Results and Discussion
A fluidized bed reactor with an ideal gas distributor was simulated as the base case study. The methane-to-oxygen ratio was kept at 2.5:1, as most of the catalysts operate near their optimum at this ratio. A significant feed dilution of 86% was applied in order to avoid the hot spot formation seen in the simulation and to reduce the volume expansion of the feed mixture due to the reaction. The temperature was kept at 750°C, which is a characteristic temperature for the OCM reaction. All investigated cases were simulated keeping the total flow and the component flows constant. In the case of the fluidized bed membrane reactors, 2/3 of the total flow was fed through the reactor bottom, while 1/3 was fed through the membrane. The fluidized bed reactor as the base case shows a typical performance. Thus, a yield of 12.7% was obtained, which is in good agreement with experimental findings: Mleczko et al. (1996) obtained a 12.6% yield with very similar reactant conversion and selectivity at the same reaction conditions. Strong concentration gradients are observed near the distributor plate, due to high concentrations of both methane and oxygen and, therefore, high reaction rates. However, there is a thin zone near the distributor plate with a high voidage, and the bubbles which are released quickly grow into bigger bubbles, which contain less catalyst. These bubbles protect unreacted methane and oxygen, which consequently exit the catalyst bed (Fig. 2). Therefore, a better reactant distribution can improve not only the selectivity, but also the conversion level in a fluidized bed reactor.
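The yields quoted throughout this section combine methane conversion and C2 selectivity multiplicatively; a one-line helper makes that bookkeeping explicit (the conversion figure below is back-calculated for illustration and is not reported in the paper):

```python
def c2_yield(methane_conversion, c2_selectivity):
    """C2 yield = methane conversion x C2 selectivity."""
    return methane_conversion * c2_selectivity

print(c2_yield(0.3175, 0.40))  # ~0.127, the order of the base-case yield
```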
Fig. 2. Concentration profiles of the most important species in the fluidized bed reactor

Three case studies were employed for the simulation of the fluidized bed membrane reactor. The first investigated case is a fluidized bed reactor with several membrane tubes, which are introduced from the bottom of the reactor. The second case is a fluidized bed reactor with submerged membranes, while in the third one the membranes are introduced horizontally. The first case study showed a significant improvement in the reactor performance: there are no significant amounts of oxygen coming out of the reactor, and the reactant distribution is significantly better than in the previous case. However, the membranes
introduced from the bottom of the reactor change the bed hydrodynamics significantly. It was observed that most of the gas passes either near the membranes or near the reactor walls. Therefore, severe channeling was observed and bubble formation was suppressed. In this arrangement, most of the methane passes near the reactor wall unreacted, while higher oxygen amounts near the membrane can combust the formed ethane and ethylene. Nevertheless, this case study proves that a distributed oxygen feeding policy, even with a non-optimized geometry, improves the yield, which was calculated to be 16.4%. When the membranes were introduced from the top of the reactor and submerged into the bed, no significant influence on the bed hydrodynamics was observed. The oxygen distribution is improved further and there are only traces of oxygen at the reactor outlet. Because of the smaller influence on the bed hydrodynamics, less channeling is observed, and thus oxygen is able to diffuse finely in the radial direction and to maintain the very small concentrations which favour high selectivity. As a result, a yield of 17.3% was calculated, which represents an increase of 36% over the base case. Even though the methane conversion did not increase significantly, one can observe a great improvement in selectivity over the base case: in the reaction zones the selectivity was around 60%, considerably higher than the 40% of the base case. The concentration profiles in the reactor can be seen in detail in Fig. 3. In the last case study, where the tubes were arranged horizontally, a decrease in yield occurred because of the significant negative influence on the bed hydrodynamics. It was observed that a sequential feed of oxygen creates locally high oxygen concentrations in zones where the reaction products, ethane and ethylene, are already present in significant amounts; thus they are irreversibly combusted in those zones.
Fig. 3. Concentration profiles of the most important species in the fluidized bed membrane reactor.
4. Conclusions
Simulations of several configurations of the fluidized bed and the fluidized bed membrane reactor were successfully carried out, showing that the distributed feeding policy of a membrane reactor can successfully be translated into a fluidized bed, thus improving its yield. Fluidized bed membrane reactors are able to maintain isothermal
operation like most fluidized bed reactors and to improve the yield by fine oxygen distribution as in a membrane reactor. These benefits make this reactor type very promising for industrial implementation. Therefore, further theoretical and experimental investigations are necessary in order to improve its performance even further, which is part of the current ongoing research. So far, a yield improvement of more than 30% over the conventional fluidized bed has been achieved. Currently, we are investigating experimentally the potential of both fluidized bed and membrane reactors for the OCM process. The fluidized bed membrane reactor in its most promising structure shown in this study has already been constructed, and the first experimental test runs are being performed so as to verify the proposed reactor concept.
Acknowledgements The authors acknowledge the support from the Cluster of Excellence "Unifying Concepts in Catalysis" coordinated by the Technische Universität Berlin and funded by the German Research Foundation - Deutsche Forschungsgemeinschaft.
The authors also acknowledge the support of the North-German Supercomputing Alliance (Norddeutscher Verbund zur Förderung des Hoch- und Höchstleistungsrechnens - HLRN) for the provided supercomputing resources necessary for this study.
References
D. Ahchieva, M. Peglow, S. Heinrich, L. Mörl, T. Wolff, F. Klose, 2005, Oxidative dehydrogenation of ethane in a fluidized bed membrane reactor, Appl. Catal. A, 296, 176.
F.M. Dautzenberg, J.C. Schlatter, J.M. Fox, J.R. Rostrup-Nielsen, L.J. Christiansen, 1992, Catalyst and Reactor Requirements for the Oxidative Coupling of Methane, Catal. Today, 13, 503.
D. Gidaspow, 1994, Multiphase Flow and Fluidization: Continuum and Kinetic Theory Descriptions, Academic Press, 1st edition.
S. Jašo, H.R. Godini, H. Arellano-Garcia, M. Omidkhah, G. Wozny, 2010a, Analysis of attainable reactor performance for the oxidative methane coupling process, Chem. Eng. Sci., 65 (24), 6341.
S. Jašo, H. Arellano-Garcia, G. Wozny, 2010b, Oxidative Coupling of Methane: Reactor Performance and Operating Conditions, Computer Aided Chemical Engineering, 28, 781.
S. Jašo, H.R. Godini, H. Arellano-Garcia, G. Wozny, 2010c, Fluidized bed membrane reactors for improved yields in oxidative conversion of methane, European patent application, pending.
O. Levenspiel, D. Kunii, 1992, Fluidization Engineering, 2nd Edition, Butterworth-Heinemann.
P. Marin, C. Hamel, S. Ordonez, F.V. Diez, E. Tsotsas, A. Seidel-Morgenstern, 2010, Analysis of a fluidized bed membrane reactor for butane partial oxidation to maleic anhydride: 2D modelling, Chem. Eng. Sci., 65, 3538.
L. Mleczko, M. Baerns, 1995, Catalytic oxidative coupling of methane - reaction engineering aspects and process schemes, Fuel Processing Technology, 42, 217.
L. Mleczko, U. Pannek, M. Rothamel, M. Baerns, 1996, Oxidative Coupling of Methane Over a La2O3/CaO Catalyst. Optimization of Reaction Conditions in a Bubbling Fluidized-bed Reactor, Can. J. Chem. Eng., 76, 279.
Z. Stansch, L. Mleczko, M. Baerns, 1997, Comprehensive Kinetics of Oxidative Coupling of Methane over the La2O3/CaO Catalyst, Ind. Eng. Chem. Res., 36, 2568.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N.. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Comparison of Extractive and Pressure-Swing Batch Distillation for Acetone-Methanol Separation
Gabor MODLA* and Peter LANG
Budapest University of Technology and Economics, Department of Building Services and Process Engineering, H-1521 Budapest, Muegyetem rkp. 3-5
Abstract
The performances of batch extractive distillation (BED) and pressure swing batch distillation (PSBD) for the separation of the mixture acetone (A) - methanol (B), forming a minimum azeotrope, are compared. For the BED, where the solvent (S) is water, three different operational policies (basic, modified, controlled) are studied. For the PSBD the double column batch stripper (DCBS) is applied with and without heat integration (HI). The rigorous simulation calculations are made with a professional dynamic simulator. From the same equimolar binary charge, A and B are produced at the same purity with all methods. The specific overall energy consumptions and the recoveries are compared.
Keywords: Batch Distillation, Separation of Azeotropes, Pressure-Swing, Extractive Distillation, Dynamic Simulation and Control
1. Introduction
For the continuous separation of binary homogeneous azeotropes, widespread methods are pressure swing distillation (PSD) and extractive distillation (ED). PSD is effective if the composition of the azeotrope varies significantly with pressure. Its main advantage is that it does not need the application of a separating agent, contrary to ED, which requires an efficient solvent (S). Luyben (2008) compared the two methods for the separation of the mixture acetone-methanol, a frequent waste in the pharmaceutical industry, which forms a pressure-sensitive minimum azeotrope. The column producing acetone was operated at 1.013 bar, while the methanol column was operated at 10 bar for PSD and at 5 bar for ED, respectively. Applying heat integration (HI) for both methods, he stated that the controllability of continuous PSD and ED is quite similar and the total annual cost of ED is somewhat lower. Batch distillation is more advantageous than continuous distillation in many cases (e.g. when the amount and composition of the mixture frequently change). Moreover, in a single batch column an arbitrary number of products can be produced. However, neither PSD nor ED is well known in the industry in batch mode. The basic operational policy (BOP) of batch extractive distillation (BED, Fig. 1a) in a single column was suggested by Yatim et al. (1993) for the mixture acetone (A) - methanol (B) + water (S). The main particularity of the BED is the continuous feeding of S (F>0) in two steps of the batch process (purification under total reflux (R=∞) and production of A).
Fig. 1a. Scheme of the BED (charge Uch, xch in the reboiler; water fed as entrainer; vessel contents at the end: Acetone, Intermediate, Methanol; Water left in the reboiler)
Fig. 1b. Scheme of the DCBS (Column D, ND=18, PD=1.01 bar, methanol product WD, xDspec; Column E, NE=18, PE=10 bar, acetone product WE, xEspec; PD<PE; liquid division ratio IL=LD/Ltotal; booster pump and economizer between the columns; the vessels are empty at the beginning)
Several BED operational policies were studied by Lelkes et al. (1998). Based on industrial experience, Lang et al. (2006) modified the BOP of the BED. By the modified policy (MOP) the first two steps (heating-up and purification) of the BOP are combined: the solvent feeding is already started during the heating-up of the column under R=∞, when the vapour reaches the solvent feed plate. The BED was studied by Luyben and Chien (2010) with simulation for the above mixture (under fixed R and F) and for the mixture isopropanol (A) - water (B) + DMSO (S). For the second mixture, where S is much heavier than A and B, during the step of production of A both R (on the basis of the temperature of the reflux drum) and F (on the basis of the temperature of the S feeding stage) were varied. The aim of the varying feeding of S is to reduce the quantity applied. For pressure swing batch distillation (PSBD), new double-column configurations (for minimum azeotropes, the double column batch stripper (DCBS, Fig. 1b)) were suggested by Modla and Lang (2008). The DCBS provides several advantages, among others the possibility of HI (Modla and Lang, 2010). The purpose of this paper is to compare the BED and the PSBD (with and without heat integration) in terms of specific overall energy consumption. For the rigorous simulation we used the dynamic simulator of CHEMCAD 6.2 (CC-DCOLUMN, Chemstations, 2007).
2. VLE data of the mixture acetone-methanol
For the two different pressures (1.01 and 10 bar) the calculated azeotropic data of the mixture acetone-methanol are shown in Table 1. The large shift in the azeotropic composition from 78 to 37 mol% acetone indicates that a pressure swing separation should be feasible.

Mixture                      P [bar]   xAZ [mol%]   TAZ [°C]   TBP,A [°C]   TBP,B [°C]
acetone (A) - methanol (B)   1.01      78           55.2       56.0         64.4
                             10        37           142.9      142.9        136.7

Table 1. Calculated data of the azeotropes
3. Rigorous simulation results
3.1. Input data
The initial charge contains 50 mol% acetone (xch,A=0.5). The specified purities are 94 mol% for Acetone and Methanol and 99 mol% for Water. The liquid hold-up is 4 dm3/plate. The number of theoretical stages (N) for each column is 18. (The total condenser and the total reboiler do not provide a theoretical stage.) The S feeding stage is the 6th for the BED (from the top). In each case the quantity of the charge is 0.9 m3 (Uch=15.96 kmol). At the start the columns are filled with boiling-point liquid (at the pressure of the given column). For the BED the heat duty of the reboiler is 400 MJ/h, the operating pressure of the column is 1.01 bar and the temperature of the water feeding is 80 °C. The DCBS columns are operated at 1.01 bar (producing methanol) and at 10 bar (producing acetone).
3.2. Batch extractive distillation
Besides the basic and the modified operational policies, a new one is also studied.
3.2.1. Controlled operational policy (COP)
The controlled operational policy (COP) consists of the following steps:
1. Heating-up and purification: operation under total reflux with solvent feeding (R1=∞, F1>0). Step 1 is ended when the acetone concentration in the distillate (xD,A) reaches 0.95.
2. Acetone production (P1): operation under finite, varying reflux ratio (xD,A is kept constant at its prescribed value xD,A,spec=0.94) with solvent feeding (R2<∞, F2>0). Step 2 is ended when the acetone recovery reaches its prescribed value (94%).
3. Collection of a slop cut (S1): operation under finite, fixed reflux ratio without solvent feeding (R3<∞, F3=0). Step 3 is ended when the methanol concentration in the distillate (xD,B) reaches the specified purity of the methanol product (0.94).
4. Methanol production (P2): operation under two different fixed reflux ratios without solvent feeding (R4<∞, F4=0): a. the reflux ratio at the beginning of this step equals that of Step 3 (R3); b. when xD,B is higher than 0.98 the reflux ratio is changed. Step 4 is ended when the mole fraction of B in the product tank (xD,M,av) falls to its prescribed value (0.94).
5. Production of water (P3) as bottoms (optional): if at the end of Step 4 the water product has not yet reached its specified purity, another slop-cut (S2) must be collected. (In our case the bottom product had already reached its purity specification at the end of Step 4, therefore Step 5 was omitted.)
The dynamic responses of xD,A and of several stage temperatures in Step 2 were studied. We made xD,A oscillate by applying a PI controller (Fig. 2). Fig. 2a shows that the top temperature (Stage 1) cannot be used as controlled variable for ensuring xD,A,spec. However, the temperature of stage 9 (Fig. 2b) varies simultaneously with xD,A. Hence R was manipulated on the basis of T9 (set point: 62.7 °C) with the PI controller.
Comparison of Extractive and Pressure-Swing Batch Distillation for Acetone-Methanol Separation 0.98
x_d,A
63
T [°C]
0.96
62
x_d,A
70
385
T [°C]
69 68
0.94 61
0.92
stage 6
0.9
60
0.88
59
67 66 65
0.86
58
stage 4
0.84
stage 1
0.8 40
50
60
70
80
63 62
57
stage 2
0.82
64
90
min
56 100
61
stage 8,9,10,12,18
60 40
50
60
70
80
90
min
100
Figure 2. Evolution of stage temperatures and acetone concentration in the distillate
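The PI logic described above can be sketched in discrete time as follows; the gain, reset time, sampling time and sign convention are illustrative assumptions, not the controller settings used in the paper:

```python
def make_pi(setpoint, Kc=0.5, tau_i=10.0, dt=0.1):
    """Discrete PI law returning a correction to the reflux ratio R
    from the measured stage-9 temperature T9 (set point 62.7 degC)."""
    integral = 0.0
    def update(T9):
        nonlocal integral
        error = setpoint - T9
        integral += error * dt
        return Kc * (error + integral / tau_i)
    return update

update_R = make_pi(setpoint=62.7)
dR = update_R(63.1)  # one sampling step with T9 = 63.1 degC
```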
The influence of the most important operational parameters (e.g. R, F) on the overall specific energy consumption (SQ/SPr) was studied for all operational policies (BOP, MOP, COP) in order to estimate the minimal SQ/SPr. The results are summarized in Table 2.

                                              BED                     DCBS
                                       BOP    MOP    COP    without HI  with HI
Start-up & purification [min]          122    33     33     -           -
Acetone production [min]               206.5  202    195    -           -
Slop-cut withdrawal [min]              64     62.5   56     -           -
Methanol production [min]              183    190    200.5  -           -
Duration of the process [min]          575.5  488    511    213         213
Acetone product [kmol]                 8.73   8.68   8.61   8.23        8.23
Methanol product [kmol]                6.67   6.94   7.32   8.44        8.44
Acetone recovery [%]                   95     94     94     84          84
Methanol recovery [%]                  73     76     80     86          86
Water consumption [dm3]                1095   785    760    -           -
Overall spec. energy cons. [MJ/kmol]   269    223    216    245         145
Slop cut acetone [%]                   22.8   24.2   44.5   -           -
Slop cut methanol [%]                  77.1   75.7   55.3   -           -
Slop cut water [%]                     0.1    0.1    0.2    -           -
Slop cut product [kmol]                2.37   2.34   2.34   -           -

Table 2. Results of the different BED and PSBD methods
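The overall specific energy consumption rows of Table 2 are the ratio of total reboiler energy to product amount; under the simplifying assumption of a constant total reboiler duty it can be estimated as (a sketch with our variable names; the paper's values come from the detailed simulations):

```python
def specific_energy(Q_MJ_per_h, t_min, product_kmol):
    """Overall specific energy consumption [MJ/kmol] for a batch of
    duration t_min at constant total reboiler duty Q_MJ_per_h."""
    return Q_MJ_per_h * (t_min / 60.0) / product_kmol
```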
3.3. Pressure swing batch distillation (PSBD)
For the separation of the minimum azeotrope the DCBS is applied. The main operational parameter is the liquid division ratio (IL=LD/Ltotal). First the optimum value yielding the minimal overall specific energy consumption (SQ/SPr) is estimated by varying IL in the region 0.1-0.7 without applying the economizer. The best result (245 MJ/kmol) is obtained at IL=0.45. The boiling point of the azeotrope (the top stream temperature of the high-pressure Column E, 133.9 °C at 10 bar) is higher than the boiling point of the methanol product (the bottom stream temperature of the low-pressure Column D, near 64 °C at 1.01 bar), so heat integration could be attractive in terms of energy consumption. The vapour leaving the top of the high-pressure column is fed into an economizer. The economizer is used to recover energy from the top vapour of
Column E by (partial) vaporisation of the reboil stream of Column D. The operational specification of the economizer is that the temperature of its hot-stream outlet (the recycled top vapour from Column E) is higher by 11 °C than that of its cold-stream outlet. The overall specific energy consumption is much lower than without HI (Table 2).
4. Comparison of the BED and PSBD On the basis of the results obtained for the BED and the DCBS (Table 2) we can state that: - The specific energy consumption is the lowest for the DCBS with HI. However, the energy demand of both the MOP and the COP of the BED was lower than that of the DCBS without HI. - With all BED policies a higher recovery of acetone (A) was achieved than with the PSBD. However, the recovery of methanol (B), which is produced between A and S by the BED, remained below that of the PSBD. With the BED the products A and B are polluted mainly with S, contrary to the PSBD, where they are polluted with the other original component, obviously decreasing its recovery. - With the BED there is a slop cut, contrary to the PSBD, which must be recycled to the next charge and/or processed with another charge (of different composition). - The BED needs many more operation steps than the PSBD. - The control of the DCBS is easier than that of the BED since the columns operate practically at steady state. - The capital costs of the DCBS (requiring two columns) are obviously higher than those of the BED.
Acknowledgement This work was financially supported by the Hungarian Scientific Research Fund (OTKA) (No: K-82070) and by the Janos Bolyai Research Scholarship of the HAS.
References - Chemstations (2007). CHEMCAD Dynamic Column Calculation User's Guide, Chemstations. - Lang, P., Gy. Kovacs, B. Kotai, J. Gaal-Szilagyi, G. Modla (2006). Industrial application of a new batch extractive distillation operational policy, Inst. Chem. Eng. Symp. Ser., 152, 830-839. - Lelkes, Z., P. Lang, B. Benadda, M. Otterbein, P. Moszkowicz (1998). Batch Extractive Distillation: the Process and the Operational Policies, Chem. Eng. Sci., 53, 1331. - Luyben, W. L. (2008). Comparison of Extractive Distillation and Pressure-Swing Distillation for Acetone-Methanol Separation, Ind. Eng. Chem. Res., 47, 2696. - Luyben, W. L., I. L. Chien (2010). Design and Control of Distillation..., John Wiley & Sons. - Modla, G., P. Lang (2008). Feasibility of new pressure swing batch distillation methods, Chem. Eng. Sci., 63 (11), 2856-2874. - Modla, G., P. Lang, F. Denes (2010). Feasibility of separation of ternary mixtures by pressure swing batch distillation, Chem. Eng. Sci., 65 (2), 870-881. - Yatim, H., P. Moszkowicz, M. Otterbein, P. Lang (1993). Dynamic Simulation of a Batch Extractive Distillation Process, Comp. Chem. Eng., 17, S57-62.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Constructive nonlinear dynamics for reactor network synthesis with guaranteed robust stability Xiao Zhao a, Wolfgang Marquardt a,*
AVT-Lehrstuhl für Prozesstechnik, RWTH Aachen University, Templergraben 55,
52056 Aachen, Germany
Abstract This work presents a synthesis method which incorporates robust stability using the normal vector approach [2, 3]. A superstructure approach is used for the connection of the different process units and reflects all possible flowsheets. The goal of this work is to find an optimal flowsheet structure and a robustly stable operating point that minimize an overall cost function. The parametric uncertainty considered may either result from uncertainties in model parameters (e.g. reaction kinetic constants, heat and mass transfer coefficients, etc.) or from process uncertainty (slow disturbances in load, quality of raw materials, etc.). This work extends the normal vector approach, which so far assumes a fixed flowsheet and thus considers only continuous degrees of freedom for optimization. Hence, we generalize it to integer degrees of freedom to support decision making on the flowsheet level. The problem formulation results in a mixed-integer nonlinear problem with robust stability constraints. Keywords: robust stability, reactor network synthesis, mixed-integer nonlinear programming, process synthesis
1. Introduction Stability in process synthesis can be tested by means of simulation or by computing the eigenvalue spectrum of the linearized model at the equilibrium point after fixing the flowsheet structure and the operating point. In case of an unstable design, an adjustment of the design parameters (e.g. equipment parameters, operating conditions, or control parameters) or even of the flowsheet structure is necessary. This kind of a posteriori adjustment results in additional manual interaction and typically leads only to suboptimal solutions. There are few suggestions to consider stability as part of the solution of the process synthesis problem. Most notably, Kokossis and Floudas [1] have proposed a methodology based on matrix measures to bound the eigenvalues of the system’s Jacobian matrix to account for stability. These authors illustrated their method by means of a reactor network synthesis problem (Fig. 1). However, their method does not consider parametric uncertainty and all the reactors are assumed to exist. Mönnigmann and Marquardt [2] have proposed the so-called normal vector approach, * Corresponding author. Email: [email protected]
which can be applied both to open-loop and closed-loop systems, to constrain the operating point inside a feasible and stable region in the parameter space and to guarantee a minimum distance to the feasibility and stability boundaries to account for parametric uncertainty. However, their approach only considers a fixed flowsheet and cannot be applied to process synthesis where the structure of the flowsheet is a degree of freedom. So far, to the authors’ knowledge, there is no synthesis method available simultaneously considering flowsheet structure selection and operating point optimization while guaranteeing robust stability in the presence of parametric uncertainty. This work extends the normal vector approach to reactor network synthesis with open-loop reactors by introducing integer variables. A superstructure-based modeling strategy [2] is proposed to model the reactor network. The robust stability constraints for a flexible superstructure are derived and an adapted branch & bound algorithm is proposed to integrate the normal vector method with mixed-integer programming. The method can be generalized to reactor networks with closed-loop reaction units by introducing the notation in [3]. The method also readily generalizes to those types of flowsheets where the modeling approach carries over.

Fig. 1 Complex reactor network synthesis [1], simple illustrating example
2. Problem formulation for reactor networks A new modeling strategy of reactor networks is proposed, which is different from the ones in [1] and [4]. The main idea is to model individual unit operations independently and then to connect their inputs and outputs using disjunctions. Fig. 2 sketches the idea. u, y and x denote the vectors of input, output and state variables, $n_c$ denotes the number of connections, the triplet set $\Pi=\{(k,\eta,\beta)\,|\,\text{connection } k \text{ from unit } \eta \text{ to unit } \beta,\ k=1,\dots,n_c\}$ refers to the flowsheet connectivity, $z^k$ is an integer variable deciding on the existence of connection k, and M is a very large positive constant. A standard method [5] to convert the disjunctions to integer constraints is used subsequently. The model for the reactor network can be formulated as follows:

reactor: $\dot{x}_r^i=f_r^i(x_r^i,u_r^i,\alpha_r^i,p_r^i)$ (2.1), $y_r^i=g_r^i(x_r^i,u_r^i,\alpha_r^i,p_r^i)$ (2.2), $i=1,\dots,n_r$
mixer: $y_m^j=g_m^j(x_m^j,u_m^{j,1},\dots,u_m^{j,I},\alpha_m^j,p_m^j)$, $j=1,\dots,n_m$ (2.3)
splitter: $y_s^{l,t}=g_s^{l,t}(x_s^l,u_s^l,\alpha_s^l,p_s^l)$, $l=1,\dots,n_s$, $t=1,\dots,O$ (2.4)
connection: $0=h(z^k,y_\eta,u_\beta)$, $(k,\eta,\beta)\in\Pi$ (2.5)

where $\alpha$ and p are uncertain and certain parameters. Subscripts r, m, s and sy denote reactor, mixer, splitter and system. f and g are smooth functions defined on suitable domains. $n_r$, $n_m$ and $n_s$ denote the numbers of reactors, mixers and splitters; I and O denote the number of inputs of a mixer and the number of outputs of a splitter.
The connection disjunction and its big-M reformulation read

$\begin{bmatrix} z^k \\ y_\eta-u_\beta=0 \end{bmatrix} \vee \begin{bmatrix} \neg z^k \\ u_\beta=0,\; y_\eta=0 \end{bmatrix}$,  $h(z^k,y_\eta,u_\beta)=\begin{pmatrix} u_\beta-y_\eta-M(1-z^k) \\ y_\eta-u_\beta-M(1-z^k) \\ u_\beta-Mz^k \\ -u_\beta-Mz^k \\ y_\eta-Mz^k \\ -y_\eta-Mz^k \end{pmatrix}\le 0$
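To make the big-M reformulation concrete, the following minimal Python sketch evaluates the residuals of h for one candidate connection; the function name and the value of M are illustrative assumptions, not part of the original formulation.

def big_m_connection(z_k, y_eta, u_beta, M=1e4):
    """Residuals of the big-M form h(z^k, y_eta, u_beta) <= 0 for one connection.

    z_k = 1 enforces u_beta = y_eta (stream connected); z_k = 0 drives both
    connected variables to zero, deactivating the stream.
    """
    h = []
    for ye, ub in zip(y_eta, u_beta):
        h += [ub - ye - M * (1 - z_k),      # |u_beta - y_eta| <= M (1 - z_k)
              ye - ub - M * (1 - z_k),
              ub - M * z_k, -ub - M * z_k,  # |u_beta| <= M z_k
              ye - M * z_k, -ye - M * z_k]  # |y_eta|  <= M z_k
    return h

# An active connection is feasible exactly when the input copies the output:
print(all(r <= 0 for r in big_m_connection(1, [0.3, 1.2], [0.3, 1.2])))  # True
print(all(r <= 0 for r in big_m_connection(0, [0.0, 0.0], [0.0, 0.0])))  # True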
Fig. 2 Connection between unit operations η and β

The optimization problem with robust stability constraints [3] can be represented by

$\min_{V,Z}\ \phi(V,Z)$ (2.6)
s.t. eqns. (2.1)-(2.5),
$0=R^{(\sigma)}\big(V^{(\sigma)},f_X^{r(\sigma),i},f_\alpha^{r(\sigma),i},f_{XX}^{r(\sigma),i},f_{X\alpha}^{r(\sigma),i},r^{(\sigma)},Z\big)$ (2.7)
$0=\alpha-\alpha^{(\sigma)}-L^{(\sigma)}\,r^{(\sigma)}/\|r^{(\sigma)}\|$ (2.8)
$0\le L^{(\sigma)}-\varepsilon\sqrt{n_\alpha}$ (2.9)
for all $\sigma\in\Gamma$, $i=1,\dots,n_r$,

where for $i=1,\dots,n_r$, $j=1,\dots,n_m$ and $l=1,\dots,n_s$, $X=((x_r^1)^T,\dots,(x_r^{n_r})^T)^T$, $U=((u_r^i)^T,(u_m^{j,1})^T,\dots,(u_m^{j,I})^T,(u_s^l)^T)^T$ and $Y=((y_r^i)^T,(y_m^j)^T,(y_s^{l,1})^T,\dots,(y_s^{l,O})^T)^T$ denote the vectors of state, input and output variables. In order to make Eqs. (2.4)-(2.5) feasible, Eq. (2.4) has to be replaced by $y_s^{l,t}=g_s^{l,t}(x_s^l,u_s^l,\alpha_s^l,p_s^l)+\xi^{l,t}$ with $-M(1-z^{l,t})\le\xi^{l,t}\le M(1-z^{l,t})$, where $\xi^{l,t}$ is a vector of slack variables and $z^{l,t}$ is the integer variable for the output $y_s^{l,t}$. The constraints (2.7), enumerated by the upper index $\sigma\in\Gamma=\{1,\dots,\sigma_{max}\}$, refer to the type and number of critical manifolds, e.g. saddle-node bifurcations, Hopf bifurcations, etc., where stability loss could occur. $V=(X^T,U^T,Y^T,\alpha^T,p^T)^T$ and $V^{(\sigma)}=((X^{(\sigma)})^T,(U^{(\sigma)})^T,(Y^{(\sigma)})^T,(\alpha^{(\sigma)})^T,(p^{(\sigma)})^T)^T$ denote the vectors of continuous variables evaluated at the operating point and at the controlling critical point $\sigma$, while $Z=\{z^1,\dots,z^{n_c}\}$ denotes the vector of integer variables for the connections. $f_X^{r(\sigma),i}=\mathrm{d}f_r^i/\mathrm{d}X|_{V^{(\sigma)}}$, $f_\alpha^{r(\sigma),i}=\mathrm{d}f_r^i/\mathrm{d}\alpha|_{V^{(\sigma)}}$, $f_{XX}^{r(\sigma),i}=\mathrm{d}^2f_r^i/\mathrm{d}X^2|_{V^{(\sigma)}}$ and $f_{X\alpha}^{r(\sigma),i}=\mathrm{d}^2f_r^i/\mathrm{d}X\mathrm{d}\alpha|_{V^{(\sigma)}}$ are the 1st- and 2nd-order derivatives of the right-hand side of Eq. (2.1) evaluated at the critical points $V^{(\sigma)}$. The optimization problem minimizes the cost function $\phi$, whose value depends on the operating point V and the integer variables Z. The robust stability constraints, Eqs. (2.7)-(2.9), guarantee that the optimal operating point keeps a distance $L^{(\sigma)}$ to each critical point $V^{(\sigma)}$ in the parameter space; $r^{(\sigma)}$ is the so-called normal vector and $n_\alpha$ is the number of uncertain parameters.
3. Robust stability constraints The evaluation of the robust stability constraints, Eqs. (2.7)-(2.9), needs a symbolic formulation of the 1st- and 2nd-order derivatives of the DAE model. The chain rule and the implicit function theorem are employed for this purpose. Assume that all reactors are fixed and denote by H the coupling matrix, consisting only of integer variables, which relates U to Y according to U = HY. Denote Eqs. (2.2)-(2.4) as Y = G(X, U, α, p) and consider U as an implicit function of X and α. Let $x_k$, $x_l$, $u_\eta$ be scalar elements of X and U and let $g_\beta(x_\beta,u_\beta,\alpha_\beta,p_\beta)$ be the β-th element of G(X, U, α, p). Then, for $i=1,\dots,n_r$ and $(k,\eta,\beta)\in\Pi$, the derivatives with respect to X can be computed from

$f_X^{r,i}=\dfrac{\partial f_r^i}{\partial X}+\dfrac{\partial f_r^i}{\partial U}U_X$ (3.1)

$U_X=H\,(G_X+G_U U_X)$ (3.2)
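Eq. (3.2) is linear in the unknown sensitivity matrix U_X and can be rearranged to (I - H G_U) U_X = H G_X and solved directly. A minimal numpy sketch, with a hypothetical 2-input/2-output/3-state coupling, is:

import numpy as np

def input_state_sensitivities(H, G_X, G_U):
    """Solve U_X = H (G_X + G_U U_X) for the implicit derivatives dU/dX,
    i.e. (I - H G_U) U_X = H G_X. H is the 0/1 coupling matrix in U = H Y."""
    n_u = H.shape[0]
    return np.linalg.solve(np.eye(n_u) - H @ G_U, H @ G_X)

# Tiny hypothetical example: 2 inputs, 2 outputs, 3 states.
H   = np.array([[0., 1.], [1., 0.]])      # cross-connection of two units
G_X = np.random.rand(2, 3)                # dG/dX at the evaluation point
G_U = 0.1 * np.random.rand(2, 2)          # dG/dU (weak feedback)
U_X = input_state_sensitivities(H, G_X, G_U)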
$(f_{XX}^{r,i})_{k,l}=\dfrac{\partial^2 f_r^i}{\partial x_k\partial x_l}+\sum_j\dfrac{\partial^2 f_r^i}{\partial x_k\partial u_j}\dfrac{\partial u_j}{\partial x_l}+\sum_j\Big[\Big(\dfrac{\partial^2 f_r^i}{\partial u_j\partial x_l}+\sum_p\dfrac{\partial^2 f_r^i}{\partial u_p\partial u_j}\dfrac{\partial u_p}{\partial x_l}\Big)\dfrac{\partial u_j}{\partial x_k}+\dfrac{\partial f_r^i}{\partial u_j}\dfrac{\partial^2 u_j}{\partial x_k\partial x_l}\Big]$ (3.3)

$\dfrac{\partial^2 u_\eta}{\partial x_k\partial x_l}=z^k\Big\{\dfrac{\partial^2 g_\beta}{\partial x_k\partial x_l}+\sum_q\dfrac{\partial^2 g_\beta}{\partial u_q\partial x_k}\dfrac{\partial u_q}{\partial x_l}+\sum_q\Big[\Big(\dfrac{\partial^2 g_\beta}{\partial u_q\partial x_l}+\sum_p\dfrac{\partial^2 g_\beta}{\partial u_p\partial u_q}\dfrac{\partial u_p}{\partial x_l}\Big)\dfrac{\partial u_q}{\partial x_k}+\dfrac{\partial g_\beta}{\partial u_q}\dfrac{\partial^2 u_q}{\partial x_k\partial x_l}\Big]\Big\}$ (3.4)

The derivatives with respect to α can be computed in the same way.
4. Solution strategy An "adapted" branch & bound algorithm is proposed, as sketched in Fig. 3. This algorithm starts by solving an MINLP problem with only steady-state constraints. Before reaching a fully branched node (all integers fixed), we compare the objective function value (lower bound) of the relaxed problem to the best-found solution (upper bound). If the objective value is larger than the upper bound, the node is discarded and further branching is not needed. Otherwise, we choose an integer variable that has not yet been branched, branch on it and add the resulting two nodes to the stack. When we reach a fully branched node, we append the robust stability constraints of the normal vector method to the optimization problem in order to find an optimal robustly stable solution, and update the upper bound with the value of the objective function resulting from this optimization. The algorithm then picks a new node from the stack and iterates until the stack is empty.

Fig. 3 Adapted branch & bound algorithm for process synthesis with robust stability
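A compact Python rendering of this adapted branch & bound loop is given below; relaxed_solve and robust_solve are assumed callbacks wrapping the NLP with steady-state constraints only and the NLP augmented with the normal-vector constraints, respectively (each returns (objective, solution) or None if infeasible).

import math

def adapted_branch_and_bound(relaxed_solve, robust_solve, integer_vars):
    """Adapted B&B: cheap relaxations while branching; the expensive robust
    stability constraints are appended only at fully branched nodes."""
    best_obj, best_sol = math.inf, None
    stack = [{}]                              # root node: nothing fixed yet
    while stack:
        fixed = stack.pop()                   # depth-first search
        res = relaxed_solve(fixed)            # steady-state constraints only
        if res is None or res[0] >= best_obj: # prune infeasible/dominated node
            continue
        free = [v for v in integer_vars if v not in fixed]
        if not free:                          # fully branched node:
            res = robust_solve(fixed)         # add normal-vector constraints
            if res and res[0] < best_obj:
                best_obj, best_sol = res      # update the upper bound
        else:                                 # branch on one 0/1 variable
            v = free[0]
            stack += [{**fixed, v: 0}, {**fixed, v: 1}]
    return best_obj, best_sol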
5. Illustrating Example 5.1. Problem definition We consider a 2-bioreactor network as shown in Figure 1. The open-loop bioreactor is governed by the model in [6] with an input (u_{r,x}, u_{r,s}, u_{r,F})^T and an output (y_{r,x}, y_{r,s}, y_{r,F})^T, where the symbols x, s and F denote cell concentration, substrate concentration and feed rate. The uncertain parameters are the system's input substrate concentration s_sy ∈ [0, 1] and flow rate F_sy ∈ [0, 15]. The objective is to maximize the flow rate [kg s^-1] of cells in the system's output (obj = 10^-3 y_{sy,x} y_{sy,F}), where y_{sy,x} and y_{sy,F} denote the system's output cell concentration and flow rate. 5.2. Result A direct solution of the process synthesis problem without robust stability constraints is obtained in GAMS. The model consists of 261 equations and 82 variables, including 10 integer variables. The optimization results in an unstable solution with eigenvalues λ = [-0.019, -0.050, 0.188, 0.233]^T and an objective function value of 0.420 (see Fig. 4).
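The stability verdict follows from the sign of the eigenvalues of the Jacobian at the operating point; the sketch below reproduces that check on the reported spectrum (in the case study the Jacobian itself would come from linearising the 2-bioreactor network model at the optimum).

import numpy as np

def is_stable(jacobian):
    """Locally stable iff all eigenvalues have negative real parts."""
    return np.all(np.linalg.eigvals(jacobian).real < 0)

# Reproducing the check on the reported spectrum of the direct solution:
lams = np.array([-0.019, -0.050, 0.188, 0.233])
print(np.all(lams < 0))   # False -> the unconstrained optimum is unstable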
The process synthesis method proposed in this work has been realized in Matlab, using Tomlab and SNOPT to solve the continuous nonlinear optimization problems and the numerical continuation tool MatCont to detect critical manifolds. The "adapted" branch & bound algorithm starts from a feasible and stable operating point and applies a depth-first search strategy in order to find an updated upper bound. The optimal solution obtained when the robust stability constraints are considered is shown in Fig. 5. The eigenvalues are λ = [-0.661, -0.547, -0.042±0.041i]^T and the objective function value is 0.162, which is significantly lower than the one without considering stability. A numerical continuation study shows that the optimal operating point keeps a minimum distance of 0.3 to the critical manifold of a Hopf bifurcation (results not shown for brevity). This simple case demonstrates that the proposed method can simultaneously consider flowsheet selection and operating point optimization with guaranteed robust stability. Future work will generalize reactors to closed-loop reaction units, and consider more general reactor networks.
Fig. 4 Unstable optimal solution obtained by a direct solution without robust stability constraints
Fig. 5 Robustly stable optimal solution with robust stability constraints and integer decision variables
Acknowledgements Financial support from “Deutscher Akademischer Austausch Dienst” (DAAD) within the scholarship program for Ph.D. studies in Germany is gratefully acknowledged.
References [1] A. C. Kokossis and C. A. Floudas, 1994, “Stability in optimal design: Synthesis of Complex Reactor Networks”, AIChE Journal, 40(5), 849. [2] M. Mönnigmann and W. Marquardt, 2002, “Normal vectors on manifolds of critical points for parametric robustness of equilibrium solutions of ODE systems”, Journal of Nonlinear Science, 12(2), 85. [3] M. Mönnigmann, 2004, “Constructive nonlinear dynamics for design of chemical engineering process”, Fortschritt-Berichte VDI, Nr. 801, VDI-Verlag, Düsseldorf, Germany. [4] C.A. Floudas, 1995, “Nonlinear and mixed-integer optimization”, Oxford University Press. [5] R. Raman and I. E. Grossmann, 1994, “Modeling and computational techniques for logic-based integer programming”, Computers & Chemical Engineering, 18(7), 563-578. [6] D. Brengel and W. Seider, 1992, “Coordinated design and control optimization of nonlinear processes”, Chemical Engineering Communications, 16, 861-886.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Systems Analysis of Benign Hydrogen Peroxide Synthesis in Supercritical CO2 Deborah B. Bacik, Wei Yuan, Christopher B. Roberts, Mario R. Eden Department of Chemical Engineering, Auburn University, AL 36849, USA
Abstract In this paper, experimental information on the in-situ generation of hydrogen peroxide (H2O2) for the production of pyridine oxide is integrated with process systems engineering analysis to enable systematic comparison between the different processing options. Process simulation models representing the traditional anthraquinone process have been developed from literature, while experimental studies provided the necessary data for the supercritical carbon dioxide (CO2) process option. Analysis of the optimized models shows that in-situ generation of H2O2 results in a more energy efficient and less environmentally harmful alternative to the conventional approach. Keywords: Hydrogen peroxide, green chemistry, supercritical carbon dioxide.
1. Introduction Most commercial processes for production of chemicals involve harsh organic solvents as well as numerous by-products that are detrimental to the environment. Increasing environmental awareness and more stringent environmental laws demand alternative synthesis routes in the production of various commercially useful products. Hydrogen peroxide is generally considered a benign oxidant that is a promising alternative to conventional oxidants if a more environmentally benign manufacturing route can be identified. 1.1. Traditional Production Route – Anthraquinone Autoxidation Process The current method for producing H2O2 involves a complicated anthraquinone autoxidation (AO) process consisting of two sequential steps, hydrogenation followed by oxidation [1,2]. An alkyl anthraquinone is dissolved in a solvent mixture, consisting of both polar and aromatic compounds, then hydrogenated in the presence of a supported palladium catalyst to give its respective hydroquinone. The hydroquinone solution is then separated from the catalyst and air-oxidized to regenerate the alkyl anthraquinone while splitting off hydrogen peroxide as shown in Eqs. (1) and (2):

alkyl anthraquinone (AQ) + H2 → alkyl hydroquinone (AHQ)   [Pd, 40 °C]   (1)
AHQ + O2 → AQ + H2O2   [25 °C]   (2)
Unfortunately, this route requires multiple unit operations, is highly energy-intensive, and results in the generation of several considerable waste streams. Despite the relatively high cost of production, the AO process is used to manufacture about 95% of the world’s hydrogen peroxide [3]. This process is commercially successful because it not only prevents the direct contact of hydrogen and oxygen, but also produces H2O2 continuously at mild conditions. Even so, there are several disadvantages associated with the AO method for H2O2 production. First, the product is contaminated during extraction from the organic phase into water, and must be distilled for product purification. Next, the organic solvent mixture can be oxidized during production, so the degraded materials must be removed and replaced to maintain the appropriate solubilities. Also, the expensive anthraquinones in the working solution undergo unwanted side reactions, so these side products must be removed for regeneration of the starting material or disposal. Make-up of the spent anthraquinones and solvents, along with the wasted hydrogen, all result in higher production costs. Development of a more direct and green synthesis approach is needed that utilizes less energy and consumes fewer resources, thereby producing H2O2 at a lower overall cost. 1.2. Green Chemistry Route – Direct Synthesis in Supercritical CO2 Perhaps the most efficient method for H2O2 production, in terms of the principles of green chemistry, is the direct reaction of hydrogen and oxygen. Titanium silicalite (TS1) catalysts have shown some success in promoting the direct production of H2O2 from hydrogen and oxygen [4]. Even though the direct generation of H2O2 is the most efficient method, it is also one of the most dangerous. The flammability limit for hydrogen in air is 4%; however, studies have shown that the non-explosive margin can be widened to 9.5% if mixed with carbon dioxide (CO2) [5]. The use of CO2 as the solvent for the direct generation of H2O2 was pioneered by Beckman and coworkers, who illustrated the great potential of the process for numerous reasons [6-8]. Carbon dioxide is non-flammable, has a very low toxicity, is inexpensive, naturally abundant, and cannot be further oxidized [9,10]. Also, H2 and O2 are completely miscible with CO2 at temperatures above 304 K (the critical temperature of CO2). Carrying out the reaction in supercritical CO2 media allows for single phase operation [11,12]. Consequently, the transport limitation barrier that exists in the conventional AO synthetic route of H2O2 can be eliminated. Removing this barrier enhances the yield of the reaction while more effectively producing H2O2. In addition, single phase operation would reduce the number of unit operations required for production, thus making the process more cost effective. Clearly the direct synthesis of H2O2 is a “green” alternative to the widely accepted AO route. Even so, it is very difficult to accurately measure hydrogen peroxide production in the direct route. Unfortunately, the same conditions necessary to generate H2O2 also catalyze its decomposition [13]. There are two reaction pathways responsible for the breakdown of H2O2, which are thermodynamically favored [14,15]. One is the disproportionation of H2O2 resulting in the formation of both water and oxygen as shown in Eq. (6). The second pathway is the reduction of H2O2 to give water as shown in Eq. (5).

H2 + O2 → H2O2   (3)
H2 + 0.5 O2 → H2O   (4)
H2O2 + H2 → 2 H2O   (5)
H2O2 → H2O + 0.5 O2   (6)
Coupling the generation of hydrogen peroxide with another oxidation reaction provides a unique solution [16]. The generated hydrogen peroxide can be consumed through oxidation of another reactant before it has a chance to enter one of the decomposition pathways.
2. Simulation and Optimization of Pyridine Oxide Production Experimental data was collected on the performance, i.e. yield, conversion and selectivity, of the pyridine oxidation reaction along with analysis of the product distribution. This data serves as the basis for the development of rigorous process simulation models used to analyze the potential of the green supercritical phase (SCF) process and compare it to the traditional AQ process. Proven PSE tools for mass and energy integration are employed to optimize the processes and provide targets for further experimental efforts. The environmental impact of the processes is evaluated using the US-EPA Waste Reduction (WAR) algorithm [17]. The design objective is to produce 100,000 tpa H2O2 for converting pyridine into pyridine oxide. 2.1. Anthraquinone (AQ) Process Using data provided by Hess (1995), a model of the traditional hydrogen peroxide production process was developed and combined with an additional reactor to represent the pyridine oxidation step. A schematic of the process is provided in Fig. 1, where the solvent consists of 75 vol% t-butyl benzene and 25 vol% trioctyl phosphate. The concentration of AQ in the working solution is 10 g/liter.
Figure 1. Oxidation of pyridine using hydrogen peroxide from the anthraquinone process.
The extent of hydrogenation of AQ to AHQ is limited to 60% to minimize secondary reactions, while the oxidation reaction to regenerate AQ has near 100% conversion [3]. After extraction with water, the aqueous H2O2 product is distilled to 35 weight% and sent to the pyridine oxidation reactor. 2.2. Supercritical Phase (SCF) Process Using experimental data collected as part of this work, a model was developed to investigate the large scale potential of the in-situ generation of H2O2 in supercritical CO2. A schematic of the process is provided in Fig. 2, where it should be noted that the hydrogen and oxygen streams are actually diluted with CO2 for safety reasons and additional CO2 is added separately to create the dense reaction media. The hydrogen
conversion was measured to be 56.4% with a H2O2 selectivity of 62.5%. The peroxide product immediately reacts with pyridine to form pyridine oxide, thus eliminating the need for additional purification of H2O2. Unreacted raw materials are recycled along with the CO2 solvent, while the pyridine oxide is recovered in aqueous solution.
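As a quick plausibility check on these figures, the sketch below converts the measured conversion and selectivity into an H2O2 production rate and the hydrogen feed implied by the 100,000 tpa design target; the molar mass handling and the 8000 operating hours per year are assumptions for illustration only.

M_H2O2 = 34.015 / 1000.0            # kg/mol

def h2o2_rate(n_h2_feed, conversion=0.564, selectivity=0.625):
    """mol/s of H2O2 formed from n_h2_feed mol/s of hydrogen."""
    return n_h2_feed * conversion * selectivity

# H2 feed needed for the 100,000 tpa target (8000 h/yr assumed on-stream):
target_mol_s = 100_000_000.0 / (8000 * 3600) / M_H2O2   # mol/s of H2O2
print(target_mol_s / (0.564 * 0.625))                    # required H2, mol/s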
Figure 2. Oxidation of pyridine using in-situ generated hydrogen peroxide.
3. Results and Analysis Once the base case models had been developed and all material recycles implemented, thermal pinch analyses were performed to minimize the energy usage. The results of these analyses are shown in Table 1 below. It should be noted that the reactor duties were not included in the analysis, as it is desirable to control the reactor conditions using external utilities to avoid any negative impact of upsets in other parts of the process. Furthermore, as a result of the larger number of processing steps, the AQ process has many more opportunities for heat integration and thus the relative reductions in energy requirements are much larger. Cooling water is much cheaper than steam, and as such the cost of the larger cooling requirement in the SCF process is easily compensated for by the steam usage being only 27% of that of the AQ process.

Table 1. Energy Analysis for Pyridine Oxidation Processes

Option        Utility Requirements      Base Case   After Heat Integration
AQ Process    Total Heating (GJ/hr)     64.6        17.1  (-73.5%)
              Total Cooling (GJ/hr)     68.5        21.0  (-69.3%)
SCF Process   Total Heating (GJ/hr)     5.7         4.7   (-17.5%)
              Total Cooling (GJ/hr)     54.0        53.0  (-1.8%)
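The steam-versus-cooling-water argument can be illustrated with the heat-integrated duties of Table 1; the utility prices in this sketch are generic assumptions, not values from this study.

STEAM, CW = 6.0, 0.35                      # assumed prices, $/GJ

def utility_cost(heating_gj_h, cooling_gj_h):
    return heating_gj_h * STEAM + cooling_gj_h * CW   # $/h

aq  = utility_cost(17.1, 21.0)   # AQ process after heat integration
scf = utility_cost(4.7, 53.0)    # SCF process after heat integration
print(f"AQ: {aq:.0f} $/h, SCF: {scf:.0f} $/h")  # cheap cooling favours SCF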
The environmental impact of both process options was also evaluated and the results are presented in Fig. 3. It should be emphasized that the results for the AQ process represent a best case scenario with minimal solvent losses and secondary reactions. Even minor inefficiencies in the separation steps will increase the environmental impact drastically and further widen the performance gap in favor of the SCF process. Since the basic raw materials (hydrogen and oxygen) are the same for both processes the results in Fig. 3 illustrate the differences in the energy consumption, which currently is primarily derived from coal-fired power plants. Additionally, the environmental impact stemming from the production of the anthraquinone working solution should also be taken into account in order to make a fair comparison.
Figure 3. Performance comparison for pyridine oxide production (relative total cooling, total heating and environmental impact for the AQ and SCF processes)
4. Conclusions In this paper, the large scale potential for in-situ production of hydrogen peroxide in supercritical CO2 has been studied. Experimental data for the oxidation of pyridine have been used to develop a process model that was used to compare the performance of this direct synthesis method with the industrial standard anthraquinone autoxidation process. The results show that it is possible to efficiently couple the in-situ H2O2 generation with another oxidation reaction and achieve substantial benefits in terms of both energy consumption and reduced environmental impact.
References [1] W.T. Hess (1995) Hydrogen Peroxide. Kirk-Othmer Encyclopedia of Chemical Engineering. New York, Wiley. [2] C.W. Jones (1999). Applications of Hydrogen Peroxide and Derivatives. The Royal Society of Chemistry. [3] J.L.G. Fierro, J.M. Campos-Martin, G. Brieva (2006), Angewandte Chemie, 45, 6962-6984. [4] M. Taramasso, G. Perego, B. Notari (1983). U.S. Patent No. 4,666,692. [5] J.O. Pande, J. Tonheim (2001), Process Safety Progress, 20, 37-39. [6] D. Hancu, J. Green, E.J. Beckman (2002). Accounts of Chemical Research, 35, 757-764. [7] Q. Chen, E.J. Beckman (2007), Green Chemistry, 9, 802-808. [8] Q. Chen, E.J. Beckman (2008), Green Chemistry, 10, 934-938. [9] R. Wandeler, A. Baiker (2000), CATTECH, 4, 34-50. [10] E.J. Beckman (2004), Journal of Supercritical Fluids, 28, 121-191. [11] B. Subramaniam, M.A. McHugh (1986). Industrial and Engineering Chemistry Process Design and Development, 25, 1-12. [12] A. Baiker (1999), Chemical Reviews, 99, 453-473. [13] P. Landon, P.J. Collier, A.F. Carley, D. Chadwick, A.J. Papworth, A. Burrows, C.J. Kiely, G.J. Hutchings (2003), Physical Chemistry Chemical Physics, 5, 1917-1923. [14] G. Li, J.E. Edwards, A.F. Carley, G.J. Hutchings (2007), Catalysis Today, 122, 361-364. [15] D.J. Robinson, P. McMorn, D. Bethell, P.C. Bulman-Page, C. Sly, F. King, F.E. Hancock, G.J. Hutchings (2001), Catalysis Letters, 72(3-4), 233-232. [16] R. Meiers, W.F. Holderich (1999), Catalysis Letters, 59, 161-163. [17] D.M. Young, R. Scharp, H. Cabezas (2000), Waste Management, 20, 605–615.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Design of pervaporation modules based on computational process modelling Patrick Schiffmann, Jens-Uwe Repke Institute of Thermal, Environmental and Natural Products Process Engineering Technische Universität Bergakademie Freiberg
Abstract A membrane module design tool for organophilic pervaporation, based on modelling and simulation, was developed in this work. A short-cut model based on the MS Excel SOLVER routine and a rigorous model in Aspen Custom Modeler™ (ACM) were developed and used for the optimal design of the PV process. The mass and energy balances were computed simultaneously with both models, and a comparison of the results showed that a robust and fast model, suitable for the basic design of membrane modules, is possible with Excel, whereas for a rigorous design the use of ACM becomes necessary. Keywords: Membrane module design, shortcut model, rigorous modelling 1. Introduction Pervaporation (PV) is an energy-efficient separation technology with increasing propagation in industrial applications [1], [3]. Widely used and established processes involving PV are e.g. the dehydration of organic solvents (bioethanol) and the shifting of reaction equilibria by removal of inhibiting components (production of MTBE). The removal of dilute organic solvents from aqueous streams and the retrieval of aroma compounds in the food and beverage industry are further applications of PV. The rapid development of new membrane materials and functionalised membranes [5] with very high and specific separation capacities allows the separation of numerous industrially relevant mixtures, including processes with large amounts of organics [4]. Although showing a high potential, PV is still limited to individual applications in the chemical process industry, since the enhanced membrane characteristics are not fully exploited by the PV membrane modules existing today. Moreover, most of the module concepts used in PV were originally designed for ultrafiltration and reverse osmosis applications. Only a few simulation and design tools are available, and in these the relevant process parameters and transport phenomena are not sufficiently implemented; instead, mostly “best-practice rules” are applied. To encourage the propagation of PV in chemical processes, a more detailed design of optimised membrane modules, based on the combination of fast and reliable shortcut rules and rigorous simulations, becomes essential [6].
2. Theoretical background of the pervaporation process In PV, the separation of the liquid feed mixture components is achieved by their different rates of sorption, diffusion and evaporation from dense membranes. In the solution-diffusion model these phenomena, i.e. the solubility and diffusion coefficients, are combined into the membrane permeance Qi(T). The driving force can be assumed to be the difference of the partial pressures of the components; it is maintained by applying a low pressure (approx. 10 mbar) on the permeate side of the membrane, thus reducing the components' partial pressures. On the feed side, the maximum compatible temperature (determined by membrane materials and components) is usually applied.

Figure 1: Sketch of the PV process

The equations for the molar fluxes of the individual components through the membrane are:
$\dot{n}_{i,TM}=Q_i(T)\cdot\big(p_i^0(T)\,x_i\,\gamma_i^{Feed}-p^{Perm}\,y_i\,\varphi_i^{Perm}\big)$   Eq. 1
The temperature dependence of the permeance Qi(T) can be approximated with an Arrhenius-type equation:
$Q_i(T)=Q_{i,0}\cdot\exp\!\left[-\dfrac{E_{Act}}{R}\cdot\left(\dfrac{1}{T_0}-\dfrac{1}{T}\right)\right]$   Eq. 2
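Eqs. 1 and 2 translate directly into a small flux routine. In the sketch below the activation energy is taken in J/mol and the activity, vapour pressure and fugacity values are placeholders, so the printed number only illustrates the calculation, not a measured flux.

import math

def permeance(T, Q0, E_act, T0=351.15, R=8.314):
    """Arrhenius-type permeance, Eq. 2 (T, T0 in K; E_act assumed in J/mol)."""
    return Q0 * math.exp(-(E_act / R) * (1.0 / T0 - 1.0 / T))

def flux(T, Q0, E_act, p0, x, gamma, p_perm, y, phi=1.0):
    """Transmembrane molar flux of one component, Eq. 1 [mol/(s m^2)];
    pressures in bar to match the permeance units used here."""
    return permeance(T, Q0, E_act) * (p0 * x * gamma - p_perm * y * phi)

# Illustrative water flux near the feed conditions of Section 4 (78 deg C feed,
# 10 mbar permeate); activity and vapour-pressure data are placeholders.
print(flux(T=351.15, Q0=0.046, E_act=12_600, p0=0.44, x=0.70,
           gamma=1.2, p_perm=0.01, y=0.95))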
Values for the reference permeance and the activation energy in this equation are usually obtained by experiments at different temperatures [7]. As the permeate stream evaporates when exiting the membrane (see Figure 1), the heat of vaporisation is drawn from the latent heat of the feed, leading to a temperature drop along the upstream side of the membrane and thus a reduction of the permeances and the driving forces. To include this effect, the energy balance of the feed side is considered as follows:
$\dot{n}_{i,TM}\,\Delta H_{vap}=\dot{n}_{Feed}H_{Feed}-\dot{n}_{Ret}H_{Ret}=\dot{n}_{Feed}\,c_{P,Feed}\,T_{Feed}-\dot{n}_{Ret}\,c_{P,Ret}\,T_{Ret}$   Eq. 3
In combination with the mass and component balances, the basic system of describing equations of the PV process can be established. Furthermore, correlations for pressure drop and additional mass transfer effects (e.g. temperature polarisation…) become necessary, if a rigorous model of the process is required. 3. Computational implementation of the model In this work, two different simulation models were created for the design of PV modules. A shortcut model, in which the above mentioned equations for the mass and energy transfer are computed simultaneously, is implemented in MS Excel, since this software is widespread in the industry and easy to use. The basic module parameters
(necessary membrane area, energy requirements) are calculated, leading to a robust and reliable module design. Secondly, the same basic equations are programmed in a rigorous ACM model. Additionally, the momentum balances, necessary for the calculation of the pressure drop, and secondary mass transfer effects such as concentration and temperature polarisation are included in the model. Thus, a more detailed design and the optimisation of the module are possible and will be applied after the basic design. 3.1. MS Excel™ and the SOLVER routine The simulation of a stand-alone PV process in MS Excel and Visual Basic was achieved by Han et al. [8]. Verhoef et al. [9] developed a VBA user interface connected to Aspen Plus, allowing the user to calculate hybrid processes. Klinkhammer et al. [10] presented an Excel tool in which the temperature and concentration profiles are estimated before the permeating fluxes are computed. In the present work, the simultaneous implementation of the temperature and the concentration profiles along the feed side is realised. To achieve the simulation of this process in MS Excel, the balance equations need to be rewritten into a discrete form. A section of the membrane module with the mass and heat fluxes and the local state variables is shown in Figure 2 (Figure 2: Discrete membrane element). In this model, the permeate stream is collected and withdrawn at the end of the module, both streams being in co-current flow. Other configurations (counter-current, perpendicular flow or ideal mixing) are also possible. The resulting system of equations is solved using the built-in SOLVER, which is embedded in a VBA macro applying the routine to five lines of the system at once. Thus, a higher number of discretisation cells becomes possible, increasing the numerical accuracy; a sketch of this cell-by-cell marching is given below.
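The following sketch mimics the discretised co-current balance that the SOLVER macro resolves; flux_fn stands for Eq. 1 evaluated at the local state, the property values are placeholders, and the permeate is simplistically taken as pure water.

def simulate_module(n_feed, x_feed, T_feed, area, n_cells, flux_fn,
                    dHvap=40e3, cp=90.0):
    """March cell by cell along the feed side (co-current configuration).

    Per cell: remove the local permeate flux, update the water balance and
    draw the heat of vaporisation from the sensible heat of the feed stream.
    """
    dA = area / n_cells
    n, x, T = n_feed, x_feed, T_feed
    for _ in range(n_cells):
        J = flux_fn(x, T)                 # local permeate flux [mol/(s m^2)]
        dn = J * dA                       # permeate drawn from this cell
        x = (n * x - dn) / (n - dn)       # water balance (permeate ~ pure water)
        T -= dn * dHvap / (n * cp)        # latent heat cools the retentate
        n -= dn
    return n, x, T                        # retentate leaving the module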
3.2. Aspen Custom Modeler (ACM) To validate the Excel shortcut model and to allow the simulation of hybrid processes in further research, a rigorous model is programmed in ACM. In addition to the basic balance equations, the flow velocities and pressure drops (Press.drop) as well as transmembrane heat and mass transfer effects (Transmem.) and concentration polarisation (Conc.Pol) are implemented as submodels in ACM. Moreover, the geometrical parameters (Geom.) are calculated for the different module types such as
cushion (C), frame-and-plate (FP) and tubular (T) modules. The scheme of the model setup is shown in Figure 3.

Figure 3: ACM model structure

Within this structure, it is possible to customize the program and to change or add further submodels. The resulting model can be exported and integrated into the model library of Aspen Plus and Aspen Dynamics. 4. Simulation with ACM and Excel In a first step, the Excel model and its results are compared with the rigorous simulation of an actual membrane process (dehydration of IPA with a tubular module [11]). The feed flow (0.176 mol/s) is a 78 °C mixture containing 30 %mol IPA. The material parameters of the membrane used are 12.6 and 12.5 kJ/kmol for the activation energy EAct, and 0.046 and 0.0012 mol/(s m² bar) for the reference permeance Qi,0, for water and IPA respectively. The membrane area is 0.7 m² and, for the sake of simplicity, the membrane is calculated as a single sheet. In a second step, the number of modules necessary for the dehydration of IPA from 70 to 98% is calculated in Excel and in ACM. It is assumed that the temperature in the modules should not drop below 320 K and that reheating to 350 K is necessary. Test simulations revealed that, for the programmed membrane models, the Excel SOLVER and ACM both provide good results for 10 discrete cells. In both models, ideal thermodynamic behavior is assumed and the temperature drop in the feed due to the evaporation of the permeate stream is considered. 5. Results and Discussion
5.1. Model comparison Table 1 shows the flow and composition of the retentate stream. The molar flow and the final composition calculated with the Excel shortcut model are in good agreement with the rigorous model in ACM. The temperature shows a discrepancy of +15%, which means that more permeate evaporates and thus the temperature drop is larger in ACM; as the permeate stream is bigger, it is obvious that the retentate stream in the ACM model is smaller. The reason is that the component properties in the Excel model are calculated with approximating relationships, and the influences of temperature, concentration polarisation and pressure drop are not considered.

Table 1: Comparison of the simulations

          Flow [mol/s]   IPA [%mol]   TRet [K]
Excel     0.171          71.9         344.7
Aspen     0.167          73.5         335.1
Diff %    +6.2           -2.1         +15.5
5.2. Module simulations In Figure 4, the retentate composition and the temperature profiles in the modules are plotted. The simulations with the Excel model show that a membrane area of 30 m² is needed in three modules with reheating. The rigorous model in ACM confirms the results of the comparison: the temperature drop in the modules is much higher, while the separation efficiency is enhanced, and a membrane area of 22.5 m² in four modules is needed.

Figure 4: Module design with Excel and ACM (temperature [K] and retentate water content [mol/mol] versus membrane area [m²])

This work shows that a fast and robust design of membrane modules and their configuration is possible using shortcut models in Excel. This basic design step is helpful to identify potential process solutions. Nevertheless, for an exact module design including fluid dynamic effects, the use of a rigorous model in ACM is necessary. Further investigations will aim at the optimisation of the process by comparing different membrane and module types.
References [1] T. Melin and R. Rautenbach, 2007, Membranverfahren – Grundlagen der Modul- und Anlagenauslegung [2] R.W. Baker, Membrane Technology and Applications, Second Edition, 2004 [3] A. Jonquières et al., Industrial state-of-the-art of pervaporation and vapour permeation in the western countries, Journal of Membrane Science 206 (2002) pp. 87-117 [4] B. Smitha et al., Separation of organic-organic mixtures by pervaporation – a review, Journal of Membrane Science 241 (2004) pp. 1-21 [5] H. Matuschewski and U. Schedler, MSE – modified membranes in organophilic pervaporation for aromatics/aliphatics separation, Desalination 224 (2008) pp. 124-131 [6] W.R. Berendsen et al., Pervaporative separation of ethanol from an alcohol – ester quaternary mixture, Journal of Membrane Science 280 (2006) pp. 684-692 [7] M.T. Del Pozo, Reduction of Energy Consumption in the Process Industry Using a Heat-Integrated Hybrid Distillation–Pervaporation Process, Ind. Eng. Chem. Res. 48 (2009) pp. 4484-4494 [8] B. Han et al., Computer simulation and optimization of pervaporation process, Desalination 145 (2002) pp. 187-192 [9] A. Verhoef et al., Simulation of a hybrid pervaporation – distillation process, Computers and Chemical Engineering 32 (2008) pp. 1135-1146 [10] B. Klinkhammer et al., Inorganic membrane module design: Modelling of fluid dynamics, Int. Conf. on Inorganic Membranes, ICIM6, Montpellier, France, 2000, p. 36 [11] M. Schleger et al., Module arrangement for solvent dehydration with silica membranes, Desalination 163 (2004) pp. 281-286
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Surrogate-based VSA Process Optimization for Post-Combustion CO2 Capture M. M. Faruque Hasan a, I. A. Karimi a,*, S. Farooq a, A. Rajendran b, M. Amanullah b
Department of Chemical & Biomolecular Engineering, National University of Singapore, 4 Engineering Drive 4, Singapore 117576 * E-mail: [email protected] b School of Chemical and Biomedical Engineering, Nanyang Technological University 62 Nanyang Drive, Singapore 637459
Abstract Post-combustion CO2 capture in existing power plants is essential to arrest the current rise in atmospheric CO2 and the consequent alarming trend of global warming. While absorption and pressure swing adsorption are well-known carbon capture technologies, vacuum swing adsorption (VSA) is a potential candidate. In this work, a comprehensive non-isothermal model is first developed and implemented in the multi-physics software COMSOL to simulate various modes of VSA operation. Our extensive parametric study suggests that even a simple basic VSA cycle can capture CO2 with high purity & recovery at comparable or lower energy penalty than published data. The rigor of the full transient VSA simulations to reach the cyclic steady state, however, makes fully rigorous VSA optimization intractable. To this end, we present a sequential optimization strategy based on response surface models that synergistically combines the COMSOL simulation model with Design and Analysis of Computer Experiments (DACE). Unlike most optimization studies, which either focus on maximizing CO2 purity/recovery or on minimizing energy penalty, we use the total-ownership-of-cost approach to rationally bring technology performance, technology economics, energy penalty and environmental impacts to a single basis ($/ton of CO2). The effectiveness of this approach for assessing carbon capture economics by combining costing with system analysis is also discussed. Keywords: Carbon Capture, VSA, Total Ownership of Cost, Simulation, Optimization
1. Introduction Fossil fuels meet about 85% of the global energy demand. Despite the aggressive efforts for clean energy, the projections for the next 10-20 years remain similar. A major portion of world's electricity production comes from coal-fired, oil-fired, and gas-fired power plants. It is estimated that fossil-fuel power plants emit nearly 40% of the CO2 released annually to the atmosphere. Post-combustion CO2 capture in existing power plants is, therefore, essential to arrest the current rise in atmospheric CO2 and the consequent alarming trend of global warming. CO2 capture and sequestration (CCS) can play a potential role in reducing industrial CO2 emissions while continuing the use of fossil fuel like coal. The power industry needs a robust, economical, and compact CO2 capture and concentration technology that can be readily retrofitted into the existing plants. While amine-based absorption is the most popular technology at present, the energy required to capture each ton of CO2
remains high. MEA based absorption consumes ~1028 kWh/ton of CO2 captured, which is about 88.5% of the electrical energy generated by the power plant. Such a high energy penalty is a major bottleneck in CCS using absorption. The total energy consumption is even higher if the compression of CO2 to the sequestration site is considered. It is possible to compute the thermodynamically minimum energy that must be spent to separate CO2 from power plant flue gas, a mixture of predominantly N2 (85% or more) with balance CO2. Assuming a completely reversible process, the minimum energy penalty (the percentage of energy generated by the power plant that must be spent as the minimum work to separate each ton of CO2) is about 3.5 ~ 4%. Recently, Agarwal et al. (2010) optimized a pressure swing adsorption (PSA) process which requires 465 kWh/ton of CO2 captured with 90% purity and 85% recovery. Clearly, PSA offers an energy penalty (about 40%) which is less than half of that offered by absorption. However, one reason among others for which even the optimized PSA uses ten times more energy than the theoretical minimum is that it requires excess energy to compress large volumes of flue gas. One recent advancement in adsorption-based CO2 capture is the use of vacuum swing adsorption (VSA). It is a cyclic process similar to PSA and each cycle includes two or more steps. However, unlike PSA, for which the operating pressure levels are between atmospheric and higher than atmospheric, the operating pressure levels for VSA are between low vacuum and atmospheric. One advantage of VSA is that it does not require flue gas compression during pressurization and adsorption. As presented and discussed later, even a simple 4-step VSA process can significantly reduce the energy consumption for each ton of CO2 captured, compared to that of absorption and PSA. To realize the full potential of VSA-based CO2 capture, it is imperative to optimize its performance, even for a fixed configuration. However, VSA optimization is not trivial and poses several challenges. First, modeling of a multi-component, adsorbent-packed distributed process results in a system of highly nonlinear algebraic and partial differential equations. The rigor of the model and the computational demand for the full transient VSA simulations to reach the cyclic steady state make fully rigorous optimization difficult and often intractable. Agarwal et al. (2009) used proper orthogonal decomposition (POD)-based reduced order models for PSA optimization. However, their model did not include heat effects. Since heat effects are critical for the CO2/N2 system, such a simplified model could result in suboptimal VSA synthesis. Second, depending on the column size, cycle configuration, step durations and pressure levels, VSA performance varies. Several indicators are used to measure the VSA performance, e.g., the resultant CO2 purity, recovery (% of CO2 recovered from feed), productivity (tonnes of CO2 captured/m3 of adsorbent/day) and/or the energy penalty incurred by the separation process. Current optimization of post-combustion carbon capture processes is mostly driven by these separate and often conflicting objectives/perspectives. For instance, unit level optimization often focuses on maximizing purity and recovery (a common target/standard in absorption is to recover CO2 with 90% purity and recovery). In contrast, process level consideration is mostly to minimize energy penalty.
Such piecemeal optimization leads to a suboptimal CCS chain. Moreover, a common trend has been to treat critical infrastructure economics as alien, ignoring capital and operating expenditures for separation, transportation and sequestration, although the trade-offs between the costs assigned to the various entities of a CCS chain affect the overall cost. While multi-objective optimization is a possibility, it only computes and views 2D Paretos for equally good pairs of objectives, rather than suggesting the ‘single best’ solution.
In this work, we first develop a comprehensive non-isothermal model for the VSA process. Then, we present a sequential optimization strategy based on response surface models with synergistic combination of COMSOL simulation model with Design and Analysis of Computer Experiments (DACE). Our objective is to minimize the total cost of CO2 capture in a VSA-based CCS chain, computed based on the total ownership of cost (TOC) approach. The strategy is applied to concentrate CO2 from a feed containing 10-15% CO2 in balance nitrogen, which is representative of coal-fired power plants. We begin with the development of the non-isothermal process model.
2. VSA Modeling and Simulation A comprehensive model (Table 1) is developed for the multi-component adsorption system in an adsorbent-packed column. It includes temperature and pressure/velocity effects and heat transfer resistance across the column wall. A linear driving force is assumed and the model is applicable to both single- and dual-site Langmuir type isotherms. The model has been successfully implemented in the multi-physics software COMSOL to simulate various modes of a 4-step VSA process. The steps are namely pressurization, adsorption, blowdown at intermediate vacuum, and evacuation at the lowest vacuum. 13X zeolite is used as the adsorbent.

Table 1. General model for non-isothermal adsorption process including frictional pressure drop.

Dimensionless variables: $\bar{P}=P/P_H$; $\bar{P}_L=P_L/P_H$; $\bar{P}_{int}=P_{int}/P_H$; $\theta=T/T_0$; $\theta_w=T_w/T_0$; $\theta_a=T_a/T_0$; $x_i=q_i/q_s$; $x_i^*=q_i^*/q_s$; $\bar{u}_z=u_z/u_0$; $Z=z/L$; $\tau=t\,u_0/L$

Linear driving force model: $\partial x_i/\partial\tau=\alpha_i\,(x_i^*-x_i)$

Langmuir isotherm: $x_i^*=\beta_i y_i\bar{P}\big/\big(1+\textstyle\sum_{i=1}^{N}\beta_i y_i\bar{P}\big)$

Gas-phase mole fraction: $\dfrac{\partial y_A}{\partial\tau}=\dfrac{1}{Pe}\left(\dfrac{\partial^2 y_A}{\partial Z^2}+\dfrac{1}{\bar{P}}\dfrac{\partial y_A}{\partial Z}\dfrac{\partial\bar{P}}{\partial Z}-\dfrac{1}{\theta}\dfrac{\partial y_A}{\partial Z}\dfrac{\partial\theta}{\partial Z}\right)-\bar{u}_z\dfrac{\partial y_A}{\partial Z}+\dfrac{\psi\theta}{\bar{P}}\left[(y_A-1)\dfrac{\partial x_A}{\partial\tau}+y_A\dfrac{\partial x_B}{\partial\tau}\right]$

Dimensionless pressure (overall mass balance): $\dfrac{\partial\bar{P}}{\partial\tau}=\dfrac{\bar{P}}{\theta}\dfrac{\partial\theta}{\partial\tau}-\theta\,\dfrac{\partial}{\partial Z}\!\left(\dfrac{\bar{P}\bar{u}_z}{\theta}\right)-\psi\theta\left(\dfrac{\partial x_A}{\partial\tau}+\dfrac{\partial x_B}{\partial\tau}\right)$

Velocity (frictional pressure drop): $\bar{u}_z=-\gamma\,\partial\bar{P}/\partial Z$

Dimensionless bed temperature: $\dfrac{\partial\theta}{\partial\tau}=\pi_4\dfrac{\partial^2\theta}{\partial Z^2}-\pi_5\bar{u}_z\dfrac{\partial\theta}{\partial Z}+\sum_{i=1}^{N}\left(\pi_6^i-\pi_7\theta\right)\dfrac{\partial x_i}{\partial\tau}-\pi_8\left(\theta-\theta_w\right)$

Dimensionless wall temperature: $\dfrac{\partial\theta_w}{\partial\tau}=\pi_1\dfrac{\partial^2\theta_w}{\partial Z^2}+\pi_2\left(\theta-\theta_w\right)-\pi_3\left(\theta_w-\theta_a\right)$

Conditions. Initial condition: at $\tau=0$, $0\le Z\le 1$: $y_A=y_{A0}$, $\bar{P}=\bar{P}_L$, $\theta=1$, $x_A=x_A^*|_{y_{A0}}$, $x_B=x_B^*|_{y_B=0}$. Boundary conditions: Pressurization — at $z=0$: $\partial y_A/\partial Z=Pe\,(y_{A0}-y_A)$, $\partial\theta/\partial Z=Pe_H(1-\theta)$, $\bar{P}=f(\tau):\bar{P}_L\to 1$; at $z=1$: $\partial y_A/\partial Z=0$, $\partial\theta/\partial Z=0$, $\theta_w=\theta_a$. Adsorption — at $z=0$: $\frac{1}{Pe}\,\partial y_A/\partial Z=\bar{u}_z(y_A-y_{A0})$, $\frac{1}{Pe_H}\,\partial\theta/\partial Z=\bar{u}_z(\theta-1)$, $\bar{P}=1$; at $z=1$: $\partial y_A/\partial Z=0$, $\partial\theta/\partial Z=0$, $\partial\bar{P}/\partial Z=0$, $\theta_w=\theta_a$. Blowdown/Evacuation — at both ends: $\partial y_A/\partial Z=0$, $\partial\theta/\partial Z=0$, $\theta_w=\theta_a$, with $\bar{P}=f(\tau):1\to\bar{P}_{int}$ (blowdown) and $\bar{P}=f(\tau):\bar{P}_{int}\to\bar{P}_L$ (evacuation).
0
Surrogate-based VSA Process Optimization for Post-Combustion CO2 405 Capture The VSA model is used to predict CO2 purity, recovery, productivity and energy consumption per ton of CO2 captured for a fixed column dimension and a given set of operating conditions such as duration of and pressure level in each step. Using a feed containing 15% CO2 in balance nitrogen, which is representative of coal-fired power plants, a parametric study is performed for the 4-step VSA process. Results (blue line in Figure 1) indicate that a significant reduction in energy penalty compared to the values reported in literature is achievable. 120
Kothandaraman (2010), MEA
Absorption
Knudsen (2009), MEA, 1 ton/h pilot plant
100
Kothandaraman (2010), K2CO3
% Energy Penalty
Biegler (2010), 90% Pure CO2 Webley (2008), 90~95% Pure CO2
80
VSA Simulation, 95~98% Pure CO2, Recovery increases with vacuum intensity Bhown & Freeman (2007), Reversible, 100% Pure CO2
60
PSA
40
20
VSA
0 0
10
20
30
40
50
60
70
80
90
100
% CO2 Recovery
Figure 1. Parametric results for a 4-step VSA process.
3. VSA Optimization The rigor of the simulation model described in the previous section and the computational demand for the full transient VSA simulations to reach the cyclic steady state make fully rigorous optimization of VSA process intractable at this time. Therefore, the development of faster alternative approaches is imperative. To this end, we have developed a sequential optimization strategy (Figure 2a) based on response surface models. In this approach, we synergistically combine the COMSOL simulation model with Design and Analysis of Computer Experiments (DACE) model. First, we design a set of experiments (simulations) over a range of design and operating variables (feed velocity (vo), cycle pressures (Pads, Pbd, Pvc) and durations (pr, tads, tbd, tvc)), using the Latin hypercube method. Then, for each experiment, we use the simulation model to get the cyclic steady state and obtain recovery, purity, productivity and energy consumption as response data. We fit response surfaces based on kriging metamodels to these simulation data, and identify a new set of design and operating parameters that give the best performance locally. We then construct new response surfaces centered at the currently best solution, and repeat the above procedure until no improvement in VSA performance is observed. Kriging-based surrogate/response surface models have been successfully used for process synthesis in recent times (Caballero & Grossmann, 2008; Henao & Maravelias, 2010). However, our development of surrogate models based on kriging for VSA is somewhat different. While most literature on surrogate-based VSA/PSA optimization uses one or more system performance indicators (such as purity, recovery, etc.,) as responses, we use CO2 flow rates of each of the 4 steps in VSA as a separate response and fit them using kriging (Figure 2b). These flow-based responses are then used to compute VSA purity, recovery and productivity. This results in an operationally realistic constrained optimization formulation with analytical details.
406
M.M.F. Hasan et al.
Plan ResponseRuns atCurrentDesign
Pressurization
Simulate PilotPlant (COMSOL)
SolveNonconvex Optimization (BARON/GAMS)
Internal species flows
Generate ResponseSurface viaRegression
Adsorption Blowdown Evacuation
(a)
Extended & Constrained Response Surface Mass balances
(b)
Figure 2. (a) Proposed sequential VSA optimization framework, (b) Surrogate models for constrained optimization with analytical details.
3.1. Total ownership of cost (TOC)-based objective Our objective is to minimize total cost of CO2 capture considering an entire CCS chain that uses VSA to capture CO2 from flue gas. A combined, systems level and holistic approach is taken into consideration for developing the objective function which rationally drags performance (purity, recovery, productivity, etc.), economics (capital investment, logistics costs, etc.), total energy penalty (kWh/ton) and environmental impacts (carbon trade) to a single basis ($/ton of CO2 captured).
4. Optimization results To demonstrate, we consider the design of a 4-step VSA process for a 500 MW coalbased power plant producing 3 mtpa (million tones per annum) of CO2. We perform the VSA optimization for 4 iterations. Results (Table 2) indicate that even a simple 4-step VSA process can capture CO2 with high purity & recovery, with capture cost as low as $30 per ton of CO2. Note that this capture cost is subject to the cost values chosen as basis, higher energy and infrastructure costs will increase the capture cost. Table 2. VSA optimization results. Iteration 1 2 3 4
vo
Pbd
Pvc
tpr
tads
tbd
(s) 1.00 1.00 0.82 0.80
(atm) 0.100 0.100 0.091 0.050
(atm) 0.010 0.050 0.030 0.013
(s) 10.0 19.5 10.0 30.3
(s) 20.0 21.8 10.0 18.4
(s) 20.0 18.3 13.1 16.8
tvc
Energy
Economics Environment Capture Cost
(s) ($/ton CO2) ($/ton CO2 ) 50.0 28.28 5.72 20.0 26.06 6.47 23.4 22.02 5.30 29.5 16.56 6.14
($/ton CO2)
($/ton CO2 )
1.09 0.93 4.85 8.07
35.08 33.46 32.17 30.77
5. Conclusions
A sequential strategy based on surrogate models is proposed for VSA optimization. The TOC-based optimization approach for carbon capture leads to process-level design with systems-level impact under consideration, and allows comparing the relative cost contributions of energy, economic and environmental factors while designing a capture process. The proposed framework can be extended to general techno-economic assessment by taking carbon tax and energy penalty into consideration.
References
A. Agarwal, L.T. Biegler, S.E. Zitney, 2009. Simulation and optimization of pressure swing adsorption systems using reduced-order modeling. Ind. Eng. Chem. Res. 48 (5), 2327.
A. Agarwal, L.T. Biegler, S.E. Zitney, 2010. A superstructure-based optimal synthesis of PSA cycles for post-combustion CO2 capture. AIChE J. 56 (7), 1813.
J.A. Caballero, I.E. Grossmann, 2008. An algorithm for the use of surrogate models in modular flowsheet optimization. AIChE J. 54, 2633.
C.A. Henao, C.T. Maravelias, 2010. Surrogate-based process synthesis. In Proceedings: S. Pierucci and G. Buzzi Ferraris (Editors), ESCAPE20.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Design of flexible process flow sheets with a large number of uncertain parameters
Mihael Kasaš, Zdravko Kravanja, Zorka Novak Pintarič
University of Maribor, Faculty of Chemistry and Chemical Engineering, Smetanova 17, SI-2000 Maribor, Slovenia
Abstract
This work presents a solution strategy for the design and synthesis of flexible process flow sheets with a large number of uncertain parameters, i.e. 50-100. The strategy relies on the standard two-stage stochastic formulation for a fixed degree of flexibility, and combines the following solution techniques: a) prescreening analyses for the reduction of the uncertain parameters used in the approximate stochastic optimization, b) determination of a reduced set of critical points for feasibility, c) a decomposition technique for individual stochastic suboptimizations of the uncertain parameters and determination of the basic point for the approximate stochastic optimization, and d) approximate stochastic optimization for flexible design. These methods are merged into a strategy for process flow sheet optimization with a large number of uncertain parameters, which is illustrated with a bioethanol case study.
Keywords: uncertainty, approximate stochastic optimization, MINLP, bioethanol.
1. Introduction
Every optimal engineering design obtained by means of mathematical programming is optimal only for the specified values of the input data used during the optimization. In real life, however, the majority of input data changes frequently, e.g. model data (transfer, conversion and efficiency coefficients), process data (temperatures, pressures), and external data (demand, prices). These fluctuations can turn optimal solutions into suboptimal or even infeasible ones. Considering uncertainty in engineering problems is thus of paramount importance. The metrics of process flexibility were developed in the eighties through the flexibility test and index (Halemane and Grossmann, 1983; Swaney and Grossmann, 1985). Various stochastic algorithms have been used since then for process design and synthesis under uncertainty (see e.g. Acevedo and Pistikopoulos, 1998). As the solution of stochastic problems is very difficult, a lot of effort has been made to solve such problems by means of approximations and simplifications, e.g. the aggregation function (Raspanti et al., 2000), simplicial approximation (Goyal and Ierapetritou, 2007), deterministic approximations (Novak Pintarič and Kravanja, 2008), and stochastic approximations (Novak Pintarič and Kravanja, 2004; Karuppiah et al., 2010). Liu et al. (2010) presented a design of a polygeneration system under uncertainty. Despite the great effort put into the area, real-
world chemical processes have not been successfully solved to date. The problem is even more difficult in biochemical processes, which are subject to even higher uncertainty, while the reliability of the models and input data is much lower. In this paper, an approach is presented for the mixed-integer nonlinear programming (MINLP) design and synthesis of large and complex (bio)chemical process flow sheets. A bioethanol case study with around 70 uncertain parameters is presented.
2. An approach to large-scale approximate stochastic optimization
A standard two-stage stochastic formulation of the flow sheet model is applied in this work. Equipment sizes are the first-stage variables, and operating conditions the second-stage ones. First-stage variables appear in the investment term of the economic objective function, while the second-stage variables appear in the continuous cash flow terms. The deterministic equivalent of such a model is derived through discretization of the uncertain parameters by, e.g., Gaussian integration or randomly generated points in sampling procedures. These methods cannot be applied directly to large-scale problems because of the enormous increase in the size of the discretized models. Therefore, the main idea is to extract as much information as possible from the results of easily solvable deterministic problems, obtained by individual variations of the uncertain parameters during the prescreening analyses.
2.1. Prescreening analyses
The main goal of the prescreening analyses is to reduce the number of uncertain parameters for the approximate stochastic optimization. Prescreening methods involve grouping of parameters, and analyses of their sensitivity, monotonicity and symmetry. Several uncertain parameters have similar characteristics and can be treated as one single parameter, e.g. heat transfer coefficients, efficiencies, split fractions etc. The further reduction process is divided into two parts: determination of those uncertain parameters that influence the first-stage (design) variables, and of those that influence the objective function. Sensitivity analyses are performed by solving the original problem at several different values of each uncertain parameter within predefined intervals. If an influence on the design variables is observed, the parameter is considered for critical points determination. If the influence on the design variables is monotonic, the parameter is fixed at its extreme critical value. If an influence on the objective function is observed, the uncertain parameter is considered in the approximate stochastic optimization; otherwise it is fixed at its nominal value. The latter also holds for those parameters that show symmetrical deviations of the objective function when changing in the negative and positive directions from the nominal point.
2.2. Determination of critical points
Critical points are the worst-case combinations of uncertain parameters (scenarios). By including these points in the deterministic-equivalent problem, the feasibility of the design is guaranteed in a robust way for the predetermined variations of the uncertain parameters. These scenarios could be identified by enumeration of vertices, or by one- or two-level procedures (Novak Pintarič and Kravanja, 2008). In this work, however, a procedure more suitable for large problems is applied that uses the results of the prescreening analyses. These results identify those parameters with a non-monotonic influence on the design variables. These parameters are then used to generate the minimum set of critical points
by using a method based on the set covering procedure. The procedure is repeated for each uncertain parameter by optimizing easily solvable deterministic models. The maximum number of critical points obtained is equal to the number of first-stage design variables in the flow sheet, but in our experience it is often considerably lower.
2.3. Determination of the basic point
The central basic point represents the single scenario at which the expected objective value is approximated fairly well (Novak Pintarič and Kravanja, 2004). This point measures the deviation of the expected value from the nominal value. It is obtained by decomposing a multi-dimensional uncertainty into one-dimensional suboptimization problems. The decomposed problem is solved for each uncertain parameter at 5 Gaussian quadrature points and at the critical points, while the other parameters are fixed at their nominal values. The conditional expected objective function of the decomposed problem is calculated. The basic point at the conditional expected value is obtained from the regression curve fitted through the objective functions of the 5 scenarios. The procedure is repeated for all parameters with an asymmetric influence on the objective function.
2.4. Solution strategy for the approximate stochastic optimization
The above-mentioned techniques are merged into a strategy for large-scale flexible process design and synthesis. At the first step, a deterministic MINLP synthesis is performed at the nominal values of the uncertain parameters. The result of this step is the optimal flow sheet topology, which is most likely inflexible even for small deviations of the uncertain parameters. Nevertheless, the maximum feasible deviations of the uncertain parameters can be studied with this model. At the second step, a deterministic-equivalent MINLP synthesis is performed. The prescreening analyses are done at the major iterations of the MINLP algorithm in order to determine the critical vertices for each alternative flow sheet structure. The problem is solved at the nominal point, which approximates the expected objective criterion, and at the critical vertices. The result of this step is a flexible flow sheet topology with a rough deterministic approximation of the expected objective function. The third step of the procedure starts with the optimal flexible structure obtained at the second step. The basic point is determined for those uncertain parameters that showed asymmetric effects on the objective function during the prescreening analyses. The deterministic-equivalent problem is then solved as an NLP at the basic point and the critical vertices (P1). The result of this optimization is a flexible flow sheet structure with an approximate stochastic objective value and a flexibility index equal to 1.
$$\mathrm{E}Z \approx \max_{x_{BP},\,d}\; Z(x_{BP}, y^{*}, d, \theta_{BP}) \tag{P1}$$
$$\text{s.t.}\quad h_i(x_i, y^{*}, d, \theta_i) = 0, \qquad g_i(x_i, y^{*}, d, \theta_i) \le 0,$$
$$d \ge g_{d,i}(x_i, \theta_i), \qquad d \ge d^{LO},$$
$$x \in X,\quad d \in D,\quad y \in \{0,1\}, \qquad \forall\, i \in BP \cup CV$$
where y* represents the optimal process topology determined in the second step, EZ is the expected objective value, BP the basic point, and CV the set of critical vertices.
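The basic-point construction of Section 2.3 can be sketched numerically: solve the decomposed one-parameter problem at 5 Gaussian quadrature points, fit a regression curve through the optimal objectives, and take the parameter value whose objective equals the conditional expected value. The quadratic profile below is a hypothetical stand-in for the flow-sheet suboptimizations:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

nodes, weights = hermegauss(5)            # 5-point quadrature for theta ~ N(mu, sigma^2)
mu, sigma = 1.0, 0.1
theta = mu + sigma * nodes
w = weights / weights.sum()               # normalize to probability weights

def z_opt(th):
    """Hypothetical optimal objective of the 1-D decomposed suboptimization."""
    return 100.0 - 40.0 * (th - 0.9) ** 2

EZ = np.dot(w, z_opt(theta))              # conditional expected objective value
coef = np.polyfit(theta, z_opt(theta), 2) # regression curve through the 5 scenarios
roots = np.roots(coef - np.array([0.0, 0.0, EZ]))   # solve z(theta) = EZ
candidates = [r.real for r in roots
              if abs(r.imag) < 1e-9 and theta.min() <= r.real <= theta.max()]
basic_point = min(candidates, key=lambda r: abs(r - mu))  # closest to nominal
print(EZ, basic_point)
```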
3. Bioethanol case study
The case study process produces 180 000 t/yr of bioethanol. Please note that the process flow sheet is omitted due to lack of space. Corn grains are broken down by physical treatment in a grinding unit, and by thermal treatment with steam. In the biological stages (liquefaction and saccharification), the starch is converted into sugars that are further fermented into ethanol. The diluted ethanol solution is concentrated in a beer column, followed by further purification in molecular sieves and/or rectification. There are two discrete decisions in this case study: solid-liquid separation before or after the beer column. A model by Čuček and Kravanja (2010) was applied for the deterministic synthesis, while the objective function was modified to the net present value (NPV). The model contains approx. 6400 variables and 6600 constraints. In the optimal flow sheet, solid-liquid separation takes place after the beer column. The NPV amounts to 367.9 M$ and the total capital cost to 89.1 M$. 71 uncertain parameters were defined, e.g. product demand, feed compositions, prices of feedstocks, products and utilities, conversion coefficients, heat transfer coefficients etc. It was established that the design obtained is not able to tolerate deviations of some input data from their nominal values.
3.1. Reduction of uncertain parameters in the prescreening procedure
In this case study, the total number of vertices would be 2^71. A first reduction was achieved by combining the split fraction parameters of the various components into one single parameter. The total number was thus reduced from 71 to 56. A distinctive non-monotonic influence on the design variables was determined for 7 parameters, and a monotonic influence for 16 parameters, while the influence of the others, especially the external uncertain parameters, was negligible. The 7 influencing parameters were combined into 3 critical vertices, while the monotonic parameters were fixed at their critical bounds. It was determined that 28 parameters affect the objective function asymmetrically. The basic point was thus determined considering these parameters.
3.2. Flexible synthesis
At the second step, MINLP synthesis of the significantly reduced problem was performed at the nominal point and the critical vertices. The same optimal topology was obtained as before. The NPV of the optimal flexible solution is 357.6 M$, which is lower than the NPV of the inflexible solution (367.9 M$). The total capital cost amounts to 96.9 M$ and is higher than in the case of the inflexible design (89.1 M$). The size of the problem increased to around 25 700 variables and 25 700 constraints. At the third step, the basic point was determined for the 28 uncertain parameters identified during the prescreening analysis. The optimal structure obtained at the second step was considered. The decomposed optimization problem for each uncertain parameter has around 50 000 continuous variables and about the same number of constraints. Finally, the deterministic-equivalent problem was solved at the basic point and the critical vertices. The NPV of the approximate solution is slightly different from that of the previous step, i.e. 356.8 M$, and the investment cost is 97.2 M$. This indicates that
a somewhat optimistic result is obtained at the nominal point. Although this result cannot be verified by other methods, it could be expected that the design is flexible for the specified deviations of the uncertain parameters, assuming the correct set of critical points is included in the approximate stochastic optimization.
4. Conclusion
An approach to the design and synthesis of (bio)chemical processes with a large number of uncertain parameters was presented. The main features of the approach are the reduction of the uncertain parameters and the approximate stochastic optimization. The reduction is achieved by means of prescreening analyses, which evaluate the influence of the uncertain parameters on the first-stage design variables and on the objective function. Parameters with a distinctive non-monotonic influence on the design variables are used for the determination of the critical points, while parameters with a distinctive asymmetric influence on the objective function are used for the determination of the basic point. The approximate stochastic optimization is performed at the basic point and the critical points. The approach was illustrated by a large-scale bioethanol process with around 70 uncertain parameters.
5. References
J. Acevedo, E.N. Pistikopoulos, 1998, Stochastic optimization based algorithms for process synthesis under uncertainty, Computers & Chemical Engineering, 22, 647-671.
L. Čuček, Z. Kravanja, 2010, Sustainable LCA-based MINLP synthesis of bioethanol processes, Computer Aided Chemical Engineering, 27, 1889-1894.
V. Goyal, M.G. Ierapetritou, 2007, Stochastic MINLP optimization using simplicial approximation, Computers & Chemical Engineering, 31, 1081-1087.
K.P. Halemane, I.E. Grossmann, 1983, Optimal process design under uncertainty, AIChE Journal, 29, 425-433.
R. Karuppiah, M. Martin, I.E. Grossmann, 2010, A simple heuristic for reducing the number of scenarios in two-stage stochastic programming, Computers & Chemical Engineering, 34, 1246-1255.
P. Liu, E.N. Pistikopoulos, Z. Li, 2010, Decomposition based stochastic programming approach for polygeneration energy systems design under uncertainty, Industrial and Engineering Chemistry Research, 49, 3295-3305.
Z. Novak Pintarič, Z. Kravanja, 2004, A strategy for MINLP synthesis of flexible and operable processes, Computers & Chemical Engineering, 28, 1105-1119.
Z. Novak Pintarič, Z. Kravanja, 2008, Identification of critical points for the design and synthesis of flexible processes, Computers & Chemical Engineering, 32, 1603-1624.
C.G. Raspanti, J.A. Bandoni, L.T. Biegler, 2000, New strategies for flexibility analysis and design under uncertainty, Computers & Chemical Engineering, 24, 2193-2209.
R.E. Swaney, I.E. Grossmann, 1985, An index for operational flexibility in chemical process design. 1. Formulation and theory, AIChE Journal, 31, 621-630.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A Design methodology for Internally Heat-Integrated Distillation Columns (IHIDiC) with side condensers and side reboilers (SCSR)
Sankari Maddu, Ranjan K. Malik
Department of Chemical Engineering, Indian Institute of Technology – Bombay, Powai, Mumbai, Maharashtra 400076, India
Abstract
Distillation is the most widely used but an energy-intensive separation technology, consuming a huge amount of thermal energy as a separating agent. Because of the low thermal efficiencies of distillation columns, various energy integration methods have been explored in the past in order to reduce the energy requirements for a given separation task. Among them, Internally Heat-Integrated Distillation Columns (IHIDiC), in which the rectification zone of the column is at a higher pressure than the stripping zone, are reported to be the best alternative to conventional columns, giving up to 50% energy savings when the rectifying and stripping zones are configured in an annular fashion. However, the structural complexity of IHIDiC draws one's attention to the search for an alternative that involves minimum design effort. In this work, a methodology has been proposed for an IHIDiC with side condensers and side reboilers (SCSR). This configuration introduces minimum structural complexity, as the integrated side condensers and side reboilers are external to the columns. The side reboilers of the stripping column are driven by the side condensers of the rectifying column. The problem, however, is to determine the number of such units, their heat loads, and their locations that lead to an optimal design. Thus, a design methodology based on the well-known Column Grand Composite Curves (CGCC) has been proposed algorithmically and tested on two separation problems of close-boiling mixtures, namely, Propane-Propylene and Styrene-Ethyl Benzene splitters. A case study of a multi-component mixture has also been analysed. Economic evaluation studies show that the optimized configuration of the IHIDiC with SCSR for the close-boiling mixtures resulted in attractive payback periods and significant savings in operating expenses. The energy savings in the case study of the multi-component system, however, are less attractive due to the much higher capital investment.
Keywords: Distillation, IHIDiC with SCSR, Close-boiling mixtures, CGCC, Optimization
1. Introduction
An underlying principle of energy reduction in a distillation column is to decrease the vapor flow entering the condenser and to decrease the liquid flow entering the reboiler. This is done by condensing the top-going vapor along the rectifying zone, and by evaporating the down-coming liquid along the stripping zone. Since 1961, several authors have proposed different configurations for heat integration in the column itself based on the heat pump principle. Many years later, Mah and co-workers (1977) introduced the original concept of HIDiC under the name "Secondary Reflux and Vaporization (SRV)". Subsequently, several patents were filed by Haselden, Seader, Glenchur and Govind (Olujic et al., 2003) related to different structural configurations
of the SRV. More recently, a group of Japanese researchers, namely Nakaiwa et al. (2001) and Naito et al. (2000), have made theoretical evaluations and pilot plant tests of IHIDiC. In 2005, Gadalla et al. proposed a pinch-technology-based design methodology which establishes design feasibility by using stage-temperature profiles. As the proposed concentric-tube arrangements of the IHIDiC are more complex in their structure, extra attention is needed on aspects like dynamic modeling, optimal design and control of the system. It is thus meaningful to explore alternative designs with minimum structural complexity but high energy efficiency.
2. Proposed methodology for IHIDiC with SCSR
In this work, a methodology has been proposed for an IHIDiC with SCSR using Column Grand Composite Curves (CGCC), which comprise the information on heat availability and deficiency in a column. The suggested methodology gives the optimal design parameters, such as the pressure ratio between the columns (compression ratio), the number of external heat exchangers, their stage locations, and the heat duty of each heat exchanger, by minimizing the total annual cost.
2.1 Algorithmic steps
Step 1: CGCC profiles of the conventional column (e.g., generated by Aspen Plus) give information regarding the scope for side condensing or side reboiling, feed conditioning, and reflux modifications (Dhole and Linnhoff, 1993). These composite curves indicate the condensing loads on each stage of the rectifying section and the reboiling loads on each stage of the stripping section (Fig. 1). To utilize the available heat from the rectifying zone, a sufficient temperature driving force has to be maintained by increasing the pressure ratio between the two zones using a compressor and throttle valve arrangement (see Fig. 5).
Fig. 1: CGCC for conventional column
Step 2: The pressure ratio between the separated rectifying and stripping columns widens the gap between the profiles, which indicates the available thermal gradient. A significant variation in the reboiler and condenser heat duties can also be observed, due to the change in the relative volatility of the system with pressure. For most systems, the profiles obtained after increasing the pressure ratio are similar to those shown in Fig. 2. Here, it is feasible that the QR amount of heat (overlapped area) can be withdrawn from the rectifying column and supplied to the stripping column. The residual amount of energy, i.e. (QC - QR), left in the rectifying column can be withdrawn by employing a trim condenser. For systems where the value of QR is greater than QC even after maintaining a sufficient pressure ratio, only the QC amount of heat can be transferred between the columns, and the remaining required amount of heat (QR - QC) can be supplied by using a trim reboiler.
Fig. 2: CGCC after raising pressure ratio
Step 3: A selected number of external heat exchangers (here n = 3, for the purpose of explaining the methodology) is considered between the two columns in an IHIDiC with SCSR system for carrying out the heat transfer. The T-H and Stage-H profiles are used to locate the positions of the heat exchangers on each column and fix their heat duties. In the upper part of Fig. 3, the locations for the heat exchangers on the T-H profiles are marked where the quantum of heat energy matches the value QR/n.
Step 4: The available temperature driving force (ΔT) for each heat exchanger is measured as the vertical distance between their positions (indicated by inclined lines), as described in the upper part of Fig. 3. In usual industrial practice, operation of a heat exchanger needs a minimum driving force (say ΔT ≈ 10 K), which can be maintained by varying the pressure ratio between the columns. Changes in the column pressures generate a new energy distribution, and thus the procedure needs to be repeated from Step 3 to find the new locations of the heat exchangers and their heat duties. The exact stage locations for installing the heat exchangers are found using the Stage-H profiles, as depicted in the lower part of Fig. 3. Here, the stage numbers are the positions where the energy availability and energy deficits exactly match the initially assigned heat exchanger duties; they are located by the dotted vertical lines extending from the upper part of the figure and the horizontal lines in the lower part. In the diagram, the rectifying stages are located on the right side and the stripping stages on the left side of the y-axis to make the diagram more readable.
Fig. 3: ΔT values and stage locations for heat exchangers
Step 5: Having maintained a sufficient driving force, heat can be supplied to the aforementioned locations of the stripping column by withdrawing heat from the corresponding locations of the rectifying column. The Heat Stream option (e.g., in Aspen Plus) can function like a heat exchanger, where a positive heat duty to the stripping column acts like a side reboiler and a negative heat duty to the rectifying column like a side condenser. The simulations are run after specifying the heat data on each stream to observe the energy distribution. The variation in the heat duty at the reboiler (the nth heat exchanger) and at the trim condenser can be observed in the simulator.
Fig. 4: Relocating the positions for heat exchangers on T-H diagram
In this proposed new configuration, the external reboiler of the stripping column (smaller in size compared to the conventional column reboiler) acts like one of the side condensers for the rectifying column, as shown in Fig. 5. Energy can be distributed uniformly over the exchangers by changing the heat duty of the streams. In case the heat duty at the reboiler (QHXn) does not match the given heat duty
value at the intermediate exchangers (QHXm), one can change the heat stream duty as described below:
If QHXn > QHXm: QHXm(new) = QHXm(old) + (QHXn - QHXm(old))/n
If QHXn < QHXm: QHXm(new) = QHXm(old) - (QHXm(old) - QHXn)/n
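A minimal sketch of this duty-balancing rule (note: the second case above was truncated in the source and is reconstructed here as the mirrored update, which is the natural reading; the simulator is mocked by a simple heat balance with illustrative numbers):

```python
def balance_duties(q_hx, reboiler_duty, n, tol=1e-6, max_iter=50):
    """Iterate the Step 5 update until the reboiler (n-th exchanger) duty
    matches the duty assigned to the intermediate exchangers."""
    for _ in range(max_iter):
        q_n = reboiler_duty(q_hx)
        if abs(q_n - q_hx) < tol:
            break
        q_hx = q_hx + (q_n - q_hx) / n   # covers both QHXn > QHXm and QHXn < QHXm
    return q_hx

# mock simulator: total transferable heat QR is fixed, the reboiler takes
# whatever the (n - 1) intermediate exchangers leave over (values illustrative)
QR, n = 300.0, 3
print(balance_duties(80.0, lambda q: QR - (n - 1) * q, n))   # converges to QR/n = 100
```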
3. Case studies for testing the methodology
The proposed methodology is applied to the separation of two close-boiling mixtures and of a multicomponent mixture. The optimized economic analysis gives the required compression ratio, the number of external heat exchangers, their stage locations, and their heat loads.
3.1. Separation of Propane-Propylene and Styrene-Ethyl benzene mixtures
Separation of the chosen close-boiling mixtures is very difficult due to the low relative volatility, and needs very high reflux ratios to produce polymer-grade purity of the products. Based on the methodology, the IHIDiC with SCSR system is configured for separating these mixtures up to 99.5% purity and compared with the conventional column. The required feed conditions and the product purities are given in Table 1. Economic evaluations have been performed by using the cost correlations available in the literature for finding the total capital investment, whereas the operating expenses are taken as the sum of the utility costs (low-pressure steam, mid-pressure steam, high-pressure steam, cooling water, and electricity) required for a year of 300 working days. The total annual cost is calculated as the sum of the operating cost and the annualized capital cost based on a 10-year period of plant life (Douglas, 1988).
3.2 Separation of multicomponent mixture
This example problem was discussed by Dhole and Linnhoff (1993) while demonstrating the column targeting principles. The feed and column details, and the product specifications, are taken from this paper for finding an optimized configuration.
4. Conclusions
In this work, a design approach for IHIDiC with SCSR has been proposed to systematically determine the optimized parameters by minimizing the total annual cost. The results for the two case studies with close-boiling mixtures show that, with proper locations, only two external heat exchangers can save substantial operating expenses with attractive payback periods. However, for the multicomponent mixture separation, the optimal number of side exchangers is six and the payback period is not very attractive due to the high capital investment.
Table 1. Optimized configurations of CDC and IHIDiC with SCSR for the case studies
References
Dhole, V.R. and Linnhoff, B., 1993, "Distillation column targets", Comp. Chem. Engg. 17, 549-560.
Douglas, J.M., 1988, "Conceptual design of chemical processes", McGraw-Hill, New York, pp. 568-577.
Gadalla, M., Olujic, Z., Sun, L., de Rijke, A., Jansens, P.J., 2005, "Pinch analysis based approach to conceptual design of internally heat-integrated distillation columns", Chem. Eng. Res. Des. 83 (A8), 987-993.
Mah, R.S.H., Nicholas, J.J. and Wodnik, R.B., 1977, "Distillation with secondary reflux and vaporization: a comparative evaluation", AIChE J. 23, 651-657.
Naito, K., Nakaiwa, M., Huang, K., Endo, A., Aso, K., Nakanishi, T., 2000, "Operation of a bench-scale ideal heat integrated distillation column: an experimental study", Comp. Chem. Engg. 24, 495-499.
Nakaiwa, M., Huang, K., Naito, K., Endo, A., Akiya, T., Nakane, T., 2001, "Parameter analysis and optimization of ideal heat-integrated distillation columns", Comp. Chem. Engg. 25, 737-744.
Olujic, Z., Fakhri, F., de Rijke, A., de Graauw, J., and Jansens, P.J., 2003, "Internal heat integration - the key to an energy-conserving distillation column", J. Chem. Technol. Biotechnol. 78, 241-248.
Aspen Plus User Manual, Aspen Plus version 2004.1, Aspen Technologies Inc.: Cambridge, MA, 2004.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Development of a synthesis tool for Gas-To-Liquid complexes
Jan van Schijndel,a Nort Thijssen,a Govert Baak,a Abhijeet Avhale,a Jerome Ellepola,a Johan Grievinkb
a Shell Global Solutions International BV, P.O. Box 38000, Amsterdam, 1030 BN, The Netherlands
b Delft University of Technology, Julianalaan 136, Delft, 2628 BL, The Netherlands
Abstract
Optimal synthesis of a Gas-To-Liquids (GTL) complex is complicated due to the many degrees of freedom in a highly constrained design space. There are alternative, competing syngas manufacturing technologies on offer and different types of Fischer-Tropsch reactors, with numerous connectivity options and a range of operational conditions. On the other hand, equipment, operational and knowledge constraints confine the design space. In addition, economic performance, such as NPV, needs to be aligned with carbon and energy efficiencies. To support GTL designs, a computational synthesis tool is under continuous development and extension. The problem formulation, our approach to model development and validation, and a sample result are presented.
Keywords: Syngas manufacturing, Fischer-Tropsch, process synthesis, nonlinear optimization.
1. Introduction
Liquid hydrocarbon energy carriers, such as gasoline and diesel, are very convenient transportation fuels because of their high energy content per unit mass and the presence of worldwide distribution systems. It is to be expected that they will continue to play a major role in fuelling transportation in the 21st century. Crude oil is expected to remain the major source of liquid transportation fuels. Shell is, however, developing a number of alternative routes to liquid fuels, from natural gas (G), coal (C) or biomass (B). The vision is that in future manufacturing complexes, gas, coal or biomass will be used as feedstock for syngas manufacturing. Long-chain hydrocarbons ('heavy paraffins') can be produced from syngas by means of Fischer-Tropsch synthesis. These long chains must be hydrocracked to the proper length for the selected target products. Any imbalance in the carbon-to-hydrogen ratios between feed and liquid products can be restored by hydrogen supply, or utilized to generate additional power and utilities for use within the complex and for export. Manufacturing complexes for the cogeneration of liquid hydrocarbons, hydrogen and power are called XTL complexes {X = G, C, B}. The initial focus is on generalising and improving Gas-To-Liquid (GTL) designs. Shell [1] is operating a GTL plant in Malaysia (1993, 14,700 barrels per day) and starting up the large PEARL GTL complex in Qatar (2011, 260,000 barrels of conventional and synthetic liquid product per day). The key conversion technologies for GTL are described by Schrauwen [2]. The financial stakes in a GTL complex are very high: depending on scale and complexity, the initial investment in a world-class complex can reach ten billion US$ or
more. E.g., the PEARL project in Qatar, comprising offshore gas production facilities, an onshore field gas processing plant and a GTL plant, is announced to require an investment of 18-19 bln US$. When designing such a complex, many interdependent value drivers need to be optimised. High capital productivity must be matched with excellent energy and carbon efficiencies. Such a design task becomes intricate as many combinatorial options arise, e.g., choices of feeds and products, different syngas manufacturing technologies, alternative Fischer-Tropsch synthesis & reactor technologies, superstructures for connectivity of units, energy integration and interactions with the utility system. When the architecture of the complex is given, process simulation tools such as PRO/II™ [3] can be used to simulate mass and energy balances and to assess the technical performance. However, it is believed that key opportunities for designing more efficient complexes arise from exploring different flow sheet structures and their impact on the capital intensity. This paper reports on the development of a synthesis tool for GTL plants. It should enable synthesizing alternative flow sheet structures, while optimizing, under various operational scenarios, the use of monetary and physical resources, like carbon, hydrogen, oxygen, energy and up-time. In view of the many degrees of freedom and strong interactions in the process network, a model-based optimization approach (MINLP) is chosen, using AIMMS® [4] as a computational vehicle. Similar synthesis endeavours have been reported in the area of water treatment by Karuppiah & Grossmann [5] and polygeneration energy systems by Liu et al. [6], while a recent simulation and economic analysis of GTL processes is presented by Bao et al. [7]. Setting up this GTL synthesis tool is challenging in the following ways: creating a flexible (super)structure, doing systemic modelling with model reduction, proper model scaling and initialisation, and analysing the robustness and significance of the optimization results in view of the underlying model uncertainty.
2. Structure of a Gas-To-Liquid complex
2.1. Diagrams of a GTL complex
Figure 1 shows the main reactions to generate syngas and convert syngas into liquid hydrocarbons. Partial oxidation (POx) and steam reforming of methane (SMR) result in different hydrogen to carbon monoxide ratios. Syngas from each of these sources can be mixed for an optimal feed to the Fischer-Tropsch (FT) synthesis reactions.
Syngas manufacturing main reactions:
(a) Partial oxidation:
CH4 + 1/2 O2 => CO + 2 H2 (ΔH°R = -71.8 kJ/mol)
CH4 + 2 O2 => CO2 + 2 H2O (ΔH°R = -803 kJ/mol)
CO + H2O => CO2 + H2 (ΔH°R = -41.0 kJ/mol)
(b) Steam methane reforming:
CH4 + H2O => CO + 3 H2 (ΔH°R = +206 kJ/mol)
CO + H2O => CO2 + H2 (ΔH°R = -41.0 kJ/mol)
Fischer-Tropsch synthesis reaction:
CnH2n + CO + 2 H2 => Cn+1H2n+2 + H2O (ΔH°R = -170 kJ/mol)
Off-gas combustion for energy generation:
CO + 2 H2 + 3/2 O2 => CO2 + 2 H2O (ΔH°R = -732 kJ/mol)
Figure 1: Main conversion steps in a GTL process. (Stream labels in the figure: oxygen, natural gas, water, synthesis gas, liquid product, off-gas, utilities.)
The stoichiometry of the FT synthesis reaction suggests a H2:CO ratio of 2:1, which is to be attained at the reaction sites in the catalyst. Because of the relatively high diffusivity of hydrogen as compared with CO, the preferred H2:CO ratio in the bulk of the gas phase is lower than two. Due to FT catalyst activity constraints, the syngas can only be partially converted, inducing some recycle of unconverted syngas (with a low H2:CO ratio) over the FT reactors. In order to get the optimal combined syngas feed, this recycle syngas is mixed with fresh syngas from the syngas manufacturing section, which has a H2:CO ratio exceeding two. A block diagram of a GTL process is shown in Figure 2, featuring the syngas supply and recycle structure. The syngas recycle is introduced in order to achieve high carbon efficiency, though it causes strong interactions between the syngas manufacturing and synthesis sections. Furthermore, the heat sources and sinks in the conversion units create a tight coupling with the utility system.
[Figure 2 content: natural gas and water feed a Steam Methane Reformer and a SynGas Partial Oxidation & Treatment unit (with O2 from an Air Separation Unit); fresh and recycle syngas feed Heavy Paraffins Synthesis, followed by Heavy Paraffins Cracking and Liquid Product Upgrading & Storage (products: naphtha, n-paraffins, kerosene, gas oil, base oil); a Hydrogen Manufacturing Unit supplies H2 and receives off-gas; Water Treatment and Steam & Power generation/off-take form the process-utility interface, with export. The envelope of the synthesis model covers the conversion units and the utility; it covers 60-65% of GTL capital and ~100% of carbon efficiency.]
Figure 2: Scope of synthesis model shown in block diagram of a GTL complex
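The blending logic described above can be sketched as a one-line mixing calculation; the three ratios used in the example are illustrative only, chosen to respect the qualitative facts in the text (hydrogen-lean recycle, fresh syngas above 2, preferred bulk ratio below 2):

```python
def fresh_fraction(r_recycle, r_fresh, r_target):
    """Fraction of the CO in the combined FT feed that must come from fresh
    syngas so the blend reaches the target bulk H2:CO ratio (per mole of CO,
    the H2 contribution of each stream equals its own H2:CO ratio)."""
    return (r_target - r_recycle) / (r_fresh - r_recycle)

# illustrative values: recycle at 1.5, fresh syngas at 2.3, bulk target 1.9
print(fresh_fraction(1.5, 2.3, 1.9))   # -> 0.5
```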
The synthesis tool will primarily focus on the conversion units for syngas manufacturing and heavy paraffins synthesis and their coupling with the utility system. The downstream units, hydrocracking of heavy paraffins and product separations, are kept outside the scope for the time being.
2.2. Design decisions, constraints and performance metrics in a GTL complex
The mode of operation is steady state with a specified product composition and hourly production rate. The following design decision variables are considered in the synthesis:
• Type of syngas manufacturing units (e.g. POx, SMR, autothermal reforming).
• Operating temperature and oxygen-to-carbon ratio in the feed per such unit.
• Type of FT reactors (e.g., fixed bed, slurry) and number of reactors in parallel.
• H2:CO ratio in the syngas feed, temperature and amount of catalyst in the FT reactors.
• Recycle distribution of unconverted syngas to the syngas manufacturing units and FT units.
• Transfer rates of high-level energy to lower levels in the utility system.
The constraints on feasible operation relate to equipment capacity, safety and dependability. The physical resources considered in this synthesis study involve the mass flows of chemical species, sources and sinks for thermal energy, power generation, and mechanical energy for compression. Performance metrics involve profit (NPV) and carbon and energy efficiencies.
3. Models and optimization problem
The process and utility model is set up as a network with processing units as nodes and connecting streams as (directed) arcs. The leading principles in modeling are that:
• understanding the patterns of interactions in the network and its overall performance is more relevant than being unduly precise in the unit models;
• approximate models of process units can be applied only within validated domains, by imposing domain boundaries ('knowledge' inequality constraints).
The plant model is made up of a structural model and behavioural & performance models. The structural model is conceptually characterised by means of a connectivity matrix showing how streams connect units in terms of in- and outflow. The behavioural model comprises the physical behaviour of all units, modelled in a grey-box fashion. Linear conservation balances are formulated, while source and sink terms are related to operating conditions and design parameters by nonlinear statistical correlations. These correlations are obtained either by reduction of a rigorous model or from experimental data. As the magnitudes of the energy sources & sinks in the process units overwhelm any energy in- and outflows, only the sources and sinks are represented, not the complete energy balances. Thus, a process unit model consists of:
• linear mass balances for all relevant species;
• nonlinear (reduced) equations for source and sink terms in mass and energy;
• pressure drop correlations;
• inequality constraints on equipment-related operating conditions, on physical feasibility and on model validity domains.
A stream is characterised by the flows of the physical resources (species, energy). For each utility, an energy balance over all in- and outflows and generation is applied. The performance models comprise:
• a profit model, reflecting income from product sales minus the costs of operation and annualised investments;
• carbon and energy efficiency expressions.
One of these three performance metrics can be used as the objective function in an optimization, while demanding a threshold performance for the other two. For a fixed flow sheet structure, the synthesis problem can be cast in a NonLinear Programming (NLP) format. When the flow sheet connectivity is set free, a Mixed Integer NonLinear Programming (MINLP) problem arises. MINLP options have not been exploited yet. A sketch of such a unit model follows this section.
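As a sketch of such a unit model (not the actual AIMMS formulation), consider one conversion unit with a linear mass balance, a hypothetical reduced correlation for conversion, illustrative prices, and bound constraints acting as the 'knowledge' validity domain:

```python
from scipy.optimize import minimize

def conversion(T):
    """Hypothetical reduced correlation: single-pass conversion vs temperature."""
    u = (T - 900.0) / 200.0
    return 0.5 + 0.4 * u - 0.3 * u ** 2

def neg_profit(z):
    feed, T = z
    product = feed * conversion(T)        # linear balance times nonlinear source term
    return -(5.0 * product - 1.0 * feed)  # illustrative prices; minimize negative profit

res = minimize(neg_profit, x0=[10.0, 900.0],
               bounds=[(0.0, 20.0), (800.0, 1100.0)])   # validity-domain bounds
print(res.x, -res.fun)
```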
4. Computational and software engineering aspects
The process synthesis tool has been implemented in AIMMS®, which has links to state-of-the-art solvers for the major mathematical programming types, e.g., CONOPT & LGO for NLP and BARON for MINLP. The solution of the underlying problem is currently done using CONOPT [8]. Currently, a typical problem has 65 degrees of freedom (# variables - # equality constraints). There are some 2100 constraints (equalities and inequalities), of which 650 are nonlinear, with ~6500 non-zeros in the Jacobian matrix. The current model has a limited number of integer choices, for which a brute-force approach is taken. One of the major challenges in NLP is to initialize all variables in order to get a feasible solution. A sequential modular strategy is followed by initializing the process units in flow sheet order after tearing and fixing the plant-wide recycles. Thereafter, the utility network is added to the entire problem.
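The initialization strategy can be sketched as follows: tear the plant-wide recycle, evaluate the units in flow-sheet order, and converge the tear stream by successive substitution before handing the point to the NLP solver (the flow-sheet pass below is a mocked single-stream placeholder with illustrative numbers):

```python
def flowsheet_pass(recycle_guess):
    """One sequential-modular pass in flow-sheet order; returns the updated
    torn recycle stream (mock: 70% single-pass conversion of the combined feed)."""
    fresh = 100.0
    combined = fresh + recycle_guess
    return 0.3 * combined                 # unconverted syngas goes back

tear = 0.0                                # tear and fix the recycle
for _ in range(100):                      # successive substitution
    new = flowsheet_pass(tear)
    if abs(new - tear) < 1e-9:
        break
    tear = new
print(f"initialized recycle flow: {tear:.4f}")   # fixed point 0.3*100/0.7 ~ 42.857
```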
5. A sample result
Extensive verification and validation tests were first performed to build up confidence in the models, the problem formulation and our understanding of the results. Verifications were obtained by comparison with analytical and existing solutions. Validations were done by simulating complex business scenarios in the synthesis tool in AIMMS and by comparison with more rigorous models used in PRO/II. The benefit of having this synthesis tool is demonstrated by means of a simple example. It shows the effect of NG allocation to two syngas units with different efficiencies. Syngas unit 2 is more energy and carbon efficient than unit 1. When gradually shifting natural gas intake from unit 1 to unit 2 while maximizing the carbon efficiency of the entire plant, the resulting profit increases up to a point where recycle of unconverted syngas with high inert content causes unit 2 to become bigger and to reduce the profit by increasing investment costs.
Figure 3: Trade-off curve for Natural Gas allocation to syngas units with different efficiencies
6. Conclusions and prospects
A modular set-up of a process synthesis tool for GTL plants, implemented as an NLP problem in AIMMS, has been presented. Making this tool work for MINLP problems and extending it to a wider range of conversion technologies are our goals for the near future. The development of similar frameworks to support the design of Coal-To-Liquid and Biomass-To-Liquid plants is a longer-term ambition.
Acknowledgement: The contributions by Bernadette Jona, Sinatra Khoo and Cor Hurkens of TU Eindhoven are highly appreciated.
References
[1] Shell GTL process: http://www.shell.com/gtl
[2] Schrauwen F.J.M. (2004), Shell Middle Distillate Synthesis (SMDS) process, Ch. 15.3 in Handbook of Petroleum Refining Processes, 3rd edition, by Robert A. Meyers, McGraw-Hill Handbooks, p. 15.25-15.40
[3] PRO/II: http://iom.invensys.com/AP/Pages/SimSci-Esscor_ProcessEngSuite_PROII.aspx
[4] AIMMS manual, http://www.aimms.com/services/documentation
[5] Karuppiah R., Grossmann I.E. (2006), Global optimization for the synthesis of integrated water systems in chemical processes, Computers and Chemical Engineering, 30, 650-673
[6] Liu P., Pistikopoulos E.N., Li Z. (2009), A mixed-integer optimization approach for polygeneration energy systems design, Computers and Chemical Engineering, 33, 759-768
[7] Bao B., El-Halwagi M.M., Elbasi N.O. (2010), Simulation, integration, and economic analysis of gas-to-liquid processes, Fuel Processing Technology, 91, 703-713
[8] http://www.conopt.com/
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Pareto-Navigation in Chemical Engineering
Norbert Asprion,a Sergej Blagov,a Oliver Ryll,a Richard Welke,b Anton Winterfeld,b Agnes Dittel,b Michael Bortz,b Karl-Heinz Küfer,b Jakob Burger,c Andreas Scheithauer,c Hans Hassec
a BASF SE, Carl-Bosch-Str. 38, 67056 Ludwigshafen, Germany
b Fraunhofer Institut für Techno- und Wirtschaftsmathematik, Fraunhofer Platz 1, 67663 Kaiserslautern, Germany
c Lehrstuhl für Thermodynamik, University of Kaiserslautern, Erwin-Schrödinger-Str. 44, 67663 Kaiserslautern, Germany
Abstract
Process development in the chemical industry is a multiobjective optimization problem. Objectives of this optimization are, for example, product quality, raw material and investment cost, energy efficiency, reliability, or health, safety, and environmental issues. Parameters for this optimization are, e.g., feedstock, utilities, process configuration, equipment, operating parameters, and site. In process design, the developer usually generates a certain number of process variants by varying process parameters intuitively. The process is usually stopped when a solution is found that is acceptable regarding all or at least most criteria; optimality cannot be guaranteed. The present paper describes a feasible alternative approach, namely a real multiobjective optimization based on the Pareto optimality concept. The Pareto front describes the set of solutions where no improvement in one objective can be reached without at least getting worse in another objective. The Pareto front allows investigating the trade-offs between the different objectives, but the final choice of the design is left to the user. In the present work, the multiobjective optimization is used together with a decision support system, which allows navigating within the Pareto front and thereby finding the best compromise between the objectives in a rational and efficient way.
Keywords: Pareto, Multiobjective Optimization, Simulation, Decision Support, Navigation.
1. Introduction
At present, the optimization of a process design is usually an iterative process including subsequent steps of simulation, basic design and economic evaluation. This optimization process typically ends due to limited time resources. As a result, an improved but probably sub-optimal process design is obtained, since many different, partly conflicting objectives have to be met within an acceptable range. This is a typical multiobjective optimization problem, which is not yet accounted for in the developer's workflow. One step towards overcoming this situation is the concept of the intelligent total cost minimization approach (i-TCM) presented by Wiesel and Polt (2006). In that approach, the different objectives are weighted with costs to form a total cost function, which can then be minimized. One drawback of that method is that other, not cost-related objectives cannot be included. Furthermore, the approach suffers from the typical drawbacks of weight-based multiobjective optimizations (cf. Miettinen (1999)). One example is the
strong dependency of the results on the chosen (or assumed) weighting factors. An alternative promising approach is to study the different objectives individually and to investigate sensitivities. For example, it might be interesting to investigate different price scenarios for the raw materials and to see the impact on the different objectives. In the present approach, these studies are carried out not on the entire solution space of the process design problem but only on the relevant subset of Pareto-optimal solutions, i.e. on the Pareto front. As the Pareto front has a considerably lower dimensionality than the entire solution space, this considerably facilitates the search for the most advantageous design, which, when using the present approach, will always be Pareto-optimal. To support the choice of the final design, an interactive new decision support system, the so-called "Pareto-Navigation System" (Monz (2006)), is used in the present work. The work does not aim at presenting a comprehensive overview of the technique, but rather illustrates the main ideas with a simple example. The tools and methods used for this preliminary study will be continuously improved in order to allow routine applications in the future.
2. Pareto-Navigation Concept
The general problem in process engineering is that we usually work with certain process parameters x, e.g., temperatures, pressures and concentrations, which result in different outcomes for the objectives f, like variable and fixed costs. The projection from the parameter space to the objective space is usually a nonlinear transformation, so that the outcome for the objectives upon a variation of x is hard to predict. It is also in general not easy to solve the inverse problem, i.e. to fix the objectives f and to determine the process parameters x so that the desired f is obtained. Multiobjective optimization using Pareto-navigation helps fulfilling this task. Multiobjective optimization, especially with a large number of objectives, is often very time-consuming. Therefore, efficient strategies for exploring the Pareto front are necessary. In the following, the sandwiching approach and model reduction are briefly presented.
2.1. Sandwiching
Experience shows that, within one process concept and with continuous objectives, it is reasonable to assume that most Pareto fronts in chemical engineering are convex. That assumption helps to considerably reduce the effort for constructing the Pareto front. In the case of non-convex Pareto fronts, it can be shown that these can be sub-divided into convex parts (Monz 2006). First, the extreme compromises (i.e. the solutions with optimized single objectives) are estimated in the given parameter range. With an inner and an outer convex hull, the uncertainty region of the Pareto front can be estimated. The Pareto front is explored until a certain accuracy is reached. In the first step, the inner convex hull is defined by the already known Pareto points, which are the vertices of the convex hull. For the two-dimensional case, the inner convex hull connects adjacent Pareto points by straight lines, as shown in Fig. 1a). The outer convex hull can be determined from the normal vectors to the Pareto front at the estimated Pareto points, which are also a result of the optimization process. The normal vectors define tangential hyperplanes of the Pareto front. Besides the extreme compromises, the intersections of these hyperplanes are the vertices of the outer convex hull. For the two-dimensional case, the hyperplanes are straight lines, as can be seen in Fig. 1a).
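For two objectives, the inner/outer hull gap can be sketched directly; the convex front f2 = 1/f1, its sample points and the tangent slopes below are all hypothetical, and the chord-to-tangent distance is used only as a crude proxy for the sandwich thickness:

```python
import numpy as np

f1 = np.array([0.5, 1.0, 2.0])
pts = np.column_stack([f1, 1.0 / f1])    # known Pareto points (inner hull vertices)
slopes = -1.0 / f1 ** 2                  # tangent slopes implied by the normal vectors

def outer_vertex(i):
    """Intersection of the tangent lines at points i and i+1 (outer hull vertex)."""
    m1, m2 = slopes[i], slopes[i + 1]
    b1 = pts[i, 1] - m1 * pts[i, 0]
    b2 = pts[i + 1, 1] - m2 * pts[i + 1, 0]
    x = (b2 - b1) / (m1 - m2)
    return np.array([x, m1 * x + b1])

for i in range(len(pts) - 1):
    chord_mid = 0.5 * (pts[i] + pts[i + 1])            # inner hull = chord
    gap = np.linalg.norm(chord_mid - outer_vertex(i))  # crude sandwich thickness
    print(f"segment {i}: gap ~ {gap:.3f}")             # refine where the gap is largest
```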
424
N. Asprion et al.
In the sandwiching approach, the region with the largest difference between the inner and the outer convex hull is always explored next. To determine this thickest part, the inner convex hull is elongated until the elongated hull includes the complete outer hull (Fig. 1b)). It can be shown that the thickest part is always at a vertex of the outer convex hull. This vertex is then the starting point for a one-dimensional optimization. The optimization proceeds in the direction of the thickest part (method of Pascoletti-Serafini, Monz (2006); see for example Fig. 1b)). The convex hull has to be recalculated only in the neighborhood of the new Pareto point (Fig. 1c)), which means the effort is relatively small.
Fig. 1: Sandwiching approach for determining the Pareto front: a) determine convex hulls, b) identify thickest part of the sandwich and calculate Pareto point in this direction, c) update convex hulls
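The Pascoletti-Serafini step itself can be sketched with two toy objectives: starting from a reference point r (e.g. an outer-hull vertex), minimize t such that F(q) ≤ r + t·d, pushing along direction d toward the front; all numbers below are illustrative stand-ins for a process simulation:

```python
import numpy as np
from scipy.optimize import minimize

def F(q):
    """Two toy objectives standing in for the real process model."""
    return np.array([(q[0] - 1.0) ** 2 + (q[1] - 2.0) ** 2,
                     (q[0] - 2.0) ** 2 + (q[1] - 1.0) ** 2])

r = np.array([2.0, 2.0])   # reference point (e.g. outer-hull vertex), illustrative
d = np.array([1.0, 1.0])   # direction of the thickest part of the sandwich

# decision vector z = (t, q1, q2); minimize t subject to F(q) <= r + t*d
cons = [{"type": "ineq", "fun": lambda z, i=i: r[i] + z[0] * d[i] - F(z[1:])[i]}
        for i in range(2)]
res = minimize(lambda z: z[0], x0=[0.0, 1.5, 1.5], constraints=cons)
print("new Pareto point:", F(res.x[1:]))
```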
The goal of this approach is to estimate the Pareto front with only a few points, which allow an interpolation in the objective space as well as in the parameter space. Usually, the Pareto front is estimated in an off-line mode and the results of each simulation are stored. This allows loading the results afterwards and navigating the Pareto front. Here, the different objectives together with certain process parameters are shown. For example, one might change the outcome of one objective and see the impact on the other objectives and the parameters. It is also possible to change parameters and to see the influence on the objectives. The navigation tool gives the possibility to compare Pareto fronts of different process configurations or sites. One can also use it for the comparison of different absorption or extraction solvents. From a chosen solution, a simulation with the model can be restarted. The sandwiching approach gives a good estimate of the uncertainty in the Pareto front. It is thus possible to start a multiobjective optimization with a lower accuracy and, as soon as the parameter region with the interesting solutions is identified, to perform a second multiobjective optimization with tighter limits for the parameters and higher accuracy in the Pareto front. Furthermore, it is possible to add additional parameters and objectives and to explore the new Pareto fronts. Compared to other techniques like evolutionary algorithms or the ε-constraint method (i.e. a kind of optimization with enumeration, Miettinen (1999)), the sandwiching approach has superior performance due to the controllability of progress and the estimation of errors.
2.2. Model reduction
A further possibility to improve the computational efficiency of a multiobjective optimization is model reduction. With the help of the reduced model, interesting parts of the reduced solution space are identified, and only these parts are then subjected to an
analysis with the full model. In the present work, the ∞/∞ concept (e.g., Blagov et al. (2000), Ryll et al. (2008), Ryll (2009)) was used for model reduction of distillation processes. Two Pareto optimizations can then be carried out: one with the reduced model and one with the full model.
3. Example
For the preliminary study reported here, the ε-constraint method (Miettinen (1999)) was used for constructing the Pareto front. In a later project phase, the computational effort will be reduced by using the sandwiching approach and interpolation as presented above. In this study, the separation of an acetone/chloroform mixture by pressure swing distillation was investigated, cf. Fig. 2a). As can be seen in Figure 2b), the pressure (or temperature) dependency of the azeotropic data is unfortunately ambiguous. To account for the uncertainty of the data, models 1 and 2 were considered. The pressure dependency of the azeotropic composition is much higher for model 1 than for model 2.
Fig. 2: Separation of an acetone/chloroform mixture: a) pressure swing distillation concept, b) temperature (pressure) dependence of the azeotropic composition.
Since a side-aspect of the investigation was to compare the Pareto fronts of the ∞/∞ model with the results using rigorous simulation (which will be shown in the presentation), the following three objectives were chosen: purity of the chloroform product (Stream 2), purity of the acetone product (Stream 4), and the amount of recycle (Stream 5). Here the basic assumption was: minimizing the recycle will give the most profit. For simplicity, the columns in the rigorous simulation had a fixed but reasonably chosen number of stages and a fixed reflux ratio, and were operated at the pressures given in Fig. 2. Therefore, the only parameters of this optimization were the reboiler duties of the two columns. An optimization of the column design could straightforwardly be included in the procedure. The simulation included a short-cut tool for apparatus design of the columns, investment and variable cost estimates, as well as prices for the products, so that it was possible to estimate the profit (here given without currency).
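The ε-constraint construction can be sketched generically on two of the objectives: minimize one while sweeping an upper bound ε on the other. The toy objective functions of the two reboiler duties below are hypothetical stand-ins for the process simulation:

```python
import numpy as np
from scipy.optimize import minimize

# toy stand-ins over the two reboiler duties q = (Q1, Q2)
recycle = lambda q: (q[0] - 1.0) ** 2 + (q[1] - 2.0) ** 2
impurity = lambda q: (q[0] - 2.0) ** 2 + (q[1] - 1.0) ** 2

front = []
for eps in np.linspace(0.2, 2.0, 10):    # sweep the bound on the second objective
    res = minimize(recycle, x0=[1.5, 1.5],
                   constraints=[{"type": "ineq",
                                 "fun": lambda q, e=eps: e - impurity(q)}])
    if res.success:
        front.append((recycle(res.x), impurity(res.x)))
for p in front:
    print(f"recycle = {p[0]:.3f}, impurity = {p[1]:.3f}")
```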
4. Results In Fig. 3 a snapshot from the navigation tool is shown which also includes a visualization of the Pareto front obtained from the rigorous simulations using model 1. As can be seen, interestingly, the highest profit is not achieved at low recycle rates as one could assume, but rather at high recycle rates. Furthermore, the highest profit is achieved for the lowest tolerable purity of chloroform (98%), but the highest purity of
acetone (99.95%). None of this is obvious at first glance, and such solutions could well be missed when the still common intuitive design procedures are applied. Of course, these findings can be explained: the reason is the higher price of chloroform compared to acetone. With increasing purity of acetone the yield of chloroform increases and, furthermore, the recycle gets further from the azeotrope at 1 bar, so that the separation in the first column becomes easier (i.e., lower variable costs). When using model 2 (Fig. 2b) for the first column the profit decreases dramatically (by about 2000, given without currency) since the reflux ratios have to be increased significantly to reach the desired purities of the products.
Fig. 3: Pareto-navigation tool applied to the separation of an acetone/chloroform mixture as shown in Figure 2, model 1
5. Conclusion and Outlook This very simple example of a pressure swing distillation shows that a manual optimization in chemical engineering will be sub-optimal, since some cases will be omitted due to certain expectations – especially when the flow sheets are complex and recycles are present. Pareto navigation helps in understanding the influence of design parameters, in carrying out a rational and efficient multiobjective optimization, and finally in finding economically superior designs for chemical processes. In the presentation the approach will be described and results of the application of the Pareto navigation tool will be presented for both the reduced and the rigorous model. The examples will be extended and will include the comparison of a pressure swing distillation to an entrainer distillation.
References
A. Wiesel, A. Polt, Computer-Aided Chemical Engineering 21B (2006) 799-804.
K. Miettinen, Nonlinear Multiobjective Optimization, Kluwer, Boston, 1999.
M. Monz, Ph.D. thesis, Univ. of Kaiserslautern, 2006.
S. Blagov, B. Bessling, H. Schoenmakers, H. Hasse, Chem. Eng. Sci. 55 (2000) 5421-5436.
O. Ryll, S. Blagov, H. Hasse, Chem. Ing. Tech. 80, No. 1-2 (2008) 207.
O. Ryll, Ph.D. thesis, Univ. of Kaiserslautern, 2009.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Integration of ontology and knowledge-based optimization in process synthesis applications
Franjo Cecelja,a Antonis Kokossis,b Du Dua
a Process & Information Systems Engineering, FEPS, University of Surrey, Guildford, U.K.
b School of Chemical Engineering, National Technical University of Athens, Zografou Campus, Athens, Greece
Abstract Previous research has shown that knowledge-based optimization models in process synthesis applications are more robust in both providing final outputs and improving computational performance. This work expands that approach by implementing a general knowledge model which in turn enables interpretation of solutions, so that non-experts can understand the detailed procedures of the optimization. To this end, an automatic ontology-based optimization system that links a rule-based optimization model with an ontology has been introduced, with the purpose of both improving optimization performance and presenting newly extracted knowledge at optimization run-time. A benchmark reactor network design synthesis case is studied for comparison of performance. The results show that the ontology-based optimization system not only improves the robustness of solutions and the computational performance, but also enables a more accurate understanding of the process synthesis procedures and presents the extracted knowledge in a readable format. Keywords: stochastic optimization, knowledge models, ontology, process synthesis, high throughput.
1. Introduction An extensive use of both deterministic and stochastic optimization algorithms to solve reactor network design synthesis problems has been reported (Achenie and Biegler, 1990; Kokossis and Floudas, 1990; Marcoulaki and Kokossis, 1996; Mehta and Kokossis, 2000; Yang, Linke and Kokossis, 2006). To improve computational performance, recent research effort was directed towards the use of computer grid techniques with the Tabu Search algorithm (Du, Yang, Kokossis and Linke, 2007) and with Simulated Annealing (SA) and the Cascade Algorithm (CA) (Yang, 2009; Du, 2009). In addition, knowledge-based optimization models have been applied in process synthesis experiments (Labrador-Darder, 2009; Du, Cecelja and Kokossis, 2009). It has been shown that a knowledge-based optimization model on grids accelerates the optimization process, as it guides the optimization search towards promising regions with the assistance of production rules (Du, 2009). Although these approaches have shown robustness in achieving final solutions and computational performance, none of them has yet been upgraded to a general knowledge level with interpretation in human language. This paper presents an ontology-based synthesis approach that was designed for the extraction, interpretation and exploitation of design knowledge in process synthesis and combines a stochastic optimization algorithm, ontologies and analytical tools. Defined as "…an explicit specification of a conceptualization" (Gruber, 1993), the ontology is an attractive formalism for knowledge modeling in two distinct representation
428
F. Cecelja at al
environments: frames and XML-based languages such as OWL (Ontology Web Language). The knowledge model here is based on the OWL representation using all four components: classes, which represent concepts in the domain; instances (or individuals), which represent the objects of a domain; properties, which are attributes of classes and relationships between them; and restrictions, which express constraints on the values of properties. The commercially available inference engine RacerPro is used for ontology classification and hence for inferring selective knowledge.
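A minimal sketch of these four OWL components is shown below using the open-source owlready2 library; this is an assumption for illustration only, as the paper's system uses RacerPro for inference. The class and property names mirror those introduced in Section 3; the ontology IRI is hypothetical.

```python
# Sketch of the four OWL components (classes, instances, properties,
# restrictions on property values) with owlready2; names follow Section 3.
from owlready2 import Thing, DataProperty, get_ontology

onto = get_ontology("http://example.org/reactor_network.owl")  # hypothetical IRI

with onto:
    class ReactorSequence(Thing):        # a class (concept in the domain)
        pass

    class CSTR_PFR(ReactorSequence):     # a subclass: CSTR as r1, PFR as r2
        pass

    class hasCPPercentage(DataProperty): # a datatype property (attribute)
        range = [float]                  # a restriction on the property values

# an instance (individual) carrying a property value
dist1 = CSTR_PFR("distribution1")
dist1.hasCPPercentage = [55.0]   # 55% of top solutions have the CSTR+PFR sequence
print(dist1, dist1.hasCPPercentage)
```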
2. Ontology-supported optimization model Figure 1 shows the proposed knowledge-based optimization model with the Cascade Algorithm (CA) (Yang, 2009; Du, 2009). In general terms, the idea of the CA is to split the single Markov chain normally used with the Simulated Annealing algorithm into sections and submit these sections for parallel processing by different workers. The objective values of the solutions are stored in pools together with the concomitant intrinsic parameters, which are stored in associated partitions, all under the control of the optimization server. In the case of optimizing reactor networks represented in superstructure format (Du, Cecelja and Kokossis, 2009), which is used here for verification, the intrinsic parameters include the number of reactors, reactor types, reactor sequence, reactor volumes, feed flow, split fraction, concentration of desired product (objective value), etc. The optimization server selects a temperature and a solution in the pool associated with that temperature, based on a group of pre-defined production rules (IF conditions, THEN actions), for each of the workers in the computer grid to execute one section of the SA Markov chain (Du, Cecelja and Kokossis, 2009). For example, if the percentage of promising solutions with one type of reactor sequence is found to be higher than all the others, then the optimization search is focused towards the regions that can generate solutions with this kind of reactor sequence. In general terms, the purpose of applying production rules to solution selection is to classify solutions by the patterns of their intrinsic parameters and to guide the optimization search towards promising regions where more high-quality solutions are found, hence accelerating the convergence. The termination criterion of the knowledge-based optimization model is determined by the frequency of the number of solutions in the lowest pool (Yang, 2009). The whole process is illustrated in Figure 1, where c is the total number of workers, identified as W1, W2, W3, …, Wc. Newly generated solutions S'k, k ∈ {1, 2, …, c}, are sent to the pools P1, P2, …, Pw, with each pool being associated with a temperature Tj,
j ∈ {1, 2, …, w}. Sks is the initial solution selected from the pool at temperature Tks for worker Wk. Both the objective values and the intrinsic parameters of solutions are populated continuously during the optimization process and analyzed, and new knowledge based on the intrinsic parameters is updated in the ontology. By interpreting the extracted knowledge to formulate production rules, actions (biased search) are taken for the next few sections of the SA Markov chains until new production rules are formulated. In order to make the whole process automated, a Java-based agent is added to interface the knowledge-based optimization model with the ontology. The operation of the agent is as follows (a minimal sketch of this loop is given after the list): 1. Populate the ontology with intrinsic data of solutions from the optimization server continuously; 2. Analyze the solution data statistically, e.g. calculating the percentage of promising solutions with certain parameters (e.g. reactor sequences), and present the results
by adding an instance holding these percentages to the ontology between arrivals of new solutions; 3. Infer on the ontology to generate new knowledge, e.g. a preferred reactor sequence; 4. Formulate new rules for changing the search direction for the next few sections of the Markov chain; 5. Check if the termination criterion is met: if yes, stop the process, otherwise go back to Step 1 for the next loop.
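The following runnable sketch fixes the control flow of the five steps above. The server and ontology interfaces are hypothetical stubs (plain Python objects), not the actual grid server, Java agent, or OWL model.

```python
# Schematic sketch of the five-step agent loop; all interfaces are stand-ins.
class StubServer:
    def __init__(self, loops=3):
        self.loops = loops
    def fetch_new_solutions(self):
        return [{"objective": 1.0, "sequence": "CSTR_PFR"}]   # placeholder data
    def bias_search(self, rule):
        print("next SA sections biased by:", rule)
    def finished(self):
        self.loops -= 1
        return self.loops <= 0

class StubOntology:
    def add_instances(self, sols): pass                 # step 1: populate ontology
    def analyze(self): return {"CSTR_PFR": 55.0}        # step 2: percentages instance
    def infer_preferred(self): return "CSTR_PFR"        # step 3: reasoner call

server, onto = StubServer(), StubOntology()
while True:
    onto.add_instances(server.fetch_new_solutions())    # step 1
    print("percentages:", onto.analyze())               # step 2
    preferred = onto.infer_preferred()                  # step 3
    server.bias_search("IF r1+r2 == %s THEN search that region" % preferred)  # step 4
    if server.finished():                               # step 5
        break
```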
Figure 1 An ontology-supported optimization model
3. Results 3.1. An ontology for an intrinsic parameter – reactor sequence In this paper the superstructure representation of the reactor network is selected to verify the proposed optimization concept, with intrinsic parameters which include the number of active reactors (up to 4), two reactor types (CSTR – continuous stirred tank reactor, and PFR – plug flow reactor), reactor sequence, single/multiple feeds, reactor volumes, and volumes passed through bypasses and recycles (Mehta and Kokossis, 2000; Yang, Linke and Kokossis, 2006). The aim of optimization with the superstructure model is to find a combination of all these parameters which generates the maximum concentration of desired products with minimal complexity of the structure. Figure 1 shows a part of the reactor network ontology which presents the reactor sequence of the 1st reactor r1 and 2nd reactor r2 (ReactorSequence class), and the percentages, within a number (e.g. 20) of top solutions ordered by objective value, of each r1+r2 sequence, described as instances (distributionX, X ∈ {1, 2, 3, …}) of the NetworkSequenceDistribution class. Since r1, r2 ∈ {CSTR, PFR}, there are at most 6 possible sequence combinations of r1+r2: 1) CSTR, 2) PFR, 3) CSTR+CSTR, 4) CSTR+PFR, 5) PFR+CSTR, and 6) PFR+PFR, which correspond to the CSTR, PFR, CSTR_CSTR, CSTR_PFR, PFR_CSTR and PFR_PFR ontology classes, respectively. Thus, each distributionX instance has 6 datatype properties – hasCPercentage, hasPPercentage, hasCCPercentage, hasCPPercentage, hasPCPercentage and hasPPPercentage – which respectively show the percentage of CSTR, PFR, CSTR+CSTR, CSTR+PFR, PFR+CSTR and PFR+PFR
solutions among the top solutions ordered by their objective values. The PreferredSequence class is used to show the r1+r2 sequence with the highest percentage among the top solutions at optimization run-time. 3.2. A case study of the Van de Vusse application The extensively studied application known as Van de Vusse, for which a global solution has not been claimed so far, is chosen to test the ontology-supported optimization (Achenie and Biegler, 1990; Kokossis and Floudas, 1994; Mehta and Kokossis, 1998). The first aspect examined is the impact of the sequence of the 1st reactor r1 and 2nd reactor r2 on the optimization performance. As the optimization search progresses, the values of r1 and r2 for a fixed number (e.g. 20) of top solutions ordered by objective value are collected continuously every 10 seconds. The collected data from the top solutions are statistically analyzed and the percentages of the solutions with different r1+r2 sequences are calculated. Then, a new instance distributionX is added to the ReactorSequenceDistribution class to present the percentage of appearance of each r1+r2 sequence in the current top solutions, as shown in Figure 2. Note that the hasXPercentage and hasXYPercentage entries (X, Y ∈ {C, P}) are data properties that present the percentage of the solutions with a given r1 or r1+r2 sequence, where C and P identify CSTR and PFR reactors, respectively. For example, hasCPPercentage means the percentage of the solutions with CSTR as the 1st reactor and PFR as the 2nd reactor. By comparing the percentages of solutions with each combination of r1 and r2, the highest percentage is selected and the corresponding reactor sequence r1+r2 is considered promising, hence strengthening the optimization towards the promising regions. For the Van de Vusse case, 55% of the best solutions have the CSTR+PFR sequence at run-time. Thus, the optimization search is directed to the region with solutions taking CSTR as the 1st reactor r1 and PFR as the 2nd reactor r2, by changing the search direction for the next few sections of the SA Markov chain.
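The statistical analysis described above reduces to counting sequences among the best solutions, as in the sketch below; the solution data are made up for illustration.

```python
# Sketch of the distributionX computation: count r1+r2 sequences among the
# top-20 solutions and convert the counts to percentages.
from collections import Counter

def sequence_distribution(solutions, top_n=20):
    """solutions: list of (objective_value, 'r1_r2') tuples."""
    top = sorted(solutions, key=lambda s: s[0], reverse=True)[:top_n]
    counts = Counter(seq for _, seq in top)
    return {seq: 100.0 * c / len(top) for seq, c in counts.items()}

solutions = [(3.6, "CSTR_PFR")] * 11 + [(3.5, "PFR_PFR")] * 7 + [(3.4, "CSTR")] * 2
dist = sequence_distribution(solutions)
print(dist)                      # {'CSTR_PFR': 55.0, 'PFR_PFR': 35.0, 'CSTR': 10.0}
print(max(dist, key=dist.get))   # preferred sequence -> 'CSTR_PFR'
```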
Figure 2 An instance of the percentage of top solutions with different combinations of r1 and r2
Figure 3 shows the percentages of promising solutions with different r1+r2 sequences, as presented by the distributionX instances during the optimization process. As shown, for the Van de Vusse application, 55% of the top solutions have the CSTR+PFR sequence at the beginning of the optimization search, which leads the optimization search towards CSTR+PFR directions. Then the percentage of PFR+PFR solutions increases (from 35% to 40% to 50%) and exceeds the percentage of CSTR+PFR top solutions. Consequently, the optimization search direction is adjusted towards PFR+PFR regions. As the optimization search progresses, CSTR+PFR solutions become dominant among the promising solutions again, with a percentage of 70% at the termination point. A comparison of the whole optimization of the Van de Vusse case with and without the ontology-based knowledge model is shown in Table 1 and clearly indicates more robust and faster convergence. Note that the values in Table 1 are averages over 10 repeated experiments.
Figure 3 Percentages of promising solutions with different r1+r2 sequences

Table 1 Performance of the Van de Vusse application with different optimization models

Optimization model    Number of solutions NS    Final objective value yb
Ontology-supported    589                       3.6266
Original              1509                      3.6567
4. Conclusions The study of the Van de Vusse case has proven that the ontology-supported optimization model can not only accelerate the optimization search, but also explore and present new knowledge visually. Moreover, automation of the ontology-supported optimization model can be achieved by designing a proper ontology and Java application.
References
Achenie L. K. E. and Biegler L. T., (1990) "Superstructure based approach to chemical reactor network synthesis", Computers and Chemical Engineering, 14-23
Du D., Yang S., Kokossis A. C. and Linke P., (2007) "Experience on gridification and hyperinfrastructure experiments in optimization and process synthesis", 17th European Symposium on Computer Aided Process Engineering, Bucharest, Romania
Gruber T. R., (1993) "A translation approach to portable ontology specifications", Knowledge Acquisition, 5, 199-220
Kokossis A. C. and Floudas C. A., (1990) "Optimization of complex reactor networks – I. Isothermal operation", Chemical Engineering Science, 45(3): 595-614
Labrador-Darder C., (2009) "Semantically enabled process synthesis and optimization", PhD Thesis, Chemical and Process Engineering, Faculty of Engineering and Physical Sciences, University of Surrey
Marcoulaki E. C. and Kokossis A. C., (1996) "Stochastic optimization of complex reaction systems", Computers and Chemical Engineering, 20: S231-S236
Mehta V. L. and Kokossis A. C., (2000) "Nonisothermal synthesis of homogeneous and multiphase reactor networks", AIChE Journal, 46(11): 2256-2273
Neches R., Fikes R. E., Finin T., Gruber T. R., Senator T., Swartout W. R., (1991) "Enabling technology for knowledge sharing", AI Magazine, 12(3), 36-56
Yang S., (2009) "On the Development of SA Cascade Optimization Algorithm – Application to Reactor Network Synthesis and Distributed Computing", PhD Thesis, Chemical and Process Engineering, Faculty of Engineering and Physical Sciences, University of Surrey
Yang S., Kokossis A. C. and Linke P., (2006) "Toward a novel optimization approach with simultaneous knowledge acquisition for distributed computing environments", Computer-Aided Chemical Engineering, 21A: 327-332
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Feasibility analysis of black-box processes using an adaptive sampling kriging based method
Fani Boukouvala,a Fernando J. Muzzio,a Marianthi G. Ierapetritoua
a Dept. Chemical and Biochemical Engineering, Rutgers University, Piscataway, NJ, 08854, USA
Abstract Accurate knowledge of the effect of parameter uncertainty on process performance is vital for optimal and feasible operation. The objective of this work is to develop a systematic methodology for performing feasibility analysis over a multivariate factor space when the explicit form of a process model is lacking or when its evaluation is expensive. For this purpose a Kriging based surrogate approximation of the process model, based on experimental or simulated data, is used. In this work, two issues are addressed: feasibility evaluation of black-box processes using Kriging and introduction of an adaptive sampling methodology in order to minimize sampling cost while maintaining feasibility space accuracy. The adaptive sampling strategy identifies critical regions and directs the search towards regions where feasibility boundaries exist or where the Kriging prediction uncertainty is high. The average error of the Kriging prediction as well as cross-validation methods are used to validate the robustness of the model produced from the initial experimental design, which is found to strongly affect the final prediction. Keywords: feasibility analysis, black-box processes, adaptive sampling, kriging.
1. Introduction Design and optimization under uncertainty is a key procedure in process systems design, since decisions often have to be made with limited knowledge about the process model and the variations in the environmental parameters. In the last decades, several approaches have been proposed to systematically address uncertainty in process design based on different optimization formulations. The concept of process flexibility is one of the fundamental tools developed in order to express, quantify and evaluate the ability of a process to tolerate variations in its operating parameters or deviations of uncertain parameters from their nominal values. The problem of quantifying process flexibility has been well studied in the literature following the original rigorous formulation of Grossmann et al. [1]. Even though significant effort has been devoted in the literature to develop or improve approaches and metrics for quantifying the feasibility and flexibility of processes described by model equations [2-4], fewer attempts have been made in the area of feasibility analysis of processes where closed form models are not available (black-box). It is often the case that the explicit form of the model connecting the input parameters to the output is not available, while the only knowledge of the system consists of a set of measured output values at different operating conditions. In [5], the High Dimensional Model Representation methodology (HDMR) is used for input-output mapping of black-box processes, whereupon the design under uncertainty problem is solved.
The key to using black-box methods for performing feasibility analysis lies in balancing the need to minimize expensive and time-consuming sampling with the need to accurately map the feasible region while ensuring that any form of overprediction is avoided. In this work, the performance of a different surrogate technique, Kriging, is evaluated for performing black-box feasibility analysis. Kriging is a data-driven methodology that has a long history in many fields such as geology [6], statistics and optimization, where it is referred to as the Design and Analysis of Computer Experiments (DACE) stochastic process model [7]. This approach is chosen for two reasons: first, because it has been shown to require fewer function evaluations than other competing methods [7], and second, because the calculated variance for each test point can identify regions where subsequent sampling is required [8]. One of the main limitations of sampling-based approaches is that there is no a priori knowledge of the number of sampling points that are needed, or of the location of those points, in order to provide maximum information for the accurate prediction of the output surface. The literature on sampling-based techniques for optimization focuses on optimizing both the number and the spatial arrangement of sampling points for the identification of a global optimum [9], while the efficiency of sampling-based techniques for performing black-box feasibility analysis has not been addressed so far in the literature.
2. Black-Box Feasibility Analysis 2.1. Kriging Kriging is a methodology used in the optimization of black-box problems due to its computational efficiency, its ability to model non-linearity and its advantage of providing a confidence interval for each prediction. In principle, Kriging computes the predicted output at an unsampled location as a normal distribution with a mean value and standard deviation that depend on the values and locations of neighbouring sampled points. Kriging can be described as an interpolation technique through which a prediction at a test point is made according to a weighted sum of the observed function values at nearby sampling points [8]. As a result, the sampling value is expected to fall within the interval specified by the prediction and corresponding variance. The variance mapping describes the prediction uncertainty, which will be high in regions with a low number of sampling points. The basic principles behind Kriging do not differ from those of traditional regression methods, such as linear regression. In fact, the Kriging response surface is assumed to be given by the following model: y(xi) = μ + ε(xi), i = 1, …, n (1) where μ is the mean of the stochastic process, and ε(xi) is Normal(0, s²). It can be shown that if the correlation between errors is modeled through a function which is inversely proportional to the distance between the points, one can afford to dispense with regression terms and replace them with a constant term. There are a number of correlation models that capture this desired trend: exponential, Gaussian, linear and spherical. Their efficiency in capturing the characteristics of the data depends on the nature of the process.
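A minimal ordinary-Kriging sketch of the predictor described above follows: mean and variance at a test point from a Gaussian correlation model. The correlation parameter theta is a made-up constant; a real implementation would fit it by maximum likelihood, as in the DACE model [7].

```python
# Ordinary Kriging prediction (mean and variance) with Gaussian correlation.
import numpy as np

def corr(a, b, theta=2.0):                        # Gaussian correlation model
    return np.exp(-theta * np.sum((a - b) ** 2, axis=-1))

def kriging_predict(X, y, x):
    n = len(X)
    R = corr(X[:, None, :], X[None, :, :])        # n x n correlation matrix
    r = corr(X, x)                                # correlations to the test point
    Ri = np.linalg.inv(R + 1e-10 * np.eye(n))     # small jitter for conditioning
    one = np.ones(n)
    mu = one @ Ri @ y / (one @ Ri @ one)          # generalized least-squares mean
    s2 = (y - mu) @ Ri @ (y - mu) / n             # process variance estimate
    yhat = mu + r @ Ri @ (y - mu)                 # BLUP prediction
    var = s2 * (1.0 - r @ Ri @ r                  # prediction variance, incl. the
                + (1.0 - one @ Ri @ r) ** 2 / (one @ Ri @ one))  # mean-estimation term
    return yhat, max(var, 0.0)

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 0.5])
print(kriging_predict(X, y, np.array([0.5, 0.5])))
```

The variance term is what the adaptive strategy of Section 2.2 uses to flag under-sampled regions.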
2.2. Adaptive Sampling Strategy The problem of minimizing sampling cost and time for the optimization of expensive or black-box models has attracted a lot of attention. In [7] the Expected Improvement function is introduced to direct the search to the global optimum. In feasibility analysis, however, the search should not be directed to a single optimal (minimum or maximum) value. On the contrary, it is important to identify boundaries of feasible operation within the entire range of uncertain parameters. Boundaries can be regarded as a collection of points where the feasibility function takes the value of zero, whereas all feasible regions have a negative feasibility function and infeasible regions a positive one. Thus the following approach is proposed. Initially a small sampling set is chosen to produce the initial Kriging response surface based on a space-filling experimental design. The experimental design should cover the entire range of uncertain parameters. Two factors have been identified as critical when choosing the initial sampling design: (a) the range of uncertain parameters, and (b) the complexity of the feasible region, ranging from linear and convex to non-linear and non-convex feasible regions. The parameter range plays an important role because it is very important to initially obtain a spatially representative sample of the process. The non-convexity of the feasible region, however, plays the most important role in sampling. The purpose of a successful adaptive sampling strategy lies in identifying the minimum number of samples of a representative initial experimental design, which will be adaptively augmented by a set of samples in critical regions that lie near feasibility boundaries. In order to assess the sufficiency of the initial DoE, two critical metrics are used: (a) the overall average Kriging prediction variance and (b) normality plots based on a leave-one-out cross-validation technique. Once the initial DoE fulfils the desired predefined criteria, the algorithm identifies possible critical regions as those where the Kriging variance is higher than a given tolerance. In addition, new search directions are identified as those between points whose feasibility values have a negative product, signifying points close to the boundary of the feasible region. 2.3. Model Validation The validation of black-box models is achieved through cross-validation techniques, allowing the assessment of the accuracy of the produced model without the need to increase the sampling cost [10]. Leave-one-out cross-validation is an iterative procedure during which one observed sample is "left out" at each step and a new model is constructed based on the reduced sample set. If the sampling set is adequate for the specific region and there are no major outliers in the sampling set, then the removal of one point should not affect the new model parameters. This can be verified by simply using the model based on the reduced sampling set to predict the left-out sample (for which the real output is known). A commonly used diagnostic test that can provide a fast evaluation of the robustness and validity of a surrogate model is a plot of the actual function value versus the cross-validated prediction. In the proposed adaptive sampling methodology, the robustness of the initial sampling set is very important because if the region is not sufficiently sampled, the method may fail to locate critical boundary regions.
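The leave-one-out procedure of Section 2.3 is model-agnostic; a short sketch follows. The `fit_predict` argument would be the Kriging predictor from the sketch above (an assumption); a trivial inverse-distance predictor is used here only as a stand-in.

```python
# Leave-one-out cross-validation: drop one sample, refit, predict the dropped
# sample, and collect the (actual, predicted) pairs for the diagnostic plot.
import numpy as np

def loo_cv(X, y, fit_predict):
    pairs = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i                # leave sample i out
        yhat = fit_predict(X[mask], y[mask], X[i])   # refit on the reduced set
        pairs.append((y[i], yhat))                   # actual vs. predicted
    return np.array(pairs)

def idw(Xs, ys, x):                                  # stand-in surrogate model
    w = 1.0 / (np.linalg.norm(Xs - x, axis=1) + 1e-9)
    return w @ ys / w.sum()

X = np.random.rand(20, 2)
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
pairs = loo_cv(X, y, idw)
print("mean abs LOO error:", np.abs(pairs[:, 0] - pairs[:, 1]).mean())
```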
3. Results 3.1. Example 1: Feasibility analysis of a convex model with linear constraints The following example is a linear problem with two uncertain parameters θ1, θ2 and one control variable z, with a convex feasible region. The feasibility problem takes the following form:

min_z u
s.t. z + θ1 − θ2 − 4 ≤ u
     −z − θ1/3 − θ2 + 4/3 ≤ u
     z − θ1 + θ2/2 ≤ u      (2)
θ1 ∈ [0, 8], θ1^N = 4; θ2 ∈ [0, 4], θ2^N = 2

The initial experimental design is a rectangular grid of 25 (5²) points. Figure 1 shows the cross-validation diagnostic plot of the initial experimental design. After two iterations, a total number of 30 samples is needed to predict the feasible region (Figure 2), compared to a predefined number of 121 samples reported as necessary to obtain the same results using the methodology in [5].
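Under the assumption that the constraint coefficients are as reconstructed in Eq. (2) above, the feasibility test ψ(θ) = min_z u reduces to a small linear program at each grid point, as in the following sketch; a point is feasible when ψ ≤ 0.

```python
# Grid-based feasibility map for Example 1 (coefficients as in Eq. (2)).
import numpy as np
from scipy.optimize import linprog

def psi(t1, t2):
    c = [0.0, 1.0]                       # decision variables x = [z, u], min u
    A = [[1.0, -1.0],                    #  z + t1 - t2 - 4      <= u
         [-1.0, -1.0],                   # -z - t1/3 - t2 + 4/3  <= u
         [1.0, -1.0]]                    #  z - t1 + t2/2        <= u
    b = [4.0 - t1 + t2,
         -4.0 / 3.0 + t1 / 3.0 + t2,
         t1 - t2 / 2.0]
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None), (None, None)])
    return res.fun

grid = [(t1, t2) for t1 in np.linspace(0, 8, 9) for t2 in np.linspace(0, 4, 5)]
feasible = [(t1, t2) for t1, t2 in grid if psi(t1, t2) <= 0.0]
print(f"{len(feasible)} of {len(grid)} grid points feasible")
```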
Figure 1. Cross-validation diagnostic plot of the initial experimental design of 25 samples
Figure 2. Predicted vs. Real Feasible region of Problem 2
3.2. Example 2: Feasibility analysis of a non-convex problem with non-linear constraints This problem is characterized by a non-convex feasible region, due to a highly non-linear constraint that causes discontinuities in the region. Starting from a small sampling set (25 points), the initial experimental design that satisfies the criteria contains 81 samples (Figure 3). After 3 iterations, Figure 4 presents the real feasible region bounded by the constraints as well as the predicted feasible region after sampling at 105 locations. Even though the necessary sampling set is larger, it is still smaller than the set (> 200) needed for HDMR feasibility analysis [5].

min_z u
s.t. −2θ1 + θ2 − 15 ≤ u
     θ1²/2 + 4θ1 − θ2 − 5 ≤ u
     −(θ1 − 4)²/5 − θ2²/0.5 + 10 ≤ u      (3)
θ1 ∈ [−10, 5], θ1^N = −2.5; θ2 ∈ [−15, 15], θ2^N = 0
4. Conclusions In this work, a Kriging-based methodology is introduced for accurately mapping the feasible region of operation of black-box processes as a function of the range of uncertain parameters. Several key aspects are identified in order to develop a systematic and reliable method for identifying the feasible operation while simultaneously minimizing the required sampling cost.
Figure 3. Cross-validation diagnostic plot of the initial experimental design of 81 points
Figure 4. Predicted vs. Real feasible region of Problem 3
First, it is important to ensure that the initial experimental design is representative of the entire feasible region and this is assessed by cross-validation diagnostic plots and the average Kriging error estimate. Once the initial experimental design samples are obtained, the sampling set is adaptively refined only in critical regions that are likely to provide maximum information to the model.
Acknowledgments The authors acknowledge the support provided by the ERC (NSF-0504497, NSF-ECC 0540855).
References
1. Swaney, R.E. and I.E. Grossmann, An index for operational flexibility in chemical process design. Part I: Formulation and theory. 1985. p. 621-630.
2. Floudas, C.A. and Z.H. Gumus, Global Optimization in Design under Uncertainty: Feasibility Test and Flexibility Index Problems. Industrial & Engineering Chemistry Research, 2001. 40(20): p. 4267-4282.
3. Grossmann, I.E. and C.A. Floudas (1987) Active Constraint Strategy for Flexibility Analysis in Chemical Processes. Computers & Chemical Engineering 11, 675-693.
4. Ierapetritou, M.G., New approach for quantifying process feasibility: Convex and 1-D quasi-convex regions. 2001. p. 1407-1417.
5. Banerjee, I. and M.G. Ierapetritou, Design Optimization under Parameter Uncertainty for General Black-Box Models. Industrial & Engineering Chemistry Research, 2002. 41(26): p. 6687-6697.
6. Rasmussen, C.E. and C.K.I. Williams, Gaussian Processes for Machine Learning. MIT Press, 2006, Boston.
7. Jones, D.R., M. Schonlau, and W.J. Welch, Efficient Global Optimization of Expensive Black-Box Functions. Journal of Global Optimization, 1998. 13(4): p. 455-492.
8. Davis, E. and M. Ierapetritou, A kriging based method for the solution of mixed-integer nonlinear programs containing black-box functions. Journal of Global Optimization, 2009. 43(2): p. 191-205.
9. Davis, E. and M. Ierapetritou, A centroid-based sampling strategy for kriging global modeling and optimization. AIChE Journal. 56(1): p. 220-240.
10. Browne, M.W., Cross-Validation Methods. Journal of Mathematical Psychology, 2000. 44(1): p. 108-132.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) c 2011 Elsevier B.V. All rights reserved.
Multiobjective Optimization for Plastic Sheet Production
M. Rivera-Toledo, G. Meneses-Castellanos, and A. Flores-Tlacuahuac∗
Depto. Ing. y Ciencias Químicas, Universidad Iberoamericana, México D.F., México
Abstract In this work, we formulate a Multiobjective Optimization problem using conflicting performance objectives in polymerization systems, such as maximizing monomer conversion and minimizing the molecular weight distribution. The problem is subject to a mathematical model comprising highly nonlinear Partial Differential Equations, which describe the dynamic response of the poly(methyl methacrylate) cell-cast process. A full discretization approach was used for solving the associated Nonlinear Programming problems. We analyze the effect of different process constraints on the Pareto curve and select suitable operating policies in an open-loop environment. Keywords: multiobjective, sheet reactor, PDEs
1 Introduction One of the main issues addressed in the optimization of chemical processes so far has been optimization for one objective at a time. However, many practical applications involve several objectives to be considered simultaneously. The appropriate objectives for a particular application are often conflicting, which means achieving the optimum for one objective requires some compromise on one or more other objectives. Multiobjective optimization (MO), particularly outside engineering, refers to finding values of decision variables which correspond to and provide the optimum of more than one objective. MO can be applied to handle conflicting performance objectives. The goal is to obtain a set of equally good solutions, known as Pareto optimal solutions. In a Pareto set, no solution can be considered to be better than any other solution with respect to all objective functions. When one moves from one Pareto solution to another, at least one objective function improves while at least one other gets worse. Hence, MO involves special methods for considering more than one objective and analyzing the results obtained. The most often exploited approaches to generate this Pareto set are the weighting method and the ε-constraint method [see Rangaiah (2009)], although more recent approaches are available [see Das and Dennis (1998)]. In this work we address the MO of an experimentally validated model of an industrial polymerization reactor whose dynamic model is described in terms of a set of highly nonlinear partial differential and algebraic equations. As conflicting objectives we have selected the maximization of both monomer conversion and molecular weight distribution which, for some polymerization systems, are commonly in conflict. ∗ [email protected]
2 Cell-Cast Process In the typical casting of acrylic sheet, molds formed by two glass plates, which are separated by a peripheral gasket sealer and clamped together, are filled with casting syrup through a gap left in the gasket. The casting syrup is made up of partially polymerized monomer (20%) which, once placed in the mold, is inserted into a furnace heated by circulating warm air (see Figure 1). It is extremely important to control the progress of the polymerization throughout the procedure and to create suitably mild thermal conditions, which in turn requires speedy and effective dissipation of excess heat. Due to the low heat capacity of air, effective control of the thermal conditions during the operation is very important, since the heating is effected by the circulating air, as shown by M. Rivera and Vílchis (2006). Besides the conventional chemical kinetics, physical phenomena related to the diffusion of various chemical reactive species are very important in free-radical polymerization reactions. Several models have been published dealing with the mathematical description of diffusion-controlled kinetic rate constants in free-radical polymerization [see Dubé et al. (1997)]. The reaction mechanism adopted here consists of a simple approximation of the well-known free-radical polymerization kinetics featuring straightforward initiation, propagation, and termination reactions as described by Achilias and Kiparissides (1992). The following assumptions are taken: (1) the diffusion effect is negligible since we are interested in the thermal process behavior, so the polymer processing is controlled by the chemical kinetics; (2) only the mass balances for the monomer conversion, initiator concentration and growing radical concentration are considered. The sheet reactor model is considered for PMMA plastic sheet production. For lack of space, in this paper we indicate only briefly some assumptions on which the mathematical model rests; in previous work, M. Rivera and Vílchis (2006), we discuss in detail the limitations and scope of the model. This model was derived assuming that the heating source resulting from the polymer reaction is a function of the local temperature. It was also assumed that polymer properties like density, heat capacity and thermal conductivity are constant. To obtain the one-dimensional dynamic energy balance, the total heat entering and leaving at the z coordinate was modeled by the Fourier law and the rate of change of energy in the control volume was obtained by applying the shell energy balance method. Dynamic mass and energy balances coupled through the polymerization kinetics describe the time evolution of the monomer conversion, initiator concentration, growing radical concentration, and Mw. Air is circulated through the forced convection mechanism inside the oven to provide the required energy to bring
Figure 1. Cell-cast process for PMMA plastic sheet manufacture
up the plastic sheet temperature to a point where significant polymerization rates take place. Inside the monomer, the dominant heat transfer mechanism is conduction. The dimensionless modeling equations for the sheet reactor consist of the following energy and mass balances:
∂θ/∂τ = a² ∂²θ/∂ζ² + Bi(θa − θ) + A9 [(1 − X)/(1 + εX)] e^(λ̄0 − A2/θ)   (1)

dX/dτ = A1 (1 − X) e^(λ̄0 − A2/θ)   (2)

dλ̄0/dτ = A4 [(1 − X)/(1 + εX)] e^(λ̄0 − A2/θ) + A6 e^(−λ̄0 − A5/θ) − A7 e^(λ̄0 − A8/θ)   (3)

dĪ/dτ = −A3 e^(−A5/θ) Ī   (4)

and the initial (IC) and boundary (BC) conditions are given by

IC: τ = 0, ∀ζ ∈ [0, 1]: θ = θ0, X = X0, λ̄0 = ln(λ0), Ī = 1
BC1: ∀τ > 0, ζ = 0: ∂θ/∂ζ = Bi(θa − θ)
BC2: ∀τ > 0, ζ = 1: ∂θ/∂ζ = 0   (5)
Here, the dimensionless variables for the polymer temperature, air temperature, position, time, growing radical concentration and the initiator concentration are defined as θ = T/T0, θa = Ta/T0, ζ = z/L, τ = αt/H², λ̄0 = ln(λ0), Ī = I/I0, respectively; Bi = hH/k is the Biot number, T is the polymer temperature, T0 is the initial monomer temperature, Ta is the surrounding temperature, t is the polymerization time, z is the axis for the sheet length, X is the monomer conversion, λ0 is the growing radical concentration, I is the initiator concentration, L is the sheet length, H is the sheet thickness, ε is the volume expansion factor, α is the thermal diffusivity, and h is the heat transfer coefficient.
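A method-of-lines sketch of Eqs. (1)-(5), as reconstructed above, is given below: central differences in ζ, a ghost-node Robin condition at ζ = 0, and a stiff integrator in τ. The Ai, a and Bi values are those listed in Section 4; ε, θa, X0, λ0(0) and the time horizon are assumed values for illustration only.

```python
# Method-of-lines integration of the reconstructed sheet-reactor model.
import numpy as np
from scipy.integrate import solve_ivp

A1, A2, A3, A4 = 6.1539e8, 6.7786, 1.3184e18, -1.4626e8
A5, A6, A7, A8, A9 = 47.6514, 1.3812e18, 1.2266e11, 1.0916, 4.4036e8
a, Bi = 5.555e-4, 5.3366
eps = 0.0          # volume expansion factor: value not given here, assumed zero
N = 21
dz = 1.0 / (N - 1)

def rhs(tau, s, theta_a=1.05):   # theta_a: assumed constant heating profile
    th, X, lam, I = s.reshape(4, N)
    e_p = np.exp(lam - A2 / th)                    # e^(lambda0_bar - A2/theta)
    d2 = np.empty(N)
    d2[1:-1] = (th[2:] - 2 * th[1:-1] + th[:-2]) / dz ** 2
    thg = th[1] - 2 * dz * Bi * (theta_a - th[0])  # ghost node enforcing BC1
    d2[0] = (th[1] - 2 * th[0] + thg) / dz ** 2
    d2[-1] = 2 * (th[-2] - th[-1]) / dz ** 2       # BC2: zero flux at zeta = 1
    g = (1 - X) / (1 + eps * X)
    dth = a ** 2 * d2 + Bi * (theta_a - th) + A9 * g * e_p
    dX = A1 * (1 - X) * e_p
    dlam = A4 * g * e_p + A6 * np.exp(-lam - A5 / th) - A7 * np.exp(lam - A8 / th)
    dI = -A3 * np.exp(-A5 / th) * I
    return np.concatenate([dth, dX, dlam, dI])

s0 = np.concatenate([np.ones(N), 0.2 * np.ones(N),           # theta0 = 1, X0 = 0.2
                     np.log(1e-9) * np.ones(N), np.ones(N)])  # lambda0, I0 assumed
sol = solve_ivp(rhs, (0.0, 0.05), s0, method="BDF")
print(sol.y[:N, -1])   # final dimensionless temperature profile
```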
3 Multiobjective Optimization Problem
In the present work, we formulate an MO problem using conflicting performance objectives in polymerization systems, namely maximizing the molecular weight distribution, fMw, and maximizing the monomer conversion, fX, as follows:
min fMw = ∫0^τ (1 − Mw/Mw^d)² dτ   and   max fX = ∫0^τ (1 − θ/θ^d)² X dτ   (6)

Then we use the ε-constraint method for generating the Pareto frontier, in which one of the solutions will be an "ideal" solution. The MO problem (equation 6) is subject to the partial differential and algebraic equations (PDAE) and the initial and boundary conditions (equations 1-5). In the above equation, θ^d and Mw^d stand for the desired values of the plastic sheet temperature and molecular weight distribution, respectively. The ratio between the rate of propagation and the rates of propagation and termination of polymerization was used to obtain Mw. Using the simultaneous approach [Biegler et al. (2003)] for solving the dynamic optimization problems, these are converted into a nonlinear programming (NLP) problem by approximating the state (θ, X, Ī, λ̄0, Mw) and control (θa) variables through the application of the method of lines for the spatial coordinate and orthogonal collocation on finite elements for the time coordinate.
4 Results In Figure 2, we present a comparison between the Pareto-optimal set and the ideal solutions when (a) Mw^d = 1x10^4 and (b) Mw^d = 1x10^6, for (ii) the dynamic temperature, (iii) Mw and growing radical concentration, and (iv) monomer conversion and initiator concentration profiles for a 3 mm plastic sheet thickness. Solutions are labelled for the edge (z = 0) and the right extreme (z = L) along the longitudinal z-coordinate. The numerical results were obtained using the following dimensionless parameter values for the PDAE: A1 = 6.1539x10^8, A2 = 6.7786, A3 = 1.3184x10^18, A4 = −1.4626x10^8, A5 = 47.6514, A6 = 1.3812x10^18, A7 = 1.2266x10^11, A8 = 1.0916, A9 = 4.4036x10^8, a = 5.555x10^-4, and Bi = 5.3366. When the ε-constraint method was used, the MO problem was written as min fMw subject to fX ≥ ε and the PDAE, with ε lying in the interval [0.75, 0.99]. The NLP problem was solved using the CONOPT NLP solver embedded in the GAMS algebraic modelling system. An ideal solution was obtained by determining the Pareto solution which is closest to the utopia point, according to the approach suggested in Grossmann and Jain (1982). As can be seen from the results displayed in Figure 2, acceptable monomer conversion and molecular weight distribution results are obtained by setting a trade-off between the addressed conflicting objectives. Despite the complexity of the underlying dynamic system, the Pareto frontier was computed in a relatively modest computational time.
5 Conclusions In this work the multiobjective optimization of a highly nonlinear polymerization reactor was addressed. Because conflicting objectives normally emerge in polymerization systems, it makes sense to compute the set of optimal solutions that reflect a trade-off among the conflicting objectives and to let the designer pick the solution he/she considers to meet the operating objectives in the best possible manner. Even though the set of optimal solutions is presently computed off-line, we are extending this work to compute and implement on-line optimal dynamic solutions by using nonlinear model predictive control strategies.
References
Achilias, D., Kiparissides, C., 1992. Development of a General Mathematical Framework for Modeling Diffusion-Controlled Free Radical Polymerization Reactions. Macromolecules 25, 3739–3750.
Biegler, L., Ghattas, O., Heinkenschloss, M., van Bloemen Waanders, B., 2003. Large-Scale PDE-Constrained Optimization. Springer, Berlin.
Das, I., Dennis, J. E., 1998. Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems. SIAM J. Optim. 8, 631.
Dubé, M. A., Soares, J. B. P., Penlidis, A., Hamielec, A. E., 1997. Mathematical Modeling of Multicomponent Chain-Growth Polymerizations in Batch, Semibatch, and Continuous Reactors: A Review. Ind. Eng. Chem. Res. 36, 966–1015.
Grossmann, I. E., Drabbant, R., Jain, R. K., 1982. Incorporating toxicology in the synthesis of industrial chemical complexes. Chem. Eng. Commun. 17, 151.
Rivera-Toledo, M., García, L. E., Flores-Tlacuahuac, A., Vílchis, L., 2006. Dynamic modeling and experimental validation of the MMA cell-cast process for plastic sheet production. Ind. Eng. Chem. Res. 45 (25), 8539–8553.
Rangaiah, G. P., 2009. Multi-objective optimization: techniques and applications in chemical engineering. World Scientific, Singapore.
Figure 2. Plot of (i) Pareto-optimal set and ideal solutions when (a) Mw^d = 1x10^4 and (b) Mw^d = 1x10^6, for dynamic (ii) monomer and air temperatures, (iii) Mw and growing radical concentration, (iv) monomer conversion and initiator concentration profiles for 3 mm plastic sheet thickness. Solutions are labelled for the edge, z = 0 (subscript 1), and right extreme, z = L (subscript Nz), along the longitudinal z-coordinate.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Systematic identification and robust control design for uncertain time delay processes
Jakob K. Huusom,a Niels K. Poulsen,b Sten B. Jørgensen,a John B. Jørgensenb
a CAPEC, Department of Chemical and Biochemical Engineering, Technical University of Denmark, DK-2800 Lyngby, Denmark
b Department of Informatics and Mathematical Modelling, Technical University of Denmark, DK-2800 Lyngby, Denmark
Abstract A systematic procedure is proposed to handle the standard process control problem. The considered standard problem involves infrequent step disturbances to processes with large delays and measurement noise. The process is modeled as an ARX model and extended with a suitable noise model in order to reject unmeasured step disturbances and unavoidable model errors. The resulting controller is shown to perform well for both set point tracking and disturbance rejection in a SISO process example of a furnace whose time delay is significantly longer than the dominant time constant. Keywords: Model Predictive Control, Autoregressive models, Time delay systems.
1. Introduction Many chemical engineering processes contain time delays and a degree of measurement noise. Often unmeasured step disturbances appear, e.g. when a feed source changes, which may give rise to offset in the controlled variable. This rather common control problem deserves development of a systematic methodology. In the present contribution a model based methodology which combines model identification with unmeasured disturbance estimation and robust offset free control design is developed. The methodology is illustrated on a furnace control problem. Model Predictive Control (MPC) is a state of the art control technology which utilizes a model of the system to predict the process output over some future horizon and solve a quadratic optimization problem with the control signal as decision variables. Early achievements and industrial implementations of Model Predictive Control include IDCOM and Dynamic Matrix Control [1,2]. These algorithms were based on step or impulse response models. More general linear input-output model structures were used in Generalized Predictive Control [3], but an interest in MPC implementations based on state space models was created by the seminal paper [4]. The state space approach provides a unified framework for discussion of the various predictive control algorithms and is well suited for stability analysis [5]. Therefore, MPC based on state space models is useful as an implementation paradigm when other linear model classes are identified.
2. Model Predictive Control based on Autoregressive models It is proposed to base the MPC on autoregressive models with exogenous inputs (ARX):

A(q⁻¹) y(t) = B(q⁻¹) u(t) + ε(t),   ε(t) ~ N(0, σ²)

ARX models can be reliably identified from data using convex optimization, since they are linear in the system parameters. This feature presents an advantage in embedded applications for robust and automatic system identification. Furthermore, MIMO systems can be identified as easily as SISO systems, which is not the case for the
class of ARMAX models. The ARX model can be converted to a state space model in the observer canonical, innovation form, hence the special correlation between process and measurement noise can be exploited. The innovation, ek = yk − ŷk|k−1, and future predictions of the states and the process output are calculated based on the Kalman filter. The optimal control input sequence is based on the minimization of the following quadratic objective, subject to constraints on the input, u, and the control move, Δu:

φ = ½ Σj=0..N−1 ( ||ŷk+1+j|k − rk+1+j||²_Q + ||Δuk+j|k||²_Su )
where Q and Su are weight matrices. The constrained optimal control problem can be converted into a standard convex quadratic program [6]. 2.1. Soft output constraint Model Predictive Control The above objective function has the disadvantage that the presence of noise gives rise to an active controller even though the process control operates with zero error on average. Therefore a new performance objective which includes soft constraints is introduced:

φsoft = ½ Σj=0..N−1 ( ||ŷk+1+j|k − rk+1+j||²_Q + ||Δuk+j|k||²_Su + ||ηk+1+j||²_Sη + 2 sηᵀ ηk+1+j )
where Q, Su and Sη are weight matrices and η is a vector of auxiliary variables. The MPC controller with soft output constraints solves the quadratic programming problem

min over {uk+j, ηk+1+j}, j = 0, 1, 2, …, N − 1, of φsoft

such that the input, u, and the control move, Δu, are constrained to an interval. The soft output constraints are imposed by demanding that the auxiliary variable, η, is positive and

ymin,k+1+j − ηk+1+j ≤ yk+1+j ≤ ymax,k+1+j + ηk+1+j

Fig. 1 shows the penalty function on the tracking error for nominal and soft output constrained MPC. It is seen that inclusion of the soft constraints gives a detuning of the controller within the limits on the tracking error. An in-depth discussion of the implementation of soft output constrained MPC is given in [8], which also covers the improved robustness compared to the nominal MPC.
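A toy sketch of this soft-constrained quadratic program follows, written with cvxpy for clarity. The one-state plant, horizon, and all weights are made-up values; the real controller uses the Kalman-filter predictions of the ARX model.

```python
# Soft-constrained MPC QP over one horizon, with a toy first-order plant.
import cvxpy as cp

N = 10                          # prediction/control horizon
a, b = 0.9, 0.5                 # toy plant x+ = a x + b u, y = x (assumed)
x0, r = 0.0, 1.0                # initial state and set point
Q, Su, Seta, seta = 1.0, 10.0, 1.0, 1.0
ymin, ymax = r - 0.1, r + 0.1   # soft band around the reference

u = cp.Variable(N)
eta = cp.Variable(N, nonneg=True)          # auxiliary variables, eta >= 0
x = x0
cost, cons = 0, [cp.abs(u) <= 1.0]         # hard input interval
for j in range(N):
    x = a * x + b * u[j]                   # predicted output (affine in u)
    y = x
    cost += Q * cp.square(y - r) + Seta * cp.square(eta[j]) + 2 * seta * eta[j]
    du = u[j] - (u[j - 1] if j > 0 else 0.0)   # control move (u_{-1} = 0 assumed)
    cost += Su * cp.square(du)
    cons += [ymin - eta[j] <= y, y <= ymax + eta[j], cp.abs(du) <= 0.5]

prob = cp.Problem(cp.Minimize(0.5 * cost), cons)
prob.solve()
print("first input to apply:", u.value[0])
```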
Figure 1. Penalty function for the tracking error for nominal and soft MPC.
2.2. Offset free performance Since one objective of the control is to ensure offset-free tracking, it is proposed to identify an ARX model from a set of input/output plant data and base the control implementation on the following model [7]:

A(q⁻¹) y(t) = B(q⁻¹) u(t) + η(t),   η(t) = [(1 − αq⁻¹)/(1 − q⁻¹)] e(t)

Due to the inclusion of the integrator in the noise model, the effect of a sustained non-zero disturbance can be eliminated. The first order moving average part of the noise model balances the speed of estimating an unknown disturbance versus the noise sensitivity of
the disturbance estimate. The effect that α has on this trade-off is depicted in Fig. 2. It is seen that α offers a good trade-off independent of the system. The variance of the disturbance estimate is less than 20% of the process noise and 95% of an unmeasured step is estimated in approximately 10 samples. The system description with the linear noise model can be realized as an ARMAX model, which means that it can be converted to the state space description used by the controller:

Ā(q⁻¹) y(t) = B̄(q⁻¹) u(t) + C(q⁻¹) e(t)

with Ā(q⁻¹) = (1 − q⁻¹) A(q⁻¹), B̄(q⁻¹) = (1 − q⁻¹) B(q⁻¹), and C(q⁻¹) = 1 − αq⁻¹.
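The polynomial conversion above is a simple convolution, as in the sketch below. The A and B coefficients are the identified values reported in Table 1 later in this paper; the value of α is an assumed tuning choice (see Fig. 2 for the trade-off it controls).

```python
# ARX -> ARMAX conversion: Abar = (1 - q^-1) A, Bbar = (1 - q^-1) B, C = 1 - alpha q^-1.
import numpy as np

A = np.array([1.0, -0.5683, -0.3782])   # A(q^-1) = 1 + a1 q^-1 + a2 q^-2
B = np.array([0.6892, 0.4783])          # B(q^-1) = b0 + b1 q^-1 (after the delay)
alpha = 0.85                            # assumed value of the noise-model parameter
diff = np.array([1.0, -1.0])            # the (1 - q^-1) differencing polynomial

Abar = np.convolve(diff, A)
Bbar = np.convolve(diff, B)
C = np.array([1.0, -alpha])

print("Abar:", Abar)   # [ 1.     -1.5683  0.1901  0.3782]
print("Bbar:", Bbar)   # [ 0.6892 -0.2109 -0.4783]
print("C:   ", C)
```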
Figure 2. The variance of the disturbance estimate and the response time of the disturbance estimator given a step, as functions of the free parameter α in the noise model.
3. Process example – The Gas-Oil Furnace This problem deals with a process where a liquid oil stream is heated and evaporated in a furnace. The example is inspired by a set of papers by Rivera and co-workers [9]. The goal when operating this plant is to maintain a constant gas temperature in the product stream by manipulating the fuel flow rate to the furnace such that oil feed flow rate disturbances are rejected. The disturbance is random while its mean value may change stepwise in order to change the unit throughput. The process and the signals are depicted in Fig. 3. The temperature dynamics of the process can be described by the following second order plus delay transfer functions with real-valued poles:

y(s)/u(s) = Gp(s) = 20 e^(−50s) / [(40s + 1)(4s + 1)]
y(s)/d(s) = Gd(s) = −5 e^(−10s) / [(5s + 1)(5s + 1)]

The process output is measured at discrete time instants every 2 minutes for the purpose of feedback control. This measurement is noisy and assumed to be Gaussian distributed:

yt = ȳt + et,   et ~ N(0, 0.5²)

The disturbance signal used for simulation of this process is assumed to behave as

dt = ddet + dran,   dran ~ N(0, 0.25²)

where ddet is the desired rate of production and dran is the stochastic element of the disturbance when the system is discretized with a sample time of 2 minutes.
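A sketch of how these furnace responses can be simulated discretely (Ts = 2 min) follows: the delay-free rational parts are discretized with a zero-order hold and the inputs shifted by the integer number of delay samples (25 and 5, from the 50 min and 10 min delays as reconstructed above).

```python
# Discrete simulation of the furnace: ZOH discretization plus integer delays.
import numpy as np
from scipy import signal

Ts = 2.0
gp = signal.cont2discrete(([20.0], np.polymul([40, 1], [4, 1])), Ts, method="zoh")
gd = signal.cont2discrete(([-5.0], np.polymul([5, 1], [5, 1])), Ts, method="zoh")

def simulate(sys_d, u, delay_samples):
    b, a = sys_d[0].ravel(), sys_d[1]
    u_del = np.concatenate([np.zeros(delay_samples), u])[:len(u)]
    return signal.lfilter(b, a, u_del)

n = 200
u = np.ones(n)                      # unit step in fuel flow at t = 0
d = np.zeros(n); d[50:] = 1.0       # step in oil feed after 100 min
y = simulate(gp, u, 25) + simulate(gd, d, 5)
y_meas = y + 0.5 * np.random.randn(n)   # measurement noise, N(0, 0.5^2)
print(round(y[-1], 2), "~ combined steady-state gain 20 - 5 = 15")
```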
Figure 3. The Oil-Gas Furnace process.
3.1. System Identification In order to implement MPC for the furnace temperature it is necessary to obtain a discrete time linear model of the dynamics between the input and the process output. This dynamic relation is estimated as an ARX model based on data generated from the true plant model. The structure of the noise model used in the MPC is exploited in the system identification: the model is identified from a data set {yf, uf}, which is the set of plant data {y, u} filtered through the inverse of the noise model. A PRBS signal with 360 samples, corresponding to a 12 h experiment, has been designed as the probing signal. In order to avoid too rapid changes the signal has been designed with banded frequency content: the signal exhibits changes between its extreme values every 10 samples or slower. The most promising model among a large range of low order ARX models with a fixed time delay is selected; models with delays ranging from 24 to 27 samples have been investigated. The model parameters, with the 99% confidence interval for the most promising model, are reported in Table 1. This model performs well when comparing estimated and observed step responses.

Table 1. Estimated ARX model parameters of the Gas-Oil Furnace process, with 99% confidence limits. The delay was estimated to be 26 samples. The sample time is 2 minutes.

A(q-1)    Estimate              B(q-1)    Estimate
a1        -0.5683 (±0.1547)     b0        0.6892 (±0.2485)
a2        -0.3782 (±0.1555)     b1        0.4783 (±0.2819)
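Because the ARX structure is linear in the parameters, the estimation reduces to an ordinary least-squares fit once the delay is fixed. The sketch below exercises this with simulated data from made-up "true" parameters; it is not the paper's identification run.

```python
# ARX parameter estimation by least squares (delay fixed at d samples).
import numpy as np

rng = np.random.default_rng(0)
n, d = 360, 26
a_true, b_true = [-0.57, -0.38], [0.69, 0.48]        # hypothetical true values
u = np.sign(rng.standard_normal(n // 10)).repeat(10)  # PRBS-like probing signal
y = np.zeros(n)
for t in range(n):
    if t >= d + 1:
        y[t] = (-a_true[0] * y[t - 1] - a_true[1] * y[t - 2]
                + b_true[0] * u[t - d] + b_true[1] * u[t - d - 1]
                + 0.1 * rng.standard_normal())

# regressor matrix: y(t) = -a1 y(t-1) - a2 y(t-2) + b0 u(t-d) + b1 u(t-d-1) + e(t)
Phi = np.array([[-y[t - 1], -y[t - 2], u[t - d], u[t - d - 1]]
                for t in range(d + 1, n)])
theta, *_ = np.linalg.lstsq(Phi, y[d + 1:], rcond=None)
print("estimates [a1, a2, b0, b1]:", theta.round(3))
```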
3.2. Closed loop performance The identified model is used in an ARX-MPC implementation with a prediction and control horizon of 60 samples. The actuator is constrained between ±1 and the control move is constrained to ±0.5. For the soft output constraint, a band of 5 degrees around the reference temperature is chosen. The closed loop responses to set point changes at 3.5 and 7 h and to an unmeasured step disturbance at 1 h are shown in Fig. 4.
Figure 4. Responses to a set point change and an unmeasured step disturbance with nominal (dotted line) and soft (full line) ARX-MPC. The dashed lines indicate constraints.
Satisfactory performance is found using the tuning {Q = 10⁻³, Su = 10³, Sη = 1} for the soft MPC, which corresponds to {Q = 1, Su = 10³, Sη = 0} for the nominal MPC. It is seen that for both implementations the closed loop system tracks the set points and rejects an unmeasured disturbance satisfactorily, despite the long time delay in the system. This long delay is the reason for the very high value of Su, which detunes the controller. For lower values, the response is faster but not sufficiently damped, while a higher value
gives a too slow response. In general it is seen that the nominal MPC keeps the output closer to the reference at the price of a more aggressive input signal.
4. Conclusions The almost standard control problem in the process industries, where processes with large delays and noise are exposed to infrequent step disturbances, is solved using a selected set of systems engineering methods. The paper shows that combining an ARX model based MPC design with soft output constraints and a system-independently tuned, filtered noise model provides a sound basis for a systematic treatment of the above standard control problem. The proposed filtered ARX-MPC control strategy with soft constraints provides a systematic approach to obtaining offset-free performance and reducing sensitivity to noise. The proposed strategy is especially advantageous for processes with delays longer than the dominant time constants. Since the identification method is guaranteed to converge and the control design is robust towards noise and model uncertainties, the combined methodology is expected to be robust.
5. Acknowledgements The first author gratefully acknowledges the Danish Council for Independent Research, Technology and Production Sciences (FTP) for funding through grant no. 274-08-0059.
References
1. J. Richalet, A. Rault, J. L. Testud, and J. Papon. 1978. Model predictive heuristic control: Application to industrial processes. Automatica, 14(5):413–428.
2. C. Cutler and B. Ramaker. 1980. Dynamic matrix control – A computer control algorithm. In Proceedings of the Joint Automatic Control Conference.
3. D. W. Clarke, C. Mohtadi, and P. S. Tuffs. 1987. Generalized predictive control – part 1. The basic algorithm. Automatica, 23(2):137–148.
4. K. R. Muske and J. B. Rawlings. 1993. Model predictive control with linear models. AIChE Journal, 39(2):262–287.
5. D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert. 2000. Constrained model predictive control: Stability and optimality. Automatica, 36(6):789–814.
6. J. K. Huusom, N. K. Poulsen, S. B. Jørgensen, and J. B. Jørgensen. 2010. Tuning of methods for offset free MPC based on ARX model representations. In Proceedings of the American Control Conference, pages 2355–2360.
7. J. K. Huusom, N. K. Poulsen, S. B. Jørgensen, and J. B. Jørgensen. Noise Modelling and MPC Tuning for Systems with Infrequent Step Disturbances. Submitted for the IFAC World Congress 2011.
8. G. Prasath and J. B. Jørgensen. 2009. Soft Constraints for Robust MPC of Uncertain Systems. In Proceedings of the International Symposium on Advanced Control of Chemical Processes.
9. D. E. Rivera, K. S. Jun, V. E. Sater and M. K. Shetty. 1996. Teaching process dynamics and control using an industrial-scale real-time computing environment. Computer Applications in Engineering Education, 4(3), 191–205.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Control and dynamic optimization of a BTX dividing-wall column
Anton A. Kiss,a Rohit R. Rewagadb
a AkzoNobel – Research, Development and Innovation, Velperweg 76, 6824 BM, Arnhem, The Netherlands. E-mail: [email protected]
b University of Twente, Faculty of Science and Technology, Enschede, The Netherlands. E-mail: [email protected]
Abstract This work presents simulation results on the control and dynamic optimization of a dividing-wall column (DWC) used for the separation of a benzene-toluene-xylene (BTX) ternary mixture. Rigorous simulations were carried out in Aspen Plus and Aspen Dynamics. Several conventional control structures based on PID control loops (DB/LSV, DV/LSB, LB/DSV, LV/DSB) were used as a control basis. These control structures were enhanced by adding an extra loop controlling the heavy component composition in the top of the prefractionator, using the liquid split as an additional manipulated variable and thus implicitly achieving minimization of the energy requirements. The results of the dynamic simulations show short settling times and low overshooting, especially for the DB/LSV and LB/DSV control structures. Keywords: dynamic optimization, PID control, energy efficiency, dividing-wall column
1. Introduction Industrial application of dividing-wall columns (DWC) for ternary separations is nowadays considered proven technology, with over 100 DWC reported in operation worldwide (Dejanovic et al., 2010). Remarkably, DWC is not limited to ternary separations: it can also be used in extractive distillation (Bravo-Bravo et al., 2010), azeotropic separations and reactive distillation (Kiss et al., 2009). DWC is considered a major breakthrough in distillation, as it brings significant reductions in CapEx and OpEx – up to 30-40%. It is the only known large-scale process intensification example where both capital and operating costs can be vastly reduced, with the additional benefit of reducing the required installation space by up to 40%. Basically, DWC is a practical implementation of the well-known Petlyuk configuration, which features a prefractionator and a main column with interconnected vapor and liquid streams (Figure 1, where A is the lightest, B the mid boiler, and C the heaviest component). In spite of being a technology already implemented at industrial scale, the dynamic control and optimization of DWC has been explored in only a few papers (Diggelen et al., 2010; Ling and Luyben, 2009, 2010). Compared to conventional separation sequences, the control of a DWC is more difficult due to the increased interaction among the controlled and manipulated variables. This paper proposes several multi-loop PID control structures (DB/LSV, DV/LSB, LB/DSV, LV/DSB) that keep the product purities under control while at the same time implicitly minimizing the energy requirements. This is achieved by manipulating the liquid split (rL) in order to control the composition of the heaviest component (C) in the top of the prefractionator side of the DWC.
Figure 1. Schematics of Petlyuk configuration (left) and dividing-wall column (right).
2. Problem statement The integration of two columns into one shell leads to more interactions among the controlled and manipulated variables, and ultimately affects the controllability of the system. Although much of the literature focuses on the control of binary distillation columns, there are only a few studies on the controllability and dynamic optimization of DWC (Halvorsen et al., 1997; Adrian, 2004; Ling, 2009, 2010; Diggelen et al., 2010). The problem is that different DWC separation systems were used in these studies, hence no fair comparison of controllers is possible. To solve this problem, we explore the DWC control issues on one system (BTX) and compare various multi-loop PID control strategies enhanced with implicit dynamic optimization – minimization of the energy requirements.
3. Steady-state and dynamic models Aspen Plus and Aspen Dynamics were used as CAPE tools to build the rigorous steady-state and dynamic simulations. Figure 2 (left) illustrates the schematics of the modeled DWC, consisting of 6 sections of 8 stages each. The feed stream of benzene-toluene-xylene (noted ABC for convenience) enters the prefractionator side, between sections 1 and 2. Benzene is obtained as top distillate, xylene as bottom product, and toluene is withdrawn as a side stream of the main column (between sections 4 and 5). The ternary diagram showing the composition profile along the column is given in Figure 2 (right). In this work, the steady-state purity of all product streams is set to 97% in order to allow comparison with previous work. The converged Aspen Plus simulation was exported to Aspen Dynamics, where several PID loops within a multi-loop framework were applied. Note that PID controllers remain the most used controllers in the chemical industry, for several practical reasons: • Simplicity of the control structure. • Robustness with respect to model uncertainties and disturbances. • Quite easy manual stabilization of the process when an actuator or sensor fails. In the case of a DWC, two loops are needed to stabilize the column and another three to maintain the set points specifying the product purities. From a practical viewpoint, there are only a few configurations that make sense. The levels of the reflux drum and the reboiler can be controlled by the variables L (liquid reflux), D (distillate), V (vapor boil-up) or B (bottoms). Consequently, there are four inventory control options to stabilize the column and to control the level in the reflux tank and the level in the reboiler, namely the combinations D/B, L/V, L/B and V/D (Diggelen et al., 2010).
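Because every structure considered here is built from standard PID loops, their logic can be prototyped outside Aspen Dynamics as well. Below is a minimal Python sketch of one such loop – a discrete PI controller with simple anti-windup, applied to a hypothetical reflux-drum level; the tuning values and the tank model are illustrative assumptions, not data from this work:

import numpy as np

class PI:
    """Discrete PI controller in positional form with anti-windup clamping."""
    def __init__(self, Kc, tauI, dt, u_min, u_max, u_bias=0.0):
        self.Kc, self.tauI, self.dt = Kc, tauI, dt
        self.u_min, self.u_max, self.u_bias = u_min, u_max, u_bias
        self.integral = 0.0

    def step(self, sp, pv):
        e = sp - pv
        self.integral += e * self.dt
        u = self.u_bias + self.Kc * (e + self.integral / self.tauI)
        if not (self.u_min <= u <= self.u_max):   # anti-windup: undo and clamp
            self.integral -= e * self.dt
            u = min(max(u, self.u_min), self.u_max)
        return u

# illustrative level loop: integrating tank, level controlled by the outflow
lc = PI(Kc=2.0, tauI=600.0, dt=1.0, u_min=0.0, u_max=1.0, u_bias=0.5)
level = 0.6                                       # start off the 0.5 setpoint
for k in range(3600):
    u = lc.step(sp=0.5, pv=level)
    level += (0.5 - u) * 1.0 / 100.0              # inflow 0.5, outflow u, area 100
print(round(level, 3))                            # settles near the setpoint

Only the pairing of controlled and manipulated variables changes between the DB/LSV, DV/LSB, LB/DSV and LV/DSB structures; the loop logic itself is identical.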
Figure 2. Schematics of the simulated DWC: 6 sections of 8 stages each (left). Ternary diagram showing the composition profile along the dividing-wall column (right).
Figure 3 shows the multi-loop PID control structures considered in this work: DB/LSV, DV/LSB, LB/DSV, and LV/DSB. The part for the control of product purities is often called regulatory control. One actuator is left (rL) that can be used for optimization purposes such as minimizing the energy requirements. Note that the control loops were tuned by the direct synthesis method (Luyben and Luyben, 1997). All these control structures are based on PID loops within a multi-loop framework, with an additional optimization loop that manipulates the liquid split in order to control the heavy component composition in the top of the prefractionator, thus implicitly achieving minimization of the energy requirements. Ling and Luyben (2009) have already shown that implicit optimization of the energy usage is achieved by controlling the heavy impurity at the top of the prefractionator. Note that any heavy component (C) escaping over the top of the wall will also appear in the liquid flowing down the main column and thus strongly affect the purity of the side stream (S). Since the side stream is collected as a liquid product, small amounts of light impurity in the vapour phase will not significantly affect its composition; however, even tiny amounts of heavy impurity in the liquid phase will greatly affect the composition of the side stream.
Figure 3. Control structures based on PID loops within a multi-loop framework.
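The direct synthesis tuning cited above can be sketched for a loop approximated by a first-order-plus-dead-time (FOPDT) model. One common form of the result is Kc = τ/(K(τc + θ)) and τI = τ, with τc the desired closed-loop time constant; the process parameters below are invented for illustration and are not taken from the BTX column:

def direct_synthesis_pi(K, tau, theta, tau_c):
    """PI settings for a FOPDT process K*exp(-theta*s)/(tau*s + 1).

    One common direct-synthesis result: Kc = tau / (K*(tau_c + theta)),
    tauI = tau, where tau_c is the desired closed-loop time constant.
    """
    Kc = tau / (K * (tau_c + theta))
    tauI = tau
    return Kc, tauI

# hypothetical composition loop: gain 0.8, time constant 12 min, delay 2 min
Kc, tauI = direct_synthesis_pi(K=0.8, tau=12.0, theta=2.0, tau_c=6.0)
print(Kc, tauI)   # -> 1.875, 12.0

Choosing a larger τc detunes the loop, which is the usual way to trade speed for robustness in such multi-loop structures.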
4. Results and discussion Sensitivity analysis was used to determine the optimal parameters corresponding to the minimum energy requirements. The diagrams in Figure 4 illustrate the optimal liquid split ratio (rL) – as well as the heavy component mole fraction in the top of the prefractionator (YC,PF1) – corresponding to the minimum reboiler duty (Qreb).
Figure 4. Reboiler duty vs. liquid split ratio (left) and molar fraction of the heavy component on the first stage of the prefractionator (right), for the base case and ±10% F.
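The sensitivity analysis behind Figure 4 reduces to an outer loop that scans the liquid split ratio and records the reboiler duty of each converged simulation. A sketch of that loop is given below; reboiler_duty is a hypothetical stand-in with a made-up quadratic shape, whereas in the study each point is a converged Aspen Plus evaluation:

import numpy as np

def reboiler_duty(rL):
    # hypothetical stand-in for one converged flowsheet evaluation;
    # a smooth curve with a minimum in the interior of the rL range
    return 900.0 + 4000.0 * (rL - 0.33) ** 2

rL_grid = np.linspace(0.05, 0.50, 46)
Q = np.array([reboiler_duty(r) for r in rL_grid])
rL_opt = rL_grid[np.argmin(Q)]
print(f"optimal liquid split ~ {rL_opt:.2f}, Qreb ~ {Q.min():.0f} kW")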
For the dynamic simulations performed in this study, the purity set points (SP) are 97% for all product specifications, while persistent disturbances of +10% in the feed flow rate (F) and +10% in the feed composition (xA) were used as dynamic scenarios. Although the reported disturbances are not exerted at the same time, no serious problems – such as instability or failure to reach the set points – were observed in the case of simultaneous disturbances. Due to space limitations, we present here the dynamic response only for the two best control structures: DB/LSV and LB/DSV.
Figure 5. Dynamic response of the DB/LSV control structure to a persistent disturbance of +10% in the feed flow rate (left) and +10% xA in the feed composition (right).
Figure 6. Dynamic response of the LB/DSV control structure to a persistent disturbance of +10% in the feed flow rate (left) and +10% xA in the feed composition (right).
The mole fractions of component A in the top distillate (xA), B in the side stream (xB) and C in the bottom product (xC) return to their set points (SP) within reasonably short settling times. The dynamic response of the DB/LSV control structure is shown in Figure 5 and is characterized by low overshooting and short settling times. Figure 6 illustrates the case of the LB/DSV control structure, which shows similar performance. The overall results of the dynamic simulations demonstrate that these control structures cope well with persistent disturbances in the feed flow rate and in the feed composition. Moreover, the DV/LSB control structure has a dynamic response similar to DB/LSV, while the LV/DSB control structure is similar to LB/DSV. However, the LV/DSB control structure leads to oscillations and longer settling times – in line with previous reports (van Diggelen et al., 2010). Basically, using the reboiler duty – instead of the bottoms flow rate – to control the liquid level, and the reflux to control the level in the reflux drum (when L >> D), leads to oscillations in the dynamic response.
5. Conclusions The DWC control structures proposed in this paper – based on PID controllers in a multi-loop framework – are able to simultaneously control the product compositions and minimize the energy requirements in a very practical way. The dynamic optimization is based on a simple strategy, namely controlling the heavy component composition at the top of the prefractionator side of the DWC by manipulating the liquid split ratio. Remarkably, this control condition is implicitly sufficient for near-optimal operation: the steady-state relationships show that maintaining or minimizing this composition leads to energy requirements at or near the minimum values as the feed composition changes. The results of the dynamic simulations illustrate the feasibility of the control structures. The DB/LSV and LB/DSV control structures are the best in terms of low overshooting and short settling times. Based on the successful application to other relevant mixtures and the excellent performance, similar to MPC (Kiss and Bildea, 2011), we consider these control structures well applicable to other ternary separations in a DWC.
Acknowledgements We thank Costin S. Bildea (‘Politehnica’ University of Bucharest, RO), Zarco Olujic (TU Delft, NL), Igor Dejanovic (University of Zagreb, HR), Ivar J. Halvorsen (SINTEF, NO) and Sigurd Skogestad (Norwegian University of Science and Technology) for the very helpful discussions. The financial support given by AkzoNobel to Rohit Rewagad (University of Twente, NL) during his MSc internship is also gratefully acknowledged.
References
1. R. Adrian, H. Schoenmakers, M. Boll, 2004, Chem. Eng. & Proc., 43, 347-355.
2. C. Bravo-Bravo, J. G. Segovia-Hernandez, C. Gutierrez-Antonio, A. L. Duran, A. Bonilla-Petriciolet, A. Briones-Ramirez, 2010, Ind. Eng. Chem. Res., 49, 3672-3688.
3. I. Dejanovic, Lj. Matijasevic, Z. Olujic, 2010, Chem. Eng. Proc., 49, 559-580.
4. I. J. Halvorsen, S. Skogestad, 1997, Comput. Chem. Eng., 21, 249-254.
5. R. C. van Diggelen, A. A. Kiss, A. W. Heemink, 2010, Ind. Eng. Chem. Res., 49, 288-307.
6. A. A. Kiss, J. J. Pragt, C. J. G. van Strien, 2009, Chem. Eng. Comm., 196, 1366-1374.
7. A. A. Kiss, C. S. Bildea, 2011, Chem. Eng. Proc., in press, DOI: 10.1016/j.cep.2011.01.011.
8. H. Ling, W. L. Luyben, 2009, Ind. Eng. Chem. Res., 48, 6034-6049.
9. H. Ling, W. L. Luyben, 2010, Ind. Eng. Chem. Res., 49, 189-203.
10. W. L. Luyben, M. L. Luyben, 1997, Essentials of Process Control, McGraw-Hill, New York.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Process Dynamic Optimization Using ROMeo Flavio Manentia,1, Guido Buzzi-Ferrarisa, Sauro Pieruccia, Maurizio Rovagliob, Harpreet Gulatib a
Politecnico di Milano, Dipartimento di Chimica, Materiali e Ingegneria Chimica “Giulio Natta”, Piazza Leonardo da Vinci 32, 20133 Milano, ITALY b Invensys Operations Management, 26561 Rancho Parkway South Lake Forest, 92630, California, USA 1 Corresponding author. Phone: +39 02 2399 3273; E-mail: [email protected]
Abstract The present research activity aims at demonstrating the feasibility of dynamic real-time optimization (D-RTO) on the industrial scale. Well-established, field-proven tools, such as ROMeo™ for real-time optimization (RTO) and DynSim™ for dynamic simulation, are combined with high-performance solvers for differential systems (BzzMath library) and specific methods (multiple shooting) to obtain a fully integrated solution for D-RTO. A steam-cracking furnace is selected as the validation case: the SPYRO®-based dynamic simulation is developed using FORTRAN, C++ and DynSim™, and it is integrated in ROMeo™ to perform the D-RTO. A quantitative comparison between the traditional RTO and the D-RTO is also provided. Keywords: Dynamic Optimization; ROMeo; DynSim; BzzMath; SPYRO.
1. Introduction Process dynamic optimization is a challenging issue for many research groups of the computer-aided process engineering community (Kadam et al., 2002; Tosukhowong et al., 2004; Lang and Biegler, 2007; Manenti and Rovaglio, 2008; Dones et al., 2010), with the need to find solutions that are at once efficient and robust, and to ensure their on-line feasibility for the large-scale systems typical of the process industry. In addition, no well-established, field-proven solutions are currently able to overcome the traditionally strong inertia of the process industries in implementing novel control and optimization methodologies, despite their effectiveness and economic benefits. From this perspective, it is not surprising that process dynamic optimization is still perceived as an academic concept rather than an industrial one, and it seems far from massive adoption in the field. This background summarizes the main reasons pushing us to exploit the architecture and the potential evolution of an existing, well-established, reliable, and widespread package like ROMeo™. This real-time optimizer is a commercial tool, field-proven in many industrial applications such as oil refineries, gas plants and petrochemicals. The idea of starting from ROMeo™ relies on the concept that assembling and evolving commercial, reliable tools will create an easier and more suitable transition to dynamic real-time optimization (D-RTO) applications in the process industries.
2. Essentials of dynamic real-time optimization (D-RTO) The D-RTO is similar in its mathematical formulation and time-scale to nonlinear model predictive control (NMPC), another optimization level of the so-called process
control hierarchy (Busch et al., 2007). Both are based on the moving horizon methodology (Rawlings, 2000) and lead to multidimensional, constrained, nonlinear programming (NLP) problems based on convolution models, often requiring specific optimizers and differential solvers (Manenti et al., 2009; Buzzi-Ferraris and Manenti, 2010a). Differences between D-RTO and NMPC problems and their solution strategies are summarized in many papers (Biegler and Grossmann, 2004). High-performance solvers and the use of parallel computing are explained elsewhere (Manenti et al., 2009; Buzzi-Ferraris and Manenti, 2010a). The multiple shooting technique, belonging to the family of simultaneous methods, is adopted in the present research activity.
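To make the multiple shooting idea concrete, the sketch below applies it to a scalar toy problem with SciPy: the horizon is split into shooting intervals, the initial state of each interval becomes an NLP variable, and continuity between intervals is enforced through equality constraints. This is a generic illustration only, not the ROMeo™/BzzMath implementation; the plant, target state and tolerances are arbitrary assumptions:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# toy plant: dx/dt = -x + u; steer x from 1.0 to 0.2 with minimum input effort
N, T = 4, 4.0                                   # shooting intervals, horizon
dt = T / N

def shoot(x0, u):
    """Integrate one shooting interval with a constant input u."""
    sol = solve_ivp(lambda t, x: -x + u, (0.0, dt), [x0], rtol=1e-8)
    return sol.y[0, -1]

def unpack(z):
    nodes = np.concatenate(([1.0], z[:N - 1]))  # interval initial states
    return nodes, z[N - 1:]                     # node states, interval inputs

def cost(z):
    _, u = unpack(z)
    return dt * float(np.sum(u ** 2))

def defects(z):                                 # continuity + terminal conditions
    nodes, u = unpack(z)
    d = [shoot(nodes[k], u[k]) - nodes[k + 1] for k in range(N - 1)]
    d.append(shoot(nodes[N - 1], u[N - 1]) - 0.2)
    return d

z0 = np.concatenate((np.linspace(1.0, 0.2, N + 1)[1:N], np.zeros(N)))
res = minimize(cost, z0, method="SLSQP",
               constraints={"type": "eq", "fun": defects})
print(unpack(res.x)[1])                         # piecewise-constant optimal inputs

Because each interval is integrated independently, the shoots can be evaluated in parallel, which is one reason the approach scales to the large convolution models used in D-RTO.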
3. Software integration The kernel of the present research activity is to combine three worlds to achieve an integrated and reliable tool for D-RTO:
• DynSim™ (Invensys): a powerful dynamic simulator for a wide set of processes.
• ROMeo™ (Invensys): a package for RTO, providing a complex and well-established architecture for process optimization.
• BzzMath library (Politecnico di Milano): a comprehensive numerical library to significantly speed up calculations, especially the integration of large-scale differential-algebraic systems (Buzzi-Ferraris, 2010).
A fourth element is needed to set up the selected study, to check the industrial feasibility of D-RTO and to validate the newborn integrated tool:
• SPYRO® (Pyrotec-Technip): a well-established tool to simulate the coil of the radiant section of the steam cracking furnaces of olefins plants.
Figure 1. Integration path to an effective solution for industrial D-RTO.
This is possible by exploiting features of object-oriented programming and MS Visual C++. As qualitatively reported in Figure 1, the differential and differential-algebraic solvers of the BzzMath library can be fully integrated and synchronized in DynSim™ by replacing the default solvers, so as to speed up computations and to ensure the online feasibility of D-RTO. Next, it is possible to develop complex dynamic models in the DynSim™ environment and solve them using BzzMath solvers with superior performance. Finally, rather than using the traditional steady-state models implemented in ROMeo™, a drag & drop technology has been developed to move the dynamic models developed in DynSim™ (together with the BzzMath solvers) into ROMeo™ and to use them as convolution models of a multiple shooting structure to solve the D-RTO problem.
4. Validation case: steam cracking furnace There are different methodologies to crack heavy hydrocarbons into light ends (e.g., fluid catalytic cracking, thermal cracking, hydrocracking). The selected validation case focuses on steam cracking, which produces ethylene and, in general, olefins from a feed of saturated hydrocarbons diluted with steam and then heated in a furnace. Before entering the radiant section, the feed is preheated in a series of heat exchangers placed in the convection section (see the qualitative scheme of Figure 2). In the thermal furnace the temperature is considerably high (>800°C) and the residence time is on the order of milliseconds (Dente et al., 1992). Here, one of the most important parameters for controlling and monitoring the process performance is the coil outlet temperature (COT), which is measured at the coil outlet before the gas exits the thermal furnace and is strictly related to the wall temperature. The hot gas is then quickly quenched in the transfer line exchanger (TLE) in order to stop the reactions and to produce high-pressure steam (about 100 bar) as well. Assuming a fixed residence time, the outlet composition depends on the feed composition, the hydrocarbon-to-steam ratio, and the COT. The outlet stream is sent to the main fractionator and to the separation section (Pierucci et al., 1996). Since the main goal of this paper is to check the industrial feasibility of D-RTO, a reduced portion of the furnace and of the control scheme is considered for the sake of simplicity. In addition, reactor efficiency degradation is not considered in this work; nonetheless, it is worth remarking that a steam cracking furnace can usually run for only a few months at a time between decoking operations. The selected control system for the radiant section is reported in Figure 2. It consists of a direct-acting temperature controller where the wall temperature of the radiant section, and hence the COT, is the controlled variable and the fuel fed to the burners is the manipulated variable: a higher fuel flow rate corresponds to a higher COT. A ratio controller manages the air flow rate blown into the radiant section, in order to maintain the desired stoichiometric ratio: the higher the fuel flow rate, the higher the air flow rate. The optimal setpoint is assigned by the D-RTO.
Figure 2. Half-plan slice of a thermal cracking furnace (left); radiant section and related control scheme considered for the D-RTO (right), where PV stands for process variable, OUT for controller output and SP for setpoint.
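The ratio controller of Figure 2 is logically very simple: the air flow setpoint tracks the measured fuel flow scaled by the desired stoichiometric ratio. A minimal sketch follows (the numerical ratio is an illustrative assumption, not a value from this work):

def air_flow_setpoint(fuel_flow, air_fuel_ratio=17.2):
    """Ratio control: the air flow setpoint tracks the measured fuel flow."""
    return air_fuel_ratio * fuel_flow

print(air_flow_setpoint(3000.0))   # air setpoint [kg/h] for 3000 kg/h of fuel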
5. Numerical results An all-in-one tool for SPYRO-based smart dynamic simulation and optimization of olefins plants was developed to check the D-RTO feasibility on a steam-cracking furnace; this required a complex programming activity. A mixed-language approach (Buzzi-Ferraris and Manenti, 2010b) was adopted to embed SPYRO®, a coil model completely written in FORTRAN, into the C++ dynamic model developed for simulating the radiant section of the thermal furnace. The high-performance solver for differential-algebraic systems from the BzzMath library was used to obtain an efficient and stable (in the following, “smart”) solution of the SPYRO-based dynamic simulation. The smart solution was implemented in DynSim™ to benefit from the user-friendly interface and the component, property, and thermodynamic databases of such a commercial dynamic simulation suite. Finally, the smart dynamic simulation was fully integrated and synchronized into the ROMeo™ environment by means of the multiple shooting method. This step was possible thanks to the structure of the OPERA® solver currently included in ROMeo™ and to the peculiarities of the BzzDae solver from the BzzMath library. A short selection of numerical results is reported in Figure 3. A severity change is imposed by a higher propylene price (the current worldwide market situation). The traditional RTO approach presents a marked instability in driving the furnace from the initial condition to the optimized one. Also, its convergence towards the new optimal point is significantly slower than along the D-RTO optimal path. Moreover, the variations in the fuel flow rate supplied to the furnace during the RTO severity change are so large that they exceed the physical upper bound of 7000 kg/h. Consequently, the RTO must perform a two-step severity change (Figures 3e-3f), significantly prolonging the process transient.
Figure 3. CH4/C3H6 (a) and C3H6/C2H4 (b) severity changes; convergence comparison between the RTO and the D-RTO for CH4/C3H6 (c) and C3H6/C2H4 (d) severity changes; comparison between the fuel flowrate supplied using the RTO and the D-RTO (e, f).
6. Conclusions The present activity showed the industrial feasibility of dynamic optimization (D-RTO). The main benefits of D-RTO over the traditional RTO have been discussed and quantified, showing, for example, that D-RTO practically halves the off-spec production during process transients. The computational efforts required to solve the RTO and D-RTO problems are practically comparable, making D-RTO feasible even on the industrial scale. Moreover, considering the traditional inertia of process industries and oil refineries, no visible changes were introduced for the ROMeo™ user, so as to preserve the current interface and to provide an easy-to-use tool for the fast industrial application of D-RTO.
7. Disclaimer ROMEO and DynSim are trademarks of Invensys Operations Management. SPYRO is a registered product of Technip-Pyrotec, originally developed by Politecnico di Milano.
References
Biegler, L.T., Grossmann, I.E., 2004, Retrospective on optimization, Computers & Chemical Engineering, 28(8), 1169-1192.
Busch, J., Oldenburg, J., Santos, M., Cruse, A., Marquardt, W., 2007, Dynamic Predictive Scheduling of Operational Strategies for Continuous Processes Using Mixed-logic Dynamic Optimization, Computers & Chemical Engineering, 31, 574-587.
Buzzi-Ferraris, G., Manenti, F., 2010a, A Combination of Parallel Computing and Object-Oriented Programming to Improve Optimizer Robustness and Efficiency, Computer Aided Chemical Engineering, 28, 337-342.
Buzzi-Ferraris, G., 2010, BzzMath: Numerical library in C++, Politecnico di Milano, http://chem.polimi.it/homes/gbuzzi.
Buzzi-Ferraris, G., Manenti, F., 2010b, Fundamentals and Linear Algebra for the Chemical Engineer: Solving Numerical Problems, Wiley-VCH, Weinheim, Germany.
Dente, M., Pierucci, S., Ranzi, E., Bussani, G., 1992, New Improvements in Modeling Kinetic Schemes for Hydrocarbon Pyrolysis Reactors, Chemical Engineering Science, 47, 2629-2634.
Dones, I., Manenti, F., Preisig, H.A., Buzzi-Ferraris, G., 2010, Nonlinear Model Predictive Control: a Self-Adaptive Approach, Industrial & Engineering Chemistry Research, 49(10), 4782-4791.
Kadam, J.V., Schlegel, M., Marquardt, W., Tousain, R.L., van Hessem, D.H., van der Berg, J., et al., 2002, A Two-level Strategy of Integrated Dynamic Optimization and Control of Industrial Processes - a Case Study, ESCAPE-12, The Hague, The Netherlands, 511-516.
Lang, Y.D., Biegler, L.T., 2007, A Software Environment for Simultaneous Dynamic Optimization, Computers & Chemical Engineering, 31, 931-942.
Manenti, F., Rovaglio, M., 2008, Integrated multilevel optimization in large-scale poly(ethylene terephthalate) plants, Industrial & Engineering Chemistry Research, 47(1), 92-104.
Manenti, F., Dones, I., Buzzi-Ferraris, G., Preisig, H.A., 2009, Efficient Numerical Solver for Partially Structured Differential and Algebraic Equation Systems, Industrial & Engineering Chemistry Research, 48(22), 9979-9984.
Pierucci, S., Brandani, P., Ranzi, E., Sogaro, A., 1996, An industrial application of an on-line data reconciliation and optimization problem, Computers & Chemical Engineering, 20, S1539-S1544.
Rawlings, J.B., 2000, Tutorial Overview of Model Predictive Control, IEEE Control Systems Magazine, 20(3), 38-52.
Tosukhowong, T., Lee, J.M., Lee, J.H., Lu, J., 2004, An Introduction to a Dynamic Plant-wide Optimization Strategy for an Integrated Plant, Computers & Chemical Engineering, 29(1), 199-208.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Model based optimisation of a cyclic reactor for the production of hydrogen
Filip Logist,a Joost Lauwers,a Benoît Trigaux,a Jan F. Van Impea
a BioTeC & OPTEC - Chemical Engineering Dept., Katholieke Universiteit Leuven,
W. de Croylaan 46, B-3001 Leuven, Belgium.
Abstract This paper studies the model based optimisation of a cyclically operated tubular reactor, i.e., the Cyclic Water Gas Shift Reactor, for the production of hydrogen. The most important degrees of freedom are first identified based on a sensitivity analysis and are afterwards optimised. The optimisation results show that an optimum exists. In general, short switching times and quasi-symmetric operation are preferred; deviations from the symmetric operation regime give rise to a drastic decrease in productivity. Keywords: cyclic operation, hydrogen production, dynamic optimisation, cyclic water gas shift reactor.
1. Introduction Operating reactors in a periodic way often leads to enhanced performance and process intensification [1]. In the current study, a model based optimisation is performed for a Cyclic Water Gas Shift Reactor (CWGSR). This type of reactor is based on the repeated reduction of a fixed bed using a mixture of hydrogen and carbon monoxide, and its subsequent oxidation with steam to produce pure hydrogen. This reactor has been identified as a promising alternative for upgrading hydrogen streams containing carbon oxides from, e.g., reforming processes to the high-purity hydrogen required for, e.g., fuel cells [2, 3]. However, the rigorous model based optimisation of the design and operation of this reactor is challenging due to its distributed nature and its time-periodic operation, which give rise to a system of coupled nonlinear partial differential equations (PDEs) with time-periodic boundary conditions. A method of lines approach is employed to reformulate the set of PDEs into a large-scale system of differential and algebraic equations (DAEs). Before starting the optimisation, a preliminary sensitivity analysis is performed to indicate the most important degrees of freedom, which are subsequently optimised. For the optimisation, a sequential strategy is employed in order to allow the use of standard numerical tools and to avoid the implementation of tailored schemes.
2. Cyclic Water Gas Shift Reactor
2.1. Operation The principle of the cyclic water gas shift reactor (also called the sponge iron process) is not new (see [2, 3] and the references therein). The water gas shift reaction is a red-ox reaction, in which the carbon within the carbon monoxide is oxidised and the hydrogen within the water is reduced. Due to the cyclic operation and the presence of iron oxide in the bed, which captures and releases oxygen, the two parts of the reaction are separated in time (see Fig. 1). When a stream containing carbon monoxide and hydrogen is fed into the reactor, the iron oxide is reduced, producing carbon dioxide and water:
CO + 1/x FeOx → CO2 + 1/x Fe
H2 + 1/x FeOx → H2O + 1/x Fe
When the bed is sufficiently reduced, the feed is switched to steam. The iron is then oxidised back to iron oxide and pure hydrogen, free of carbon monoxide, is produced:
H2O + 1/x Fe → H2 + 1/x FeOx
Fig 1. Scheme of the CWGSR [3].
2.2. Mathematical model As mathematical model, the conceptual 1D pseudo-homogeneous model from [3] is adopted. The model variables are the components CO, CO2, H2 and H2O as well as the degree of reduction of the bed. The reactions considered are:
r1: H2O + Fe → H2 + FeOx    r2: H2 + FeOx → H2O + Fe
r3: CO2 + Fe → CO + FeOx    r4: CO + FeOx → CO2 + Fe
This yields the following balances for the individual components, the total mass, the degree of reduction and the energy:
\frac{\partial x_i}{\partial \tau} = -\omega \frac{\partial x_i}{\partial \zeta} + \sum_j \nu_{i,j} \, Da_j R_j   (1)
\frac{\partial \omega}{\partial \zeta} - \frac{1}{\vartheta} \frac{\partial \vartheta}{\partial \tau} - \frac{\omega}{\vartheta} \frac{\partial \vartheta}{\partial \zeta} = 0   (2)
\Theta \frac{\partial \alpha}{\partial \tau} = \sum_j \nu_{Fe,j} \, Da_j R_j   (3)
\Psi \frac{\partial \vartheta}{\partial \tau} = -\omega \frac{\partial \vartheta}{\partial \zeta} + \frac{1}{Pe} \frac{\partial^2 \vartheta}{\partial \zeta^2} + \sum_j \Delta\vartheta_{ad,j} \, Da_j R_j - St (\vartheta - \vartheta_e)   (4)
with τ and ζ the dimensionless time and length, x_i the dimensionless component gas concentrations, ω the dimensionless gas velocity, ϑ the dimensionless temperature and α the degree of reduction. ν_{i,j} is the coefficient of component i in reaction j, R_j is the rate of this reaction, and Δϑ_{ad,j} represents the heat of reaction. Da, Pe and St indicate the Damkohler, Peclet and Stanton numbers, respectively. The substantial fixed-bed capacity Θ and the thermal capacity Ψ are defined as the ratio of the oxygen capacity of the fixed bed to the gas hold-up, and the ratio of the heat capacity of the fixed bed to the heat capacity of the gas phase, respectively. (For the exact parameter values, see [3].)
3. Numerical strategy and approach. The equations above define a system of partial differential equations (PDEs). The classic Danckwerts boundary conditions are adopted as spatial boundary conditions, while due to the cyclic nature periodic boundary conditions in time have to be satisfied, i.e., the profiles have to repeat themselves. 3.1. PDE discretisation To solve the PDEs, a method of lines approach is adopted, exploiting the freeware Matlab MatMOL toolbox [4]. The rationale behind the method of lines is the discretisation of the variables and their spatial derivatives, resulting in a large system of ordinary differential and algebraic equations. This system is then integrated in time using available standard integrators. In the current situation, a grid of 51 equidistant discretisation points is used, and a two-point upwind and a three-point centred discretisation stencil are selected for the first- and second-order spatial derivatives, respectively. Matlab's ode15s integrator is employed with absolute and relative integration tolerances of 10⁻⁵, and the sparsity pattern of the Jacobian of the DAE right-hand side is supplied to speed up computations. 3.2. Cyclic steady state computation and optimisation The periodicity constraints require that a cyclic steady state (CSS) is reached. This CSS can be obtained [5] (i) as the result of the entire transient evolution until the CSS is reached (i.e., direct dynamic simulation), or (ii) by formulating it as a boundary value problem in time and exploiting root finding techniques (i.e., direct simulation). Incorporating these approaches in optimisation strategies typically gives rise to sequential and simultaneous approaches, respectively (for examples see, e.g., [6-8]). In the former, the simulation and optimisation are decoupled, while the latter adds the discretised model as constraints in the optimisation. The former method gives rise to small optimisation problems but requires repeated direct dynamic simulations, while the latter in general induces large-scale optimisation problems requiring tailored numerical approaches. Hence, in the current study, a sequential approach in combination with direct dynamic simulation is employed. To check whether a CSS is reached, the profiles after two subsequent switchings are measured and compared following the asymmetric criterion for CSS proposed in [9]. As optimiser, Matlab's fmincon routine is used with an optimality tolerance of 10⁻⁴. The objective function to be maximised is the hydrogen efficiency. This quantity, mentioned in [3], is defined as the ratio of the hydrogen production during the oxidation phase to the hydrogen consumption during the reduction phase.
\eta_{H_2} = \frac{\displaystyle \int_{oxidation} \left( \frac{\omega \, x_{H_2}}{\vartheta} \right)_{outlet} \mathrm{d}t}{\displaystyle \int_{reduction} \left( \frac{\omega \, x_{H_2}}{\vartheta} \right)_{inlet} \mathrm{d}t}   (5)
This quantity is a measure of the productivity and efficiency of the operation.
4. Results and discussion. The important variables are first selected based on a sensitivity analysis and optimised afterwards. 4.1. Sensitivity analysis The influence of the different variables, i.e., the switching period, the inlet temperature, the inlet gas velocity and the ratio of the gas velocities during the oxidation and reduction phases, is illustrated in Fig. 2. Due to space restrictions, the resulting profiles are omitted. Clearly, decreasing the switching time improves the operation, as more energy is captured in the reactor. However, for too short cycles a decrease is observed due to the loss of the unconverted gas just after a flow reversal. The effect of the inlet temperature is as expected: higher temperatures yield higher conversions, although the effect is not large. Concerning the gas velocity ω (with ωox = ωred), an efficiency increase is seen as the velocity decreases, which allows a longer residence time and reaction time; however, a plateau value is seen for values lower than 0.8. When allowing asymmetric gas velocities (ωox ≠ ωred), slightly higher velocities during the oxidation phase yield more hydrogen and a better efficiency, but too asymmetric operations cause a sharp decrease.
Fig 2. Sensitivity analysis results.
4.2. Optimisation Based on the sensitivity analysis, the switching time (τs ∈ [25, 500] s), the inlet gas velocity during reduction (ωred ∈ [0.5, 2]) and the gas velocity ratio ωred/ωox ∈ [0.9, 1.1] are adopted as degrees of freedom. The inlet gas temperature has been kept constant at 0.5, as the additional energy cost is assumed to exceed possible benefits. The optimal values are given in Table 1; as can be observed, an improvement is possible. The resulting optimal profiles for the temperature, the hydrogen concentration and the degree of reduction are illustrated in Fig. 3. Clearly, the typical trapezoidal temperature profile is present, although it hardly shifts due to the short switching times. For the same reason, the degree of reduction also hardly changes. In contrast, there is a significant change in the hydrogen concentration: as expected, the concentration at ζ = 0 decreases during the reduction phase and increases during the oxidation phase. Note that the CSS is indeed achieved, as the end of one phase coincides with the start of the other.
Table 1. Optimisation results.
τs = 70.0 s    ωox = 0.52    ωred = 0.50    ηH = 1.5532
Fig 3. Optimal reactor profiles.
5. Conclusion. This paper has investigated the operation of a cyclic water gas shift reactor for the production of high-purity hydrogen. Based on a detailed model, the most important variables have first been identified; these have afterwards been optimised using a sequential optimisation approach in combination with direct dynamic simulation for computing the cyclic steady state. Acknowledgements Work supported in part by Projects OT/09/025/TBA, OT/10/035, OPTEC (Center-of-Excellence Optimization in Engineering) PFV/10/002 and SCORES4CHEM KP/09/005 of the Katholieke Universiteit Leuven, and by the Belgian Program on Interuniversity Poles of Attraction, initiated by the Belgian Federal Science Policy Office. J.F. Van Impe holds the chair Safety Engineering sponsored by the Belgian chemistry and life sciences federation essenscia.
References
[1] T. Van Gerven and A. Stankiewicz, Structure, Energy, Synergy, Time - The fundamentals of Process Intensification, Industrial and Engineering Chemistry Research, 48:2465-2474, 2009.
[2] V. Hacker, R. Fankhauser, G. Faleschini, H. Fuchs, K. Friedrich, M. Muhr and K. Kordesch, Hydrogen production by steam-iron process, Journal of Power Sources, 86:531-535, 2000.
[3] P. Heidebrecht, C. Hertel and K. Sundmacher, Conceptual analysis of a Cyclic Water Gas Shift Reactor, International Journal of Chemical Reactor Engineering, 6, Article A19, 2008.
[4] A. Vande Wouwer, P. Saucez and W.E. Schiesser, Simulation of distributed parameter systems using a Matlab-based method of lines toolbox: Chemical engineering applications, Industrial and Engineering Chemistry Research, 43:3469-3477, 2004.
[5] J. Unger, M. Kolios and G. Eigenberger, On the efficient simulation and analysis of regenerative processes in cyclic operation, Computers and Chemical Engineering, 54:2597-2607, 1997.
[6] F. Logist, A. Vande Wouwer, I. Smets and J. Van Impe, Optimal temperature profiles for tubular reactors implemented through a flow reversal strategy, Chemical Engineering Science, 62:4675-4688, 2007.
[7] F. Logist, P. Saucez, J. Van Impe and A. Vande Wouwer, Simulation of (bio)chemical processes with distributed parameters using Matlab, Chemical Engineering Journal, 155:603-616, 2009.
[8] F. Logist and J.F. Van Impe, Multiple objective optimisation of cyclic chemical systems with distributed parameters, Procs of the 14th IFAC Workshop on Control Applications of Optimisation, CD-ROM, 6 pages, 2009.
[9] K. Gosiewski, Effective approach to cyclic steady-state in the catalytic reverse-flow combustion of methane, Chemical Engineering Science, 59:4095-4101, 2004.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Multi-objective optimisation approach to optimal experiment design in dynamic bioprocesses using ACADO toolkit
Filip Logist,a Dries Telen,a Eva Van Derlinden,a Jan F. Van Impea
a BioTeC & OPTEC - Chemical Engineering Dept., Katholieke Universiteit Leuven,
W. de Croylaan 46, B-3001 Leuven, Belgium.
Abstract Mathematical models are valuable tools for optimizing dynamic biochemical processes. However, experimental data collection is often labour and cost intensive and can give rise to production losses. The current paper studies the trade-offs between objectives for production and for optimally designing experiments in view of parameter estimation, for a bioreactor benchmark. Recent deterministic multi-objective optimal control approaches (i.e., the freeware toolkit ACADO Multi-Objective, www.acadotoolkit.org) are used to efficiently produce the set of trade-off or so-called Pareto optimal solutions. These trade-offs are clearly reflected when the obtained optimal control solutions are exploited to estimate the parameters from virtual experiments, while also trying to maximise the biomass production. Keywords: optimal experiment design, multi-objective optimisation, bioprocess.
1. Introduction Dynamic biochemical processes are omnipresent in industry, e.g., in brewing and in the production of enzymes and pharmaceuticals. Since accurate models are required for model based optimisation, and measurements are often labour and cost intensive, optimal experiment design (OED) [1,2] techniques for parameter estimation are valuable tools to limit the experimental burden while maximising the information content. However, experimental data collection typically gives rise to production losses. To enable a systematic analysis of the losses due to modelling, the trade-off or Pareto curve is efficiently computed for a fed-batch bioreactor benchmark [3], exploiting the freeware toolkit ACADO Multi-Objective [4]. This tool employs novel scalarisation based multi-objective approaches in combination with direct dynamic optimisation methods [5]. This yields a clear presentation of the set of trade-off solutions, from which the decision maker can pick one. It has to be noted that an educated guess for the parameter values must already be available in order to provide a good indication of the trade-offs. Based on this approach, the decision maker can select solutions that possess a high information content but do not give in too much on performance. Moreover, situations where a first batch is run solely for estimation but cannot be used for production are avoided.
2. Fed-batch bioreactor case As benchmark case study, a well-mixed fed-batch bioreactor is adopted [3]:
\frac{dS}{dt} = -\sigma X + u \, C_{s,F}   (1)
\frac{dX}{dt} = \mu X   (2)
\frac{dV}{dt} = u   (3)
with X [g] the biomass, S [g] the limiting substrate, V [L] the bioreactor volume and u [L/h] the volumetric feed rate, containing a substrate concentration C_{s,F} [g/L]. The limiting substrate concentration is defined as C_s = S/V [g/L]. As in this well-studied example [3], monotonic Monod-type kinetics are assumed:
\mu = \mu_{max} \frac{C_s}{K_s + C_s}   (4)
while the substrate consumption rate is given by the linear law:
\sigma = \frac{\mu}{Y_{X|S}} + m   (5)
with Y_{X|S} the yield and m the maintenance factor. The initial conditions are X(0) = 10.5 g, V* = 7 L (V* being the initial volume without the substrate S(0)) and C_{s,F} = 500 g/L. As the maximum volume is V_max = 10 L and a total amount of 1500 g of limiting substrate is available, the initial conditions for S(0) and V(0) are related as follows: S(0) + C_{s,F}(V_max - V(0)) = 1500 g. Due to constructive reasons the feed rate is bounded between 0 and 1 L/h, and the batch duration is limited to between 5 and 40 h. The parameter values are given in Table 1. The aim is to derive a feeding profile u(t) that both maximises production and allows an accurate parameter estimation. For biomass production, the objective is simply J_P = X(t_f). Measures that quantify the information content of an experiment are derived from the Fisher information matrix [6]:
F = \int_0^{t_f} \left( \frac{\partial y(p,t)}{\partial p} \right)^T_{p=p^*} Q \left( \frac{\partial y(p,t)}{\partial p} \right)_{p=p^*} \mathrm{d}t   (6)
The Fisher information matrix combines information on the output measurement error (typically Q is the inverse of the measurement error variance matrix) and the sensitivities of the model output y(p,t) with respect to small variations in the parameters p, i.e., the matrix ∂y(p,t)/∂p. Under the assumption of uncorrelated normal random errors with zero mean and constant variance, the inverse of this matrix defines the parameter variance-covariance matrix [6]. To compute the Fisher information matrix in practice, the sensitivities and the Fisher matrix elements are added as additional differential states to the model equations. Although several scalar measures of the Fisher information matrix are possible [6] and may be conflicting, the modified E-criterion is selected here. It minimises the condition number, i.e., the ratio of the largest to the smallest eigenvalue of F, J_I = λmax/λmin, and thus optimises the functional shape of the confidence intervals.
Table 1. Parameter values.
μmax = 0.1 h⁻¹    Ks = 1 g/L    YX|S = 0.47 g/g    m = 0.29 g/g
σ²Cs = 1·10⁻² g²/L²    σ²Cx = 6.25·10⁻⁴ g²/L²
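To illustrate how F in Eq. (6) is obtained in practice, the sketch below integrates the model of Eqs. (1)-(5), approximates the output sensitivities by finite differences (rather than appending sensitivity ODEs, as done in the paper), and sums the weighted contributions over the sampling times; the modified E-criterion is then the ratio of the extreme eigenvalues. Parameter and noise values follow Table 1, while the constant feed rate, sampling grid and initial substrate are arbitrary illustrative choices:

import numpy as np
from scipy.integrate import solve_ivp

mu_max, Ks, Yxs, m, CsF = 0.1, 1.0, 0.47, 0.29, 500.0
Q = np.diag([1.0 / 1e-2, 1.0 / 6.25e-4])      # inverse measurement variances

def rhs(t, s, p, u):
    mumax_, Ks_ = p
    S, X, V = s
    Cs = S / V
    mu = mumax_ * Cs / (Ks_ + Cs)             # Monod kinetics, Eq. (4)
    sigma = mu / Yxs + m                      # consumption rate, Eq. (5)
    return [-sigma * X + u * CsF, mu * X, u]

def outputs(p, u=0.05, tf=20.0, nt=40):
    t = np.linspace(0.0, tf, nt)
    sol = solve_ivp(rhs, (0.0, tf), [0.0, 10.5, 7.0], t_eval=t,
                    args=(p, u), rtol=1e-8, atol=1e-8)
    S, X, V = sol.y
    return np.vstack((S / V, X / V))          # measured Cs and Cx [g/L]

def fisher(p, eps=1e-6):
    y0 = outputs(p)
    sens = []
    for i in range(len(p)):                   # finite-difference sensitivities
        dp = np.array(p, dtype=float)
        dp[i] += eps * p[i]
        sens.append((outputs(dp) - y0) / (eps * p[i]))
    F = np.zeros((len(p), len(p)))
    for k in range(y0.shape[1]):              # sum over the sampling times
        J = np.column_stack([s[:, k] for s in sens])   # dy/dp at time k
        F += J.T @ Q @ J
    return F

F = fisher([mu_max, Ks])
lam = np.linalg.eigvalsh(F)
print("modified E-criterion:", lam[-1] / lam[0])       # condition number of F

In the actual study the sensitivities are generated by augmenting the ODE system, which is cheaper and more accurate than finite differences, but the assembled F and the criterion JI = λmax/λmin are the same.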
3. A multi-objective optimal control approach
3.1. Multi-objective optimal control formulation The problem above can be cast into the general multi-objective optimal control frame:
\min_{x(\cdot),\, u(\cdot),\, t_f} \{ J_1, \ldots, J_m \}   (7)
subject to:
\frac{dx}{dt} = f(x(t), u(t), t), \quad t \in [0, t_f]   (8)
0 = b_c(x(0))   (9)
0 \geq c_p(x(t), u(t), t)   (10)
0 \geq c_t(x(t_f), u(t_f), t_f)   (11)
Here, x are the states, while u denote the controls. The vector f represents the dynamic system equations (on the interval t ∈ [0, t_f]) with initial conditions b_c. The vectors c_p and c_t indicate path and terminal inequality constraints, respectively. Each individual cost function can consist of Mayer and Lagrange terms:
J_i = h_i(x(t_f), t_f) + \int_0^{t_f} g_i(x(t), u(t), t) \, \mathrm{d}t   (12)
In multi-objective optimisation, typically no single optimal solution exists, but a set of Pareto optimal solutions. Broadly speaking, a solution is called Pareto optimal if there exists no other feasible solution that improves at least one objective function without worsening another. (For a formal definition, see, e.g., [4,5].)
3.2. Numerical solution Scalarisation methods convert the multi-objective optimal control problem into a series of single-objective optimal control problems that are functions of scalarisation parameters or weights. This series is solved by direct optimal control methods such as single and multiple shooting. To tackle the multi-objective aspect, several scalarisation techniques (i.e., Weighted Sum (WS), Normal Boundary Intersection (NBI) and Normalised Normal Constraint (NNC)) have been implemented in ACADO Multi-Objective [5], the multi-objective extension of the freeware tool ACADO [7] (www.acadotoolkit.org). This ensures an efficient solution of the multi-objective optimal control problems. In addition, ACADO can also be used for estimating parameters in dynamic processes.
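The scalarisation step can be illustrated on a toy static bi-objective problem: each weight vector turns the multi-objective problem into a single-objective one whose minimiser is one Pareto point, and sweeping the weights traces (part of) the front. The sketch below mimics the WS method only; NBI and NNC parameterise the subproblems differently, which is what yields the evenly spread fronts mentioned below. Both objective functions are invented for illustration:

import numpy as np
from scipy.optimize import minimize_scalar

# toy objectives over a scalar decision u in [0, 1]
J1 = lambda u: (u - 0.9) ** 2          # e.g. a productivity-type loss
J2 = lambda u: (u - 0.1) ** 2          # e.g. an information-type loss

pareto = []
for w in np.linspace(0.0, 1.0, 11):    # weighted-sum scalarisation
    res = minimize_scalar(lambda u: w * J1(u) + (1 - w) * J2(u),
                          bounds=(0.0, 1.0), method="bounded")
    pareto.append((J1(res.x), J2(res.x)))
print(pareto)                           # trade-off points (J1, J2)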
4. Results To compute the Pareto sets, NBI with 41 points and multiple shooting with a 30-piece piecewise-constant control discretisation have been used. To integrate the ODE system of bioreactor equations, the sensitivity equations and the Fisher matrix elements, a Runge-Kutta 7/8 integrator is employed with an integration tolerance of 10⁻⁶. The discretised optimal control problem is solved by an SQP method with a KKT tolerance of 10⁻⁵. Despite the strong nonlinearities, the largely differing scales and the presence of singular arcs, the total CPU times were mostly under four minutes. Fig. 1 displays the corresponding Pareto set. As can be observed, there is a clear trade-off visible. However, the trade-off plot also shows that the condition number can easily be decreased at the expense of almost no productivity loss. Nevertheless, when
really pushing the condition number towards the lowest possible value, a large productivity loss is evidenced. Hence, a natural choice would be a point in the knee of the curve. According to [8], this is the point with the largest distance along the quasi-normal to the convex hull of individual minima (CHIM). Note that despite the largely different magnitudes, a nice spread on the Pareto curve is obtained, which is impossible with, e.g., the WS.
Fig 1. Pareto set for production vs. Mod E criterion.
Fig 2. Optimal states, controls and sensitivities for both extremes and the knee point of the curve.
Fig. 2 illustrates, for both extremes and the knee point, the corresponding optimal control, state and sensitivity evolutions. When focussing on productivity, all substrate is, as expected, fed at the beginning in order to achieve as high as possible a substrate concentration and to stimulate growth as much as possible. This yields a highly accurate estimate of μmax but little information on Ks, as both the sensitivities and the Fisher matrix elements are of largely different orders of magnitude. However, when the information content comes into play, not all substrate is fed at the beginning, but a singular feeding rate is observed. Clearly, the production of biomass at the end is then lower, and the sensitivities and Fisher elements are smaller but of a similar order of magnitude, allowing a more accurate estimate of Ks as well. As expected, the knee point exhibits an intermediate behaviour. Finally, the optimal profiles for the individual objectives are tested at the simulation level. To mimic biological variability and experimental uncertainty, uncorrelated Gaussian white noise has been added to all measurements. Fig. 3 depicts the contours of the corresponding SSE cost surfaces. As can be seen, when production is focussed on, a strong correlation between the parameters is present, as shown by the elongated contours,
Fig 3. SSE contours: production (left), knee (middle) and information (right).
which do not even close within the current plot. Alternatively, when the information content is focussed on, the contours become more circular and closed, indicating a stronger de-correlation. Hence, to be able to estimate Ks as well, not the intuitive solution for maximum biomass production is needed, but a more subtle feeding throughout the batch. The price to be paid, however, is a performance decrease. Nevertheless, the Pareto set allows the trade-offs involved to be assessed.
5. Conclusion. The current paper illustrated the features of the toolkit ACADO Multi-Objective to study the trade-offs in dynamic biochemical production processes between objectives for production and optimally designing experiments in view of parameter estimation. Based on recent deterministic multi-objective optimal control approaches the set of trade-off or Pareto optimal solutions was efficiently and accurately produced. A clear and sharp trade-off was observed. This trade-off was also reflected when testing the different solutions on a simulation level and estimating the parameters. Acknowledgements Work supported in part by Projects OT/09/025/TBA, OT/10/035, OPTEC (Center-of-Excellence Optimization in Engineering) PFV/10/002 and SCORES4CHEM KP/09/005 of the K.U. Leuven, and by the Belgian Program on Interuniversity Poles of Attraction, initiated by the Belgian Federal Science Policy Office. D. Telen has a Ph.D grant of the Institute for the Promotion of Innovation through Science and Technology in Flanders (IWT-Vlaanderen). E. Van Derlinden is supported by grant PDKM/10/122 of the K.U.Leuven research fund. J.F. Van Impe holds the chair Safety Engineering sponsored by the Belgian chemistry and life sciences federation essenscia.
References
[1] J.R. Banga, K.J. Versyck and J.F. Van Impe, Computation of optimal identification experiments for nonlinear dynamic process models, Industrial & Engineering Chemistry Research, 41:2425-2430, 2002.
[2] G. Franceschini and S. Macchietto, Model-based design of experiments for parameter precision: State of the art, Chemical Engineering Science, 63:4846-4872, 2008.
[3] K. Versyck and J. Van Impe, Feed rate optimization for fed-batch bioreactors: from optimal process performance to optimal parameter estimation, Chemical Engineering Communications, 172:107-124, 1999.
[4] F. Logist, B. Houska, M. Diehl and J. Van Impe, Fast Pareto set generation for nonlinear optimal control problems with multiple objectives, Structural and Multidisciplinary Optimization, 42:591-603, 2010.
[5] F. Logist, P.M.M. Van Erdeghem and J.F. Van Impe, Efficient deterministic multiple objective optimal control of (bio)chemical processes, Chemical Engineering Science, 64:2527-2538, 2009.
[6] E. Walter and L. Pronzato, Identification of Parametric Models from Experimental Data, Springer, 1997.
[7] B. Houska, H. Ferreau and M. Diehl, ACADO Toolkit - An Open-Source Framework for Automatic Control and Dynamic Optimization, Optimal Control Applications & Methods (in press, doi:10.1002/oca.939).
[8] I. Das, On characterizing the "knee" of the Pareto curve based on Normal-Boundary Intersection, Structural Optimization, 18:107-115, 1999.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A disturbance estimation approach for online model-based redesign of experiments in the presence of systematic errors
F. Galvanin,1,* M. Barolo,1 G. Pannocchia2 and F. Bezzo1
1 CAPE-Lab, Dipartimento di Principi e Impianti di Ingegneria Chimica, Università di Padova, via Marzolo 9, 35131 Padova, Italy
2 DICCISM – Dipartimento di Ingegneria Chimica, Università di Pisa, via Diotisalvi 2, 56122 Pisa, Italy
* E-mail: [email protected]
Abstract Online model-based redesign of experiment (OMBRE) strategies represent a valuable support to the development of dynamic deterministic models, allowing for the dynamic update of the experimental conditions to yield the most informative data for the parameter identification task. However, the effectiveness of OMBRE strategies may be severely affected by the presence of systematic modelling errors. In this paper, a disturbance estimation approach is exploited within an OMBRE framework (DE-OMBRE) in order to achieve a statistically satisfactory estimation of the model parameters, avoiding (or reducing) constraint violations even in the presence of systematic modelling errors. A case study illustrates the benefits of the new approach. Keywords: model-based experiment design, model updating, disturbance estimation
1. Introduction
A wide class of physical systems can be described by dynamic deterministic models expressed as systems of differential and algebraic equations. Once a dynamic model structure is found adequate to represent a physical system, a set of identification experiments needs to be carried out to estimate the model parameters in the most precise and accurate way. Model-based design of experiments (MBDoE) techniques [1] represent a valuable tool for the rapid assessment and development of dynamic deterministic models, allowing for the maximisation of the information content of the experiments in order to support and improve the parameter identification task. Conventional MBDoE techniques for parameter identification usually involve a sequential procedure: 1) the design of the experiment (based on the current knowledge of the model structure and parameters); 2) the execution of the designed experiment, where new data are collected; 3) the estimation and statistical assessment of the model parameters. Iteration of steps 1 to 3 generally provides a new information flux coming from the planned experiments, leading to a progressive reduction of the uncertainty region of the model parameters (as demonstrated in a wide range of applications [2]). However, each experiment design step is performed at the initial values of the model parameters, and uncertainty on these values, as well as inadequacy of the given model structure, can deeply affect the efficiency of the design procedure [3]. Recently, Online Model-Based Redesign of Experiment (OMBRE) strategies [4] have been proposed to exploit the information as soon as it is generated by the running experiment. In OMBRE the manipulated input profiles of the running experiment are updated
performing one or more intermediate experiment designs (i.e., redesigns), and each redesign is performed adopting the current value of the parameter set, i.e., the value of the model parameters estimated up to that moment. OMBRE mitigates the effect of parametric uncertainty on the design effectiveness, but the technique is still particularly sensitive to the presence of systematic modelling errors, which may affect the effectiveness of the entire identification procedure. Following an analogy with offset-free model predictive control strategies [5], a novel experiment design approach (DE-OMBRE) is presented in this paper, where a model updating policy including disturbance estimation (DE) is embedded within an OMBRE strategy. An augmented model, lumping the effect of systematic errors, is considered here to estimate both the states and the system outputs in a given time frame, updating the constraint conditions in a consistent way as soon as the effect of bias disturbances propagates in the system. The purpose is to achieve a statistically satisfactory estimation of the model parameters while avoiding (or reducing) constraint violations even in the presence of systematic errors. The benefits of the proposed strategy are illustrated and discussed through a simulated case study, where the effectiveness of the design is assessed by comparison to conventional MBDoE and OMBRE techniques.
2. The methodology
Conventional MBDoE procedures aim at decreasing the model parameter uncertainty region predicted a priori by the model, by acting on the $n_\varphi$-dimensional experiment design vector $\boldsymbol{\varphi}$ and solving the following set of equations:

$$\boldsymbol{\varphi}^{opt} = \arg\min_{\boldsymbol{\varphi}} \psi\left[\mathbf{V}_{\boldsymbol{\theta}}(\hat{\boldsymbol{\theta}}, \boldsymbol{\varphi})\right] = \arg\min_{\boldsymbol{\varphi}} \psi\left[\mathbf{H}_{\boldsymbol{\theta}}^{-1}(\hat{\boldsymbol{\theta}}, \boldsymbol{\varphi})\right] \qquad (1)$$

subject to

$$\mathbf{f}(\dot{\mathbf{x}}, \mathbf{x}, \mathbf{u}, \mathbf{w}, \boldsymbol{\theta}, t) = \mathbf{0}, \qquad \hat{\mathbf{y}} = \mathbf{h}(\mathbf{x}) \qquad (2)$$

$$\mathbf{F}\mathbf{x} - \mathbf{G}(t) \leq \mathbf{0} \qquad (3)$$

$$\mathbf{H}\hat{\mathbf{y}} - \mathbf{D}(t) \leq \mathbf{0} \qquad (4)$$

with the set of initial conditions $\mathbf{x}(0) = \mathbf{x}_0$. In (1), $\mathbf{V}_{\boldsymbol{\theta}}$ and $\mathbf{H}_{\boldsymbol{\theta}}$ are the variance-covariance matrix of the model parameters and the dynamic information matrix, respectively; $\mathbf{x}(t)$ is the $N_x$-dimensional vector of time-dependent state variables, $\mathbf{u}(t)$ and $\mathbf{w}$ are the time-dependent and time-invariant manipulated inputs, $\boldsymbol{\theta}$ is the $N_\theta$-dimensional set of unknown model parameters to be estimated, and $t$ is time. The symbol $\hat{\ }$ is used to indicate the estimate of a variable (or a set of variables). Among the set of constraint conditions (3)-(4), a distinction is made between constraints involving unmeasurable states (3) and estimated outputs (4). The formulations are expressed through a set of (possibly time-varying) constraints $\mathbf{G}(t)$ and $\mathbf{D}(t)$ on the state variables, while $\mathbf{F}$ and $\mathbf{H}$ are two sets of selection functions, allowing the choice of the variables actually being constrained. The design vector $\boldsymbol{\varphi}$ in its most general form may contain an $N_y$-dimensional set of initial conditions for the measured variables ($\mathbf{y}_0$), the manipulated input variables ($\mathbf{u}$ and $\mathbf{w}$), the duration of the single experiment ($\tau$) and the $N_{sp}$-dimensional set of time instants $\mathbf{t}_{sp}$ at which the output variables are sampled. The function $\psi$ in (1) is an assigned measurement function of the variance-covariance matrix of the model parameters $\mathbf{V}_{\boldsymbol{\theta}}$, and represents the chosen design criterion [1].
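To make the design criterion concrete, the short sketch below (not from the paper) assembles a dynamic information matrix from output sensitivities at candidate sampling times and evaluates an E-type criterion; the scalar model, its parameters and the measurement variance are hypothetical placeholders.

```python
import numpy as np

def output_sensitivities(theta, t, h=1e-6):
    """Finite-difference sensitivities dy/dtheta of a placeholder scalar model
    y(t; theta) = theta[0]*(1 - exp(-theta[1]*t)) standing in for the DAE system."""
    def y(th):
        return th[0] * (1.0 - np.exp(-th[1] * t))
    base = y(theta)
    S = np.empty(theta.size)
    for j in range(theta.size):
        pert = theta.copy()
        pert[j] += h
        S[j] = (y(pert) - base) / h
    return S

def information_matrix(theta, t_sp, sigma_y):
    """H_theta: sum over sampling points of S_i S_i' / sigma_y^2 (scalar output)."""
    H = np.zeros((theta.size, theta.size))
    for ti in t_sp:
        S = output_sensitivities(theta, ti)
        H += np.outer(S, S) / sigma_y**2
    return H

theta_hat = np.array([1.0, 0.5])          # current parameter estimate
t_sp = np.linspace(0.5, 8.0, 10)          # candidate sampling times (design variables)
H = information_matrix(theta_hat, t_sp, sigma_y=0.1)
V = np.linalg.inv(H)                      # V_theta approximated by H_theta^{-1}
psi_E = np.max(np.linalg.eigvalsh(V))     # E-criterion: largest eigenvalue of V_theta
print(psi_E)
```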
When an OMBRE approach is exploited [4], intermediate parameter estimations are carried out while the experiment is still running and, by exploiting the information obtained, the experiment is partially redesigned before its termination. The experiment is thus divided into a number of sub-experiments over which the design variables are distributed. Each redesign is carried out by solving the optimisation problem (1)-(4) in the corresponding time interval.
2.1. Online Model-Based Redesign of the experiment including disturbance estimation (DE-OMBRE)
The presence of a systematic error between the model and the real system (bias) is not explicitly handled by OMBRE. Disturbance models have been proposed in model predictive control (MPC) [5] to ensure offset-free performance [6] when disturbances as well as plant-model mismatch are present. Let us consider an "augmented model" in the form

$$\mathbf{f}(\dot{\mathbf{x}}, \mathbf{x}, \mathbf{u}, \mathbf{w}, \boldsymbol{\theta}, t) = \mathbf{0}, \qquad \hat{\mathbf{y}} = \mathbf{h}(\mathbf{x}) + \mathbf{d}, \qquad \text{with } \dot{\mathbf{d}} = \mathbf{0} \qquad (5)$$

where $\mathbf{d}$ is an $N_y$-dimensional set of lumped disturbances on the outputs. The constraint equations (4) concerning the outputs of the augmented model then take the form

$$\mathbf{H}\left[\mathbf{h}(\mathbf{x}) + \mathbf{d}\right] - \mathbf{D}(t) \leq \mathbf{0} \qquad (6)$$

where each element of $\mathbf{d}$ at the sampling time $k$ can be estimated through a two-step procedure:
1. prediction: simulation of the augmented model (5) with $\mathbf{d} = \mathbf{d}_{k|k-1}$;
2. filtering: given the measurement $\mathbf{y}_k$, the prediction error is

$$\mathbf{e}_k = \mathbf{y}_k - \mathbf{h}(\mathbf{x}_k) - \mathbf{d}_{k|k-1} \qquad (7)$$

and the lumped disturbance $\mathbf{d}_{k|k}$ can be evaluated as

$$\mathbf{d}_{k|k} = \mathbf{d}_{k|k-1} + \mathbf{L}_d\, \mathbf{e}_k \qquad (8)$$

where $\mathbf{L}_d$ is a tuning parameter (based on the actual measurement confidence). In DE-OMBRE, these prediction and filtering steps are repeated, within each redesign time interval, until a suitable value for $\mathbf{d}$ is evaluated. This value can then be used in the following redesign to update the model predictions.
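The prediction/filtering recursion (7)-(8) can be sketched as follows; the scalar observation model, the bias value and the measurement sequence are hypothetical stand-ins for the augmented-model simulation.

```python
import numpy as np

def predict_output(x_k, d):
    """Prediction step: output of the augmented model (5), y_hat = h(x) + d.
    Here h(x) = x is a placeholder observation of a scalar state."""
    return x_k + d

def filter_disturbance(d_prev, y_meas, x_k, Ld=1.0):
    """Filtering: e_k = y_k - h(x_k) - d_{k|k-1} (7); d_{k|k} = d_{k|k-1} + Ld*e_k (8)."""
    e_k = y_meas - predict_output(x_k, d_prev)
    return d_prev + Ld * e_k

# Toy redesign interval: measurements carry a constant bias b = 0.2 plus noise.
rng = np.random.default_rng(0)
x_true = np.linspace(1.0, 2.0, 6)                  # "true" states over the interval
y_meas = x_true + 0.2 + 0.01 * rng.standard_normal(x_true.size)

d = 0.0                                            # d(0) = 0, as in the case study
for x_k, y_k in zip(x_true, y_meas):
    d = filter_disturbance(d, y_k, x_k, Ld=1.0)
print(d)   # approaches the bias; used in the next redesign to update predictions
```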
3. Case study The case study considered is a model of glucose homeostasis for simulation of type 1 diabetes subjects in the form proposed by Lehmann and Deutsch [7], particularly suitable for simulation of multiple daily injections and recently adopted in nonlinear model predictive control studies [8]:
$$\dot{I} = \frac{s\,(t-t_0)^{s-1}\,T_{50}^{s}\,D}{V_I\left[T_{50}^{s} + (t-t_0)^{s}\right]^{2}} - k_e I$$

$$\dot{I}_a = k_1 I - k_2 I_a$$

$$\dot{G} = \frac{k_{abs}\,G_{gut} + G_{NHB}(t) - G_{out}(t) - G_{ren}(t)}{V_G}$$

$$\dot{G}_{gut} = G_{empt}(t) - k_{abs}\,G_{gut}. \qquad (9)$$
In (9), I and Ia are the plasmatic and active insulin concentrations, respectively; G and Ggut are the plasmatic and gut glucose concentrations, respectively. Gren is the renal excretion, Gempt is a trapezoidal function managing the carbohydrates uptake, GNHB is the net hepatic glucose balance, and Gout represents the glucose peripheral utilization. The purpose of the study is to design a single-day test (starting at 0:00 AM) in order to identify the set of parameters θ = [k1 k2 ke]T in a statistically satisfactory way. The variables being optimised by design are: the insulin injections (the t0 times and the amounts D of Nb = 4 fast-acting Lispro boluses) and the amount of carbohydrates of Nm = 4 meals (scheduled at 8:00 AM, 12:00 PM, 4:00 PM and 8:00 PM). The constraints in the form (4) acting on this system are related to the attainment of normoglycaemia, and are the upper (D1 = 180 mg/dL) and lower (D2 = 60 mg/dL) thresholds on G, which is the only state variable being constrained and the only measured state variable (i.e. x1 = y = G). Three distinct design configurations have been compared: 1. STDE: conventional E-optimal design; 2. OMBRE: E-optimal redesign (the redesign is scheduled every Δtup = 6 h); 3. DE-OMBRE: E-optimal redesign including disturbance estimation (Δtup = 6 h). A single insulin injection is optimised during each sub-experiment performed in the redesign strategies. In the DE-OMBRE configuration, the tuning parameter Ld appearing in (8) is kept constant at 1 and d(0) = 0. The simulated glucose measurements are available with a constant relative deviation on the readings of 0.10 and the sampling time is Δt = 60 minutes. Additionally, a constant systematic error b is supposed to affect the readings (b = 20 mg/dL). The initial guess on the model parameters is θ0 = [1.000 1.000 1.000]T, while the true set of parameters defining the diabetic subject is θ = [0.025 1.250 5.400]T. A dedicated program has been developed in Octave/C++, and an SQP optimizer has been used to handle the nonlinear programming problem.
3.1. Results and comments
Parameter estimation results in terms of estimates and a-posteriori statistics (including t-values and the weighted sum of squared residuals WSSR) are given in Table 1.

Table 1. Comparison of different experiment design configurations. Superscript * indicates t-values failing the t-test (the reference value is tref = 1.721).

Design     Parameter estimate       Conf. interval (95%)       t-values                WSSR
STDE       [0.006 1.466 3.664]T     [±0.408 ±2.001 ±0.728]     [0.015* 0.73* 5.027]    25.4
OMBRE      [0.008 0.950 5.547]T     [±0.086 ±0.121 ±0.083]     [0.09* 7.85 66.83]      14.7
DE-OMBRE   [0.025 1.003 4.090]T     [±0.014 ±0.017 ±0.038]     [1.78 59.20 107.63]     8.2
Note how a conventional design approach would provide a test where the subject is moved to a severely hyperglycemic condition (Figure 1a). Additionally, a statistically poor estimation of the model parameters is obtained (Table 1). OMBRE provides a better fit, also improving the quality of parameter estimation, but even in that case (not shown here for the sake of conciseness), slight hyperglycemic conditions are realized. Conversely, the newly proposed DE-OMBRE technique (Figure 1b) is able to preserve both optimality and feasibility of the planned test. The fitting of experimental data is
greatly improved, with the estimate of k1 (the most critical parameter to be estimated) closer to the true value defining the subject affected by diabetes. A further advantage of DE-OMBRE compared with STDE is the lower computational cost (5.7 min against 13.2 min for STDE on a Pentium® D 3 GHz processor): as in OMBRE, the whole optimization problem is split into a number of more accessible subproblems, and this implicitly improves the robustness of the whole design procedure.
Figure 1. Predicted glucose profiles (before and after identification) and manipulated inputs (insulin doses and meal uptakes) as provided by (a) STDE and (b) DE-OMBRE. The subject's actual response is indicated by diamonds.
Conclusions
A disturbance estimation approach for online model-based redesign of experiments (DE-OMBRE) has been presented in this paper. The technique allows for the detection of systematic errors between reality and model, and for a systematic update of the constraints thanks to the information coming from the running experiment. Preliminary results clearly show the benefits of the novel design approach, which is able to preserve feasibility as well as optimality of the planned experiment even in the presence of model mismatch. Future work will assess the applicability of DE-OMBRE to larger systems, and extend the features of the novel design approach.
Acknowledgements The authors gratefully acknowledge the financial support granted to this work by the University of Padova under Project CPDR095313-2009 on “Towards the development of an artificial pancreas for diabetes mellitus care: optimal model-based design of experiments for parameter identification of physiological models”.
References
[1] F. Pukelsheim, 1993, Optimal Design of Experiments, J. Wiley & Sons, New York, U.S.A.
[2] G. Franceschini, S. Macchietto, 2008, Chem. Eng. Sci., 63, 4846-4872.
[3] S. Körkel, E. Kostina, H.G. Bock, J.P. Schlöder, 2004, Opt. Methods and Software, 19, 327-338.
[4] F. Galvanin, M. Barolo, F. Bezzo, 2009, Ind. Eng. Chem. Res., 48, 4415-4427.
[5] J.M. Maciejowski, 2002, Predictive Control with Constraints, Prentice Hall, Harlow, U.K.
[6] G. Pannocchia, J.B. Rawlings, 2003, AIChE J., 49, 426-437.
[7] E.D. Lehmann, T. Deutsch, 1992, J. Biomed. Eng., 14, 235-242.
[8] G. Pannocchia, A. Landi, M. Laurino, 2010, IEEE Workshop on Health Care Management, Venice (Italy), 1-6.
A Semidefinite Programming Approach to Portfolio Optimization Raquel J. Fonseca, Wolfram Wiesemann, Berç Rustem Department of Computing, Imperial College, London, United Kingdom
Abstract The application of robust optimization techniques to an international portfolio allocation problem introduces non-linearities in the model. These stem from the triangulation requirement of the foreign exchange rates and the product of the local asset and the currency returns. We show that, by making appropriate assumptions regarding the formulation of the uncertainty sets, the proposed model has a semidefinite programming formulation and can be solved efficiently. Keywords: semidefinite programming, robust optimization, international portfolio optimization, risk management.
1. Introduction Markowitz’s seminal work on portfolio optimization initiated great interest and further academic research in the area of risk management (Markowitz 1952). The same interest was extended to international portfolios, as, due to the low correlation between foreign and domestic assets, there could be a positive impact on the overall variance of the portfolio. Changes in the currency value, however, give rise to a new source of risk, and therefore research on international portfolios is closely related to the issue of hedging and the use of forwards and other financial instruments. A survey of the topic may be found in (Shawky et al. 1997). The paradigm of robust optimization gained the attention of the academic community after the simultaneous works of El-Ghaoui, Ben-Tal and their collaborators (El-Ghaoui & Lebret 1997), (Ben-Tal & Nemirovski 1998). In this framework, uncertainty is directly incorporated in the model by considering the problem parameters as random variables. Robust optimization was first applied to an international portfolio model in (Rustem & Howe 2002) and later to a currency only portfolio in (Fonseca et al. 2011). We expand on the work in (Rustem & Howe 2002) by reformulating the problem in a convex tractable framework, and by subsequently evaluating the model using historical market data. By using a semidefinite programming formulation, we are able to maintain the bilinear nature of the asset and currency returns and solve the model in an efficient manner.
2. Robust International Portfolio Optimization
Our starting point is a US investor who wishes to invest in foreign assets. We assume there are n available assets in the market, denominated in m foreign currencies. The current and the future price of the ith asset in its local currency are denoted by $P_i^0$ and $P_i$, respectively. The local return of asset i is $r_i^a = P_i / P_i^0$. We denote by $E_j$ and $E_j^0$ the future and the current spot exchange rate of the jth currency, respectively. Both quantities are expressed in terms of the base currency per unit of the foreign currency j.
A Semidefinite Programming Approach to Portfolio Optimization
473
The return on a specific currency j is described by $r_j^e = E_j / E_j^0$. The total return on any asset i is equal to the product of the local return $r_i^a$ with the respective currency return $r_j^e$. Additionally, we define an auxiliary matrix $\mathbf{O}$ that assigns to each asset exactly one currency. If $o_{ij}$ is the ijth element of $\mathbf{O}$, we have:

$$o_{ij} = \begin{cases} 1 & \text{if the } i\text{th asset is traded in the } j\text{th currency,} \\ 0 & \text{otherwise.} \end{cases}$$

The portfolio return $R(\mathbf{w})$ can be written as $[\mathrm{diag}(\mathbf{r}^a)\,\mathbf{O}\,\mathbf{r}^e]^{\top}\mathbf{w}$, where the variable $\mathbf{w}$ denotes the vector of asset weights in the portfolio. In the Markowitz framework we would want to minimize the portfolio variance $\mathrm{Var}[R(\mathbf{w})]$, while guaranteeing a minimum expected return. As the estimates of the expected asset returns are taken as given, the Markowitz model lacks robustness: even small deviations of the materialized returns from their estimates could pull the solution previously obtained away from the optimum or render it infeasible. In view of this, we would like to incorporate in the model the uncertainty inherent to the estimation of the asset and currency returns by using robust optimization techniques.
2.1. The Robust Model of International Portfolio Optimization
In a robust framework, uncertain parameters are assumed to be random variables. The investor has some information about their distribution, such as the first two moments, and can therefore construct a set in which these parameters are expected to materialize. This region, commonly designated as the uncertainty set, may reflect some probabilistic measure, such as a confidence interval. We would like to obtain a solution to our problem that satisfies all the constraints for all the possible values of the returns within that uncertainty set. Hence, we are interested in the worst-case value of the returns for which the solution is still feasible. We define our robust international portfolio optimization model as:

$$\max_{\mathbf{w}} \ \min_{(\mathbf{r}^a,\, \mathbf{r}^e) \in \Theta} \ [\mathrm{diag}(\mathbf{r}^a)\,\mathbf{O}\,\mathbf{r}^e]^{\top}\mathbf{w} \qquad \text{s.t.} \quad \mathbf{1}^{\top}\mathbf{w} = 1, \quad \mathbf{w} \geq \mathbf{0}, \qquad (1)$$

where the uncertainty set is defined as:

$$\Theta = \left\{ (\mathbf{r}^a, \mathbf{r}^e) \geq \mathbf{0} \ : \ \begin{pmatrix} \mathbf{r}^a - \bar{\mathbf{r}}^a \\ \mathbf{r}^e - \bar{\mathbf{r}}^e \end{pmatrix}^{\!\top} \boldsymbol{\Sigma}^{-1} \begin{pmatrix} \mathbf{r}^a - \bar{\mathbf{r}}^a \\ \mathbf{r}^e - \bar{\mathbf{r}}^e \end{pmatrix} \leq \delta^2, \ \ \mathbf{A}\mathbf{r}^e \leq \mathbf{0} \right\}.$$

The uncertainty set described here is the intersection of two different sets. The risk associated with the asset and the currency returns is expressed by a joint confidence region forming an ellipsoid, in which deviations of the returns from their estimates are weighted by the covariance matrix $\boldsymbol{\Sigma}$. Note that $\boldsymbol{\Sigma}$ does not only refer to the relationship between assets, but also between assets and currencies, and between currencies. The system of linear inequalities $\mathbf{A}\mathbf{r}^e \leq \mathbf{0}$ reflects the triangular relationship between the foreign exchange rates, which must be respected at all times in an arbitrage-free market. If we define two exchange rates $E_j$ and $E_k$ relative to a base currency, a cross exchange rate $X_{jk} = E_k / E_j$ is automatically defined between those two rates. We must then ensure that the cross exchange rate returns $x_{jk}$ are also within adequate bounds and respect the triangulation constraint.
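To fix the notation before proceeding, the following minimal sketch (illustrative numbers only) builds the assignment matrix O for a small universe and evaluates the portfolio return R(w) = [diag(r^a) O r^e]' w.

```python
import numpy as np

# 3 assets, 2 currencies: assets 0 and 1 trade in currency 0, asset 2 in currency 1.
O = np.array([[1, 0],
              [1, 0],
              [0, 1]], dtype=float)   # o_ij = 1 iff asset i is traded in currency j

r_a = np.array([1.04, 0.98, 1.10])    # local asset returns P_i / P_i^0
r_e = np.array([1.02, 0.95])          # currency returns E_j / E_j^0
w = np.array([0.5, 0.2, 0.3])         # portfolio weights, 1'w = 1, w >= 0

total_returns = np.diag(r_a) @ O @ r_e   # element i: r_i^a times its currency return
R = total_returns @ w                    # portfolio return R(w)
print(total_returns, R)
```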
For simplicity, we assume that $x_{jk}$ materializes between a lower bound L and an upper bound U, which allows us to reformulate the bounding constraints as:

$$L \leq x_{jk} \leq U \ \Leftrightarrow \ L\,r_j^e \leq r_k^e \leq U\,r_j^e.$$

We are now faced with the problem of optimizing the product of two random variables. A common approximation is to consider the total asset returns as the sum of the local asset and the currency returns. In the remainder, we propose an alternative semidefinite programming approach. A semidefinite program maximizes a linear function subject to the constraint that an affine combination of symmetric matrices is positive semidefinite (Vandenberghe & Boyd 1996).
2.2. Semidefinite Programming Approximation
We start by rewriting our robust problem (1) in the epigraph form:

$$\max_{\mathbf{w},\,\gamma} \ \gamma \qquad \text{s.t.} \ \ [\mathrm{diag}(\mathbf{r}^a)\,\mathbf{O}\,\mathbf{r}^e]^{\top}\mathbf{w} - \gamma \geq 0 \ \ \forall\, (\mathbf{r}^a, \mathbf{r}^e) \in \Theta, \qquad \mathbf{1}^{\top}\mathbf{w} = 1, \ \mathbf{w} \geq \mathbf{0}. \qquad (2)$$

We also rewrite the constraints that define the support of our uncertain returns in the form $\Theta = \{\boldsymbol{\xi} \in \mathbb{R}^k : \mathbf{e}_1^{\top}\boldsymbol{\xi} = 1, \ \boldsymbol{\xi}^{\top}\mathbf{W}_l\,\boldsymbol{\xi} \geq 0, \ l = 1,\ldots,t\}$, where $\mathbf{e}_1$ is a basis vector in $\mathbb{R}^k$ whose first element is 1 and all the others 0. This construction guarantees that the first component of the vector $\boldsymbol{\xi}$ is equal to 1. Starting from the ellipsoidal region, we define an equivalent constraint of the form $\boldsymbol{\xi}^{\top}\mathbf{W}_1\,\boldsymbol{\xi} \geq 0$, where, with $\bar{\mathbf{r}} = (\bar{\mathbf{r}}^a;\ \bar{\mathbf{r}}^e)$:

$$\boldsymbol{\xi} = \begin{pmatrix} 1 \\ \mathbf{r}^a \\ \mathbf{r}^e \end{pmatrix}, \qquad \mathbf{W}_1 = \begin{pmatrix} \delta^2 - \bar{\mathbf{r}}^{\top}\boldsymbol{\Sigma}^{-1}\bar{\mathbf{r}} & \bar{\mathbf{r}}^{\top}\boldsymbol{\Sigma}^{-1} \\ \boldsymbol{\Sigma}^{-1}\bar{\mathbf{r}} & -\boldsymbol{\Sigma}^{-1} \end{pmatrix}.$$

The linear system of inequalities representing the triangulation requirement may be constructed following a similar procedure, with a matrix $\mathbf{W}_l$ for each constraint. We then replace the semi-infinite inequality constraint by a linear matrix inequality, using the following result (Ben-Tal et al. 2004):
Approximate S-lemma: Consider t+1 symmetric matrices $\mathbf{S}$ and $\mathbf{W}_l$, $l = 1,\ldots,t$, and the following propositions:
(i) $\exists\,\boldsymbol{\lambda} \in \mathbb{R}^t$ with $\boldsymbol{\lambda} \geq \mathbf{0}$ and $\mathbf{S} - \sum_{l=1}^{t} \lambda_l \mathbf{W}_l \succeq \mathbf{0}$;
(ii) $\boldsymbol{\xi}^{\top}\mathbf{S}\,\boldsymbol{\xi} \geq 0 \quad \forall\,\boldsymbol{\xi} \in \Theta = \{\boldsymbol{\xi} \in \mathbb{R}^k : \mathbf{e}_1^{\top}\boldsymbol{\xi} = 1, \ \boldsymbol{\xi}^{\top}\mathbf{W}_l\,\boldsymbol{\xi} \geq 0, \ l = 1,\ldots,t\}$.
Then, (i) implies (ii).
Our final model formulation is then:

$$\max_{\mathbf{w},\,\gamma,\,\boldsymbol{\lambda}} \ \gamma \qquad \text{s.t.} \ \ \mathbf{S} - \sum_{l=1}^{t} \lambda_l \mathbf{W}_l \succeq \mathbf{0}, \qquad \mathbf{1}^{\top}\mathbf{w} = 1, \qquad \mathbf{w}, \boldsymbol{\lambda} \geq \mathbf{0}, \qquad (3)$$
where:

$$\mathbf{S} = \begin{pmatrix} -\gamma & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \tfrac{1}{2}\,\mathrm{diag}(\mathbf{w})\,\mathbf{O} \\ \mathbf{0} & \tfrac{1}{2}\,\mathbf{O}^{\top}\mathrm{diag}(\mathbf{w}) & \mathbf{0} \end{pmatrix}.$$

The reformulated problem (3) in the decision variables $\mathbf{w}$, $\gamma$ and $\boldsymbol{\lambda}$ provides a lower bound on the optimal objective function value of the original problem. The advantage of this formulation is its tractability: because both the objective function and the constraints are convex, we are now able to solve the problem efficiently with a standard semidefinite programming solver.
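A minimal sketch of problem (3) in cvxpy, under the reconstruction of W1 and S given above; the dimensions, return estimates, covariance and cross-rate bounds are illustrative stand-ins, and the original study solves the model with its own data and solver.

```python
import cvxpy as cp
import numpy as np

n, m = 3, 2                      # assets, currencies
O = np.array([[1., 0.], [1., 0.], [0., 1.]])
r_bar = np.array([1.04, 0.98, 1.10, 1.02, 0.95])   # stacked (r_a; r_e) estimates
Sigma = 0.01 * np.eye(n + m)                        # covariance of (r_a, r_e)
Si = np.linalg.inv(Sigma)
delta2 = 1.0                                        # ellipsoid size delta^2
L_b, U_b = 0.9, 1.1                                 # cross-rate bounds

k = 1 + n + m                    # xi = (1, r_a, r_e)
# Ellipsoid as xi' W1 xi >= 0.
W1 = np.zeros((k, k))
W1[0, 0] = delta2 - r_bar @ Si @ r_bar
W1[0, 1:] = r_bar @ Si
W1[1:, 0] = Si @ r_bar
W1[1:, 1:] = -Si
# Triangulation rows a' r_e <= 0 encoded as xi' Wl xi >= 0 (linear in xi).
Ws = [W1]
for a in ([L_b, -1.0], [-U_b, 1.0]):                # L*r1 <= r2 <= U*r1
    Wl = np.zeros((k, k))
    Wl[0, 1 + n:] = -0.5 * np.array(a)
    Wl[1 + n:, 0] = -0.5 * np.array(a)
    Ws.append(Wl)

w = cp.Variable(n, nonneg=True)
gamma = cp.Variable()
lam = cp.Variable(len(Ws), nonneg=True)

S = cp.bmat([[cp.reshape(-gamma, (1, 1)), np.zeros((1, n)), np.zeros((1, m))],
             [np.zeros((n, 1)), np.zeros((n, n)), 0.5 * cp.diag(w) @ O],
             [np.zeros((m, 1)), (0.5 * cp.diag(w) @ O).T, np.zeros((m, m))]])
M = S - sum(lam[i] * Ws[i] for i in range(len(Ws)))
prob = cp.Problem(cp.Maximize(gamma),
                  [0.5 * (M + M.T) >> 0,   # symmetrized PSD constraint of (3)
                   cp.sum(w) == 1])
prob.solve(solver=cp.SCS)
print(w.value, gamma.value)      # gamma is the guaranteed worst-case return
```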
3. Numerical Results
We would like to assess the performance of the theoretical model developed in the previous section with historical market data. Our US investor wishes to invest not only in domestic assets, such as the S&P500 and the NASDAQ, but also in foreign assets. We consider 3 international indexes: the German DAX and the French CAC40 denominated in EUR, and the Swiss SMI in CHF. Each month we calculate the optimal asset allocation, taking the expected asset and currency returns as the mean of the historical returns from the previous twelve months. The upper and lower bounds on the cross-exchange rates were calculated based on the currencies' mean returns for the period considered plus the standard deviation for the same period multiplied by a factor of ±1.5. These bounds and the covariance matrix are assumed to remain constant throughout this period. At the end of each month, the actual portfolio return is computed based on the materialized returns. This procedure is repeated every month, and the accumulated wealth is calculated. We compare our robust model (3), designated as the SDP model, with other strategies to compute international portfolios: the EG approach (Elton et al. 2007), which does not consider the multiplicative term in the total asset returns, and the Base Currency approach, where all foreign returns are converted to the base currency of the investor. Additionally, the original non-convex model (1) is solved to local optimality with a semi-infinite algorithm. We consider an uncertainty set of size δ = 1. The left chart in Figure 1 depicts the accumulated wealth over the period from October 1998 to September 2008 for the different approaches. For this particular data set, the SDP model appears to outperform the other strategies, yielding an average annual portfolio return of 6%, against 2.7%, 1.8% and 1.2% obtained by the Base Currency, EG and Local Optima approaches, respectively. These results lead us to conclude that accounting for the correlation between the local asset and currency returns, as well as their multiplicative effect, is important. The right chart in Figure 1 compares our robust model with the Markowitz approach of risk minimization. Again, the robust model appears to outperform the risk minimization model, with average annual returns of 6% and 2.84%, respectively. The guarantee provided by the robust model is clearly seen in the period from January 2002 to September 2003. Because we are optimizing for the worst-case scenario, our accumulated portfolio return never falls short of that given by the Markowitz model.
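The monthly rolling evaluation described above can be sketched as follows; the return data here are synthetic and allocate() is only a placeholder for the SDP allocation step of Section 2.2.

```python
import numpy as np

def allocate(mean_returns):
    """Placeholder for the monthly allocation step (e.g. solving the SDP model
    of Section 2.2); here simply a normalized positive weighting of the means."""
    w = np.clip(mean_returns - mean_returns.min() + 1e-6, 0.0, None)
    return w / w.sum()

rng = np.random.default_rng(1)
T, n = 120, 5                                  # 10 years of monthly total returns
returns = 1.0 + 0.01 * rng.standard_normal((T, n))

wealth = [1.0]
for t in range(12, T):                         # 12-month rolling estimation window
    mu = returns[t - 12:t].mean(axis=0)        # expected returns = historical mean
    w = allocate(mu)
    wealth.append(wealth[-1] * (returns[t] @ w))  # realized monthly growth factor
print(wealth[-1])                              # accumulated wealth at the end
```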
Figure 1: Accumulated wealth over the period from Oct98 to Sep08
4. Conclusion
We presented a robust optimization approach to the portfolio allocation problem when foreign assets are available for investment. We showed that the bilinear relationship between local asset and currency returns can be expressed in a tractable convex formulation by using the approximate S-lemma and rewriting our model as a semidefinite programming problem. The backtesting experiments suggest that this approach performs better than the Markowitz risk minimization model and other international portfolio optimization models.
Acknowledgments Financial support from the EU Commission through MRTN-CT-2006-034270 COMISEF and Fundação Calouste Gulbenkian - 113392 is gratefully acknowledged.
References Ben-Tal, A. et al., 2004. Adjustable Robust Solutions of Uncertain Linear Programs. Mathematical Programming Ser. A, 99, pp.351-376. Ben-Tal, A. & Nemirovski, A., 1998. Robust Convex Optimization. Mathematics of Operations Research, 23, pp.769-805. El-Ghaoui, L. & Lebret, H., 1997. Robust Solutions to Least-Squares Problems With Uncertain Data. SIAM Journal on Matrix Analysis & Applications, 18, pp.1035-1064. Elton, E.J. et al., 2007. Modern Portfolio Theory and Investment Analysis - 7th Edition, Wiley. Fonseca, R.J. et al., 2011. Robust Optimization of Currency Portfolios. Journal of Computational Finance, forthcoming. Markowitz, H., 1952. Portfolio Selection. Journal of Finance, 7, pp.77-91. Rustem, B. & Howe, M., 2002. Algorithms for Worst-Case Design and Applications to Risk Management, Princeton University Press. Shawky, H.A. et al., 1997. International Portfolio Diversification: a Synthesis and an Update. Journal of International Financial Markets, Institutions & Money, 7, pp.303-327. Vandenberghe, L. & Boyd, S., 1996. Semidefinite Programming. SIAM Review, 38, pp.49-95.
Increasing the catalytic cracking process efficiency by implementing an optimal control structure. Case study
Cristina Popa, Cristian Pătrășcioiu
Control Engineering and Computers Department, Petroleum-Gas University of Ploiesti, Bucuresti Blvd. 39, Ploiesti, ROMANIA
Abstract The paper presents an original catalytic cracking optimal control system developed by the authors for a catalytic cracking plant. The efficiency improvement for a Romanian catalytic cracking plant using the proposed optimal control system is provided as a case study. The paper is structured in three parts. The first part describes the hierarchical control structure for the fluid catalytic cracking unit. The suggested control structure is the result of extensive analysis of the control structure design strategies used for the chemical process. The second part is dedicated to the study of the objective function of the optimal control system and development of the optimal control system. The last part contains a case study of the Romanian catalytic cracking process. The authors have elaborated a specific process model and an optimal controller. Using the adequate simulation program, the authors have demonstrated the optimal control system efficiency. Keywords: catalytic cracking, control, simulation, optimization.
1. Introduction
The fluid catalytic cracking unit (FCCU) has an important role in the petroleum industry. The main goal of this plant is to obtain the maximum benefit while assuring safety and stability. The efficiency of the catalytic cracking process can be increased by various means: the mechanical design of the reactor and regenerator, the construction and performance of the cracking gas compressors and the air blower, the physical and chemical properties of the feedstock, the kinetic characteristics of the catalyst, and the implementation of an optimal hierarchical control system. The control problem of the FCCU has been treated under various aspects in several works. Some works deal with conventional process control [1, 2], and another category of papers deals with aspects of advanced control [3, 4]. However, optimal control systems applied to the FCCU are insufficiently treated. Under these conditions, the authors have focused their research on the development of an optimal hierarchical control system for increasing the catalytic cracking process
efficiency. The research has been completed with a case study of the optimal control system for a Romanian FCCU.
2. The Hierarchical Control Structure
To develop the control system structure, the authors have studied the structure of the fluid catalytic cracking process. The process has been decomposed into four sub-processes: the interfusion node, the riser (adiabatic tubular reactor), the stripper and the regenerator (the coke-burning reactor) [5, 6]. For each sub-process, the authors have developed a mathematical model for the steady state and the dynamic regime [5]. To build the hierarchical control structure, the authors have used the hierarchical organization concepts of complex systems [7] and Plantwide Control design strategies [8]. The control structure proposed by the authors is organized in three hierarchical levels: the conventional control level, the advanced control level and the optimal control level, figure 1.
Figure 1. Optimal hierarchical control structures for the FCCU.
The conventional control level contains 10 mono-variable control loops based on standard PID controllers. The advanced control level contains a multi-variable predictive controller developed by the authors. The set points of the predictive controller are the riser outlet temperature (used to control the cracking conversion) and the regenerator temperature (used to control the catalyst regeneration). The predictive controller's performance has been tested using a dynamic simulator elaborated by the authors [6]. The optimal control level contains an optimal controller whose objective is to calculate the set points of the second level (the optimum riser outlet temperature and the optimum regenerator temperature).
3. The optimal controller design
The main process variables that affect the gasoline yield are the regenerated catalyst temperature Treg and the catalyst/feedstock contact ratio a. The goal of the optimal controller is to generate optimal set points for the predictive controller that maximize the gasoline yield. The optimal controller developed by the authors contains three components: an objective function, an optimization algorithm and the steady state process model, see figure 2.
Figure 2. The structure of the optimal controller.
The steady state process model developed by the authors has been reduced to the following form:

$$[Y_G,\ T_R] = \mathrm{Model}(T_{reg},\ a) \qquad (1)$$
where $Y_G$ represents the gasoline yield, $T_R$ the riser temperature, $a$ the catalyst/feedstock contact ratio, and $T_{reg}$ the regenerated catalyst temperature. The objective function proposed by the authors is the gasoline yield of the catalytic cracking process:

$$F(T_{reg},\ a) = Y_G. \qquad (2)$$
The process variable $a$, i.e. the catalyst/feedstock contact ratio, cannot be used as a manipulated variable. Industrial practice recommends the riser outlet temperature as the manipulated variable. Under this condition, the authors have proposed a correlation between the catalyst/feedstock contact ratio and the riser outlet temperature. The optimization module calculates the optimal value of the catalyst/feedstock contact ratio, and after this operation the controller algorithm determines the optimal riser outlet temperature using the steady state process model. The optimization algorithm used by the authors is a multidimensional exploration algorithm based on the Hessian matrix with simple restrictions. The objective function restrictions are simple bounds:

$$700 \leq T_{reg} \leq 750\ [^{\circ}\mathrm{C}], \qquad 3 \leq a \leq 6. \qquad (3)$$
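A minimal sketch of the optimal controller's optimization step under the bounds (3); the steady-state model below is a smooth hypothetical surrogate (with its maximum placed near the reported yield of about 0.46), and a standard bounded quasi-Newton method (L-BFGS-B) stands in for the authors' Hessian-based exploration algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def steady_state_model(T_reg, a):
    """Placeholder for Model(T_reg, a) -> (Y_G, T_R): a smooth surrogate with a
    maximum gasoline yield near T_reg = 725 degC, a = 4.5 (illustrative only)."""
    Y_G = 0.46 - 2e-5 * (T_reg - 725.0) ** 2 - 0.01 * (a - 4.5) ** 2
    T_R = 0.6 * T_reg + 10.0 * a          # surrogate riser outlet temperature
    return Y_G, T_R

def neg_yield(x):
    Y_G, _ = steady_state_model(x[0], x[1])
    return -Y_G                            # maximize F(T_reg, a) = Y_G

res = minimize(neg_yield, x0=np.array([720.0, 4.0]),
               method="L-BFGS-B", bounds=[(700.0, 750.0), (3.0, 6.0)])
T_reg_opt, a_opt = res.x
_, T_R_opt = steady_state_model(T_reg_opt, a_opt)  # optimal riser set point for
print(T_reg_opt, a_opt, T_R_opt, -res.fun)         # the predictive control level
```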
4. Case study. Increasing the catalytic cracking process efficiency in a Romanian refinery
The authors have studied a Romanian catalytic cracking plant and have collected industrial data, figure 3. The steady state and dynamic process models have been adapted using these industrial data [9].
Figure 3. Industrial data: a) feedstock and gasoline flow rate; b) feedstock, reactor and regenerator temperature.
Using the steady state process model and the Matlab optimization tools, the authors have studied the objective function (2) associated with the optimal control. The 3D graphic confirmed that the function has an optimal region, figure 4a, and the contour graphic of the objective function indicated that the gasoline yield has a maximum value of approximately $Y_G = 0.46$, figure 4b. The authors have implemented in Matlab a special program for the dynamic simulation of the catalytic cracking optimal control system. The numerical results confirmed an increase in the catalytic cracking process efficiency of 3%, corresponding to about 10 million euro/year, figure 5.
Figure 4. The objective function (2): a) the 3D graphic; b) the contour graphic.
Figure 5. Comparison between industrial data and optimal control system data of the gasoline yield.
5. Conclusion
This paper presents aspects of improving the catalytic cracking process efficiency by implementing an optimal hierarchical control system. The main contributions brought by the authors within this paper are:
- development of a hierarchical control structure;
- development of an optimal controller;
- optimal control system simulation and the economic benefits obtained by using the optimal control system.
References
[1] R. Aguilar, A. Poznyak, R. Martínez-Guerra, R. Maya-Yescas, 2002, Temperature control in catalytic cracking reactors via a robust PID controller, Journal of Process Control, Volume 12, Issue 6, p. 695.
[2] M. Cristea, P. Agachi, 2007, Comparison between different control approaches of the UOP fluid catalytic cracking unit, Computer Aided Chemical Engineering, Volume 24, p. 847.
[3] A.A. Alaradi, S. Rohani, 2002, Identification and Control of a Riser-Type FCC Unit Using Neural Networks, Computers and Chemical Engineering, 26, p. 401.
[4] J. Chunyang, S. Rohani, J. Arthutr, 2003, FCC Unit Modeling, Identification and Model Predictive Control, a Simulation Study, Chemical Engineering and Processing, 42, p. 311.
[5] C. Popa, C. Pătrășcioiu, 2010, The Model Predictive Control System for the Fluid Catalytic Cracking Unit, Advances in Dynamic Systems and Control, 6th WSEAS International Conference on Dynamical Systems and Control, Tunisia, p. 95.
[6] C. Popa, C. Pătrășcioiu, 2010, New Approach in Modeling, Simulation and Hierarchical Control of Fluid Catalytic Cracking Process. I - Process Modelling, Revista de Chimie, Bucharest, Romania, vol. 60, no. 4, p. 419.
[7] M.D. Mesarović, D. Macko, Y. Takahara, 1970, Theory of Hierarchical Multilevel Systems, New York: Academic Press.
[8] W. Luyben, B.D. Tyréus, M.L. Luyben, 1998, Plantwide Process Control, McGraw-Hill, New York, USA.
[9] C. Pătrășcioiu, C. Popa, 2007, Kinetic Model Adaptation of Catalytic Cracking Unit, Chemical Bulletin of "Politehnica" University of Timișoara, Vol. 52(66), 1-2, p. 34.
Experimental Evaluation of a Robust NMPC Strategy for an Unstable Nonlinear Process Udo Schubert, Andreas Lange, Harvey Arellano-Garcia, Günter Wozny Chair of Process Dynamics and Operation; Berlin Institute of Technology, Sekr. KWT-9, Straße d. 17. Juni 135, D-10623 Berlin, Germany
Abstract In this work, we introduce a nonlinear model predictive controller (NMPC) for the safe operation of an open-loop unstable process with nonlinear dynamics and grade transitions. The nominal stability of the reactor is achieved through inclusion of an optimal terminal state penalty. Moreover, a path constraint and a set-point trajectory have been investigated to minimize the chances for reactor runaway during transitions between unstable operation points. In this contribution, the derivation of the proposed control algorithm and the solution strategy of the optimal control problem will be presented in detail. Moreover, experimental results will be used to validate the approach and provide a comparison. Furthermore, aspects of infeasibility and performance of the solution of the optimal control problem are also discussed. Keywords: Nonlinear Model Predictive Control, Multiplicity, Reactor Runaway
1. Introduction The considered case study is a continuous reactor with a strong irreversible exothermic first order chemical reaction.
Figure 1. Process flowsheet (left) and stationary heat-gain heat-loss diagram (right).
The heat is transported from the reactor holdup to a constant coolant recycle and finally removed from the system using a heat exchanger with external coolant makeup, according to the flowsheet in Fig. 1 (left). Following the static heat-gain heat-loss diagram in Fig. 1 (right), the reactor exhibits multiplicity behavior with respect to the reactor temperature, which also includes the occurrence of an unstable steady state. However, in some cases,
the economically desirable operation point is represented by this unstable state B, whereas the stable steady states A and C are associated with reduced conversion, because of the low temperature level, or with decomposition reactions, because of the high temperature level. Although economically desirable, the unstable steady state is strongly affected by the pronounced nonlinearities of the chemical reaction and is prone to ignition/extinction behavior. These phenomena may lead to critical system states causing long process transitions or even a plant shutdown. Therefore, control schemes are required to stabilize the process at an unstable operation point, whilst being robust against disturbances by complying with safety margins so as to avoid both reactor runaway and reaction extinction. For the validation of the proposed NMPC scheme, an experimental setup has been implemented, which makes use of a mixed-reality approach to provide an economically feasible and realistic framework for comprehensive control algorithm testing. Whereas previous work approximated the real process behavior by replacing the chemical reaction with controlled steam injection into a reactor dummy and a water feed [1, 2], the mixed-reality approach replaces the reactor completely with a simulation layer and an interface to the jacket and the cooling system [3].
2. NMPC Approach
The nominal stability of the system is achieved by formulating the optimal control problem as a quasi-infinite horizon control QIH-NMPC scheme [4] following (1)-(5). The objective function (6) consists of the quadratic stage cost (7) and is extended using a terminal cost function (8) that penalizes the deviation of the state from the desired steady state at the end of the prediction horizon. Thereby, the computationally infeasible infinite horizon control scheme is approximated.

$$\min_{u} \ J(x, u, T_c) \qquad (1)$$

$$\text{s.t.} \quad f(\dot{x}, x, u, t) = 0, \quad x(t_0) = x_0 \qquad (2)$$

$$g(x, u, t) = 0 \qquad (3)$$

$$h(x, u, t) \leq 0 \qquad (4)$$

$$u^{min} \leq u \leq u^{max} \qquad (5)$$
However, for large deviations from the setpoint (e.g. because of large disturbances or a setpoint change), the terminal region may become infeasible within the finite prediction horizon. In this case, the optimal solution may exhibit unstable closed-loop behavior and result in reactor runaway or reaction extinction. In order to avoid feasibility problems of the optimization routine, the terminal state constraint included in [4] is removed, whilst the control horizon is extended to recover the stability properties in stationary operation [5]. Therefore, special attention has to be paid to the problem of providing stable transitions between unstable operation points. In order to limit the feasible trajectories to a space of safe dynamic operation that prevents reactor runaway or reaction extinction, the utilization of (i) a dynamic path constraint and (ii) a safe setpoint trajectory for the reactor temperature have been investigated. The solution of the open-loop optimal control problem is obtained using a sequential approach in order to parametrize the control vector over the prediction horizon. It is well known that such a feasible path approach has single shooting properties and may be sensitive to instability of the controlled system [6]. This stems from strong nonlinearities of the objective function for longer prediction
horizons and from the dependency on the initial guess for the control vector [7, 8].

$$J(x, u, T_c) = \underbrace{\int_{t}^{t+T_c} F(x(\tau), u(\tau))\, d\tau}_{\text{stage cost}} + \underbrace{E(x(t+T_c))}_{\text{terminal cost}} \qquad (6)$$

$$F(x(\tau), u(\tau)) = (x(\tau) - x_s)^{T} Q\, (x(\tau) - x_s) + (u(\tau) - u_s)^{T} R\, (u(\tau) - u_s) \qquad (7)$$

$$E(x(t+T_c)) = (x(t+T_c) - x_s)^{T} P\, (x(t+T_c) - x_s) \qquad (8)$$

The computation of a suitable terminal region (9) and the corresponding terminal penalty matrix P is usually not a trivial task. In this work, the approach presented in [9] has been utilized, by linearizing the system around the unstable steady state (B):

$$\Omega = \left\{ x \in \mathbb{R}^{n} \ : \ x^{T} P x \leq \alpha \right\}. \qquad (9)$$

Then, the cost of the infinite horizon can be approximated from the solution of the optimization problem (10). Therein, the linear feedback law $K = Y X^{-1}$ and the terminal penalty matrix $P = \alpha X^{-1}$ are designed to (i) stabilize the system and (ii) maximize the volume of the resulting terminal region for $\alpha \in \mathbb{R}_{+}$, using matrices $X \in \mathbb{R}^{n \times n}$ and $Y \in \mathbb{R}^{m \times n}$:

$$\max_{\alpha,\, X,\, Y} \ \det\left(\alpha P^{-1}\right) \qquad (10)$$

$$\text{s.t.} \quad 0 \prec X = X^{T}, \qquad \begin{pmatrix} -AX - XA^{T} - BY - Y^{T}B^{T} & XQ^{1/2} & Y^{T}R^{1/2} \\ Q^{1/2}X & \alpha I & 0 \\ R^{1/2}Y & 0 & \alpha I \end{pmatrix} \succeq 0.$$
The solution of problem (10) has been obtained using the solver sdpt3 [10] and the corresponding MATLAB® interface yalmip [11].
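For illustration, problem (10) can also be posed in cvxpy (the study itself uses the yalmip/sdpt3 toolchain); the (A, B) linearization below is a hypothetical toy system, α is fixed as a normalization in this sketch, and maximizing log det X corresponds to maximizing det(αP⁻¹) since X = αP⁻¹.

```python
import cvxpy as cp
import numpy as np

# Hypothetical linearization around the unstable steady state: 2 states, 1 input.
A = np.array([[0.5, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q12, R12 = np.eye(2), np.eye(1)       # Q^(1/2), R^(1/2)
n, m = 2, 1
alpha = 1.0                            # fixed normalization for this sketch

X = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))

M = cp.bmat([
    [-A @ X - X @ A.T - B @ Y - Y.T @ B.T, X @ Q12, Y.T @ R12],
    [Q12 @ X, alpha * np.eye(n), np.zeros((n, m))],
    [R12 @ Y, np.zeros((m, n)), alpha * np.eye(m)],
])
# Maximizing log det X is equivalent to maximizing det(alpha * P^{-1}).
prob = cp.Problem(cp.Maximize(cp.log_det(X)),
                  [X >> 1e-6 * np.eye(n), 0.5 * (M + M.T) >> 0])
prob.solve(solver=cp.SCS)

P = alpha * np.linalg.inv(X.value)    # terminal penalty matrix
K = Y.value @ np.linalg.inv(X.value)  # terminal linear feedback law
print(P, K)
```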
3. Simulation Results
In a first step, the controller has been designed using a dynamic model of the process depicted in Fig. 1. A path constraint (11), taken from [12], has been included in the optimal control problem (1):

$$\mathrm{div}\left(g(x(t))\right) = \frac{\partial g_1}{\partial x_1} + \frac{\partial g_2}{\partial x_2} + \cdots + \frac{\partial g_n}{\partial x_n} < \mathrm{div}_{max}. \qquad (11)$$

A safe operation point requires the divergence to be negative, whereas for unstable processes the divergence may be slightly greater than zero. To allow a transition towards higher temperatures, a certain degree of runaway behavior is required, and therefore a limit $\mathrm{div}_{max} \in \mathbb{R}_{+}$ is defined. However, it turns out that this constraint is too restrictive and results in inappropriately long process transitions.
Therefore, using an offline optimization of process transitions with a fixed control horizon of 5400 s, an empirical setpoint trajectory has been determined. A constant gradient of 7 K/h was found to provide safe transitions, since it delivers feasible intermittent terminal regions. Accordingly, the length of the control and prediction horizon has been set to Tc = 600 s with a step size of dt = 60 s, giving a control vector with 10 elements. Fig. 2 illustrates the simulation results obtained for up- and downward setpoint changes with stochastic input disturbances on the feed temperature, composition and flow, as well as on the coolant temperature.
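A sketch of the sequential (single-shooting) solution with the horizon settings above — 10 piecewise-constant moves of 60 s over Tc = 600 s; the scalar open-loop unstable dynamics, weights and setpoint are hypothetical placeholders, not the reactor model.

```python
import numpy as np
from scipy.optimize import minimize

DT, N = 60.0, 10                      # step size 60 s, control vector of 10 moves
x_s, u_s = 1.0, 0.5                   # desired (unstable) steady state, placeholder
Q, R, P = 1.0, 0.1, 5.0               # stage and terminal weights

def simulate(x0, u_seq):
    """Explicit-Euler rollout of placeholder open-loop unstable dynamics
    dx/dt = 0.01*x + 0.02*u (instability makes single shooting sensitive)."""
    xs = [x0]
    for u in u_seq:
        xs.append(xs[-1] + DT * (0.01 * xs[-1] + 0.02 * u))
    return np.array(xs)

def objective(u_seq, x0):
    xs = simulate(x0, u_seq)
    stage = DT * np.sum(Q * (xs[:-1] - x_s) ** 2 + R * (u_seq - u_s) ** 2)
    terminal = P * (xs[-1] - x_s) ** 2          # terminal cost E, cf. (8)
    return stage + terminal

x0 = 0.8
res = minimize(objective, x0=np.full(N, u_s), args=(x0,),
               method="L-BFGS-B", bounds=[(0.0, 1.0)] * N)
u_opt = res.x                         # first move is applied, horizon recedes
print(u_opt[:3], res.fun)
```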
The setpoint trajectory has been disabled for the downward step in Fig. 2 (left) to illustrate that a stable transition in this direction can be obtained directly, without any constraints. However, the difficult transition to the unstable steady state at the elevated temperature level can be accomplished smoothly with the setpoint trajectory.
Figure 2. Simulation results for a setpoint change down- (left) and upwards (right).
4. Experimental Results
In this section, the results obtained by simulation in the previous section are validated through online experiments on the Mixed-Reality CSTR. As seen from Fig. 3 (left), the step response for a downward setpoint change can be replicated quite well, even without the setpoint trajectory.
Figure 3. Experimental results of the Mixed-Reality reactor for a setpoint change down- (left) and upwards (right).
Except for some oscillations in the coolant valve position when the new steady state is being approached, the dynamic behavior is very similar. While the coolant valve shows permanent oscillations in the upward setpoint change scenario in Fig. 3 (right), the reactor temperature tracks the trajectory quite well and remains stable. The oscillations are the result of minor model inaccuracy concerning the predicted heat exchanger outlet temperature and have also been reported in [2].
5. Conclusions
The NMPC control scheme adopted from the QIH-NMPC approach has been designed for closed-loop stability and safe setpoint transitions using dynamic simulation. For its validation, it has been implemented on a Mixed-Reality CSTR. In order to achieve closed-loop stability during setpoint transitions with short control horizons, a safe setpoint trajectory has been calculated. However, the introduced path constraint is being further exploited so as to incorporate the dynamic properties of the heat transport, in order to achieve optimal transitions without the performance limitation imposed by a suboptimal setpoint gradient. This work has been financially supported by the German Research Foundation. The authors would also like to thank Prof. Allgöwer and Christoph Böhm for their support with the terminal region calculations.
References [1] L. Kershenbaum. Experimental testing of advanced algorithms for process control: When is it worth the effort? Chemical Engineering Research and Design, 78(4):509–521, 2000. [2] Lino O. Santos, Paulo A. F. N. A. Afonso, Jose A. A. M. Castro, Nuno M. C. Oliveira, and Lorenz T. Biegler. On-line implementation of nonlinear mpc: an experimental case study. Control Engineering Practice, 9(8):847–857, 2001. [3] U. Schubert, H. Arellano-Garcia, and G. Wozny. Development and experimental verification of model-based process control using mixed-reality environments. In Computer Aided Chemical Engineering, volume 26 of 19th European Symposium on Computer Aided Process Engineering, pages 333–337. Elsevier, 2009. [4] Rolf Findeisen and Frank Allgöwer. The quasi-infinite horizon approach to nonlinear model predictive control. pages 89–108. 2003. [5] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert. Constrained model predictive control: Stability and optimality. Automatica, 36:789–814, 2000. [6] T. Binder, C. Blank, H. Georg Bock, R. Bulirsch, W. Dahmen, M. Diehl, T. Kronseder, W. Marquardt, Johannes P. Schlöder, and O. v. Stryk. Introduction to model based optimization of chemical processes on moving horizons. In M. Grötschel, S. O. Krumke, and J. Rambau, editors, Online Optimization of Large Scale Systems: State of the Art, pages 295– 340. Springer-Verlag Berlin, Heidelberg, 2001. [7] Moritz Diehl, H. Georg Bock, Johannes P. Schlöder, Rolf Findeisen, Zoltan Nagy, and Frank Allgöwer. Real-time optimization and nonlinear model predictive control of processes governed by differential-algebraic equations. Journal of Process Control, 12(4):577–585, 2002. [8] Victor Zavala, Carl Laird, and Lorenz Biegler. Fast implementations and rigorous models: Can both be accommodated in nmpc? International Journal of Robust and Nonlinear Control, 18(8):800–815, 2008. [9] C. Böhm, R. Findeisen, and F. Allgöwer. Robust control of constrained sector bounded lur’e systems with applications to nonlinear model predictive control. Dynamics of Continuous, Discrete and Impulsive Systems, 17(6):24, 2010. [10] R. H. Tütüncü, K. C. Toh, and M. J. Todd. Solving semidefinite-quadratic-linear programs using sdpt3. Mathematical Programming, 95(2):189–217, 2003. [11] J. Lofberg. Yalmip : A toolbox for modeling and optimization in MATLAB. In Proceedings of the CACSD Conference, 2004. [12] J. M. Zaldívar, J. Cano, M. A. Alós, J. Sempere, R. Nomen, D. Lister, G. Maschio, T. Obertopp, E. D. Gilles, J. Bosch, and F. Strozzi. A general criterion to define runaway limits in chemical reactors. Journal of Loss Prevention in the Process Industries, 16(3):187–200, 2003. doi: DOI: 10.1016/S0950-4230(03)00003-2.
Economic Plantwide Control of C4 Isomerization Process
Rahul Jagtap, Sonam Goenka, Nitin Kaistha
Chemical Engineering, Indian Institute of Technology Kanpur, Kanpur 208016, India
Abstract
Plantwide control system design for economically optimum operation of a C4 isomerization process is studied. The steady state degrees of freedom of a base case design are optimized for a given C4 fresh feed processing rate (Mode I) and for maximum production (Mode II). At maximum production, the number of active constraints equals the steady state degrees of freedom (dof), exhausting all the available dof. From the set of active constraints, regulatory plantwide control structures, CS1 and CS2, that minimize the back-off from the economically dominant active constraints are synthesized, along with a simple supervisory optimizing scheme to drive the process operation as close as possible to the active constraints. Quantitative results for the back-off necessary to avoid constraint limit violation during transients due to a ±10% feed composition change are reported. Comparison with a conventional plantwide control structure, CS3, where the fresh feed is flow controlled, shows that the maximum achievable throughput (profit) for CS2 is higher by ~2% (> $1x10^6 per yr).
Keywords: Plantwide control, optimal process operation, control structure design
1. Introduction
In refinery operations, iso-butane (i-C4) is a more valuable feedstock than n-butane (n-C4) as it is used in the production of high octane gasoline blending components, propylene oxide and tertiary butyl alcohol. The isomerization process is commonly used to convert the n-C4 to the more valuable i-C4. As depicted in Figure 1,1 it consists of a de-isobutanizer (DIB) column that takes in the fresh C4 stream with small amounts of C3 and C5 impurities to recover i-C4 as the distillate (along with the C3 impurity). The n-C4 leaves from the bottoms with some i-C4 (light key) impurity and all the C5 in the fresh feed. This bottoms stream is further fractionated in the purge column, which recovers the heavy C5 as the bottoms with an n-C4 rich distillate. This distillate is preheated using the hot reactor effluent in a feed effluent heat exchanger (FEHE), vaporized and further heated to the reaction temperature in a furnace. The hot C4 stream enters an adiabatic packed bed reactor where n-C4 isomerizes irreversibly to i-C4. The hot reactor effluent, after losing heat in the FEHE, is cooled and condensed in a flooded condenser. The i-C4 rich condensed stream is fed to the DIB above the fresh feed for recovering the i-C4. For smooth operation of this industrially important process, Luyben et al.1 have designed a regulatory plantwide control structure using their heuristic bottom-up design procedure2. Of the several reasonable control structure possibilities, this procedure gives a structure for smooth transients in the overall plantwide response to principal disturbances such as a throughput change. Economic considerations are however ignored in the design of the plantwide control structure. In today's fiercely competitive market environment, processes must be operated for optimal economic profitability (e.g. to maximize throughput / operating profit or to minimize energy consumption). The optimum steady state usually is at the intersection of multiple process constraints. Economic operation then requires driving process
operation as close as possible to these active constraints. The implemented regulatory control system determines the severity of the transients in the active constraint variables and consequently the degree of closeness of operation to the constraint limits and economic profitability 3-5. To the best of our knowledge, there are no literature reports that consider plantwide control system design for the industrially relevant C4 isomerization process from the perspective of economically optimal operation. This work presents the systematic design of such a plantwide control system for the process. nC4, iC4 Recycle
198.9°C 45 bar
FEHE
nC4 ĺ iC4
Qfur: 863 kW Qcool: 1140 kW 1
20
Frcy: 190.1 kmol/h 30
FC4: 263.1 kmol/h 0.02 C3 0.24 iC4 0.69 nC4 0.05 iC5
P2: 4.35 atm Qcnd2: 3.7 MW
Qcnd1: 10.1 MW P1: 6.4 atm L1
D1: 263.1 kmol/h 0.022 C3 0.958 iC4 0.020 nC4 0.000 iC5
1
L2
D2: 190.1 kmol/h
10
50
Qreb1: 10.4 MW
B1
20
Qreb2: 3.53 MW
B2: 13.29 kmol/h
Figure 1. C4 isomerization process schematic with base case conditions
2. Optimal Process Operation
A base case design of the C4 isomerization process for processing 263.1 kmol/h of fresh C4 feed (2% C3, 24% i-C4, 69% n-C4 and 5% i-C5) to produce an i-C4 product stream with 2% n-C4 impurity has been reported by Luyben et al.1 We take this existing design (see Figure 1 for salient design / operating parameters) and optimize the steady state operating degrees of freedom for (a) a given fresh feed processing rate of 263.1 kmol/h (Mode I) and (b) the maximum fresh feed processing rate (Mode II). There are a total of seven steady state degrees of freedom (dof): one for the fresh feed, four for the two columns (two per column), one for the furnace (duty or reactor inlet temperature) and one for the flooded condenser (duty or outlet temperature). It is assumed that the reactor is operated at the highest possible pressure (45 bar) for maximum reaction conversion, and the reactor pressure is not counted as a degree of freedom. The seven independent variables chosen to fully specify the process flowsheet are the fresh feed rate (FC4), the heavy key and light key impurity mol fractions in respectively the distillate and bottoms streams of the DIB column ([xDnC4]DIB and [xBiC4]DIB) and the purge column ([xDiC5]Purge and [xBnC4]Purge), the reactor inlet temperature (Trxr) and the cooler outlet temperature (Tcool). All material and energy stream flows (except the furnace duty) are constrained to between 0 and twice the base-case steady state values. The maximum furnace duty is constrained at 1.5 times the base case value to reflect the limited overdesign of an expensive equipment item. Similarly, the maximum DIB column boilup denoting the onset of flooding is taken as 1.3 times its base case value. The corresponding factor for the purge column is 1.5. The maximum reactor temperature
and pressure limits are 200 °C and 45 atm, respectively. Finally, the n-C4 impurity in the product stream should be below 2%. To minimize the quality give-away, the impurity in the product stream must be at its constraint value (i.e., 2%). Also, the reactor inlet temperature should be at its maximum to maximize reaction conversion for minimum recycle cost. There exists an energy consumption versus production rate trade-off with respect to the loss of n-C4 in the C5 purge stream ([xBnC4]Purge). However, since the flow rate of the purge stream is small, we simply set [xBnC4]Purge to a small value (1%) so that the n-C4 loss is small. Lastly, Tcool is fixed at a reasonable value of 53 °C. This leaves three steady state degrees of freedom to be optimized. The constrained minimization of the total energy cost is performed using fmincon in Matlab with Hysys as the background solver for the two modes of operation. The optimization problem and its results for Mode I and Mode II are briefly summarized in Table 1. The Mode I energy cost is $1.716x10^6 yr^-1 while the Mode II maximum throughput for the given fresh feed composition is 334.5 kmol/h. In Mode I, there are four active constraints, leaving three unconstrained dof. In Mode II, three additional constraints, namely the maximum furnace duty (QfurMAX), the maximum DIB boilup (Vreb1MAX) and the maximum purge column boilup (Vreb2MAX), are active so that all dof are exhausted with seven active constraints. The result is typical of chemical processes, with the process being driven to its maximum throughput limit by exhausting all the dof to drive as many constraints as possible to their respective limits.

Table 1. Process optimization results' summary

                         Mode I                   Mode II
Objective (J)            Minimum energy cost*     Maximum throughput (FC4)
FC4                      263.1 kmol/h&            334.5 kmol/h#
Trxr                     200 °C (Max)             200 °C (Max)
Tcool                    53 °C (Fixed)            53 °C (Fixed)
[xDnC4]DIB               0.02 (Max)               0.02 (Max)
[xBiC4]DIB               0.0517                   0.0125
[xDiC5]Purge             0.0202                   0.00011
[xBnC4]Purge             0.01 (Fixed)             0.01 (Fixed)
Optimum J                $1.716x10^6 yr^-1        334.4 kmol/h
Additional constraints   none                     QfurMAX, Vreb1MAX, Vreb2MAX

*: Furnace duty $9.83 GJ^-1; Steam $4.83 GJ^-1; Cooling water $0.16 GJ^-1. &: FC4 is specified. #: FC4 is optimized for maximum throughput.
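To make the optimization setup concrete, the sketch below mirrors the three-variable Mode I minimization in plain Python. It is not the authors' Matlab/Hysys implementation: scipy.optimize.minimize stands in for fmincon, and flowsheet_energy_cost is a hypothetical surrogate for the converged Hysys flowsheet.

```python
import numpy as np
from scipy.optimize import minimize

def flowsheet_energy_cost(x):
    """Stand-in surrogate for the converged flowsheet: tighter impurity
    specs imply larger reboiler/furnace duties, hence higher energy cost."""
    xd_nc4, xb_ic4, xd_ic5 = x
    return 1.5e6 + 4e3 / xd_nc4 + 2e3 / xb_ic4 + 1e3 / xd_ic5   # $/yr, illustrative

# Mode I decision variables: the three free steady state dof,
# x = [ [xDnC4]DIB, [xBiC4]DIB, [xDiC5]Purge ];
# Trxr, Tcool, [xBnC4]Purge and FC4 are fixed as described above.
x0 = np.array([0.02, 0.05, 0.02])
bounds = [(1e-4, 0.02),        # product nC4 impurity must stay <= 2%
          (1e-4, 0.20),
          (1e-4, 0.20)]

res = minimize(flowsheet_energy_cost, x0, method="SLSQP", bounds=bounds)
print(res.x, res.fun)          # the optimizer drives [xDnC4]DIB to its 2% limit
```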
3. Plantwide Control System Design and Economic Performance
3.1. Plantwide Control System Design

Table 2. Percent change in objective function J per percent back-off in an active constraint*

Mode I               Mode II
Trxr#   0.926        Trxr#   0.658
                     Qfur    0.360
                     Vreb1   0.086
                     Vreb2   0.002

*: Only hard constraints considered. #: 50 °C span

To design a regulatory control structure that minimizes the economic loss due to the need for a back-off from the active constraint limits due to transients, consider the active constraints in Mode I and Mode II. Table 2 reports the percentage loss in the objective function per unit back-off in an active hard constraint. In Mode I, Trxr is the economically dominant active constraint variable. In Mode II, the throughput is affected most by Trxr and Qfur. To eliminate a back-off in Qfur, we may flow control the furnace fuel valve and not use it as a manipulated variable (e.g., to maintain Trxr). Alternatively, since both the Qfur and Trxr constraint variables are located in the reaction section, flow controlling the feed to the reactor and not using it as a manipulated variable would eliminate the flow variability in the reactor feed and hence
mitigate the transients (and consequently the back-off) in both Qfur and Trxr. These two options result in the plantwide regulatory control structures CS1 and CS2. The inventory control system for CS1 is built around the flow-controlled (fixed) furnace duty, with loop pairings as in Table 3. The unavailability of Qfur for manipulation forces Trxr control using the recycle flow rate. The purge column reflux drum level is then controlled using the column feed. The sump level is controlled using the reboiler duty, as the bottoms stream is very small, making it inappropriate for level control. The impurities [xBnC4]Purge and [xDiC5]Purge are maintained using the bottoms and reflux rate, respectively. In the DIB column, the sump level is controlled using the fresh feed. The reflux drum level is controlled using the reflux rate, as the reflux ratio is large (>5) with a relatively small distillate. The key component impurities [xDnC4]DIB and [xBiC4]DIB are controlled using, respectively, the distillate and the reboiler duty. With the basic regulatory control system in place, supervisory loops for Mode II operation are implemented, where the setpoints [xBiC4]DIB and [xDiC5]Purge are adjusted to maintain the boilups Vreb1 and Vreb2 near their maxima. In Mode I (given throughput), QfurSP is slowly adjusted for the desired fresh feed processing rate; QfurSP thus is the throughput manipulator (TPM). In CS2, the regulatory loops are built around the flow-controlled recycle stream (Mode I TPM). The purge column reflux drum level is controlled using the column feed and the DIB sump level is controlled using the fresh feed. Trxr is controlled using Qfur. The remainder of the regulatory control structure and the Mode II supervisory loops are similar to CS1. For comparison purposes, Table 3 also reports a conventional control structure (CS3) where the fresh feed is flow controlled and acts as the Mode I TPM. Here, in addition to the CS1 Mode II supervisory loops, FC4 is adjusted to maintain Qfur.

Table 3. Plantwide control structures

Regulatory control loops (Mode I)
                        MV
CV                      CS1             CS2             CS3
TPM                     QfurSP          D2SP            FC4SP
Trxr                    D2              Qfur            Qfur
Lvltop1                 L1              L1              L1
Lvlbot1                 FC4             FC4             B1
Lvltop2                 B1              B1              D2
Lvlbot2                 Vreb2           Vreb2           Vreb2
[xDnC4]DIB              [L/D]2          [L/D]2          [L/D]2
[xBiC4]DIB              Vreb1           Vreb1           Vreb1
[xDiC5]Purge            L2              L2              L2
[xBnC4]Purge            B2              B2              B2

Mode II supervisory control loops
Trxr                    Maximum         Maximum         Maximum
Qfur                    Maximum         D2SP            FC4SP
Vreb1                   [xBiC4]DIB SP   [xBiC4]DIB SP   [xBiC4]DIB SP
Vreb2                   [xDiC5]Purge SP [xDiC5]Purge SP [xDiC5]Purge SP

3.2. Quantitative Back-off Results
The process cannot be operated at the limit of the active constraints, as ever-present disturbances would cause transient hard constraint violations, which is unacceptable (the TrxrMAX, QfurMAX, Vreb1MAX and Vreb2MAX constraints are considered hard). The constraint variable control loop setpoints must be appropriately backed off from their limits for the worst case disturbance. A ±10% step change in the fresh feed n-C4 composition with a complementary change in the i-C4 mol fraction is considered the worst case disturbance. Table 4 reports the back-off from the active constraints, along with the economic objective function, using the three control structures for Mode I and Mode II. In both modes, there is no back-off in Trxr for CS1 and CS2, as the flow variability in the reactor feed is negligible, while a small back-off of 0.1 °C occurs in CS3 for Mode II. In Mode I, CS3 fails for a +10% n-C4 feed mol fraction change, with the purge column reflux drum filling up in about 10 hours.
This is due to the accumulation of unreacted n-C4 in the recycle loop (snowball effect). In Mode II, Qfur shows a significant (7%) back-off
for CS3, while no back-off is required in CS1 and CS2. Some back-off is also necessary in the column boilups in all the structures. This back-off causes an almost negligible throughput loss of 0.1% in CS1 and CS2. The throughput loss for CS3 is, however, much larger at 1.9% due to the back-off in Qfur. Assuming a $20 per kmol product-raw material price differential (including the energy expense), this corresponds to a yearly revenue loss of about $1.068x10^6 in CS3 compared to CS1 and CS2, which is significant. The result shows that the implemented plantwide control structure significantly affects the profitability of the process.

Table 4. Back-off in active constraints and economic loss for CS1, CS2 and CS3*

           Optimum       CS1            CS2            CS3
Mode I
Trxr^a     200           200            200
FC4^b      263.1         263.1          263.1          Fails
J^c        1.726         1.726          1.726
Mode II
Trxr^a     200.0         200.0          200.0          199.9
Qfur^d     1294 (max)    1294 (0%)      1294 (0%)      1203 (7%)
Vreb1^e    2522 (max)    2486 (1.4%)    2486 (1.4%)    2496 (1%)
Vreb2^e    851.8 (max)   845.5 (0.7%)   845.5 (0.7%)   820.0 (3.7%)
J^f        334.5         334.1 (0.1%)   334.1 (0.1%)   328.0 (1.4%)

*: Values in parentheses denote the % back-off. a: °C; b: kmol/h; c: x10^6 $/yr; d: kW; e: kmol/h; f: FC4, kmol/h
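The quoted revenue loss follows directly from the Mode II throughput entries of Table 4; assuming 8760 operating hours per year (our assumption, not stated in the text), the arithmetic is:

$$(334.1 - 328.0)\ \text{kmol/h} \times \$20/\text{kmol} \times 8760\ \text{h/yr} \approx \$1.07 \times 10^{6}\ \text{per year}$$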
The TPMs for CS1, CS2 and CS3 are, respectively, QfurSP, D2SP and FC4SP. Since TrxrMAX and QfurMAX are the economically dominant Mode II active constraints, and as these are located in the reaction section, minimizing the severity of the transients (back-off) in both constraint variables requires minimizing the transient variability propagated into the reaction section. This is accomplished in CS1 and CS2 with the TPM located in the reaction section, eliminating reactor feed flow variability. The TPM location thus plays a crucial role in the economic operation of a process. It should be located close to, and where possible at, the economically dominant active constraint(s).
4. Conclusion This plantwide control study of a C4 isomerization process shows that the regulatory control structure can significantly affect process economic performance by determining the severity of the transients in the economically dominant active constraints. To minimize the back-off from the constraint limit and hence the economic loss, the regulatory layer TPM should be located as close as possible to the dominant constraint(s). A top-down bottom-up approach, where the TPM is first chosen based on the economically dominant active constraint(s) (top-down part) followed by the synthesis of regulatory control loops (bottom-up part) appears the most appropriate systematic methodology for plantwide control system design.
References
1. W.L. Luyben, B.D. Tyreus, M.L. Luyben, 1999, Isomerization process, Plantwide Process Control, McGraw Hill: New York, 273-293.
2. M.L. Luyben, B.D. Tyreus, W.L. Luyben, 1997, Plantwide Control Design Procedure, AIChE J., 43, 12, 3161-3174.
3. R. Kanodia, N. Kaistha, 2010, Plantwide control for throughput maximization: A case study, Ind. Eng. Chem. Res., 49, 210-221.
4. R. Jagtap, N. Kaistha, S. Skogestad, 2010, Plantwide control for economic operation of a recycle process, Computer Aided Chemical Engineering, 28, C, 499-504.
5. P.A. Bahri, J.A. Bandoni, G.W. Barton, J.A. Romagnoli, 1995, Back-off calculations in optimizing control: A dynamic approach, Comp. Chem. Engg., 19, S699-S708.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Application of Graphic Processing Unit in Model Predictive Control Arash Sadrieh, Parisa A. Bahri School of Engineering and Energy, Murdoch University, Western Australia, 6150
Abstract
This study seeks to pave the way for the implementation of model predictive control using a Graphic Processing Unit (GPU). The GPU has been adapted and used as a real-time co-processor for Nonlinear Model Predictive Control (NMPC) algorithms, providing a means to improve the computational performance of the MPC algorithm in an economic manner. In this approach, a parallel version of the Nelder-Mead simplex algorithm was used to solve the MPC optimization problem. In order to show the effectiveness of the proposed approach, the implementation was applied to model predictive control of a crystallizer unit operation. The results show a considerable improvement in computational performance compared to a standard CPU-based implementation.
Keywords: Nonlinear Model Predictive Control, GPGPU, Nelder-Mead.
1. Introduction
Model Predictive Control (MPC) is a set of computer control algorithms which use a process model to predict the future response of a process. MPC algorithms are widely used in different applications such as the chemical, food processing, automotive, aerospace and metallurgical industries. A demanding feature of most MPC algorithms is that an optimization problem must be solved online (Cannon, 2004). In process systems MPC applications, the effect of computational delay is a major obstacle to using these algorithms in many industrial processes, specifically for processes described by nonlinear mathematical models with many equations and variables (Diehl et al., 2002). GPUs are relatively cheap components found in most new PCs and are traditionally employed for 3D graphics rendering algorithms. The hardware architecture of the GPU chip is designed to facilitate the execution of a very high number of threads in parallel. This means that a similar piece of software is executed independently on a very large amount of data. 3D rendering algorithms exploit this feature to obtain real-time computational performance. Similarly, a Nonlinear Model Predictive Control (NMPC) algorithm has the potential to be implemented as a data parallel algorithm, due to the fact that it requires a single algorithm (the optimization algorithm) to be executed on a very large number of data (time steps). In this paper, the power of GPU chips is harnessed to address the specific computational requirements of NMPC problems in process systems. The structure of the paper is organized as follows: firstly, general background information on the GPU architecture is provided. The optimization problem arising from NMPC is then explained and the parallel Nelder-Mead (NM) algorithm is described as an optimization algorithm that can be implemented on the GPU. Subsequently, the implementation details of the algorithm are explained and, finally, the performance results achieved are compared against a standard CPU-based implementation.
2. Background
The idea of using GPUs for general computation has only recently gained attention, with the introduction of fully programmable graphical chips with high memory bandwidth and high computational horsepower (Owens et al., 2007). It has been demonstrated that when GPUs are applied to general purpose computation, they provide speedups of orders of magnitude compared to optimized CPU implementations (Owens et al., 2008). To put the speed of GPUs into context, take the NVIDIA Fermi architecture that has been introduced recently (Wasson, 2009). This processor has 512 computing cores and can sustain a peak rate of more than 500 Giga Floating point Operations Per Second (GFLOPS, a computing performance measurement unit), compared to the fastest CPU at the time, which operates at a peak rate of less than 20 GFLOPS. Data parallel algorithms are defined as a set of algorithms where a similar algorithm should be executed independently on a very large amount of data. The GPU has a unique hardware architecture that is suitable for running data parallel algorithms. A GPU chip principally consists of a set of Single Instruction, Multiple Data (SIMD) multiprocessors and a memory unit that is accessible from all the multiprocessors. Inside a multiprocessor, there are several processors and a shared memory to be used internally between the processors. Every multiprocessor has a single instruction unit and therefore a single program code can be executed on each multiprocessor. A processor has a set of registers, and these registers are locally accessed by a processor's active thread to store/retrieve data. However, registers are not shared between different processors. Constant cache and texture cache are applied as components to reduce time-expensive memory-access operations. To implement data parallel algorithms on the GPU, this chip can be regarded as a co-processor that cooperates with a main processor (i.e., the CPU). The data parallel algorithm is expressed in a specific function form, called a kernel, and the device (i.e., the GPU) simultaneously executes a batch of kernel instances (threads), organized in a hierarchical grid structure. A grid contains a batch of similar thread blocks, where each thread block describes a group of threads that can cooperate with each other efficiently through the fast shared memories available on the multiprocessors. Kernels are implemented using different programming environments such as the NVIDIA Compute Unified Device Architecture (CUDA) or OpenCL. In this study, the CUDA platform was applied in order to implement the GPU-based NMPC. The CUDA platform is a parallel programming architecture that extends the C high-level programming language and is applied to implement data parallel algorithms on the GPU (Nvidia, 2007).
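As a minimal illustration of the kernel/block/grid execution model described above (not code from this work), the following Python sketch uses the numba.cuda bindings, assuming the numba package and a CUDA-capable device, to launch one thread per data element:

```python
import numpy as np
from numba import cuda

@cuda.jit
def scale_add(a, x, y, out):
    i = cuda.grid(1)                 # global thread index within the launch grid
    if i < out.size:                 # guard: the grid may be larger than the data
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256              # threads in one block share fast on-chip memory
blocks = (n + threads_per_block - 1) // threads_per_block
scale_add[blocks, threads_per_block](2.0, x, y, out)   # launch a grid of blocks
```

The same element-wise structure is what makes the model-evaluation and optimization steps of NMPC amenable to this hardware.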
3. Problem Formulation and Optimizer Algorithm
A discrete time system is assumed to be described by the following state equations:

$$x_{k+1} = f(x_k, u_k) \quad (1)$$

and

$$y_k = h(x_k) \quad (2)$$

where $x_k$, $u_k$, $y_k$ denote the state vector, input vector and system output vector at stage $k$, respectively, and $f$ and $h$ are nonlinear functions. The goal of NMPC is defined by minimizing a cost function $J$, expressed by (Henson, 1998):

$$J = \phi\big(y_{k+P|k}\big) + \sum_{j=0}^{P-1} L\big(y_{k+j|k}, u_{k+j|k}\big) \quad (3)$$

over the decision vector

$$A = \big[u_{k|k}, u_{k+1|k}, \ldots, u_{k+M-1|k}\big] \quad (4)$$

subject to:
$$u_{min} < u_{k+j|k} < u_{max}, \;\; 0 \le j \le M-1 \quad \text{and} \quad y_{min} < y_{k+i|k} < y_{max}, \;\; 1 \le i \le P \quad (5)$$
where $y_{k+j|k}$ is the predicted output $y_{k+j}$ based on the information available at time $k$ and, similarly, $u_{k+j|k}$ is the control input vector $u_{k+j}$ at time $k$. The constants $M$ and $P$ denote the control and prediction horizons, respectively, and $\phi$, $L$ are nonlinear functions. While numerous numerical approaches can be applied to solve optimization problem (5), in this paper the NM algorithm was selected due to the fact that it can be implemented on parallel architectures. In this algorithm, constraints are handled by introducing penalty functions. Consequently, in problem (5) the constraint violation penalty functions $P_1$ and $P_2$ are added to the objective function:

$$J^* = \phi\big(y_{k+P|k}\big) + \sum_{j=0}^{P-1} L\big(y_{k+j|k}, u_{k+j|k}\big) + \sum_{i=1}^{P} P_1\big(y_{k+i|k}\big) + \sum_{j=0}^{M-1} P_2\big(u_{k+j|k}\big) \quad (6)$$
A simplified parallel version of the algorithm (Lee and Wiswall, 2007) is provided below:
1. Evaluate the objective function at the $I+1$ initial points $A_0, A_1, \ldots, A_I$
2. Reorder $A_0, A_1, \ldots, A_I$ so that $J^*(A_0) < J^*(A_1) < \cdots < J^*(A_I)$
3. Set $\bar{M} = \frac{1}{I-N+1} \sum_{i=0}^{I-N} A_i$
4. For every $i$ where $(I-N+1) \le i \le I$, run steps 4.1-4.4 in parallel:
   4.1. Set $A_i^R = \bar{M} + \alpha(\bar{M} - A_i)$
   4.2. If $J^*(A_i^R) < J^*(A_0)$ then run steps 4.2.1-4.2.2:
      4.2.1. Set $A_i^E = A_i^R + \gamma(A_i^R - \bar{M})$
      4.2.2. If $J^*(A_i^E) < J^*(A_0)$ then set $A_i = A_i^E$, otherwise set $A_i = A_i^R$
   4.3. If $J^*(A_i^R) \ge J^*(A_0)$ and $J^*(A_i^R) < J^*(A_{i-1})$ then set $A_i = A_i^R$
   4.4. If $J^*(A_i^R) \ge J^*(A_{i-1})$ then run steps 4.4.1-4.4.3:
      4.4.1. If $J^*(A_i^R) < J^*(A_i)$ then set $\tilde{A}_i = A_i^R$, otherwise set $\tilde{A}_i = A_i$
      4.4.2. Set $A_i^C = \beta(\bar{M} + \tilde{A}_i)$
      4.4.3. If $J^*(A_i^C) < J^*(\tilde{A}_i)$ then set $A_i = A_i^C$
5. If the solution has converged then terminate, otherwise go to step (2)

The algorithm starts by evaluating $J^*$ at the $I+1$ initial points and then reordering the points so that the first point (i.e., $A_0$) has the least objective value (i.e., $J^*(A_0)$). Afterwards, the $N$ worst points of the sequence created in step (2) are selected and each point is assigned to a parallel processor, where the parallel processor is responsible for finding a new point with a lower objective value. The algorithm continues iterating until the solution converges. The constant parameters $\alpha$, $\gamma$ and $\beta$ are predefined in the algorithm and $I$ is specified based on the optimization problem.
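A compact Python transcription of one sweep of steps 2-4 is given below for exposition. It is a serial sketch: on the GPU, each index i in step 4 would be handled by an independent thread, so the loop uses the pre-sweep objective values, and J plays the role of the penalized objective J*.

```python
import numpy as np

def parallel_nm_sweep(A, J, N, alpha=1.0, gamma=2.0, beta=0.5):
    """One sweep of the simplified parallel Nelder-Mead: the N worst of the
    I+1 points are reflected/expanded/contracted against the centroid of the
    retained points, each update independent of the others."""
    A = A[np.argsort([J(a) for a in A])]           # step 2: A[0] is the best point
    I = len(A) - 1
    M = A[: I - N + 1].mean(axis=0)                # step 3: centroid of kept points
    Jpre = [J(a) for a in A]                       # objective values before the sweep
    for i in range(I - N + 1, I + 1):              # step 4 (one thread per i on GPU)
        R = M + alpha * (M - A[i])                 # 4.1 reflection
        JR = J(R)
        if JR < Jpre[0]:                           # 4.2 try expansion
            E = R + gamma * (R - M)
            A[i] = E if J(E) < Jpre[0] else R
        elif JR < Jpre[i - 1]:                     # 4.3 accept the reflected point
            A[i] = R
        else:                                      # 4.4 contraction
            T = R if JR < Jpre[i] else A[i].copy()
            C = beta * (M + T)
            if J(C) < J(T):
                A[i] = C
    return A

# Usage on a toy quadratic: 8 points, update the 4 worst per sweep.
A = parallel_nm_sweep(np.random.rand(8, 3), lambda a: ((a - 0.5) ** 2).sum(), N=4)
```

With beta = 0.5, the contraction point beta(M + A~) is simply the midpoint between the centroid and the better of the reflected and current points.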
4. Implementation
To implement the GPU-based NMPC controller, the process model equations are developed in the Aspen Custom Modeler (ACM) equation-oriented tool. The objective function of the NMPC is expressed as equations in the ACM model, and the control handles and the control and prediction horizons are specified in ACM. At each time step, the current plant measurements are set into the associated state variables in the model. The NMPC optimization is then performed by running a dynamic optimization of the objective function over the horizons, and the optimization results for the next control stage are subsequently sent back to the plant. This process is repeated for every time step. To perform the NMPC optimization, a special GPU-based optimizer was developed and
integrated with ACM. In the GPU optimizer, two levels of parallelization are considered: firstly, the equations represented in the model are evaluated simultaneously during each model evaluation and, secondly, the model is evaluated concurrently for multiple evaluation points (as stated in step (4) of the optimization algorithm). To realize the first level of parallelization (i.e., parallel equation evaluation within a model), a software utility was developed that receives the process model in the ACM format and generates equivalent CUDA kernels for the model equations. Consequently, when a model evaluation is necessary, the kernels representing the model equations are concurrently executed on the GPU. The code generator utility also has the ability to find equations with a similar structure and group them into a single kernel. This improves the GPU performance throughput by reducing the number of kernels. Such equation grouping is normally beneficial in process models, which often have multiple equations with similar structures; examples include the equations resulting from space discretization or from the presence of several similar unit operations in a flowsheet. The generated GPU-based model evaluator can be applied in any other optimisation algorithm. The second level of parallelization (i.e., concurrent model evaluation at multiple points) is implemented by launching N instances of each equation kernel, where N is the number of evaluation points specified in the parallel NM algorithm.
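The payoff of grouping structurally similar equations is that a single kernel can evaluate an entire equation family over all evaluation points at once. A minimal sketch of the idea (plain numpy vectorization standing in for a generated CUDA kernel; the residual family is hypothetical):

```python
import numpy as np

# A family of structurally identical residuals, e.g. from space discretization:
#   r_m = x[m+1] - x[m] - dt * k[m] * x[m],   m = 0..M-2
# One "kernel" evaluates the whole family for all N evaluation points at once.
def residuals_batch(X, k, dt):
    """X: (N, M) array, one row per NM evaluation point; returns (N, M-1) residuals."""
    return X[:, 1:] - X[:, :-1] - dt * k[None, :-1] * X[:, :-1]

N, M, dt = 8, 100, 0.1                      # 8 simplex points, 100 state nodes
X = np.random.rand(N, M)
k = np.full(M, 0.05)
R = residuals_batch(X, k, dt)               # both levels of parallelism in one call
```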
5. Numerical results and discussion
A crystallizer model describing a Continuous Mixed Suspension Mixed Product Removal (CMSMPR) unit operation was selected (Pantelides and Oh, 1996). In this unit operation, potassium sulphate crystals are produced from aqueous solutions. It was assumed that crystal breakage is not important. The first-principles dynamic model for this unit operation was taken from the ACM examples. This unit operation was selected due to the fact that it contains a relatively high number of equations and has different types of equations, including integral, partial differential and algebraic equations. The controller is designed to keep the Mean value of Crystal Numbers (MCN) as close as possible to a reference trajectory by adjusting the crystallizer temperature (10 ≤ T(°C) ≤ 80) at a sampling time of 30 minutes. In this controller, the control and prediction horizons are 6 and 10 steps, respectively. To benchmark the algorithm, a PC was used with an Intel 2.98 GHz Core(TM) 2 E7500 CPU and an NVIDIA FERMI GTX 480 GPU. To confirm the correctness of the approach, the controller was tested in the presence of a 10% disturbance in the feed concentration at time t=0. As shown in Figure 1, the controller adjusts the process output (i.e., the MCN) close to the reference trajectory. The scalability of the approach was measured by changing the number of equations in the model through the discretization spacing preference. As a result, three levels of accuracy for the model are defined based on the number of equations. The NMPC computation time was measured for each accuracy level and the results are presented in Table 1. It can be seen that this approach always outperforms the standard NM implementation, particularly when the number of equations is increased.

Table 1: Computational Time Comparison of the GPU-Based Approach and the Standard Approach

Accuracy Level              Algorithm    Time (m:s)
Level 0 (5014 equations)    GPU          14:29
                            Standard     30:08
Level 1 (9917 equations)    GPU          18:47
                            Standard     47:04
Level 2 (19817 equations)   GPU          24:59
                            Standard     68:05
Figure 1: Dynamic simulation results: the controller at accuracy level 0 is tested with a 10% disturbance in the feed concentration, where the temperature is used to control the MCN close to the reference trajectory. The control and prediction horizons are 6 and 10 steps.
For example, for the model that contains 19,817 equations our approach runs 2.8 times faster than the standard approach, which satisfies the response time constraint of 30 minutes. In these tests, the results were obtained using a single GPU card. Considering the good scalability of the approach, the current computational time could be improved by adding more GPUs to the architecture. The performance of typical sequential CPU-based algorithms, however, is limited by the maximum speed of a single CPU chip, and adding more CPUs will not affect the overall performance.
6. Conclusion
A new hardware platform was proposed for the implementation of model predictive control of nonlinear process systems. Numerical results show a considerable improvement in computational performance. Considering these results, the growth rate in computational power, and the price and availability of GPU chips, it can be concluded that these chips are very attractive co-processors for industrial NMPC applications. Future work includes applying the GPU-based model evaluator in other optimisation algorithms and benchmarking the current approach against sequential versions of the aforementioned approaches.
References
CANNON, M. (2004) Efficient nonlinear model predictive control algorithms. Annual Reviews in Control, 28, 229-237.
DIEHL, M., BOCK, H. G., SCHLÖDER, J. P., FINDEISEN, R., NAGY, Z. & ALLGÖWER, F. (2002) Real-time optimization and nonlinear model predictive control of processes governed by differential-algebraic equations. Journal of Process Control, 12, 577-585.
HENSON, M. A. (1998) Nonlinear model predictive control: current status and future directions. Computers and Chemical Engineering, 23, 187-202.
LEE, D. & WISWALL, M. (2007) A parallel implementation of the simplex function minimization routine. Computational Economics, 30, 171-187.
NVIDIA, C. (2007) Compute Unified Device Architecture Programming Guide. NVIDIA: Santa Clara, CA.
OWENS, J. D., HOUSTON, M., LUEBKE, D., GREEN, S., STONE, J. E. & PHILLIPS, J. C. (2008) GPU computing. Proceedings of the IEEE, 96, 879-899.
OWENS, J. D., LUEBKE, D., GOVINDARAJU, N., HARRIS, M., KRÜGER, J., LEFOHN, A. E. & PURCELL, T. J. (2007) A Survey of General Purpose Computation on Graphics Hardware. Wiley Online Library.
PANTELIDES, C. C. & OH, M. (1996) Process modelling tools and their application to particulate processes. Powder Technology, 87, 13-20.
WASSON, S. (2009) Nvidia's 'Fermi' GPU architecture revealed.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Statistical Process Control of Multivariate Systems with Autocorrelation Tiago J. Rato, Marco S. Reis, CIEPQPF, Department of Chemical Engineering, University of Coimbra, Rua Sílvio Lima, 3030-790, Coimbra, Portugal
Abstract
Current industrial processes are characterized by encompassing a large number of interdependent variables, which very often exhibit autocorrelated behavior due to the dynamic nature of the phenomena involved, combined with the high sampling rates of modern data acquisition systems. Multivariate statistical process control charts have been developed to handle the cross-correlation issue, such as the Hotelling's T2, MEWMA and MCUSUM control charts, but they are not able to properly handle the presence of autocorrelation in the data. In order to address both problems simultaneously, alternative procedures were developed, namely by adapting the control limits, using residuals from time series modeling and applying data transformation techniques, some of which will be addressed in this paper, along with others we now propose. The proposed monitoring methods use a combination of Dynamic PCA (DPCA), ARMA models and missing data estimation methods, allowing for the simultaneous reduction of data dimensionality while capturing its dynamic behavior, therefore also handling the autocorrelation effects. The results obtained show that the proposed methodologies based upon missing data estimation tend to present better performance, constituting good alternatives to methodologies currently in use.
Keywords: Dynamic multivariate statistical process control; Principal component analysis; Autoregressive moving-average models; Missing data.
1. Introduction
The objective of statistical process control (SPC) is to monitor the stability and performance of a process over time, in order to verify whether it remains within a state of "statistical control" (Kourti and MacGregor, 1995). In order to accomplish this goal, traditional SPC charts (Shewhart, CUSUM and EWMA) are often used for monitoring key product quality variables in a univariate way (Montgomery, 2005). However, with the development of processes and instrumentation, the need to properly monitor many correlated variables led to the development of multivariate control charts such as the Hotelling's T2 (Hotelling, 1931), MEWMA (Lowry et al., 1992a) and MCUSUM (Lowry et al., 1992b). For larger systems, even these statistics present problems. For instance, the inversion of the covariance matrix in the Hotelling's T2 statistic may run into numerical instability problems for highly correlated sets of variables, or may even be impossible to perform in case the matrix becomes rank deficient; therefore, new methodologies based on latent variable techniques, such as Principal Components Analysis (PCA) (Jackson, 1991; Jolliffe, 2002), were developed to address these limitations. The statistics used in the latent variable frameworks, namely PCA or partial least squares, PLS (Geladi and Kowalski, 1986; Martens and Naes, 1989; Wold et al., 2001),
are typically based on the model scores, to which a Hotelling's T2 statistic is applied. This statistic is usually complemented with a residual statistic, Q (also known as the squared prediction error, SPE). However, all these methods assume that variables are independent along time, a hypothesis that is often not met in practice, especially with the high sampling rates currently achieved with modern instrumentation. In order to address this issue, Ku et al. proposed an SPC procedure based on dynamic principal component analysis (DPCA), which is an extended version of PCA that includes time-lagged variables in order to accommodate, and tacitly model, the dynamic behavior of variables within the same PCA model (Ku et al., 1995). Unfortunately, one can easily verify that the direct implementation of such a method still leads to autocorrelated statistics, which raises problems in its proper implementation. Therefore, in order to better handle this issue, alternative approaches must be adopted, such as: time-series modeling (Harris and Ross, 1991; Montgomery and Mastrangelo, 1991), control limit adjustment/correction (Vermaat et al., 2008), variable transformation (Bakshi, 1998; Reis et al., 2008) and the use of non-overlapping moving windows. To address all these issues simultaneously, we present a set of candidate methodologies, some of them new. The new proposed methodologies use a combination of DPCA, ARMA models and missing data estimation methods, allowing for the simultaneous reduction of the data dimensionality (correlation structure) while capturing its dynamic behavior, therefore also handling the autocorrelation effects. The rest of this paper is organized as follows. In the following two sections, we briefly present the techniques studied and show the results obtained for the systems tested. Finally, we conclude with a summary of the contributions presented in this paper.
2. Methods
In this paper we analyze the performance of several multivariate SPC methods. These methods are based on the Hotelling's T2 and Q statistics and may make use of time series (TS) models and missing data (MD) estimation methods. As the total number of statistics under test is large, we will only present in this paper their general form, according to Table 2. All these statistics are applied to the scores obtained through the use of PCA, DPCA or PLS models (in the case of PLS, one is referring to the X-scores). Regarding approaches based on DPCA models, two different methods were tested to determine the number of lags (l) to be used for the variables, as indicated in Table 1. The LS1 method is the one proposed by Ku et al. (1995) and is based on the number of linear relations needed to describe the system. On the other hand, the LS2 method estimates the number of lags for each variable based on a succession of singular value decompositions and parallel analyses of an optimization function based on the smallest singular values for each decomposition.

Table 1. Definitions of the lag selection methods.

Designation    Lag selection method
LS1            Proposed by Ku et al. (1995).
LS2            New proposed method.
As an example of one of the statistics analyzed, we present the DPCA-LS2-MD-S3 statistic. This statistic has the form of S3 (as it makes use of the scores estimated by missing data methods, MD) and is based on a DPCA model where the number of lags was estimated by the LS2 method.
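To make the DPCA construction concrete, the following numpy sketch (ours, not the authors' code) assembles the time-lagged data matrix on which ordinary PCA is then performed, using per-variable lag counts such as those returned by LS1 or LS2:

```python
import numpy as np

def lagged_matrix(X, lags):
    """Stack each column of X with its 1..l_j time-shifted copies.
    X: (n, m) data matrix; lags: length-m list of lag counts l_j."""
    n = X.shape[0]
    lmax = max(lags)
    cols = []
    for j, lj in enumerate(lags):
        for lag in range(lj + 1):            # variable j and its lagged copies
            cols.append(X[lmax - lag: n - lag, j])
    return np.column_stack(cols)

X = np.random.randn(3000, 4)                 # e.g. [xD, xB, FR, FS] reference data
Xd = lagged_matrix(X, [2, 2, 9, 4])          # the lag structure found in Section 3
# Ordinary PCA on Xd then yields the DPCA scores used by the monitoring statistics.
```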
Table 2. Definition of the statistics used, according to the complementary subspaces they relate to.

Original space (residual statistics):
R1: squared prediction error (Q) for the reconstructed data with the observed scores, $(x - Pt)^T (x - Pt)$
R2: Hotelling's T2 for the reconstructed data obtained with the estimated scores, $(x - P\hat{t})^T S_{x - P\hat{t}}^{-1} (x - P\hat{t})$

PCA subspace:
S1: Hotelling's T2 for the observed scores, $t^T S_t^{-1} t$
S2: Hotelling's T2 for the observed and estimated scores, $[t;\hat{t}]^T S_{t,\hat{t}}^{-1} [t;\hat{t}]$
S3: Hotelling's T2 for the residual between the observed and estimated scores, $(t - \hat{t})^T S_{t - \hat{t}}^{-1} (t - \hat{t})$
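Once the scores and their covariance are available, the score-space statistics of Table 2 reduce to a few lines. A sketch for S3 (numpy; the estimated scores below are represented by a placeholder, standing in for the missing-data regression of future scores on past ones):

```python
import numpy as np

def hotelling_t2(v, S_inv):
    """Generic quadratic-form statistic v' S^{-1} v used by S1-S3 and R2."""
    return float(v @ S_inv @ v)

# Reference data: observed scores T and their MD-based estimates T_hat.
T = np.random.randn(3000, 5)
T_hat = T + 0.1 * np.random.randn(3000, 5)     # placeholder score estimates
E = T - T_hat                                  # score residuals used by S3
S_inv = np.linalg.inv(np.cov(E, rowvar=False))

# On-line monitoring of a new observation's score residual
# (here the last reference residual stands in for a new sample):
s3 = hotelling_t2(E[-1], S_inv)                # compared against its control limit
```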
3. Results
In this section we present a summary of the results obtained by applying the monitoring statistics presented in Table 2 to a case study that consists of monitoring a simulated distillation column.
3.1. Case study: Wood & Berry distillation column
Wood and Berry (Wood and Berry, 1973) presented a linear model approximation for the dynamics of a binary distillation column separating methanol from water, in which the distillate (xD) and residual (xB) methanol weight fractions are expressed as functions of the reflux flow rate (FR) and the reboiler steam flow rate (FS). The compositions of the top and bottom products, expressed in weight % of methanol, are the output variables. The reflux and the reboiler steam flow rates are the inputs (expressed in lb/min); time units are in minutes. For conducting the simulations, FR and FS are considered to be normally distributed random variables with zero mean (variables are expressed in deviation terms) and unit variance; xD and xB are computed according to Eq. (1), with the addition of noise (with a signal-to-noise ratio of about 10 dB) through the use of the transfer functions related to the feed flow rate and feed composition, given by (Lakshminarayanan et al., 1997):
$$\begin{bmatrix} x_D(s) \\ x_B(s) \end{bmatrix} = \begin{bmatrix} \dfrac{12.8\,e^{-s}}{16.7s+1} & \dfrac{-18.9\,e^{-3s}}{21s+1} \\ \dfrac{6.6\,e^{-7s}}{10.9s+1} & \dfrac{-19.4\,e^{-3s}}{14.4s+1} \end{bmatrix} \begin{bmatrix} F_R(s) \\ F_S(s) \end{bmatrix} \quad (1)$$
The observation measurements vector was defined as x = [ xD xB FR FS ]T. In order to construct the latent variables models, 3000 observations were collected under normal operation conditions (Xref). The data matrix Xref was then used to estimate the number of lags (needed for the construction of the DPCA models). From this analysis the number
of lags obtained through the use of the LS1 approach was 2 for all variables. By using the LS2 method one gets l = [2 2 9 4], that is, 2 lags for xD and xB, 9 lags for FR and 4 lags for FS. The simulation model was run for a set of perturbations in the sensor measurements and the corresponding Average Run Length (ARL) for each perturbation was determined. The upper control limits (UCL) for all statistics were set by trial and error so that the in-control Average Run Length (ARL0) was 370. For each perturbation, 3000 datasets were generated, leading to 3000 run lengths, from which we computed the ARL values for each statistic. The ARL values, along with the associated 95% confidence intervals (obtained through bootstrap), for a step perturbation in the mean of the first sensor with a magnitude of k standard deviations, are presented in Fig. 1. These results show that there is no significant difference between the traditional static PCA and the dynamic PCA in this case (Fig. 1(a)), even with the use of the LS2 method to estimate the number of lags. In fact, DPCA-LS1-0-R1 (which uses the Ku et al. approach) gives better results than DPCA-LS2-0-R1. Furthermore, even the statistics that incorporate a dynamic modeling component (such as DPCA) still present some autocorrelation. This issue is mitigated by the use of an implicit prediction methodology, namely through an MD approach to estimate future values. The results obtained show that the application of such an approach not only reduces the statistics' autocorrelation, but also improves the control chart performance (Fig. 1). In this analysis, the missing data based statistics (especially DPCA-LS2-MD-R2) present the best performance and have the weakest final autocorrelations.
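The ARL estimates and their confidence intervals are straightforward to reproduce from the simulated run lengths; a sketch using a percentile bootstrap (the in-control geometric run lengths below are only an illustrative input):

```python
import numpy as np

def arl_with_ci(run_lengths, n_boot=2000, alpha=0.05, seed=0):
    """Mean run length plus a percentile-bootstrap 95% confidence interval."""
    rl = np.asarray(run_lengths, dtype=float)
    rng = np.random.default_rng(seed)
    boots = np.array([rng.choice(rl, size=rl.size, replace=True).mean()
                      for _ in range(n_boot)])
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return rl.mean(), (lo, hi)

arl, ci = arl_with_ci(np.random.geometric(1 / 370, size=3000))  # in-control example
```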
Figure 1. ARL for the tested methodologies: 1) static PCA model (PCA-0-S1 and PCA-0-R1); 2) DPCA model of Ku et al. (1995) (DPCA-LS1-0-S1 and DPCA-LS1-0-R1); 3) application of missing data estimation to the DPCA model of Ku et al. (1995) (DPCA-LS1-MD-S3 and DPCA-LS1-MD-R2); 4) application of missing data estimation to the DPCA model with the new lag selection method (DPCA-LS2-MD-S3 and DPCA-LS2-MD-R2).
4. Discussion and Conclusions
This study was also performed for the multivariate AR(l) process presented by Ku et al. (1995) and for a Continuous Stirred-Tank Reactor (CSTR) system with a heating jacket. In all these studies, the DPCA-LS2-MD-R2 statistic turned out to present consistently superior performance. This statistic is new, and belongs to a class of statistics that makes use of missing data methods to predict the future scores of a DPCA model. This class of statistics has proven to be a good alternative to traditional
methodologies, as they present a better performance and lower autocorrelation. However, we would like to point out that such statistics do require a suitable method to estimate the number of lags needed to construct the DPCA model, such as the one we have also proposed. We believe that the new statistics, based on MD estimation, are eligible for future applications as alternatives to the current ones based strictly on PCA and DPCA.
5. Acknowledgements Tiago J. Rato acknowledges the Portuguese Foundation for Science and Technology for his PhD grant (grant SFRH/BD/65794/2009). Marco S. Reis also acknowledges financial support through project PTDC/EQU-ESI/108374/2008 co-financed by the Portuguese FCT and European Union’s FEDER through “Eixo I do Programa Operacional Factores de Competitividade (POFC)” of QREN (with ref. FCOMP-010124-FEDER-010397).
References
B. R. Bakshi, 1998, Multiscale PCA with Application to Multivariate Statistical Process Control, AIChE Journal, 44, 7, 1596-1610.
P. Geladi and B. R. Kowalski, 1986, Partial Least-Squares Regression: a Tutorial, Analytica Chimica Acta, 185, 1-17.
T. J. Harris and W. H. Ross, 1991, Statistical Process Control Procedures for Correlated Observations, The Canadian Journal of Chemical Engineering, 69, 48-57.
H. Hotelling, 1931, The Generalization of Student's Ratio, The Annals of Mathematical Statistics, 2, 3, 360-378.
J. E. Jackson, 1991, A User's Guide to Principal Components, Wiley, New York.
I. T. Jolliffe, 2002, Principal Component Analysis, Springer, New York.
T. Kourti and J. F. MacGregor, 1995, Process analysis, monitoring and diagnosis, using multivariate projection methods, Chemometrics and Intelligent Laboratory Systems, 28, 3-21.
W. Ku, R. H. Storer and C. Georgakis, 1995, Disturbance detection and isolation by dynamic principal component analysis, Chemometrics and Intelligent Laboratory Systems, 30, 179-196.
S. Lakshminarayanan, S. L. Shah and K. Nandakumar, 1997, Modeling and Control of Multivariable Processes: Dynamic PLS Approach, AIChE Journal, 43, 9, 2307-2322.
C. A. Lowry, W. H. Woodal, C. W. Champ and C. E. Rigdon, 1992a, A Multivariate Exponentially Weighted Moving Average Control Chart, Technometrics, 34, 46-53.
C. A. Lowry, W. H. Woodall, C. W. Champ and S. E. Rigdon, 1992b, A Multivariate Exponentially Weighted Moving Average Control Chart, Technometrics, 34, 1, 46-53.
H. Martens and T. Naes, 1989, Multivariate Calibration, Wiley, Chichester.
D. C. Montgomery, 2005, Introduction to Statistical Quality Control, Wiley.
D. C. Montgomery and C. M. Mastrangelo, 1991, Some Statistical Process Control Methods for Autocorrelated Data, Journal of Quality Technology, 23, 3, 179-193.
M. S. Reis, B. R. Bakshi and P. M. Saraiva, 2008, Multiscale statistical process control using wavelet packets, AIChE Journal, 54, 9, 2366-2378.
M. B. Vermaat, R. J. M. M. Does and S. Bisgaard, 2008, EWMA Control Chart Limits for First- and Second-Order Autoregressive Processes, Quality and Reliability Engineering International, 24, 573-584.
S. Wold, M. Sjöström and L. Eriksson, 2001, PLS-Regression: A Basic Tool of Chemometrics, Chemometrics and Intelligent Laboratory Systems, 58, 109-130.
R. K. Wood and M. W. Berry, 1973, Terminal composition control of a binary distillation column, Chemical Engineering Science, 28, 9, 1707-1717.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Implementation of model predictive controller in a pharmaceutical development plant Stéphane Hattou ,a Marie-Véronique Le Lann,b,c Karlheinz Preuss,d Boris Roussel,a Michel Cabassud e,c a
a SANOFI-AVENTIS, 371, rue du Pr Joseph Blayac, 34184 Montpellier cedex 4, France
b CNRS; LAAS; 7, avenue du Colonel Roche; F-31077 Toulouse, France
c Université de Toulouse; UPS, INSA, INP, ISAE; LAAS, LGC; F-31077 Toulouse, France
d engineo GmbH, Ginsheimer Str. 1, 65462 Gustavsburg (Mainz), Germany
e LGC, BP 84234, Campus INP-ENSIACET, 4 allée Emile Monso, 31030 Toulouse cedex 4, France
Abstract
Predictive control has spread to various domains such as the refinery, chemical and metallurgical industries. Nevertheless, in the pharmaceutical industry it still remains relatively exceptional, since two particularities are clearly attached to this specific domain: the use of batch processes and the necessity to satisfy strict validation procedures. In this context, a predictive controller, the Model Gradient Predictive Controller (MGPC), has been developed and tested in real-time application on various chemical reactors in the chemical development plant (PILOT and KILOLAB, Sanofi-Aventis, Montpellier, France). This plant is devoted to investigating new reactions before passing them to industrial day-to-day production. In such a context, the same apparatus is used for carrying out different operations, such as chemical reactions (changing several times a week) and crystallizations with highly nonlinear temperature set-point profiles (such as cubic profiles).
Keywords: Model Predictive Controller, Batch reactor, Pharmaceutical Industry.
1. Introduction
The heart of a drug manufacturing process is the batch or fed-batch reactor, which is still widely used in the fine chemicals and pharmaceutical industries. It is often characterized as flexible and multipurpose equipment, meaning that the same apparatus is used to carry out different reactions and operations under various operating conditions, involving the chaining of sequences. Nevertheless, at the same time, these apparatus are well known to be potentially one of the major causes of poor operation reproducibility and of severe damage, which can go as far as run-away problems. To minimize these risks it is necessary to implement efficient automation and supervisory control, but all the developed strategies also have to match the flexible and multipurpose character of the equipment. Given the lack of on-line sensors for concentration measurement, the control of a batch reactor still remains a problem of temperature control. Furthermore, since most batch operations are successions of sequences (pre-heating, reactions, cooling followed by a crystallization), the batch reactor has to be fitted out with a flexible heating-cooling system.
Based on previous works, the concept of thermal flux control has been adopted (Cabassud et al., 1996; Louleh et al., 1999). It consists in choosing as the manipulated variable the thermal flux transferred from the reactor jacket to the reaction mixture, and as the controlled variable the temperature of the reaction mixture. In this context, a predictive controller has been developed, the Model Gradient Predictive Controller (MGPC). The main difference between this controller and a classical model predictive controller is the objective function minimized, which is expressed as a function of the temperature gradient. The computed thermal flux is then used in a cascade control scheme to address the correct heating or cooling source and to determine whether any change of configuration of the heating-cooling system is needed. To fit the multipurpose character of the process, a procedure for on-line parameter identification has been added. This paper gives real-time application results of such a controller on various chemical reactors used in the chemical development plant (PILOT and KILOLAB, Sanofi-Aventis, Montpellier, France) to investigate new reactions before passing them to industrial day-to-day production. This plant gathers about ten reactors of different sizes (from 10 to 1600 liters) and of different materials (stainless steel, glass-lined). Moreover, these reactors are used to produce the first quantities of drugs needed to perform clinical tests and consequently have to comply with drastic drug manufacturing regulations.
2. The Model Predictive Controller
As said previously, the Model Gradient Predictive Controller (MGPC) takes its principal originality, compared to a classical model predictive controller, from the conjunction of two principles: the minimization of an objective criterion expressed as a function of the process output and reference trajectory gradients, and the choice of the future decision variable (the manipulated or control variable), which in the present case represents the thermal flux to be exchanged between the reaction mixture and the utility fluid. The use of the thermal flux as the control variable makes it possible to automatically address the correct heating or cooling source and to determine whether any change of configuration of the heating-cooling system is needed (Cabassud et al., 1996; Louleh et al., 1999). In a classical Model Predictive Control scheme, the resulting amount of computation depends on the number of values of the manipulated variable in the control horizon. As only the manipulated variable for the next succeeding sample time is applied to the process, the main part of the computations is carried out to determine values of the manipulated variable that will never be applied to the process. Therefore a lot of computation can be saved if only the value of the manipulated variable for the next sample time is computed. The predictive character of the control algorithm is maintained by considering the set point at the end of the prediction horizon. The value of the set point is used to calculate the corresponding values of the reference trajectory, which fixes the closed-loop dynamics of the system. With these considerations, a new formulation of the minimized criterion has been adopted (Preuss et al., 2003):
J° = || d/dt Tref(k+Hp°) − d/dt Tr(k+Hp°) ||
(1)
where Tref(k+Hp°) and Tr(k+Hp°) represent, respectively, the reference trajectory and the reactor temperature computed at time (k+Hp°)Δt (Δt being the sampling time), and Hp° is the output horizon. As said previously, the process model used to compute the future values of the process output gives a relation between the thermal flux q transferred from the reactor jacket to the reaction mixture (manipulated variable) and the temperature of the reaction mixture Tr. The simple model consists of one differential equation:
d/dt Tr = b * q with: b = 1 / (Mr * Cpr)
(2)
Mr and Cpr are, respectively, the mass and the heat capacity of the reaction mixture. The details of the calculations for obtaining the optimal control variable, with the underlying assumptions, can be found in Preuss et al. (2003). This value is given by:
q(k+1) = [Tref(k+Hp°) − Tr(k)] / [Δt · Hp°] · Mr · Cpr
(3)
where Tr is the inner reactor temperature measured on-line. In the case of a chemical reaction with reactant feeding, heat losses, etc., the thermal flux can also be expressed as:

$$q(k+1) = UA\,(T_j(k) - T_r(k)) + K\,(T_{ext}(k) - T_r(k)) + f_c Cp_c\,(T_c(k) - T_r(k)) - \sum_{j=1}^{n_r} r_j\,\Delta Hr_j\,V \quad (4)$$
Equating the two expressions, the required jacket inlet temperature setpoint Tde(k) can be determined and sent, in a cascade control scheme, to the low level controllers (Fig. 1):

$$T_{de}(k) = \tau \left[ \frac{T_{ref}(k+Hp°) - T_r(k)}{\Delta t\,Hp°} - \frac{K\,(T_{ext}(k) - T_r(k))}{M_r\,Cp_r} - \frac{PT}{M_r\,Cp_r} \right] + T_r(k) \quad (5)$$

with $PT = f_c Cp_c\,(T_c(k) - T_r(k)) - \sum_{j=1}^{n_r} r_j\,\Delta Hr_j\,V$ and $\tau = M_r\,Cp_r / UA$.
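Equations (3) and (5) amount to a handful of arithmetic operations per sampling instant. The sketch below (variable names mirror the text; the physical parameter values are illustrative placeholders, not values from the paper) computes the thermal flux demand and the jacket inlet temperature setpoint passed to the slave loops:

```python
def mgpc_setpoint(Tr, Tref_Hp, Text, Tc, fc, Cpc, r_dH_V,
                  Mr=100.0, Cpr=4.18, UA=0.5, K_loss=0.02, dt=10.0, Hp=6):
    """One MGPC step: Eq.(3) thermal flux demand, Eq.(5) jacket inlet setpoint.
    Units are illustrative (kJ, kg, s, degrees C); r_dH_V = sum_j r_j*dHr_j*V."""
    q = (Tref_Hp - Tr) / (dt * Hp) * Mr * Cpr            # Eq. (3)
    PT = fc * Cpc * (Tc - Tr) - r_dH_V                   # feed and reaction terms
    tau = Mr * Cpr / UA
    Tde = tau * ((Tref_Hp - Tr) / (dt * Hp)
                 - K_loss * (Text - Tr) / (Mr * Cpr)
                 - PT / (Mr * Cpr)) + Tr                 # Eq. (5)
    return q, Tde

# Example: no feed, no reaction heat; the master loop asks for a 2-degree rise.
q, Tde = mgpc_setpoint(Tr=40.0, Tref_Hp=42.0, Text=20.0, Tc=25.0,
                       fc=0.0, Cpc=4.18, r_dH_V=0.0)
```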
Figure 1: Implementation structure of the cascade control scheme
3. Experimental results
The implementation of such a controller has been performed on a 100-liter glass-lined reactor in the KILOLAB unit at Sanofi-Aventis, Montpellier, France (Fig. 2). The control algorithm has been implemented in the RS3 Fisher-Rosemount SCADA system (Fig. 3).
Implentation of model predictive controller in a pharmaceutical development plant
Figure 2 : Semi-batch reactor
505
Figure 3 : The glass-lined reactor SCADA system
Different experiments have been performed to test the robustness of the predictive controller (Fig. 5) and, in particular, the ability of the inner reactor temperature to track highly nonlinear set point profiles (3rd order) (Fig. 6a and Fig. 6b). These atypical profiles are needed during crystallization steps to optimize the quality of the produced chemical compounds (Mullin's cubic curves). A comparison with the Predictive Functional Controller (PFC) (Richalet, 1993) has been performed. In this case, the resolution of the equations depends on the type of set point profile through the number of coincidence points and the basis functions to be chosen, which is not the case with MGPC. With a two-coincidence-point PFC (Fig. 7), the set point tracking is not perfect. Results similar to MGPC have been obtained with three coincidence points, which significantly increases the complexity of the algorithm and therefore of its implementation in the SCADA system. Endothermic reaction experiments have also been performed to test how a feedforward compensation of the reaction heat consumption (Eq. 5, via the variable PT) can improve the performance (Fig. 8).
4. Conclusions
The implementation of the MGPC controller has been successfully performed in a multipurpose pharmaceutical pilot unit. In particular, it has been shown that it is possible to track a highly nonlinear set point profile, which is crucial for the production of a key pharmaceutical product. These implementations are currently being extended to a total of 10 reactors, from 100 to 1600 liters, of different types (stainless steel or glass-lined) fitted out with different heating-cooling systems (mono- or multi-fluid).
References
M. Cabassud, A. Chamayou, L. Pollini, Z. Louleh, M.V. Le Lann and G. Casamatta, 1996, Procédé de contrôle thermique d'un réacteur discontinu polyvalent à partir d'une pluralité de sources de fluides thermiques, et dispositif de mise en œuvre de ce procédé, Patent N° 95.03753, Bulletin officiel de la Propriété Industrielle, 39, 27/09/96, International Patent PCT/FR96/00426.
Z. Louleh, M. Cabassud, M.V. Le Lann, 1999, A new strategy for temperature control of batch reactors: experimental application, Chemical Engineering Journal, 75, pp. 11-20.
K. Preuss, M.V. Le Lann, M. Cabassud, G. Anne-Archard, 2003, Implementation procedure of an advanced supervisory and control strategy in the pharmaceutical industry, Control Engineering Practice, Vol. 11, N° 12, pp. 1449-1458.
J. Richalet, 1993, Pratique de la commande prédictive, Paris, Hermes.
Figure 4: Comparison between computed and measured inner reactor temperatures
Figure 5: Experiment with a succession of heating-cooling steps
Figure 6a: Experiment with a 3rd order set point cooling temperature profile
Figure 6b : SCADA screen copy during a crystallization
Figure 7: Comparison between MGPC and a two-coincidence-point PFC
Figure 8: Endothermic reaction with compensation
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A Hybrid Branch-and-Cut Approach for the Capacitated Vehicle Routing Problem Chrysanthos E. Gounarisa, Panagiotis P. Repoussisb, Christos D. Tarantilisb, and Christodoulos A. Floudasa a
Computer-Aided Systems Laboratory, Department of Chemical and Biological Engineering, Princeton University, NJ 08544, USA b Center for Operations Research & Decision Systems, Department of Management Science & Technology, Athens University of Economics & Business, Athens 11362, GR
Abstract This paper presents a hybrid optimization approach that combines deterministic and metaheuristic algorithms for the Capacitated Vehicle Routing Problem (CVRP). The approach combines a new branch-and-cut framework, that utilizes a two-commodity flow representation and novel heuristic-based procedures to separate various classes of cuts, with a subordinate Adaptive Memory Programming metaheuristic algorithm for the identification of high quality solutions. New local-scope cuts are suggested to exclude infeasible or suboptimal solutions, break problem symmetries, and tighten constraints. Computational experiments illustrate the potential of the new approach. Keywords: Vehicle Routing, Distribution Logistics, Branch-and-Cut
1. Introduction
The Vehicle Routing Problem (VRP) deals with the optimal assignment and service sequence of a set of customers to a fleet of vehicles and is one of the most studied combinatorial optimization problems in the operations research literature (Laporte, 2009). However, unlike the Traveling Salesman Problem, where 1000-customer instances can be solved to optimality on a routine basis, instances of the VRP with more than one hundred customers can be hard to solve (Baldacci et al., 2010). In this paper, we address the Capacitated Vehicle Routing Problem (CVRP). Given a homogeneous fleet of capacitated vehicles, the objective is to design a set of least cost round-trip routes to serve a set of customers with known demands. Previous methods for solving the CVRP include branch-and-cut (Lysgaard et al., 2004), branch-and-cut-and-price (Fukasawa et al., 2006), and set partitioning approaches (Baldacci et al., 2008). Heuristic methods, such as iterative improvement, evolutionary algorithms, and hybrid metaheuristic schemes, have also made significant contributions; however, most of them fail to provide a good compromise between solution quality and computational speed. Our goal is to develop a novel hybrid optimization method that combines, in a cooperative fashion, algorithms that provide a theoretical guarantee of reaching optimal solutions with metaheuristic algorithms, which typically exhibit superior performance in regard to the speed of obtaining good quality solutions. In particular, we aim at exploiting synergies between an Adaptive Memory Programming (AMP) metaheuristic algorithm and a Branch-and-Cut (BC) solution framework. The former generates and continuously updates (via information from the relaxation solutions at each node of the BC tree) a reference set of high quality, diversified solutions. This pool of elite solutions is then used for updating the incumbent and for guiding the BC tree search.
2. Two-Commodity Network Flow Formulation
Let V_0 = {0, 1, ..., N, N+1} be a node set and A = {(i,j) : 0 ≤ i < j ≤ N+1} be the resulting undirected arc set. The set V = V_0 \ {0, N+1} represents the N customers, while nodes 0 and N+1 represent duplicate instances of the single depot (for departure and arrival of vehicles, respectively). A cost c_ij ≥ 0 is associated with each arc (i,j) ∈ A.
Furthermore, there exists a homogeneous fleet of K vehicles with maximum carrying capacity Q. Each customer i ∈ V requires q_i units of product (0 < q_i ≤ Q). The solution of the CVRP calls for the determination of a set of vehicle routes with a minimum total cost, such that each customer is visited only once by exactly one vehicle, all available vehicles are used, each vehicle route starts and ends at the depot, and the cumulative customer demand satisfied by each route does not exceed the capacity of the vehicle. Baldacci et al. (2004) were the first to describe the CVRP with a two-commodity network flow formulation. We use their formulation in a slightly sparser form. For each of the undirected arcs (i,j) ∈ A, a binary variable ξ_ij indicates if the arc is traversed or not (in either direction), while two flow variables, x_ij and x_ji, represent the vehicle's load and residual capacity (empty space). Eqs. (1-7) express the CVRP:

\min_{\xi,x} \sum_{i} \sum_{j>i} c_{ij}\, \xi_{ij}   (1)

s.t.
\sum_{j<i} \xi_{ji} + \sum_{j>i} \xi_{ij} = 2 \;\; \forall i \in V, \qquad \sum_{j} \xi_{0j} = \sum_{i} \xi_{i,N+1} = K   (2 & 3)
x_{ij} + x_{ji} = Q\, \xi_{ij} \;\; \forall (i,j) \in A   (4)
\sum_{j} x_{ij} = Q - q_i \;\; \forall i \in V   (5)
\sum_{j} x_{0j} = \sum_{i} q_i, \qquad \sum_{i} x_{i,N+1} = 0   (6 & 7)
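For concreteness, the model of Eqs. (1)-(7) can be assembled with an off-the-shelf modeller. The sketch below uses the open-source PuLP package purely as an illustration (an assumption on our part; the paper's own implementation is CPLEX-based and is not reproduced here):

```python
import pulp

def build_two_commodity_cvrp(q, c, K, Q):
    """Build the MILP of Eqs. (1)-(7).
    q: dict {i: demand} for customers i = 1..N; c: cost lookup c[i][j] for
    all node pairs (including depot copies 0 and N+1); K: fleet size;
    Q: vehicle capacity."""
    N = len(q)
    V = list(range(1, N + 1))                 # customers
    V0 = list(range(0, N + 2))                # customers + depot copies
    A = [(i, j) for i in V0 for j in V0 if i < j]
    D = [(i, j) for i in V0 for j in V0 if i != j]

    prob = pulp.LpProblem("CVRP_2CF", pulp.LpMinimize)
    xi = pulp.LpVariable.dicts("xi", A, cat="Binary")
    x = pulp.LpVariable.dicts("x", D, lowBound=0)

    prob += pulp.lpSum(c[i][j] * xi[i, j] for (i, j) in A)            # (1)
    for i in V:                                                       # (2)
        prob += (pulp.lpSum(xi[j, i] for j in V0 if j < i) +
                 pulp.lpSum(xi[i, j] for j in V0 if j > i)) == 2
    prob += pulp.lpSum(xi[0, j] for j in V) == K                      # (3)
    prob += pulp.lpSum(xi[i, N + 1] for i in V) == K
    for (i, j) in A:                                                  # (4)
        prob += x[i, j] + x[j, i] == Q * xi[i, j]
    for i in V:                                                       # (5)
        prob += pulp.lpSum(x[i, j] for j in V0 if j != i) == Q - q[i]
    prob += pulp.lpSum(x[0, j] for j in V) == sum(q.values())         # (6)
    prob += pulp.lpSum(x[i, N + 1] for i in V) == 0                   # (7)
    return prob
```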
3. Strengthening Inequalities & Separation Algorithms
3.1. Commodity Flow Inequalities
Flow-variable bound-strengthening constraints are appended to the formulation from the onset and are similar to the flow inequalities suggested by Baldacci et al. (2004). They can be expressed as follows:

x_{0j} \ge \left[ \sum_{i \in V} q_i - (K-1)\,Q \right] \xi_{0j} \;\; \forall j \in V   (8)
x_{ij} \ge q_j\, \xi_{ij}, \qquad x_{ji} \ge q_i\, \xi_{ij} \;\; \forall (i,j) \in A   (9)

3.2. Local Scope Inequalities
After examining the structure of the relaxation solution at each node, it is possible to infer that certain solution segments (i.e., collections of arcs P) lead to infeasible, suboptimal, or otherwise undesirable solutions. Such segments can be disallowed through the addition, locally at the node level, of an appropriate cut of the type:

\sum_{(i,j) \in P} \xi_{ij} \le |P| - 1   (10)

We focus on solution segments that correspond to fully formed paths, that is, collections of consecutively joined arcs P = {(i,j) : ξ_ij = 1}, augmented by adjacent fractional arcs, 0 < ξ_ij < 1. We search for undesirable paths through a structured approach that grows such paths iteratively, and we disallow the formation of augmented paths for five reasons: (a) subtour elimination - cyclical routes that do not include the depot; (b) capacity restrictions - routes that exceed vehicle capacity (a code sketch of this case follows Eqs. (11 & 12) below); (c) path dominance - suboptimal routes for which there exists a lower-cost ordering of the customers; (d) symmetry breaking - non-nominal routes, that is, routes that begin by visiting a customer that is lexicographically higher than the last customer to be visited by this route; and (e) demand restrictions - routes that are about to terminate before they have satisfied a minimum amount of demand.
A separate class of local scope cuts results from inferring that a coefficient of a variable in a constraint can be suitably increased or decreased so as to tighten this constraint. In particular, we attempt to lift Eqs. (9) by replacing the coefficient of the binary variable (right-hand side) with the cumulative load of the fully formed path connected to node i through (and including) node j, under the condition that this path (denoted P_j) remains intact, and vice-versa:

x_{ij} \ge q_{P_j} \Big( \xi_{ij} - |P_j| + \sum_{(n,m) \in P_j} \xi_{nm} \Big), \qquad x_{ji} \ge q_{P_i} \Big( \xi_{ij} - |P_i| + \sum_{(n,m) \in P_i} \xi_{nm} \Big)   (11 & 12)
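As an illustration of the local-scope cuts, the following sketch separates cuts of type (10) for the capacity case (b): it collects the maximal fully formed paths of the node relaxation and emits a cut whenever a path's cumulative demand exceeds Q. The data layout (dicts keyed by arcs) is an assumption, and the augmentation by adjacent fractional arcs used in the paper is omitted for brevity:

```python
def separate_capacity_path_cuts(sol, demand, Q, eps=1e-6):
    """Grow the maximal paths formed by arcs with xi_ij = 1 in the
    relaxation 'sol' (dict arc -> value) and return a cut
    (arcs, |P| - 1) for every path whose total demand exceeds Q."""
    adj = {}
    for (i, j), v in sol.items():
        if v > 1.0 - eps:                       # fully formed arc
            adj.setdefault(i, set()).add(j)
            adj.setdefault(j, set()).add(i)
    cuts, visited = [], set()
    for start in [n for n, nb in adj.items() if len(nb) == 1]:
        if start in visited:
            continue
        nodes, arcs, prev, node = [start], [], None, start
        while True:                             # walk to the other endpoint
            nxt = [k for k in adj[node] if k != prev]
            if not nxt:
                break
            arcs.append((min(node, nxt[0]), max(node, nxt[0])))
            prev, node = node, nxt[0]
            nodes.append(node)
        visited.update(nodes)
        if sum(demand.get(n, 0) for n in nodes) > Q:
            cuts.append((arcs, len(arcs) - 1))  # sum over arcs <= |P| - 1
    return cuts
```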
3.3. Global Scope Inequalities
There also exist five classes of cutting planes that are globally valid for the CVRP. These are the so-called Rounded Capacity (RC), Homogeneous Multistar (HM), Framed Capacity (FC), Strengthened Comb (SC) and Hypotour (HI) inequalities (see Naddef and Rinaldi, 2002, for a detailed explanation). Due to their vast number, only those that are violated at a given node relaxation solution are taken into consideration. To this end, we have developed metaheuristic-based algorithms for their efficient separation (i.e., for identifying which instances are in fact violated). Emphasis is given to the RC and HM inequalities, whose separation is done concurrently through a new Tabu Search (TS) algorithm that improves upon the search framework presented in Augerat et al. (1999). For FC inequalities, a novel multi-restart TS algorithm, combined with a partition generation mechanism and edge-exchange neighborhood search, is used. The SC separation procedure proposed by Lysgaard et al. (2004) is used, enhanced with a TS procedure for expanding the teeth. Finally, the HI separation procedure proposed by Lysgaard et al. (2004) is adopted. The details of the above separation algorithms are omitted for conciseness.
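The Rounded Capacity inequalities mentioned above take the well-known form \sum_{(i,j) \in \delta(S)} \xi_{ij} \ge 2 \lceil q(S)/Q \rceil for a customer set S ⊆ V. The snippet below sketches only the violation test for a single candidate set; the tabu search that generates candidate sets is not reproduced:

```python
import math

def rc_violation(S, sol, demand, Q, eps=1e-6):
    """Check whether the Rounded Capacity inequality for customer set S is
    violated by the relaxation 'sol' (dict arc -> value): the degree across
    the cut-set delta(S) must be at least 2 * ceil(q(S) / Q)."""
    S = set(S)
    delta = sum(v for (i, j), v in sol.items() if (i in S) != (j in S))
    rhs = 2 * math.ceil(sum(demand[i] for i in S) / Q)
    return rhs - delta > eps       # True if the inequality is violated
```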
4. Hybrid Branch-and-Cut Approach 4.1. Adaptive Memory Programming (AMP) Metaheuristic Initially, a reference set (pool) of high quality solutions is generated via an AMP algorithm. This is achieved via the repeated construction of provisional solutions out of promising building blocks identified during the search, while updating these adaptive memory components based on the progress and experience gained (Tarantilis, 2005; Repoussis et al., 2009). For this purpose, a knowledge extraction mechanism is utilized, coupled with a probabilistic construction heuristic and a TS algorithm. Overall, the proposed learning mechanism considers two properties: solution quality and appearance frequency for each pair of customers visited consecutively during a route. During branch-and-cut, at the end of each node’s processing (right before branching), the algorithm utilizes information from the LP relaxation in an effort to
provide a new incumbent. Initially, a Path-Relinking algorithm generates an integer feasible solution ξ^int that is as "close" as possible to the node's fractional solution ξ^f, in terms of the Hamming distance d_H = \sum_{(i,j):\, \xi_{ij}^{int}=1} (1 - \xi_{ij}^{f}) + \sum_{(i,j):\, \xi_{ij}^{int}=0} \xi_{ij}^{f}. Next, a new provisional solution is generated by reconstructing part of ξ^int using frequently observed components from the AMP pool. This solution is further improved via TS and, if particular criteria are met, the reference set and memory structures are updated.
4.2. Branch & Cut Framework
Given that high quality initial upper bounds are provided through our metaheuristic framework, the priority of the branch-and-cut implementation is on improving the lower bound and on minimizing the number of subproblems (nodes) to be considered until the gap is closed. To this end, we adopt a best-bound-first node selection strategy. After obtaining the standard LP relaxation at each node, the cutting plane phase proceeds as follows: we first search for any augmented paths that need to be disallowed due to suboptimality, infeasibility, or non-nominality of the solution. Next, we check for potential to lift any flow variables and, lastly, we check for global cut violations (with emphasis on RC/HM). If at least one cut is identified at any of these three stages, we reoptimize the LP and repeat the process without continuing with the next stage(s). If no cuts are identified whatsoever, we proceed with branching the node. Let S ⊆ V and let δ(S) be the sum of ξ_ij over all arcs in the corresponding cut-set. As branching rule, we use the disjunction {δ(S) ≤ 2} ∨ {δ(S) ≥ 4}. Among candidate sets for which δ(S) ≈ 3, we select the one with the largest total demand.
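Two of the node-level computations above are simple enough to sketch: the Hamming distance driving the Path-Relinking step, and the selection of a branching set with cut-set degree near 3. Both sketches assume the relaxation is given as a dict mapping arcs to values; the tolerance values are illustrative only:

```python
def hamming_distance(xi_int, xi_frac, eps=1e-9):
    """Distance d_H between an integer solution xi_int and the node's
    fractional solution xi_frac (dicts arc -> value): sum of (1 - xi^f)
    over arcs at 1 in xi_int plus sum of xi^f over arcs at 0."""
    d = 0.0
    for arc, v in xi_int.items():
        f = xi_frac.get(arc, 0.0)
        d += (1.0 - f) if v > 1.0 - eps else f
    return d

def pick_branching_set(candidates, sol, demand, tol=0.25):
    """Branching rule of Section 4.2: among candidate sets S with cut-set
    degree delta(S) close to 3, pick the one of largest total demand and
    branch on the disjunction {delta(S) <= 2} v {delta(S) >= 4}."""
    def delta(S):
        return sum(v for (i, j), v in sol.items() if (i in S) != (j in S))
    near3 = [S for S in candidates if abs(delta(S) - 3.0) <= tol]
    if not near3:
        return None
    return max(near3, key=lambda S: sum(demand[i] for i in S))
```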
5. Computational Studies
We applied our framework to the standard benchmark data sets that were also used by Lysgaard et al. (2004), where a BC framework based on a different representation of the CVRP, called the vehicle flow formulation, is presented. Table 1 exhibits the root node gap for the 10 hardest problems we attempted, including two 100-customer instances. On average, our commodity flow-based method performs on par with the vehicle flow-based method. The average root node gap can be improved if we enable the CPLEX options for adding generic MIP cuts (e.g., Gomory cuts); however, we have observed that doing so deteriorates the overall performance of the algorithm at later nodes. Therefore, for runs to full optimality, we disable cuts identified by CPLEX. Table 2 presents the time necessary to fully close the gap and the number of tree nodes that had to be explored for a set of medium-difficulty problems. The largest instance solved to guaranteed optimality was P-76-4, which involves 75 customers. Two instances were solved at the root node by both methods. Our framework required fewer BC nodes for 9 of the remaining 13 instances. The improvement was very substantial for problems A-45-7 and B-50-8, two tight instances that are known to be hard to solve.
6. Conclusions This paper presents a hybrid BC framework for the exact solution of the CVRP that is based on the two-commodity flow formulation, systematic use of local scope cuts, new metaheuristic-based separation techniques for known classes of cuts, as well as an AMP metaheuristic algorithm for identification of high quality integer feasible solutions and acceleration of the search. Computational experiments on benchmark data sets illustrate the potential of the proposed approach.
Table 1. Root node performance (% gap)

Benchmark problem | LP relaxation | This paper | This paper+ | Lysgaard et al., 2004
A-69-9   | 9.04  | 3.87 | 3.30 | 3.85
A-80-10  | 8.42  | 3.02 | 2.39 | 3.03
B-68-9   | 10.64 | 1.14 | 1.14 | 1.10
E-76-8   | 6.06  | 2.34 | 2.17 | 2.33
E-76-10  | 7.02  | 3.62 | 3.33 | 3.63
E-101-8  | 5.65  | 1.57 | 1.55 | 1.52
E-101-14 | 6.82  | 3.75 | 3.20 | 3.75
P-60-15  | 6.56  | 3.91 | 3.48 | 3.95
P-65-10  | 6.37  | 3.12 | 2.48 | 3.14
P-76-5   | 4.03  | 1.56 | 1.55 | 1.51
Average  | 7.06  | 2.79 | 2.46 | 2.78
+ CPLEX v11.0 generic MIP cuts enabled
Table 2. Runs to full optimality

Benchmark problem | This paper: t (sec) | This paper: # nodes | Lysgaard et al., 2004: t (sec) | Lysgaard et al., 2004: # nodes
A-44-6 | 86    | 125   | 620    | 211
A-45-7 | 2,835 | 2,084 | 19,414 | 4,170
A-48-7 | 5,147 | 203   | 372    | 113
A-55-9 | 145   | 156   | 468    | 152
B-43-6 | 39    | 100   | 125    | 63
B-44-7 | 3     | 1     | 8      | 1
B-45-6 | 152   | 276   | 299    | 159
B-50-7 | 2     | 1     | 11     | 1
B-50-8 | 4,523 | 1,503 | 31,026 | 5,694
B-52-7 | 3     | 3     | 25     | 15
B-57-7 | 99    | 49    | 441    | 168
B-64-9 | 21    | 7     | 42     | 13
E-51-5 | 13    | 8     | 59     | 17
P-50-7 | 78    | 131   | 805    | 263
P-76-4 | 143   | 105   | 535    | 141
Note: Optimum solution not provided as input
References
P. Augerat, J.M. Belenguer, E. Benavent, A. Corberán, D. Naddef, 1999, Separating capacity inequalities in the CVRP using tabu search, European Journal of Operational Research, 106, 546-557.
R. Baldacci, E. Hadjiconstantinou, A. Mingozzi, 2004, An exact algorithm for the capacitated vehicle routing problem based on a two-commodity network flow formulation, Operations Research, 52, 5, 723-738.
R. Baldacci, N. Christofides, A. Mingozzi, 2008, An exact algorithm for the vehicle routing problem based on the set partitioning formulation with additional cuts, Mathematical Programming Ser. A, 115, 351-385.
R. Baldacci, P. Toth, D. Vigo, 2010, Exact algorithms for routing problems under vehicle capacity constraints, Annals of Operations Research, 175, 1, 213-245.
R. Fukasawa, H. Longo, J. Lysgaard, M.P. de Aragão, M. Reis, E. Uchoa, R.F. Werneck, 2006, Robust branch-and-cut-and-price for the capacitated vehicle routing problem, Mathematical Programming Ser. A, 106, 491-511.
G. Laporte, 2009, Fifty years of vehicle routing, Transportation Science, 43, 4, 408-416.
J. Lysgaard, A.N. Letchford, R.W. Eglese, 2004, A new branch-and-cut algorithm for the capacitated vehicle routing problem, Mathematical Programming Ser. A, 100, 423-445.
D. Naddef, G. Rinaldi, 2002, Branch-and-cut algorithms for the capacitated VRP, in: P. Toth and D. Vigo (Eds.), The Vehicle Routing Problem, SIAM Monographs on Discrete Mathematics and Applications, SIAM, Philadelphia, 53-81.
P.P. Repoussis, C.D. Tarantilis, G. Ioannou, 2009, Arc-guided evolutionary algorithm for the vehicle routing problem with time windows, IEEE Transactions on Evolutionary Computation, 13, 3, 624-647.
C.D. Tarantilis, 2005, Solving the vehicle routing problem with adaptive memory programming methodology, Computers and Operations Research, 32, 9, 2309-2327.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Design of robust PID controller for processes with stochastic uncertainties
Pham L. T. Duong, Moonyong Lee
Yeungnam University, Gyeongsan 712-749, Rep. Korea
Abstract
The stability and performance of a system can be inferred from the evolution of the statistical characteristics of its states. The polynomial chaos of Wiener provides an efficient framework for the statistical analysis of dynamic systems, at a computational cost far lower than that of Monte Carlo simulations. In this work, we design a robust PID controller for systems with stochastic uncertainties using generalized polynomial chaos.
Keywords: Polynomial Chaos, PID controller design, Statistical analysis, Stochastic process, Smith predictor.
1. Introduction
Stochastic uncertainty may arise in systems when the physics governing the system is known but the system parameters are either not known precisely or are expected to vary over the operational lifetime. Such uncertainty also occurs when system models are built from experimental data using system identification techniques, where the plant is represented by a transfer function with unknown parameters. As a result, the values of the parameters in the transfer function have a range of uncertainty. In order to include this uncertainty in the mathematical model, various probabilistic methods have been developed. Traditional probabilistic approaches to uncertainty quantification (UQ) include the Monte Carlo method [6,7] and its variants, for example Latin Hypercube Sampling [9], which generate ensembles of random realizations for the prescribed random inputs and use repetitive deterministic solves for each realization. Although such methods are straightforward to apply, their convergence rates can be relatively slow; the sampling error typically converges as 1/√K, where K is the number of realizations. The need for a large number of samples for accurate results leads to an excessive computational burden. The recently developed stochastic generalized polynomial chaos (gPC) methods can exhibit faster convergence for problems with relatively large random uncertainties. With the gPC, stochastic solutions are expressed as orthogonal polynomials of the input random uncertainties. The PC method originates from the homogeneous chaos concept defined by Wiener [10]. Ghanem and Spanos [12] showed that the PC is an effective computational tool for engineering studies. Karniadakis and Xiu [4] generalized and expanded the concept by using orthogonal polynomials from the Askey scheme class as the expansion basis. Pupkov et al. [8] showed that if the Wiener-Askey polynomial chaos expansion is chosen according to the probability distribution of the random input, then the chaos expansion makes it possible to construct simple algorithms for the statistical analysis of dynamic systems. When a controller is designed, it is also desirable to understand the distribution of the response in terms of the uncertainties. Once the distributions of the uncertainties are known, the statistical analysis problem is a prediction problem of how a specific distribution in the
plant parameters maps to the range of responses. Hence, in this work, the gPC approach is used to account for the influence of random uncertainties in the parameters of a control system on the statistical characteristics of its output. Robust PID controllers for systems with stochastic uncertainties are designed by judging the distribution of the output response.
2. Statistical analysis of control systems with generalized polynomial chaos theory [3,8]
2.1. Governing equations for system dynamics
Let us consider a control system governed by differential algebraic equations (DAEs) as in [4]:

F(t, y, y', \ldots, y^{(l)}, \xi) = 0, \qquad g(t_0, y(t_0), \ldots, y^{(l)}(t_0), \xi) = 0   (1)

where ξ = (ξ_1, ξ_2, ..., ξ_N) is a random vector of mutually independent random components with probability density functions (pdfs) ρ_i(ξ_i), and y is the state variable.
2.2. Polynomial chaos theory
In the gPC method, one seeks an approximation of a response function f(y(t, ξ)) via an orthonormal polynomial expansion in the random variables:

f(y(t, \xi)) \approx f_N^P = \sum_{m=1}^{M} f_m(t)\, \Phi_m(\xi), \qquad M = \binom{N+P}{N}   (2)

where P is the order of the polynomial chaos and f_m is the coefficient of the gPC expansion, given by:

f_m = E[\Phi_m f(y)] = \int_\Gamma f(y)\, \Phi_m(\xi)\, \rho(\xi)\, d\xi   (3)
where E[·] denotes the expectation.
2.3. Stochastic collocation
The stochastic collocation approach can deal with complex response functions easily; its algorithm is described below:
- Choose a collocation set {ξ^(m), w^(m)}_{m=1}^{Q} for the random vector ξ, where ξ^(m) = (ξ_1^(m), ..., ξ_N^(m)) is the m-th node and w^(m) is the corresponding weight.
- For each node, solve Eq. (1) to obtain its solution y^(m) = y(t, ξ^(m)) and evaluate the response function f^(m).
- Calculate the approximation of the gPC coefficients via a discrete integration rule for Eq. (3):

f_j \approx U^Q[f(y, \xi)\, \Phi_j(\xi)] = \sum_{m=1}^{Q} f^{(m)}\, \Phi_j(\xi^{(m)})\, w^{(m)}, \qquad j = 1, \ldots, M   (4)

- Construct the N-variate, P-th order gPC approximation of the response function:

f_N^P = \sum_{j=1}^{M} f_j\, \Phi_j(\xi)   (5)
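For a single uniform uncertainty, the collocation recipe of Eqs. (2)-(5) reduces to a few lines of numpy. The sketch below is an illustration under stated assumptions (Legendre basis on [-1, 1], Gauss-Legendre nodes, a user-supplied 'model' routine standing in for the DAE solve); it is not the DEMM toolbox used later in the paper:

```python
import numpy as np

def gpc_stats_1d(model, P=4, Qnodes=8):
    """1D stochastic collocation for xi ~ U[-1, 1]: project model(xi) onto
    orthonormal Legendre polynomials and return (mean, variance) as in
    Eqs. (8)-(9): mean = f_1, variance = sum of f_j^2 for j >= 2."""
    nodes, weights = np.polynomial.legendre.leggauss(Qnodes)
    weights = weights / 2.0            # pdf of U[-1, 1] is rho = 1/2
    f = np.zeros(P + 1)
    for m in range(Qnodes):
        y = model(nodes[m])            # solve the system at the m-th node
        for j in range(P + 1):
            # orthonormal Legendre: phi_j = sqrt(2j+1) * P_j
            basis = np.polynomial.legendre.Legendre.basis(j)
            phi_j = np.sqrt(2 * j + 1) * basis(nodes[m])
            f[j] += y * phi_j * weights[m]   # discrete rule, Eq. (4)
    return f[0], float(np.sum(f[1:] ** 2))
```

For instance, gpc_stats_1d(lambda xi: (1.0 + 0.05 * xi) ** 2) returns the mean and variance of a squared uncertain gain.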
The choice of the collocation set {ξ^(m), w^(m)}_{m=1}^{Q} should be made such that an accurate integration rule is obtained, i.e., for a smooth function g(ξ):

U^Q[g] = \sum_{m=1}^{Q} g(\xi^{(m)})\, w^{(m)} \approx \int_\Gamma g(\xi)\, \rho(\xi)\, d\xi   (6)
In the classical spectral method [8], Gaussian quadrature [11] is chosen as the one-dimensional (1D) numerical integration rule. The multi-dimensional numerical integration can be constructed by tensorization of the 1D quadrature rules Q_{q_i}^{(1)}:

U^Q[g] = \left( Q_{q_1}^{(1)} \otimes \cdots \otimes Q_{q_N}^{(1)} \right) g   (7)

where the subscript in Q_{q_i}^{(1)} denotes the number of nodes of the 1D quadrature rule.
2.4. Statistical analysis of control systems
When all the gPC coefficients have been evaluated by a numerical method, a post-processing procedure can be carried out to obtain the statistics. The mean value is the first expansion coefficient:

\mu_f = E[f_N^P] = \int_\Gamma f_N^P\, \rho(\xi)\, d\xi = \int_\Gamma \Big[ \sum_{j=1}^{M} f_j \Phi_j(\xi) \Big] \rho(\xi)\, d\xi = f_1   (8)

The variance of the response function f(y) is evaluated as:

\sigma_f^2 = D_f = E[(f - \mu_f)^2] = \int_\Gamma \Big( \sum_{j=1}^{M} f_j \Phi_j(\xi) - f_1 \Big)^2 \rho(\xi)\, d\xi = \sum_{j=2}^{M} f_j^2   (9)

In Eqs. (8) and (9), the property that the polynomial set starts with Φ_1(ξ) = 1 is employed, and the weight function of the polynomials is the probability density function. If the response function is chosen as f(y) = y, the mean and variance of the system state are approximately given by Eqs. (8) and (9).
The set {φ_i}_{i=1}^{d} is the orthonormal polynomial basis in ξ_i with weight function ρ_i(ξ_i), which is the probability density function of the random variable ξ_i. This establishes a correspondence between the distribution of the random variable ξ_i and the type of orthonormal polynomial of its gPC basis. In this paper, we consider only uniform stochastic uncertainties and the corresponding generalized Legendre polynomial chaos. For details on other types of stochastic uncertainties and their corresponding gPC bases, see [8, 3] and the references therein.
2.5. Sparse grids
From Eq. (7), the total number of collocation points is \prod_{i=1}^{N} q_i, or q^N if the number of points in each dimension is q. Thus, the total number of points grows very fast for large
dimension N. For this reason, the full tensor product approach is mostly used for low-dimensional problems only. In [2], Smolyak cubature was found to be very useful for solving random differential equations by the stochastic collocation approach with a high-dimensional random space. Starting with one-dimensional integration formulas, the Smolyak algorithm is given by

U^Q[g] = \sum_{J-N+1 \le |i| \le J} (-1)^{J-|i|} \binom{N-1}{J-|i|} \left( Q_{i_1}^{(1)} \otimes \cdots \otimes Q_{i_N}^{(1)} \right) g, \qquad |i| = i_1 + i_2 + \cdots + i_N   (10)

where J ≥ N denotes the level of the construction. The one-dimensional sets should be nested to minimize the number of nodes in Eq. (10). The Kronrod-Patterson rules have nested sets of nodes, which makes them more efficient for the construction of sparse grids. The nodes and weights for Smolyak cubature based on the Kronrod-Patterson rule can be readily obtained from [5]. In this paper, the Smolyak cubature based on the Kronrod-Patterson rule will be used in designing the PID controller for stochastic systems with a high-dimensional random space.
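The index set and combination coefficients in Eq. (10) can be enumerated directly; the following is a minimal sketch (the nested Kronrod-Patterson 1D rules themselves, taken from [5], are not reproduced):

```python
from itertools import product
from math import comb

def smolyak_terms(N, J):
    """Enumerate the tensor-product terms of the Smolyak rule, Eq. (10):
    yields (coefficient, multi-index i) for J-N+1 <= |i| <= J with
    coefficient (-1)^(J-|i|) * C(N-1, J-|i|)."""
    for idx in product(range(1, J + 1), repeat=N):
        s = sum(idx)
        if J - N + 1 <= s <= J:
            yield (-1) ** (J - s) * comb(N - 1, J - s), idx
```

For example, with N = 2 and J = 3 the rule combines the 1D levels as -1·(1,1) + 1·(1,2) + 1·(2,1).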
3. Example
The PC method was applied to design a PID controller for the FOPDT system with a Smith predictor:

G(s) = \frac{K}{Ts + 1}\, e^{-Ls}, \qquad K, T, L \sim U[0.9, 1.1]   (11)

Optimum PID parameters are obtained by minimizing the objective function

\min_{K_p, K_i, K_d} J = \min_{K_p, K_i, K_d} \int_0^T |M[e(t)]|\, dt   (12)

subject to

\max_{0 \le t \le T} D_Y(t) \le 0.02   (13)
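The nested design problem of Eqs. (12)-(13) can be attacked with any derivative-free optimizer wrapped around the gPC analysis of Section 2. The outer-loop sketch below is an assumption on our part (Nelder-Mead with a simple penalty for the variance bound; 'stats' stands for a user-written closed-loop gPC evaluation, which is not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize

def design_robust_pid(stats, var_max=0.02, penalty=1e3):
    """Outer loop for Eqs. (12)-(13): stats(K) runs the gPC analysis for
    PID gains K = (Kp, Ki, Kd) and returns the time grid t, the mean
    error M[e(t)] and the output variance D_Y(t)."""
    def objective(K):
        t, mean_err, var_y = stats(K)
        J = np.trapz(np.abs(mean_err), t)              # Eq. (12)
        violation = max(0.0, np.max(var_y) - var_max)  # Eq. (13)
        return J + penalty * violation
    res = minimize(objective, x0=np.array([1.0, 1.0, 0.1]),
                   method="Nelder-Mead")
    return res.x   # (Kp, Ki, Kd)
```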
Figure 1 shows the mean and variance of the system output with the resulting PID setting K_p = 1.858, K_i = 1.896, K_d = 0.144. In Fig. 2, 1000 possible responses of the uncertain FOPDT system are plotted. The calculations were made using the DEMM toolbox [8].
[Plot: mean M_y(t) (0 to 1.5, top panel) and variance D_y(t) (0 to 0.015, bottom panel) of the system output over t = 0 to 9]
Figure 1. Predicted mean and variance for system output with random parameters.
[Plot: ensemble of responses y(t) (0 to 1) over t = 0 to 9]
Figure 2. 1000 possible responses of the uncertain system.
4. Conclusions
In this work, a statistical analysis of systems with uniform stochastic parameters was performed with the help of the gPC. From the statistical evolution of the system output, we can infer stability and design a robust controller with respect to uniform stochastic uncertainties. Optimum PID parameters can be computed by means of a nonlinear optimization approach. The simulation example has shown that the proposed design method gives robust results for systems with stochastic uncertainties. It should be noted that the variance is only a weak measure [1] of the variability of a random process. In [1], D. Xiu also provided an alternative measure of variability, and this will be incorporated into future work on PID design for processes with stochastic uncertainties.
Acknowledgments This research was supported by KOSEF research grants in 2009.
References
1. D. Xiu, 2010, Fast numerical method for robust optimal design, Engineering Optimization, 40(6), 489-504.
2. D. Xiu, 2007, Efficient collocation approach for parametric uncertainty analysis, Communications in Computational Physics, 2(2), 293-309.
3. D. Xiu, 2010, Numerical Methods for Stochastic Computations: A Spectral Method Approach, Princeton University Press.
4. D. Xiu, G.E. Karniadakis, 2002, The Wiener-Askey polynomial chaos for stochastic differential equations, SIAM J. Sci. Comput., 24(2), 619-644.
5. F. Heiss, V. Winschel, 2008, Likelihood approximation by numerical integration on sparse grids, Journal of Econometrics, 144, 62-80. http://www.sparse-grids.de/
6. J.S. Liu, 2001, Monte Carlo Strategies in Scientific Computing, Springer-Verlag.
7. K.A. Pupkov, N.D. Egupov, 2003, Classical and Modern Theory of Control Systems, BMGTU Press, Vol. 2.
8. K.A. Pupkov, N.D. Egupov, A.M. Makarenkov, 2003, Theory and Numerical Methods for Studying Stochastic Systems, Fizmatlit, Moscow.
9. M. Stein, 1987, Large sample properties of simulations using Latin hypercube sampling, Technometrics, 29(2), 143-151.
10. N. Wiener, 1938, The homogeneous chaos, Amer. J. Math., 60, 897-936.
11. P.K. Kythe, M.R. Schaferkotter, 2005, Handbook of Computational Methods for Integration, CRC Press.
12. R.G. Ghanem, P.D. Spanos, 1991, Stochastic Finite Elements: A Spectral Approach, Dover Publications.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
MPC vs. PID. The advanced control solution for an industrial heat integrated fluid catalytic cracking plant
Mihaela Iancu, Mircea V. Cristea, Paul S. Agachi
Babes-Bolyai University, Faculty of Chemistry and Chemical Engineering, Chemical Engineering Department, Arany Janos St., No. 11, 400028, Cluj-Napoca, Romania, [email protected]
Abstract
Modern process plants are continuously improved for flexible production and for maximization of energy and material savings. These plants are becoming more complex, with strong interactions between process units. Consequently, the failure of one unit might have a negative effect on overall productivity. This situation reveals important control problems. Another problem is that the traditional techniques developed to date can hardly handle all the control problems that appear in modern plants. However, the appearance and continuous development of advanced control techniques provide better solutions for plant control at any level of process complexity. In this study, a complex heat integrated fluid catalytic cracking (FCC) plant was used to compare a model predictive control (MPC) strategy with the classical PID control strategy already implemented in the real plant. The results revealed that the MPC controller was capable of maintaining the controlled variables much closer to their set points than the classical PID controllers. The present work shows that it is possible to save equipment and energy costs. Moreover, it is well known that with an MPC strategy the plant can be exploited at its maximum capacity.
Keywords: fluid catalytic cracking, heat integration, dynamic behavior, PID control, model predictive control.
1. Introduction
It is a fact that the chemical industry is still dominated by the use of distributed control systems implementing simple PID controllers. In the literature, many publications can be found which discuss the control efficiency of refinery processes ([1]-[4]). However, only a few publications [5] compare the control efficiency of an advanced control technique, such as model predictive control, with that of a well designed conventional PID control system. Therefore, in this study a complex heat integrated fluid catalytic cracking (FCC) plant was used to identify the advantages and disadvantages of a previously developed model predictive control (MPC) strategy [6], [7] compared to the
classical PID control strategy of the industrial plant. The developed MPC strategy focused on the response of the heat integrated process in terms of operation, product quality and cost reduction of the heat integrated plant. Aspen HYSYS software was used to simulate the FCC heat integrated process. The simulation includes the reactor-regenerator section, the main fractionator and the retrofitted heat exchange network (HEN), used for preheating the feedstock before it enters the riser. The goal of this work was to demonstrate the necessity and efficiency of advanced control techniques in refinery processes. The present work also intends to emphasize that it is possible to save energy and operation costs using an MPC control scheme for the heat integrated FCC plant.
2. Description of the heat integrated FCC plant dynamic model
The FCC model was built in Aspen HYSYS, which is specialized in simulating refining processes, so that both its steady-state and its dynamic behavior could be simulated. The model of the FCC heat integrated plant was structured as a main flowsheet and two sub-flowsheets. The main flowsheet contains the FCC reaction block with the riser and the regenerator, a simplified scheme of the FCC column and the preheating train for the raw material. One sub-flowsheet consists of the reaction block and the other represents the FCC main fractionation column. The FCC fractionator was developed in a separate sub-flowsheet based on the industrial FCC column design and geometry. It contains 38 trays, 2 side-strippers (one for stripping the heavy gasoline fraction, HCN, and the other for stripping the light diesel oil fraction, LCO), 3 pump-arounds and one condenser. Before entering the condenser, the top column product is cooled in 2 heat exchangers. Because the case study of this work represents a real industrial plant which already has a PID control scheme implemented, the control scheme was developed on the basis of the real one. Moreover, due to the new needs of the heat integrated FCC plant, the control scheme of the real plant had to be adjusted in order to solve the problems arising from the new instabilities introduced into the system through heat integration. The controller tuning parameters were obtained using the Ziegler-Nichols method.
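The classical Ziegler-Nichols closed-loop rules referred to here map the ultimate gain Ku and ultimate period Pu, obtained at the loop's stability limit, to controller settings; a minimal sketch (the plant-specific tuning of Table 1 below is the paper's own):

```python
def ziegler_nichols(Ku, Pu, mode="PID"):
    """Classical Ziegler-Nichols closed-loop tuning: given the ultimate
    gain Ku and ultimate period Pu, return (Kc, Ti, Td)."""
    if mode == "P":
        return 0.5 * Ku, float("inf"), 0.0
    if mode == "PI":
        return 0.45 * Ku, Pu / 1.2, 0.0
    return 0.6 * Ku, Pu / 2.0, Pu / 8.0   # PID
```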
3. MPC vs. PID
The control design of the FCC fractionator is an important aspect, mainly from the point of view of the products [8] and also from the point of view of the HEN [9]. It was observed that if the dynamic behavior of the FCC fractionator is properly controlled, the HEN behaves appropriately. Consequently, the further analysis considers the PID controllers of the FCC column control scheme. These controllers are related to: the temperature control of the column top product stream (TIC-100), the liquid level control of the condenser (LIC-100), the flow control of the bottom product of the HCN side-stripper (FIC-101), the flow control of the bottom product of the LCO side-stripper (FIC-102) and the temperature
control of the slurry stream (TIC-101). The parameters of the selected PID controllers are presented in Table 1.

Table 1. The characteristics of the main controllers of the FCC column

Controller | CV min. | CV max. | Set point | Parameters
FIC-101 | 0 m3/h | 50 m3/h | 13.52 m3/h | Kc = 0.1; Ti = 0.2
FIC-102 | 0 m3/h | 60 m3/h | 32.72 m3/h | Kc = 0.1; Ti = 0.2
LIC-100 | 0% | 100% | 60% | Kc = 1.8; Ti = 181
TIC-100 | 105 °C | 112 °C | 108 °C | Kc = 3; Ti = 12
TIC-101 | 350 °C | 363 °C | 356 °C | Kc = 2; Ti = 25
Therefore, using the manipulated and controlled variables of the PID control structure, a 5x5 MPC controller has been developed and implemented in the dynamic model of the FCC heat integrated plant. The prediction model of the MPC controller has been set up using the step response matrix of the controlled variables. The dynamic optimization problem approached is of the standard finite-horizon form.
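A generic sketch of this formulation is given below; the weighting matrices Q and R and the prediction and control horizons p and m are assumptions, since the specific tuning is not reproduced here:

```latex
\min_{\Delta u(k),\ldots,\Delta u(k+m-1)}
  \sum_{i=1}^{p} \left\| y(k+i\,|\,k) - y^{sp}(k+i) \right\|_{Q}^{2}
  + \sum_{i=0}^{m-1} \left\| \Delta u(k+i) \right\|_{R}^{2}
\quad \text{s.t.}\;\; u_{\min} \le u \le u_{\max},\;\;
  \Delta u_{\min} \le \Delta u \le \Delta u_{\max}
```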
The results of the two control strategies, PID control and MPC control, are presented in Figure 1.a and Figure 1.b. The simulations were run for 130 minutes. In the figures, the red line represents the set point, the blue line the manipulated variable and the green line the controlled variable.
Figure 1.a: I. PID and MPC control of the column top product temperature [°C]. II. PID and MPC control of the condenser percent liquid level [%].
Figure 1.b: I. PID and MPC control of the bottom product flow of the HCN side-stripper [m3/h]. II. PID and MPC control of the bottom product flow of the LCO side-stripper [m3/h]. III. PID and MPC control of the slurry stream temperature [°C].
As can be seen in Figure 1.a and Figure 1.b, the MPC controller is able to maintain the controlled variables much closer to the set point than the PID controllers. The developed MPC controller proved to be efficient for the control of the FCC heat integrated plant. An important difference can be observed in the control of the bottom product flow of the HCN side-stripper (Figure 1.b, I) and of the LCO side-stripper (Figure 1.b, II). Therefore, for a heat integrated FCC plant, one of the advantages of using MPC control instead of classical PID control is that the plant can be exploited at its maximum capacity. Other incentives consist in reduced energy and operation costs.
4. Conclusions
Refinery units operate continuously and process large amounts of feedstock. Therefore, any change in the process (oil feedstock, Euro 4 fuel, Euro 5 fuel) requires adjustment of the PID controllers' parameters in order to cope with the varying operating conditions. A classical PID control scheme is not able to handle all important changes in FCC plant operation; consequently, manual action by the operators is needed. As a result, during the period in which these changes are made, the FCC plant is operated in a non-optimal way while it is driven toward the operating parameters required by the new feedstock. This task may be successfully accomplished by the MPC controller. This study demonstrated that the best solution consists in using advanced control techniques, especially those employing controllers based on a model of the process, because in this way the process behavior is predicted at every moment in time and the multivariable controller may act promptly.
5. Acknowledgements
This work was possible with the financial support of the Sectoral Operational Programme for Human Resources Development 2007-2013, co-financed by the European Social Fund, under project number POSDRU 89/1.5/S/60189, entitled "Postdoctoral Programs for Sustainable Development in a Knowledge Based Society".
References
[1] Cristea, M.V., Agachi, S.P., Marinoiu, M.V., 2003, Simulation and model predictive control of a UOP fluid catalytic cracking unit, Chemical Engineering and Processing, 42, 67.
[2] Williamson, C.J., Young, B.R., 2003, Advanced control of a refinery naphtha train, IEEE Industry Applications Society Advanced Process Control Applications for Industry Workshop, Vancouver, BC.
[3] Tellez, R., Young, B.R., Castillo, F.J.L., 2008, Model predictive control of a heat-integrated plant: a case study on the reaction section of the HDA process, AIChE Spring National Meeting, New Orleans, LA.
[4] Roman, R., Nagy, Z.K., Cristea, M.V., Agachi, S.P., 2009, Dynamic modelling and nonlinear model predictive control of a fluid catalytic cracking unit, Computers and Chemical Engineering, 33, 605.
[5] Huang, H., Riggs, J.B., 2002, Comparison of PI and MPC for control of a gas recovery unit, Journal of Process Control, 12, 163-173.
[6] Morar, M., Agachi, P.S., 2009, The development of a MPC controller for a heat integrated fluid catalytic cracking plant, Studia Universitatis Babes-Bolyai Chemia, LIV(4), 43.
[7] Iancu, M., Agachi, P.S., 2010, Optimal process control and operation of an industrial heat integrated fluid catalytic cracking plant using model predictive control, Computer Aided Chemical Engineering, 28, 505-510.
[8] Lundstrom, P., Skogestad, S., 1995, Opportunities and difficulties with 5x5 distillation column, Journal of Process Control, 5(4), 249-261.
[9] Morud, J., Skogestad, S., 1996, Dynamic behaviour of integrated plants, Journal of Process Control, 6(2/3), 145-156.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Plantwide Control of a Cumene Manufacture Process
Vivek Gera a, Nitin Kaistha a, Mehdi Panahi b, Sigurd Skogestad b
a Chemical Engineering, Indian Institute of Technology Kanpur, 208016, Kanpur, India
b Chemical Engineering Department, NTNU, 7491, Trondheim, Norway
Abstract
This work describes the application of the plantwide control design procedure of Skogestad (2004) to the cumene production process. A steady-state "top down" analysis is used to select the set of "self-optimizing" primary controlled variables which, when kept constant, lead to acceptable economic loss without the need to reoptimize the process when disturbances occur. Two modes of operation are considered: (I) given feed rate and (II) optimized throughput.
Keywords: cumene production, control structure design, self-optimizing control
1. Introduction
Cumene is an important industrial intermediate in the manufacture of phenolic and polycarbonate resins, nylon and epoxy, and is conventionally produced by the Friedel-Crafts alkylation of benzene with propylene (concentration unit: kmol/m3).
Main reaction: C6H6 + C3H6 → C9H12 (cumene)  (k = 2.8E7, E = 104174 kJ/kmol)
Side reaction: C9H12 + C3H6 → C12H18 (DIPB)  (k = 2.32E9, E = 146742 kJ/kmol)
Some research over the past few years has discussed various aspects of the operation, design and control of a cumene production plant [1, 2], but none of it addresses the issue of control structure design in a systematic manner. In this work we address this by applying part of Skogestad's plantwide procedure (Skogestad, 2004). The main steps of this procedure are as follows:
- Degree of freedom analysis.
- Definition of optimal operation (cost and constraints).
- Identification of important disturbances.
- Identification of candidate controlled variables c.
- Evaluation of the loss for alternative combinations of controlled variables.
- Final evaluation and selection (including controllability analysis).
Two modes of operation are considered for the process:
Mode 1: Given throughput.
Mode 2: Optimized/maximum throughput (the feed rate is also a degree of freedom).
2. Base Case Design
The base-case design parameters, kinetics data and cost correlations were taken from Luyben (2010). Figure 1 provides a schematic of the conventional process. The fresh benzene and fresh C3 (95% propylene and 5% n-propane) streams are mixed with the recycle benzene, vaporized in a vaporizer and preheated in a feed effluent heat exchanger (FEHE) using the hot reactor effluent, before being heated to the reaction temperature in a furnace. The heated stream is fed to a cooled packed bed reactor. The hot reactor effluent loses sensible heat in the FEHE and is further cooled using cooling water. The cooled stream is sent to a light-out-first distillation train. The inert n-propane and small amounts of unreacted propylene are recovered as vapour distillate from column 1. The bottom stream is further distilled in the recycle column to recover and recycle unreacted benzene as the distillate. The recycle column bottom stream is sent to the product column to recover 99.9% cumene as the distillate and the heavy DIPB as the bottoms.
2.1. Determination of column 1 pressure
The flash tank in the Luyben design has been replaced with a distillation column (column 1) to reduce the loss of benzene and hence increase the plant operating profit. A column operating pressure of 5 bar, with a benzene loss of 0.12 kmol/h, was found to be near optimal. Table 1 provides an economic comparison of the base-case design with the original Luyben design (with a flash tank instead of column 1) for the same operating conditions. The yearly operating profit of the base-case design is noticeably higher than that of the Luyben design due to the reduction in the loss of precious benzene to the fuel gas stream. For completeness, economic and operating condition details of the Mode I and Mode II optimum solutions, where the plant operating profit (defined later) is optimized, are also provided in Table 1.
Figure 1: Base-case cumene process flowsheet
3. Economic optimization of the base case design
3.1. Definition of objective function (J) and constraints
The total operational profit per year (365 days) was chosen as the objective function J, which is to be maximized, with
J = product revenue - reactant cost + DIPB credit + vent gas credit + reactor steam credit - preheater electricity cost - steam cost in reboilers and vaporizer
Since the plant is already built, it has certain physical limitations associated with the unit operation equipment. Moreover, it is always optimal to have the most valuable product at its constraint to avoid product give-away. The steady-state degrees of freedom available to maximize the Mode I / Mode II operating profit are noted in Table 2. Note that, since J does not depend strongly on the cooler outlet temperature, it is fixed at 100 °C.
Table 1. Economic comparison of base-case design with original Luyben design

Quantity | Unit | Luyben | Base case | Mode I | Mode II
Reactor inlet temp | °C | 358 | 358 | 361 | 346.99
Total benzene flow | kmol/h | 207 | 207 | 245 | 269.7
Hot spot temp | °C | 430 | 421.60 | 417.50 | 411.3
Benzene recycle | kmol/h | 207 | 207 | 245 | 269.70
Vent | kmol/h | 9.98 | 6.47 | 6.02 | 19.04
Heavy bottom | kmol/h | 1.55 | 1.59 | 1.20 | 2.99
Fresh propene | kmol/h | 101.93 | 101.93 | 101.93 | 175.02
Fresh benzene | kmol/h | 98.78 | 95.09 | 95.00 | 153.87
Product | kmol/h | 92.86 | 92.94 | 93.67 | 150.47
Total capital cost | $10^6 | 4.11 | 4.26 | 4.26 | 4.26
Total energy cost | $10^6/year | 2.23 | 2.35 | 2.68 | 3.43
Benzene cost | $10^6/year | 59.36 | 57.14 | 57.09 | 92.47
Propylene cost | $10^6/year | 30.63 | 30.63 | 30.62 | 52.59
Reactor steam credit | $10^6/year | 0.40 | 0.54 | 0.53 | 0.86
Vent (B1) credit | $10^6/year | 1.59 | 0.70 | 0.59 | 1.84
Heavy (B2) credit | $10^6/year | 0.71 | 0.48 | 0.38 | 0.95
Product revenue | $10^6/year | 107.74 | 107.87 | 108.72 | 174.64
Total operational cost | $10^6/year | 89.52 | 88.40 | 88.89 | 144.88
Total operational profit (J) | $10^6/year | 18.23 | 19.47 | 19.83 | 29.76

Price data: HP steam $9.83/GJ, steam generated $6.67/GJ, electricity cost $16.8/GJ, benzene price $68.6/kmol, propylene price $34.3/kmol, cumene price $132.49/kmol.
Table 2. Steady state degrees of freedom

Process variable | Value | DOF
Fresh propene flow rate | 101.93 kmol/h # | 0/1 *
Total benzene flow rate | Variable | 1
Furnace outlet temperature | Variable | 1
Reactor cooler temperature | Fixed | 0
Condenser temperature | 32.25 °C | 0
Column 1: xC3,B, xC9,D | Variable, Variable | 2
Column 2: xC6,B, xC9,D | Variable, 0.999 | 1
Column 3: xC12,B | Variable | 1
#: Fixed for Mode I. *: Degree of freedom for Mode II.
3.2. Optimization results
Ideally, all dofs in Table 2 should be optimized simultaneously. However, to overcome convergence issues in UniSim, the separation section is optimized first, followed by the rest of the plant (see e.g. Araujo et al., 2007). The optimization results obtained are summarized in Table 3. For Mode I operation, none of the constraints are active, while in Mode II operation (optimal throughput) the maximum furnace duty and product column boilup constraints are active. From an economic point of view, it is optimal to increase the Mode I feed rate without violating the constraints of the plant. As the propylene feed rate is
increased, the profit increases due to higher production. The first constraint to become active is the maximum furnace heating. However, this is not the real bottleneck, as the feed rate can be further increased by lowering the reactor inlet temperature and/or the recycle benzene flow, hence increasing the profit. As the throughput is further increased, the maximum product column boilup constraint becomes active for a fixed DIPB mol fraction in the product column bottoms. This mol fraction may be decreased to further increase the throughput and profit with the boilup constraint active. The DIPB mol fraction can, however, not be decreased too much, as the profit then decreases due to cumene product loss in the heavy fuel stream. The reported column 3 xC12,B value in Table 3 corresponds to this limit of maximum operating profit.

Table 3. Summary of Mode I and Mode II optimization results

Process variable | Mode I type | Mode I value | Mode II type | Mode II value
Fresh propene | Fixed | 101.93 kmol/h | Variable | 175.02 kmol/h
Total benzene | Variable | 245 kmol/h | Variable | 269.7 kmol/h
Rxr inlet temperature | Variable | 361 °C | Max furnace duty * | 346.99 °C
Cooler temperature | Fixed | 100 °C | Fixed | 100 °C
Top T | Fixed | 32.25 °C | Fixed | 32.25 °C
Column 1 xC3,B | Variable | 0.01 | Variable | 0.01
Column 1 xC9,D | Variable | 5.5x10^-3 | Variable | 0.0012
Column 2 xC12,B | Variable | 2.7x10^-4 | Variable | 3.5x10^-4
Column 2 xC9,D | Fixed | 0.999 | Fixed | 0.999
Column 3 xC12,B | Variable | 0.9542 | Max boilup * | 0.9628
*: Variable is fixed by this constraint.
4. Self-optimizing Controlled Variables
Skogestad (2004) states that self-optimizing control is achieved when one can obtain an acceptable economic loss with constant setpoints for appropriately chosen/designed controlled variables, without the need to re-optimize for disturbances. In this work, the four disturbances listed in Table 4 are considered.

Table 4. Set of disturbances considered

SN | Disturbance variable | Nominal value | Change
d1 | Propylene flow rate | 101.93 kmol/h | -10 kmol/h
d2 | Column 1 condenser temperature | 32.25 °C | +3 °C
d3 | Inert composition in the propylene feed | 5% propane | +3%
d4 | Propylene flow rate | 101.93 kmol/h | +10 kmol/h
4.1. Mode I Self-Optimizing Controlled Variables
For each of the four disturbances, the plant is sequentially reoptimized over all 6 unconstrained dofs (see Table 2). We also reoptimize the process keeping the distillation column mole recoveries constant (i.e., using 6 - 4 = 2 degrees of freedom). The difference in the objective function between the two cases was observed to be very small for all the disturbances (< 0.07%). Hence we choose to use the distillation column mole recoveries as controlled variables, for two reasons: first, the resulting loss values are very small; second, it reduces the number of self-optimizing variables to be determined and greatly simplifies the further analysis, as we are left with only 2 input variables instead of 6. To choose the remaining two self-optimizing controlled variables, we use the "exact local method" (Halvorsen et al., 2003), which minimizes the worst-case loss due to
a suboptimal self-optimizing control policy. The branch-and-bound algorithm of Kariwala (2007) is used for the evaluation of the loss. Seven candidate controlled variables, namely the reactor inlet temperature, preheater duty, fresh benzene flow rate, total benzene flow rate, reactor feed benzene to propane ratio, reactor feed benzene mol fraction and vaporizer outlet temperature, are evaluated. The best set of two self-optimizing variables for Mode I operation is thus found to be the reactor inlet temperature and the reactor feed benzene to propylene ratio.
4.2. Mode II Self-Optimizing Controlled Variables
The maximum furnace duty and maximum product column boilup are the two active constraints in Mode II. This leaves 5 (7 dof - 2 active constraints) unconstrained dofs, for which we need to find 5 self-optimizing controlled variables. Similarly to Mode I, the column purity specifications, namely column 1 xC3,B and column 2 xC9,D and xC6,B, when kept at their optimized nominal values result in negligible loss for the set of disturbances considered (note that column 3 xC12,B is fixed by the maximum boilup constraint). As in Mode I, the exact local method is used to select the best self-optimizing variables for the remaining two unconstrained dofs. The best set was found to be the fresh benzene flow rate and the reactor inlet propylene mol fraction. The economic loss for the next best set, the total benzene flow and the reactor feed benzene to propylene ratio, is only slightly higher. Since the latter variable is a self-optimizing variable also in Mode I, we select this set as our choice of controlled variables in Mode II to simplify the transition from Mode I to Mode II. The transition would then only require replacing the reactor inlet temperature controller with the total benzene flow controller.
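A schematic numpy rendition of the exact local method loss evaluation referred to above is sketched below; all matrices (Hessians Juu, Jud, gain matrices G, Gd and scaling matrices Wd, Wn) are assumptions to be extracted from a linearized model around the nominal optimum, and a Cholesky factor stands in for the square root of Juu:

```python
import numpy as np

def worst_case_loss(G, Gd, Juu, Jud, Wd, Wn):
    """Exact local method (Halvorsen et al., 2003) for a square candidate
    selection of controlled variables: worst-case loss is
    0.5 * sigma_max([Md Mn])^2."""
    F = np.linalg.cholesky(Juu).T         # F with F^T F = Juu
    Ginv = np.linalg.inv(G)
    Md = F @ (np.linalg.solve(Juu, Jud) - Ginv @ Gd) @ Wd
    Mn = F @ Ginv @ Wn
    M = np.hstack([Md, Mn])
    return 0.5 * np.linalg.svd(M, compute_uv=False)[0] ** 2
```

Ranking candidate sets then amounts to evaluating this loss for each selection and keeping the smallest, which is what the branch-and-bound algorithm of Kariwala (2007) accelerates.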
5. Conclusion and future work
In this work, a cumene production plant has been systematically analyzed for economically optimal operation at given throughput (Mode I) and optimum throughput (Mode II). Results show that in Mode I operation the optimized unconstrained column product purities are self-optimizing, along with the reactor inlet temperature and the reactor feed benzene to propylene ratio. In Mode II, the maximum furnace duty and product column boilup constraints are active. The self-optimizing variables are again the unconstrained column product purities, along with the total benzene flow to the reactor and the reactor feed benzene to propylene ratio. Further work will focus on developing a plantwide control structure for the process and its dynamic validation.
References
1. Luyben, W.L., 2010, Design and control of the cumene process, Ind. Eng. Chem. Res., 49(2), 719.
2. Skogestad, S., 2000, Plantwide control: the search for the self-optimizing control structure, J. Proc. Cont., 10, 487.
3. Skogestad, S., 2004, Control structure design for complete chemical plants, Comp. Chem. Engg., 28, 219-234.
4. Halvorsen, I.J., Skogestad, S., Morud, J.C., Alstad, V., 2003, Optimal selection of controlled variables, Ind. Eng. Chem. Res., 42, 3273.
5. Kariwala, V., 2007, Optimal measurement combination for local self-optimizing control, Ind. Eng. Chem. Res., 46, 3629.
6. Araujo, A., Govatsmark, M., Skogestad, S., 2007, Application of plantwide control to the HDA process. I - Steady-state optimization and self-optimizing control, Cont. Engg. Practice, 15, 1222.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A robust optimization based approach to the general solution of mp-MILP problems Martina Wittmann-Hohlbein, Efstratios N. Pistikopoulos Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College, London SW7 2BY, U.K.
Abstract In this work, we focus on the approximate solution of multi-parametric mixed integer linear programming (mp-MILP) problems involving objective function (OFC), left-hand side (LHS) and right-hand side (RHS) uncertainty. A two-step algorithmic procedure is proposed. In the first step a partial immunization against the uncertainty is performed leading to a robust RIM-mp-MILP problem, whereas in the second step explicit optimal solutions of the robust model are derived by applying a decomposition algorithm. Computational studies are presented, demonstrating that (i) the robust RIM-mp-MILP counterpart is less conservative than the conventional robust MILP model, and (ii) the combined robust/multi-parametric procedure is computationally efficient, providing a tight upper bound to the overall global solution of the general mp-MILP problem. Keywords: multi-parametric programming, robust optimization, mixed-integer linear programming.
1. Introduction We consider the multi-parametric mixed integer optimization problem (P) ݖሺߠሻ ؔ ݉݅݊௫ǡ௬ ሺሺܿ ߠܪሻ் ݔ ሺ݀ ߠܮሻ் ݕሻ (P) s.t. ܣሺߠሻ ݔ ܧሺߠሻ ݕ ܾ ߠܨ ܴ א ݔ ǡ א ݕሼͲǡͳሽ ܴ א ߠ אȣ ؔ ሼߠ ܴ א ȁߠ ߠ ߠ௫ ǡ ݈ ൌ ͳǡ ǥ ǡ ݍሽǡ where ߠ denotes the vector of parameters and ܣሺߠሻǣ ൌ ܣே σୀଵ ߠ ܣ ǡ analogously forܧሺߠሻ. We assume that all matrices and vectors have appropriate dimensions. In the following we will denote by the lower case letter with subscript ݅, for instance ሾܽ ሿ, the column vector of entries related to the ݅-th row of the corresponding matrix. The presence of uncertainty in mixed integer linear programming models, employed in widespread application fields, including planning/scheduling, hybrid control and process synthesis, significantly increases the complexity and computational effort in retrieving explicit optimal solutions. Our aim is to find solutions of (P) that (i) are good approximations of the optimal solution and (ii) can be obtained efficiently. In this work, we apply suitable robust optimization techniques to derive solutions of (P). Our approach, denoted as a two-stage method for the solution of general mp-MILP problems, differs from existing methods as we are foremost interested in an immunization against LHS-uncertainty. We formulate a robust counterpart of type RIM-mp-MILP with only OFC- and RHS-uncertainty in the model that closely resembles the parametric nature of the original mp-MILP problem -
problems of this type can be efficiently solved using the algorithm proposed by Faísca et al. (2009). The method is described next.
2. A two-stage method for the solution of general mp-MILP problems
2.1. The worst-case oriented partially robust counterpart of (P)
The pair (x̄, ȳ) is called an LHS-robust feasible solution of (P) if
A_N \bar{x} + E_N \bar{y} + \sum_{l=1}^{q} \theta_l \left( A_l \bar{x} + E_l \bar{y} \right) \le b + F\theta   (1)

for any θ ∈ Θ. Incorporating (1) into (P) and introducing q auxiliary variables and additionally 2q linear constraints for each constraint leads to the formulation of the robust counterpart of the general mp-MILP problem. The partially robust counterpart (RC) associated with (P) is given by

\bar{z}(\theta) := \min_{x,y,u} \; (c + H\theta)^T x + (d + L\theta)^T y
\text{s.t.} \;\; [a_i^N]^T x + [e_i^N]^T y + \sum_{l=1}^{q} \left( \theta_l^N \left( [a_i^l]^T x + [e_i^l]^T y \right) + r_l\, u_{il} \right) \le b_i + [f_i]^T \theta, \quad i = 1, \ldots, m   (RC)
-u_{il} \le [a_i^l]^T x + [e_i^l]^T y \le u_{il}, \quad l = 1, \ldots, q, \; i = 1, \ldots, m
x \in R^n, \;\; y \in \{0,1\}^p, \;\; u_i \in R^q, \; i = 1, \ldots, m
\theta \in \Theta := \{\theta \in R^q \,|\, \theta_l^{min} \le \theta_l \le \theta_l^{max}, \; l = 1, \ldots, q\}

where r_l := (θ_l^max - θ_l^min)/2 denotes the range and θ_l^N = θ_l^max - r_l the nominal value of θ_l. The robust model (RC) is a RIM-mp-MILP problem. Every feasible solution of (RC) is an LHS-robust feasible solution of (P). Note that the conventional robust counterpart (cvRC) of (P) corresponds to a fully deterministic MILP problem (Lin et al. (2004)). The solutions of (cvRC) are immune against all data variations in (P). Clearly, every feasible solution of (cvRC) is also feasible for (RC), and consequently for (P).
2.2. A decomposition algorithm for (RC)
We outline the steps of the algorithm presented in Faísca et al. (2009). The master problem (M) is derived from the RIM-mp-MILP problem (RC) by treating the parameter θ as an optimization variable. Due to the bilinear terms in the objective function, it corresponds to a nonlinear and non-convex optimization problem. The optimal integer node of (M) is input to (RC), which then results in an mp-LP subproblem (S). The critical regions of (S), each a subset of Θ in which a particular basis remains optimal, are uniquely defined by the LP optimality conditions (Gal (1979)). Between every master and sub-problem iteration, the MINLP master problem is updated. A new MINLP master problem is solved for each one of the current critical regions. Integer cuts are introduced into the formulation of (M) in order to exclude previously visited integer solutions. Parametric cuts ensure that only integer nodes that are optimal for (RC) for a certain realization of the parameters are considered. The cuts are given by
݇ ൌ ͳǡ ǥ ǡ ܭǡ
אೖ
where K denotes the number of previously identified integer solutions in this region which have been marked optimal, ܬ ؔ ሼ݆ȁݕ ൌ ͳሽ and ܮ ؔ ሼ݆ȁݕ ൌ Ͳሽ respectively, and ȁ ڄȁ corresponds to the cardinality, and
(c + H\theta)^T x + (d + L\theta)^T y \le \bar{z}_k(\theta), \qquad k = 1, \ldots, K

where z̄_k(θ) is the optimal objective value of (RC) at the integer node related to index k. The algorithm terminates in a region where the master problem is infeasible. In order to keep the number of non-convex optimization problems to a minimum, further comparison procedures are omitted. Instead, we retain an envelope of parametric profiles (Dua et al. (2002)) and collect all integer nodes and corresponding continuous solutions that have been identified to be optimal for certain points within a critical region. Function evaluation of the objective values for the parametric profiles stored in the envelope determines the optimal solution of (RC) at any parameter point. We observe the following properties when the decomposition algorithm is applied to (RC): the critical regions are polyhedral and convex, and the solutions stored in the envelope of parametric profiles of (RC) are piecewise affine functions.
2.3. Explicit solution of the general mp-MILP problem (P)
The decomposition algorithm outlined in the previous section can be readily extended to address problem (P). If the coefficients of the constraint matrices in (P) are uncertain, the critical regions identified need not be convex, and the solutions stored in the envelope of parametric profiles of (P) are piecewise fractional polynomial functions. The solution of mp-LP sub-problems with LHS-uncertainty is the bottleneck in solving the general mp-MILP problem (P): it either involves enumeration of the parameter space to retrieve the exact solution (Li et al. (2007)), or else an approximation of the solution via global optimization procedures (Dua et al. (2004)). This difficulty is the driving motivation for finding a suitable reduction of the model (P), in order to reduce the computational complexity of the decomposition algorithm and to obtain a competitive, close-to-optimal solution of (P).
The proposed two-stage method consists of recasting (P) as the partially robust RIM-mp-MILP model (RC) as described in Section 2.1, before applying the decomposition algorithm outlined in Section 2.2. An upper bound for the optimal objective value of (P) is obtained. Note that a lower bound for the optimal objective value of (P), to serve as a reference value, can be obtained by solving to global optimality the deterministic MINLP problem derived from (P) in which θ is treated as an additional optimization variable.
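The two computational kernels of this section are easy to sketch in numpy: assembling the row data of the partially robust counterpart (RC) from the interval description of Section 2.1, and the function evaluation over the envelope of parametric profiles. The data layouts below are assumptions for illustration only:

```python
import numpy as np

def rc_row_data(AN, Als, EN, Els, theta_min, theta_max):
    """Assemble (RC) constraint data: ranges r_l, nominal values theta_N,
    and the LHS matrices with the nominal uncertain part absorbed,
    A_bar = A_N + sum_l theta_l^N A_l (likewise E_bar). In row i, the
    coefficient of the auxiliary variable u_il is r_l; the linking
    constraints -u_il <= [a_i^l]^T x + [e_i^l]^T y <= u_il complete
    the RIM-mp-MILP passed to the decomposition algorithm."""
    theta_min = np.asarray(theta_min, float)
    theta_max = np.asarray(theta_max, float)
    r = (theta_max - theta_min) / 2.0
    theta_N = theta_max - r
    A_bar = AN + sum(tN * Al for tN, Al in zip(theta_N, Als))
    E_bar = EN + sum(tN * El for tN, El in zip(theta_N, Els))
    return A_bar, E_bar, r, theta_N

def evaluate_envelope(theta, envelope):
    """Pick the best profile stored in the envelope at a parameter point:
    each entry is (A_cr, b_cr, c0, c1, node) with critical region
    {theta : A_cr @ theta <= b_cr} and affine value c0 + c1 @ theta."""
    theta = np.asarray(theta, float)
    best_val, best_node = np.inf, None
    for A_cr, b_cr, c0, c1, node in envelope:
        if np.all(A_cr @ theta <= np.asarray(b_cr) + 1e-9):
            val = c0 + float(np.dot(c1, theta))
            if val < best_val:
                best_val, best_node = val, node
    return best_val, best_node
```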
3. Applications of the two-stage method

Example 1. Consider the problem (P1) and its partially robust counterpart (RC1), both with objective

z(θ) = min_{x,y} θ₁x₁ + x₂ + y₁   (denoted z̄(θ) for (RC1)),

subject to linear constraints in x ≥ 0 and y ∈ {0,1}² that carry LHS-, OFC- and RHS-uncertainty (among them −x₁ + x₂ + x₃ = θ₂ + 2y₁ and y₂ − y₁ ≤ 0), with −5 ≤ θ ≤ 5. The application of the proposed two-stage method to (P1) required the solution of 9 MINLP and 2 mp-LP problems, returning 6 convex critical regions where an optimal solution exists, as depicted in Figure 1.a. In contrast, the decomposition algorithm applied to (P1) required the solution of 15 MINLP and 4 mp-LP problems, computing a total of 9 convex and non-convex critical regions (Figure 1.b). CR₋∞ marks a region where (P1) is unbounded. The parametric profiles obtained by the two-stage method provide an upper bound on the exact optimal objective value of (P1), i.e. on the value
related to the best solution among the profiles stored in the envelope with respect to its exact solution for any parameter point. As an example, consider θ ∈ (CR̄₁ ∪ CR̄₂) ∩ CR₋∞, for which z(θ) = −∞, but −∞ < z̄(θ) < ∞ holds true. The conventional robust counterpart of (P1), whose solutions are immunized against LHS-, OFC- and RHS-uncertainty, is infeasible for every parameter realization.
Figure 1: Critical regions of (P1) with (a) two-stage method and (b) decomposition algorithm
Example 2. We consider a sequential scheduling problem with uncertain processing and set-up times (Ryu et al. (2007)). The process consists of two stages with one unit per stage. Three products A, B and C are being processed. The production time of B, denoted by θ₁, is unknown but bounded. After two products have been processed at the final stage, this stage may become unavailable for as long as half of the completion time needed for the first two products. The latter variability is modeled as LHS-uncertainty. The objective is to minimize the make-span. The application of the proposed two-stage method required the solution of 3 MINLP problems and 2 mp-LP problems, whereas the decomposition algorithm executed 5 MINLP and 2 mp-LP problems. The parametric profiles derived with both methods are given in Table 1 and Table 2, and the corresponding critical regions are depicted in Figure 2. The optimal make-span obtained with the two-stage method is independent of θ₂, the parameter associated with the uncertain set-up time. It yields an overall tighter approximation of the optimal make-span of the original scheduling problem than the optimal make-span of the conventional robust counterpart (Figure 2.b).

Table 1: Parametric profiles of Example 2 with the two-stage method

Critical region                       Optimal make-span   Optimal sequence
CR̄₁ = {3 ≤ θ₁ ≤ 6, 0 ≤ θ₂ ≤ 0.5}     1.5θ₁ + 15          A-B-C
CR̄₂ = {6 ≤ θ₁ ≤ 8, 0 ≤ θ₂ ≤ 0.5}     θ₁ + 18             A-C-B

Table 2: Parametric profiles of Example 2 with the decomposition algorithm

The envelope comprises the profiles θ₁(θ₂ + 1) + 8θ₂ + 11 (optimal sequence A-B-C) and θ₁ + 12θ₂ + 12 and θ₁ + 18 (optimal sequence A-C-B), defined over the critical regions CR₁–CR₃ with 3 ≤ θ₁ ≤ 8 and 0 ≤ θ₂ ≤ 0.5, whose boundaries involve the fractional expression (5 − θ₁)/(8 + θ₁) and the threshold θ₂ = 0.068.
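As a quick check of Table 1 (region bounds as reconstructed there), the two-stage make-span can be evaluated directly; note its independence of θ₂. A sketch (ours, not the authors' code):

    def two_stage_makespan(theta1):
        """Evaluate the two-stage profiles of Table 1 (theta2-independent)."""
        if 3 <= theta1 <= 6:        # region CR1-bar, sequence A-B-C
            return 1.5 * theta1 + 15, "A-B-C"
        if 6 < theta1 <= 8:         # region CR2-bar, sequence A-C-B
            return theta1 + 18, "A-C-B"
        raise ValueError("theta1 outside the parameter range [3, 8]")

    print(two_stage_makespan(6.0))  # boundary point: both profiles give 24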
Figure 2: (a) Critical regions with decomposition algorithm, and (b) optimal make-span of Example 2 with two-stage method and of its conventional robust counterpart
4. Conclusions

In order to obtain close-to-optimal solutions of the general mp-MILP problem (P), we propose a novel multi-parametric partially robust counterpart of type RIM-mp-MILP which, compared to the original problem, is computationally less expensive to solve with the decomposition algorithm: it requires fewer iterations and avoids either discretization of the parameter space or additional global optimization procedures. The second advantage of the proposed two-stage method is the generation of convex critical regions, which significantly simplifies the characterization of the parameter space. A further benefit of the two-stage approach is the low degree of conservatism of the new robust model compared to the conventional deterministic worst-case robust counterpart. Therefore, we believe the combined robust/parametric optimization approach for general mp-MILP problems to be an attractive alternative to the expensive explicit solution of the original problem and to the overly pessimistic results obtained by conventional robust programming.
Acknowledgements Financial support from EPSRC (EP/G059071/1, EP/I014640) and from the European Research Council (MOBILE, ERC Advanced Grant, No: 226462) is gratefully acknowledged.
References
Li Z, Ierapetritou MG (2007). A new methodology for the general multiparametric mixed-integer linear programming (MILP) problems. Ind. Eng. Chem. Res. 46(15):5141-5151.
Lin X, Janak SL, Floudas CA (2004). A new robust optimization approach for scheduling under uncertainty: I. Bounded uncertainty. Comput. Chem. Eng. 28(6-7):1069-1085.
Faísca NP, Kosmidis VD, Rustem B, Pistikopoulos EN (2009). Global optimisation of multiparametric MILP problems. J. Glob. Optim. 45(1):131-151.
Gal T (1979). Postoptimal analyses, parametric programming and related topics. McGraw-Hill Inc., US.
Dua V, Bozinis NA, Pistikopoulos EN (2002). A multiparametric programming approach for mixed-integer quadratic engineering problems. Comput. Chem. Eng. 26(4-5):715-733.
Dua V, Papalexandri KP, Pistikopoulos EN (2004). Global optimization issues in multiparametric continuous and mixed-integer optimization problems. J. Glob. Optim. 30(1):59-89.
Ryu J, Dua V, Pistikopoulos EN (2007). Proactive scheduling under uncertainty: A parametric optimization approach. Ind. Eng. Chem. Res. 46(24):8044-8049.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A deterministic optimization approach for the unit commitment problem
Marian G. Marcovecchio a,b, Augusto Q. Novais a, Ignacio E. Grossmann c
a UMOSE/LNEG, Estrada do Paço do Lumiar 22, 1649-038, Lisbon, Portugal
b INGAR/CONICET, Instituto de Desarrollo y Diseño, Santa Fe, Argentina; UNL, Universidad Nacional del Litoral, Santa Fe, Argentina
c Department of Chemical Engineering, Carnegie Mellon University, USA
Abstract
Reliable power production is critical to the profitability of electricity utilities. This concern, together with the need for less dependence on fossil fuel consumption and for CO2 mitigation, is leading to the prospective use of combined conventional and alternative forms of energy generation as the most promising means to meet an increasing demand for electric power. Unit commitment (UC) arises in this context as a most critical decision process, involving a large number of interacting factors and therefore posing a complex optimization problem. As such, the UC problem has been receiving a good deal of attention in the literature, with heuristic approaches being dominant. As an alternative, a deterministic optimization approach is proposed in this paper and applied to the thermal UC problem. The model developed is a mixed-integer quadratic programming (MIQP) problem with the objective of minimizing the fuel consumption (calculated by a quadratic function) and start-up costs, with a strategy proposed for its solution that exploits the characteristics of the UC problem. This consists of valid integer cutting planes and a Branch and Bound (B&B) search, which are developed and combined, resulting in a Branch and Cut (B&C) algorithm particular to the UC problem. The approach is described and implemented to solve a reference case study. Although the UC problem is NP-hard, the results show that the proposed technique is capable of providing the optimal solution for real-world sized instances.
Keywords: Energy optimization, Unit Commitment problem, Deterministic optimization, Branch and Cut algorithm
1. Introduction
Regulated and deregulated industry organizations correspond to two different UC problems: Security Constrained (SCUC) and Price-Based (PBUC). In SCUC, the on/off states and production levels for given power generators are determined to meet a time-varying demand of electricity over a given time horizon, while satisfying constraints such as start-up and shut-down times, ramp up and ramp down limits, minimum up and down times and spinning reserve, among others. In the PBUC the decisions are taken
according to financial risks, under no obligation of satisfying the expected demand. A large number of papers address the UC problem, since a good solution can bring about considerable economic savings. However, for real-world instances the underlying optimization problem turns out to be highly combinatorial and hence NP-hard; hence the need for efficient methods able to reduce computational times. In the literature these fall into two categories: deterministic and heuristic [1]. This work addresses the SCUC problem with thermal generating units. A mathematical programming model is formulated, considering all its inherent constraints and a single set of binary variables (the on/off status of each generator at each period of time). The model is an MIQP, hard to solve through deterministic approaches for high-dimensional instances. A solution methodology based on valid integer cutting planes is developed and implemented to reach the global optimal solution, with a B&B search being defined that capitalizes on the proposed cuts. Finally, the model and technique are implemented and applied to an example with varying dimensions.
2. Mathematical problem formulation
The SCUC problem for thermal generating units can be described as follows: given I power units and a specified time-varying demand over T time periods, the problem consists in determining, for each unit, the start-up and shut-down schedules and power production level, in order to minimize the operational costs while meeting demand. In what follows the sub-indices i and t denote respectively units i=1,…,I and time periods t=1,…,T. The set of binary variables u_{i,t} represents the on/off status of unit i at time t; the sets of continuous variables p_{i,t}, cu_{i,t} and cd_{i,t} denote the power produced, start-up cost, and shut-down cost, respectively, for i at t. The mathematical programming model has therefore I×T binary and 3(I×T) continuous variables. The following mathematical formulation states the SCUC problem as an MIQP model, where the objective function to be minimized is the operating cost, which includes fuel consumption, start-up and shut-down costs:

min cost = Σ_{i=1..I} Σ_{t=1..T} [ (a_i·u_{i,t} + b_i·p_{i,t} + c_i·p_{i,t}²) + cu_{i,t} + cd_{i,t} ]    (1)

where a_i, b_i and c_i are the coefficients of the fuel cost function a_i + b_i·p_{i,t} + c_i·p_{i,t}². The constraints to be satisfied are given by equations (2) to (16):

Σ_{i=1..I} p_{i,t} ≥ D_t                                    t=1,…,T    (2)
Σ_{i=1..I} p_i^U·u_{i,t} ≥ D_t + R_t                        t=1,…,T    (3)
p_i^L·u_{i,t} ≤ p_{i,t} ≤ p_i^U·u_{i,t}                     i=1,…,I; t=1,…,T    (4)
u_{i,t} − u_{i,t−1} ≤ u_{i,t+j}                             i=1,…,I; t=1,…,T; j=1,…,(TU_i−1)    (5)
u_{i,t+j} ≤ u_{i,t} − u_{i,t−1} + 1                         i=1,…,I; t=1,…,T; j=1,…,(TD_i−1)    (6)
p_{i,t−1} − DR_i ≤ p_{i,t} ≤ p_{i,t−1} + UR_i               i=1,…,I; t=1,…,T    (7)
u_{i,t} = 0                                                 i / Tini_i<0; t=1,…,(TD_i+Tini_i)    (8)
u_{i,t} = 1                                                 i / Tini_i>0; t=1,…,(TU_i−Tini_i)    (9)
(u_{i,t} − u_{i,t−1})·Hsc_i ≤ cu_{i,t}                      t=2,…,T for i / Tini_i>0; t=1,…,T for i / Tini_i<0    (10)
(u_{i,t} − Σ_{j=1..TD_i+Tcold_i+1} u_{i,t−j})·Csc_i ≤ cu_{i,t}    i=1,…,I; (TD_i+Tcold_i) < t ≤ T    (11)
(u_{i,t} − Σ_{j=1..t−1} u_{i,t−j})·Csc_i ≤ cu_{i,t}         i / Tini_i<0; t ≤ (TD_i+Tcold_i+Tini_i+1)    (12)
0 ≤ cu_{i,t}                                                i=1,…,I; t=1,…,T    (13)
(u_{i,t−1} − u_{i,t})·Dc_i ≤ cd_{i,t}                       i=1,…,I; t=2,…,T    (14)
(1 − u_{i,1})·Dc_i ≤ cd_{i,1}                               i / Tini_i>0    (15)
0 ≤ cd_{i,t}                                                i=1,…,I; t=1,…,T    (16)
Eq. 2 and 3 represent, respectively, the system power balance and the spinning reserve requirement, where D_t is the power load demand for each period of time, R_t the spinning reserve required for each t, and p_i^U the maximum power generation of unit i; eq. 4 sets the units' generation limits, with p_i^L being the minimum generation of unit i; eq. 5 and 6 deal respectively with the unit minimum up time, TU_i, and down time, TD_i, while eq. 7 relates the unit power with the ramp down, DR_i, and ramp up, UR_i, rate limits; the initial status and the minimum up and down times of each unit can determine the status of the unit at some of the first periods, which is expressed by eq. 8 and 9, where Tini_i is an integer number indicating the initial status of unit i, i.e. the number of periods that unit i has been switched off (Tini_i<0) or turned on (Tini_i>0). Eq. 10, 11, 12 and 13 model the start-up cost for each unit, which is time-dependent, being low if the generator was down for a short period of time and high otherwise. Specifically, the start-up cost function is usually defined as cu_{i,t} = Hsc_i if the downtime does not exceed (TD_i+Tcold_i) and cu_{i,t} = Csc_i otherwise, where Hsc_i and Csc_i are the hot and cold start costs (Hsc_i ≤ Csc_i). Finally, eq. 14, 15 and 16 model the shut-down costs, with Dc_i the shut-down cost coefficient of unit i.
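To make the up/down-time logic of eqs. (5)-(6) concrete, the following Python sketch (ours; function and variable names are hypothetical) checks a candidate 0/1 schedule against it:

    def respects_min_up_down(u, TU, TD):
        """u[i][t]: 0/1 status; TU/TD: per-unit minimum up/down times."""
        I, T = len(u), len(u[0])
        for i in range(I):
            for t in range(1, T):
                if u[i][t] - u[i][t - 1] == 1:       # start-up at period t
                    if any(v == 0 for v in u[i][t:t + TU[i]]):
                        return False                 # violates eq. (5)
                if u[i][t - 1] - u[i][t] == 1:       # shut-down at period t
                    if any(v == 1 for v in u[i][t:t + TD[i]]):
                        return False                 # violates eq. (6)
        return True

    # A unit that starts up for a single period violates TU = 2:
    print(respects_min_up_down([[0, 1, 0, 0]], TU=[2], TD=[2]))  # False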
3. Deterministic optimization approach
The MIQP relaxation for the SCUC problem is inherently convex, in spite of the presence of integer variables, since all constraints are linear and the objective function is quadratic and convex. A deterministic optimization approach consisting of a B&C implementation is proposed, where appropriate integer cutting planes are defined, followed by a search that exploits the cuts' characteristics.
3.1. Integer cutting planes
The steps involved in the development of cuts are as follows. An upper bound for the objective function, cost^UP, is assumed to be available. The absolute and relative tolerances for global optimality are ε_abs and ε_rel, respectively.
Step 1:
For each time period t, the following two NLP problems are solved in turn:

P1: min A_t        P2: max A_t

s.t.: equations (2) to (16), and

A_t = Σ_{i=1..I} u_{i,t}    (17)
cost = Σ_{i=1..I} Σ_{t=1..T} [ (a_i·u_{i,t} + b_i·p_{i,t} + c_i·p_{i,t}²) + cu_{i,t} + cd_{i,t} ] ≤ cost^UP − ε_abs    (18)
0 ≤ u_{i,t} ≤ 1    i=1,…,I; t=1,…,T    (19)

Step 2:
For each time period t, let A_t^LO = ⌈A_t^{opt1}⌉ and A_t^UP = ⌊A_t^{opt2}⌋, where A_t^{opt1} and A_t^{opt2} are the optimal solutions found for P1 and P2, respectively. Note that the global optimal solutions can be successfully found for P1 and P2 since they are convex NLP problems where the integer variables have been relaxed to be continuous.
Step 3:
The following planes are valid integer cutting planes for the relaxed problem:

A_t^LO ≤ Σ_{i=1..I} u_{i,t}    (20)
Σ_{i=1..I} u_{i,t} ≤ A_t^UP    (21)
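In other words, Step 2 is a simple rounding of the relaxed commitment cardinalities. A minimal sketch (ours, with hypothetical names):

    import math

    def commitment_bounds(A_min_opt, A_max_opt):
        """A_min_opt / A_max_opt: optimal values of P1 (min) and P2 (max)."""
        A_lo = math.ceil(A_min_opt)    # sum_i u_{i,t} >= A_t^LO, eq. (20)
        A_up = math.floor(A_max_opt)   # sum_i u_{i,t} <= A_t^UP, eq. (21)
        return A_lo, A_up

    print(commitment_bounds(2.3, 5.8))  # (3, 5)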
3.2. Branch and Cut search
The cuts proposed in the previous section have proven highly efficient in reducing the relaxation gap. Hence, to take advantage of this reduction technique, an appropriate B&B search is defined, by incorporating at each node the cuts for improving the lower bounds and basing the branching on the cut approximations. The basic procedure for a B&B search is adopted and properly adapted to solve the SCUC problem. Next, the two proposed variations to the standard algorithm are specified.
Update of integer cutting planes: At the root node, all cuts are computed: two for each period of time. Thereafter, at each node, only two cuts are updated: the ones corresponding to the period of time where the branching has been performed (i.e., where a variable value has just been fixed). Each new node inherits its parent's updated cuts. However, once a new solution of the original MIQP problem has been found, all cuts are updated and passed on to the nodes that are still pending.
Branching variable selection: The branching is performed over the binary variables, since their integer attribute is the only constraint of the original problem that is not required in the relaxed one. The branching variable selection is carried out by following a priority order which is established for each node. First, the time period to perform the branching is chosen as the one having the largest difference (A_t^UP − A_t^LO) and at least one unit with a non-integer value. Once chosen, the unit is selected as the one with the largest difference between its current value in the relaxed solution and its rounded value to the nearest integer.
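The branching rule just described can be sketched as follows (our illustration; names are hypothetical, and A_lo/A_up are the per-period cut bounds of Section 3.1):

    def select_branching_variable(u_relaxed, A_lo, A_up, tol=1e-6):
        """u_relaxed[i][t]: relaxed solution; returns (unit, period) or None."""
        I, T = len(u_relaxed), len(u_relaxed[0])
        candidates = []
        for t in range(T):
            frac = [i for i in range(I)
                    if abs(u_relaxed[i][t] - round(u_relaxed[i][t])) > tol]
            if frac:
                candidates.append((A_up[t] - A_lo[t], t, frac))
        if not candidates:
            return None                    # relaxation is already integral
        _, t_star, frac = max(candidates)  # period with the largest bound gap
        i_star = max(frac, key=lambda i: abs(u_relaxed[i][t_star]
                                             - round(u_relaxed[i][t_star])))
        return i_star, t_star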
With the aim of saving computational time, a restricted local search is carried out to find the initial upper bound and to attempt improvements in the B&C tree.
4. Numerical results and Conclusions
The above optimization search is applied to solve a case study widely used as a test problem, which includes 10 thermal units and a scheduling time horizon of 24 hours [2]. The elementary problem is scaled up 10-fold, resulting in a real-size case study of 100 thermal units, where in each period of time 10% of the load demand is required as spinning reserve and no ramp rate constraints are imposed. The proposed technique and examples were implemented on an HP with Intel Core 2 Quad 2.4 GHz and 4 GB RAM memory. Table 1 reports the computational results obtained with the proposed B&C method, CPLEX, DICOPT and SBB [6]. A common relative tolerance of 0.3% is applied, since lower tolerances slow down convergence, with the exception of SBB, which reaches the time limit of 3600 s with a relative tolerance of 0.42%. This instance contains 2400 binary and 4800 continuous variables and 24009 constraints.

                   B&C         CPLEX       DICOPT        SBB
Objective value    5598364.9   5600881.2   5601969.5     5605013.4
CPU (s)            82.2        242.3       224.6         3600
Iterations         1           250332      40677         99937
Nodes              1           3751        4 major it.   2071

Table 1. Computational results for a 100-units system
It can be seen that, by comparison with CPLEX, DICOPT and SBB, the proposed B&C approach, while having only a small effect on the objective function value, markedly improves the CPU time. It can also be claimed, considering the dimension of the problem, that the computational requirements are modest. Numerous published papers provide solutions for this same problem [3-5]. Reference [3] reports solutions found by different authors employing various approaches, mostly heuristics. Based on those data, it can be concluded that the computational results presented herein are superior. Furthermore, unlike heuristic methods, this deterministic methodology offers a guarantee of global optimality; in fact, it can be asserted that there is no solution improving the one obtained by more than 0.3%.
References
[1] H.Y. Yamin, 2004, Review on methods of generating scheduling in electric power systems, Electric Power Systems Research, 69, 227-248
[2] S.A. Kazarlis, A.G. Bakirtzis, V. Petridis, 1996, A genetic algorithm solution to the unit commitment problem, IEEE Transactions on Power Systems, 11(1), 83-92
[3] T. Niknam, A. Khodaei, F. Fallahi, 2009, A new decomposition approach for the thermal unit commitment problem, Applied Energy, 86, 1667-1674
[4] E. Zondervan, I.E. Grossmann, A.B. de Haan, 2010, Energy optimization in process industries: Unit Commitment at systems level, Computer Aided Chemical Engineering, 28, 931-936
[5] M. Carrión, J.M. Arroyo, 2006, A Computationally Efficient Mixed-Integer Linear Formulation for Thermal Unit Commitment Problem, IEEE Transactions on Power Systems, 21, 1371-1378
[6] The GAMS Development Corporation Website, 2010. Available: http://www.gams.com
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21
E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors)
© 2011 Elsevier B.V. All rights reserved.
Tight Convex and Concave Relaxations via Taylor Models for Global Dynamic Optimization
Ali M. Sahlodin a and Benoît Chachuat a,b
a Department of Chemical Engineering, McMaster University, Hamilton, ON L8S 4L7, Canada.
b Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College London, South Kensington Campus, London SW7 2AZ, UK.
Abstract
This article presents a discretize-then-relax method to construct convex/concave bounds for the solutions of parametric nonlinear ODEs. It builds upon Taylor model methods for verified ODE solution. To enable the propagation of convex/concave state bounds, a new type of Taylor model is introduced, whereby the remainder term consists of convex/concave bounds in lieu of the usual interval bounds. At each time step, a two-phase procedure is applied for the verified integration. A priori convex/concave bounds that are valid over the entire time step are calculated in the first phase, then pointwise-in-time convex/concave bounds at the end of the time step are obtained in the second phase. The algorithm is demonstrated by the case study of a Lotka-Volterra system.
Keywords: Taylor models, Convex relaxations, McCormick relaxations, Ordinary differential equations, Global dynamic optimization.
1. Introduction
The problem of computing tight enclosures for the solutions of parametric ordinary differential equations (ODEs) is central to many deterministic global optimization methods for dynamic systems,

ẋ(t) = f(x(t), p),   t ∈ (t₀, t_f],    (1)

where p ∈ P are the parameters, with P := [p^L, p^U] ⊂ R^{np} an interval vector, and x(t) ∈ R^{nx} are the state variables, with initial conditions x(t₀) = h(p). A way to obtain enclosures is by constructing an auxiliary system of ODEs that describes interval or convex/concave bounds on the parametric solutions of (1), pointwise in the integration variable t [1]. However, the use of non-verified procedures to solve the auxiliary system can potentially lead to invalid bounds. Moreover, the wrapping effect and the dependency problem of interval analysis are usually not addressed in this approach. An alternative approach for computing the desired enclosures is to build upon traditional interval ODE methods [2], which discretize the time span and proceed in two phases at each step: (i) compute an a priori enclosure that is valid over the entire step; and (ii) compute a refined enclosure at the end of the step. This approach systematically accounts for truncation errors and has the capability to mitigate the wrapping effect. An extension of this approach has been proposed in [3], which uses Taylor models [4] and results in much tighter interval bounds. Recently, another extension has been proposed in [5] to compute convex and concave bounds that are guaranteed to be no looser than their interval analogs.
In this article, a new bounding technique is developed, whereby these two recent extensions are unified. The idea is to take advantage of Taylor models to mitigate the dependency problem, and apply the McCormick relaxation technique [6] to obtain convex/concave bounds that are tighter than their interval counterparts. Background on convex/concave relaxations as well as Taylor models is provided in §2, followed by a description of the proposed technique in §3. The algorithm is demonstrated by the case study of a Lotka-Volterra system in §4. Finally, §5 concludes the article.
2. Preliminaries
Convex and Concave Relaxations. Given a convex set P ⊂ R^{np} and a function g : P → R, a convex function g^cv : P → R is called a convex relaxation of g on P if g^cv(p) ≤ g(p) for all p ∈ P; a concave relaxation g^cc of g is defined likewise. The values of convex and concave relaxations of g at a given p ∈ P are called convex/concave bounds at p and are denoted by [g^cv, g^cc](p). The McCormick relaxation technique [6] is considered in this article to compute convex/concave bounds for factorable functions, which consist of a finite recursive composition of binary sums, binary products, and univariate functions. The methodology described later in the article also requires convex/concave relaxations of composite functions of the form φ(ξ(p)), where the outer function φ : Ξ → R is factorable, but not the inner function ξ : P → R. Given the convex relaxation ξ^cv and the concave relaxation ξ^cc of ξ on P, McCormick's composition theorem [6] provides a framework for computing convex/concave relaxations of φ(ξ(·)) on P as

φ^cv(mid{ξ^cv(p), ξ^cc(p), z^min}) ≤ φ(ξ(p)) ≤ φ^cc(mid{ξ^cv(p), ξ^cc(p), z^max}),

where φ^cv and φ^cc are respectively convex and concave relaxations of φ on Ξ (typically the convex and concave envelopes); z^min and z^max are points at which φ^cv and φ^cc attain their infimum and supremum on Ξ, respectively; and the mid function selects the middle value of three scalars. Recently, so-called generalized McCormick relaxations have been developed in [7], which guarantee that the computed convex/concave bounds are no weaker than their underlying interval bounds.
Taylor models. Given a set P ⊂ R^{np} and a function g : P → R, a qth-order Taylor model of g on P is defined as T_g = P_g + R_g, where P_g is an np-variate polynomial of order q and R_g is a remainder interval term such that g(p) ∈ P_g(p) + R_g for all p ∈ P. Similar to McCormick relaxations, Taylor models can be computed for any factorable function, provided that the univariate functions are (q+1) times continuously differentiable on their domain of definition [4]. Polynomial terms of order up to q are propagated via symbolic calculations, while polynomial terms of order higher than q and remainder interval terms are processed through interval arithmetic. Given a Taylor model T_g of g on P, the range of g on P can be over-approximated by bounding the range of P_g on P, and adding the result to R_g; see e.g. [4].
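As an illustration of the composition rule above, consider φ = exp on an interval Ξ = [ξL, ξU]: the convex envelope of exp is exp itself (so z^min = ξL) and the concave envelope is the secant (so z^max = ξU). A small Python sketch (ours, not part of the article):

    import math

    def mid(a, b, c):
        return sorted([a, b, c])[1]            # middle value of three scalars

    def exp_relaxation(xi_cv, xi_cc, xiL, xiU):
        """xi_cv/xi_cc: convex/concave bounds of the inner function at some p."""
        cv = math.exp(mid(xi_cv, xi_cc, xiL))  # convex bound of exp(xi(p))
        secant = lambda z: (math.exp(xiL) + (math.exp(xiU) - math.exp(xiL))
                            * (z - xiL) / (xiU - xiL))
        cc = secant(mid(xi_cv, xi_cc, xiU))    # concave bound of exp(xi(p))
        return cv, cc

    print(exp_relaxation(0.2, 0.6, 0.0, 1.0))  # valid bounds around e^{xi(p)}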
3. Convex/Concave Relaxations of Parametric ODEs
3.1. McCormick-Taylor Models
A new type of Taylor model, called a McCormick-Taylor model, is introduced in this work, wherein the remainder term consists of a pair of convex/concave bounds instead of
the usual interval bounds. Consider a function g : P → R on a convex set P ⊂ R^{np}, and let there be an np-variate polynomial P_g of order q and a pair of convex/concave functions r_g^cv and r_g^cc on P, such that g(p) ∈ P_g(p) + [r_g^cv(p), r_g^cc(p)] for all p ∈ P. A qth-order McCormick-Taylor model of g on P with convex/concave remainder bounds is defined as [T_g^cv, T_g^cc] := P_g + [r_g^cv, r_g^cc]. In general, neither T_g^cv nor T_g^cc are, respectively, convex and concave on P since the polynomial P_g itself is nonconvex/nonconcave on P. However, it is possible to relax a McCormick-Taylor model [T_g^cv, T_g^cc] by first computing convex/concave bounds [P_g^cv, P_g^cc] for its polynomial part using the generalized McCormick technique, and then adding the result to the remainder convex/concave bounds as [g^cv, g^cc](p) = [P_g^cv, P_g^cc](p) + [r_g^cv, r_g^cc](p). This way, the resulting convex/concave bounds are guaranteed to be no looser than the interval bounds derived from the corresponding Taylor model, as asserted by generalized McCormick relaxations [7]. Similar to Taylor models, a McCormick-Taylor model can be computed by propagating the polynomial terms of order up to q via symbolic calculations, whereas the convex/concave remainder terms and all polynomial terms of order higher than q are processed according to the rules of the (generalized) McCormick relaxation technique. On the whole, a qth-order McCormick-Taylor model can be computed recursively for any factorable function that is (q+1) times continuously differentiable on its domain set. In the analysis that follows, a McCormick-Taylor model of g with a centered remainder bound is denoted as [T_g^cv, T_g^cc]^C := P_g^C + [r_g^cv, r_g^cc]^C, with P_g^C = P_g + m(R_g) and [r_g^cv, r_g^cc]^C = [r_g^cv, r_g^cc] − m(R_g), where m(·) returns the midpoint of an interval.
3.2. Relaxation of Parametric ODEs using McCormick-Taylor Models
The proposed discretize-then-relax algorithm for parametric ODEs extends the Taylor model method presented in [3] to encompass McCormick-Taylor models. For a specified parameter value p ∈ P, the objective is to compute convex and concave bounds such that x(t_j; p) ∈ [x_j^cv, x_j^cc](p), at finite integration steps t_j, j = 0,…,N. However, it is the McCormick-Taylor models [T_{x_j}^cv, T_{x_j}^cc](p) of the functions x(t_j; ·) that are propagated at each t_j, rather than the desired convex/concave bounds [x_j^cv, x_j^cc](p) themselves. The latter are obtained afterwards by relaxing [T_{x_j}^cv, T_{x_j}^cc](p) as indicated above.
Assumptions. h : P → D is factorable and (q+1) times continuously differentiable on P, with q ≥ 1 the order of the Taylor models; f : D × P → R^{nx} is factorable and, respectively, (q+1) and (k−1) times continuously differentiable in p and x on D × P, with k > 1 the order of the Taylor series expansion and D ⊆ R^{nx} an open connected set.
The algorithm begins by constructing a McCormick-Taylor model of x(t₀; p) as [T_{x₀}^cv, T_{x₀}^cc](p) = [T_h^cv, T_h^cc](p). Next, a two-phase procedure is used to propagate the McCormick-Taylor model [T_{x_j}^cv, T_{x_j}^cc](p) at each integration step t_j, j = 0,…,N.
Phase I. Given a McCormick-Taylor model [T_{x_j}^cv, T_{x_j}^cc](p) of x(t_j; p), Phase I computes a stepsize h_j and an a priori McCormick-Taylor model [T_{x̃_j}^cv, T_{x̃_j}^cc](p) that encloses {x(t; p) : t_j ≤ t ≤ t_{j+1}}, with t_{j+1} := t_j + h_j. This is done as follows:∗

[T_{x̃_j}^cv, T_{x̃_j}^cc](p) = Σ_{i=0..k−1} [0, h_j^i] ⊗ [T_{f^[i]}^cv, T_{f^[i]}^cc]([T_{x_j}^cv, T_{x_j}^cc](p), p) + [0, h_j^k] f^[k](X̃_j^0, P),
∗ The binary operation ⊗ applies McCormick's product rule [6] so that the product between convex/concave bounds yields convex/concave bounds for the product.
where [T_{f^[i]}^cv, T_{f^[i]}^cc] are McCormick-Taylor models for the ith Taylor coefficients f^[i] in a Taylor series expansion of x with respect to t [2]:

x(t+h) = Σ_{i=0..k−1} h^i f^[i](x(t), p) + h^k f^[k](x(t+τ), p),   for some τ ∈ [0, h],

and X̃_j^0 and h_j > 0 are chosen such that

Σ_{i=0..k−1} [0, h_j^i] f^[i](X_j, P) + [0, h_j^k] f^[k](X̃_j^0, P) ⊆ X̃_j^0.
Phase II. Given a McCormick-Taylor model [T_{x_j}^cv, T_{x_j}^cc](p) of x(t_j; p) and an a priori McCormick-Taylor model [T_{x̃_j}^cv, T_{x̃_j}^cc](p) that encloses {x(t; p) : t_j ≤ t ≤ t_{j+1}}, Phase II computes a McCormick-Taylor model [T_{x_{j+1}}^cv, T_{x_{j+1}}^cc](p) of x(t_{j+1}; p). This can be done by using a high-order Taylor series expansion with the mean-value theorem, and choosing P_{x_j}^C(p) as the (variable) reference for the mean-value theorem, so that:

[T_{x_{j+1}}^cv, T_{x_{j+1}}^cc](p) = Σ_{i=0..k−1} h_j^i [T_{f^[i]}^cv, T_{f^[i]}^cc](P_{x_j}^C(p), p) + h_j^k [T_{f^[k]}^cv, T_{f^[k]}^cc]([T_{x̃_j}^cv, T_{x̃_j}^cc](p), p)
    + Σ_{i=0..k−1} h_j^i [T_{f_x^[i]}^cv, T_{f_x^[i]}^cc]([T_{x_j}^cv, T_{x_j}^cc]^C(p), p) ⊗ [r_{x_j}^cv, r_{x_j}^cc]^C(p),

where [T_{f_x^[i]}^cv, T_{f_x^[i]}^cc] denotes the McCormick-Taylor model of the Jacobian f_x^[i] := ∂f^[i]/∂x of the ith Taylor coefficients. Note that the wrapping effect caused by the matrix/vector product terms can be mitigated using Lohner's QR transformation technique (see [2]). Finally, the desired convex/concave bounds [x_{j+1}^cv, x_{j+1}^cc](p) are obtained by relaxing [T_{x_{j+1}}^cv, T_{x_{j+1}}^cc](p) on P.
4. Case Study
Consider the following Lotka-Volterra problem:

ẋ₁(t) = p x₁(t) [1 − x₂(t)];   x₁(0) = 1.2,
ẋ₂(t) = p x₂(t) [x₁(t) − 1];   x₂(0) = 1.1,

for t ∈ [0, 10], and p ∈ P := [2.95, 3.05]. The order of the Taylor series expansion and Taylor models are set to k = 10 and q = 4, respectively. The computation of McCormick-Taylor models is automated in our in-house package MC++ (http://www3.imperial.ac.uk/people/b.chachuat/research), which implements the generalized McCormick relaxation technique and Taylor models for factorable functions and can be used inside FADBAD++ (http://www.fadbad.com/) for automatic differentiation. Moreover, the LEPUS approach [2] is used for stepsize control during the integration, with initial stepsize h₀ = 0.01. Bounding trajectories for x₁ obtained from various approaches are compared on the left plot in Fig. 1. Early breakdowns of the differential inequality [1] and interval ODE [5, 2] approaches are attributed to the fact that the dependency problem or the wrapping effect are not handled properly.
Figure 1: Interval bounds (left plot: x₁ versus t; Taylor model method, differential inequalities, and interval method) and convex/concave relaxations at t = 10 (right plot: x₁ versus p; solution set, convex/concave bounds from this work, and interval bounds from VSPODE) for x₁.
Pointwise-in-time convex/concave relaxations of x₁ at t = 10, derived from the corresponding McCormick-Taylor model, are displayed on the right plot in Fig. 1. These plots are generated from the repeated computation of convex/concave bounds at a large number of points in [2.95, 3.05]. It is found that propagating convex/concave bounds for a specified p ∈ P roughly doubles the computational time compared to propagating Taylor model bounds only. Furthermore, the relaxations provide a much tighter enclosure than their interval counterparts. For validation, it is also checked that the bounds obtained with the solver VSPODE [3], using similar settings, agree with those obtained in this work.
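For intuition only (not a verified computation), the solution set on the right plot of Fig. 1 can be approximated by sampling p and integrating numerically. A sketch (ours, using SciPy); rigorous enclosures would instead require the verified method described above:

    import numpy as np
    from scipy.integrate import solve_ivp

    def lotka_volterra(t, x, p):
        return [p * x[0] * (1.0 - x[1]), p * x[1] * (x[0] - 1.0)]

    for p in np.linspace(2.95, 3.05, 11):      # grid over the parameter range
        sol = solve_ivp(lotka_volterra, (0.0, 10.0), [1.2, 1.1], args=(p,),
                        rtol=1e-10, atol=1e-10)
        print(f"p = {p:.3f}, x1(10) = {sol.y[0, -1]:.6f}")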
5. Conclusions
The proposed discretize-then-relax algorithm computes tight convex/concave bounds for the solutions of parametric ODEs. It incorporates new McCormick-Taylor models into a verified ODE method as a means to systematically address the wrapping effect and the dependency problem of interval arithmetic. Moreover, this algorithm rigorously accounts for truncation errors during the discretization. The resulting convex/concave bounds are guaranteed to be no looser than the corresponding Taylor model-derived bounds, and the computational time is typically less than twice the effort needed to produce the latter. On the whole, this approach appears to be well suited for use in deterministic global dynamic optimization.
Acknowledgments
Financial support from the NSERC of Canada, under Grant 372078-2009, is gratefully acknowledged.
References
[1] A. B. Singer and P. I. Barton. SIAM J Sci Comput, 27(6):2167–2182, 2006.
[2] N. S. Nedialkov, K. R. Jackson, and G. F. Corliss. Appl Math Comput, 105:21–68, 1999.
[3] Y. Lin and M. A. Stadtherr. Appl Numer Math, 57(10):1145–1162, 2007.
[4] K. Makino and M. Berz. Reliab Comput, 5(1):3–12, 1999.
[5] A. M. Sahlodin and B. Chachuat. Discretize-then-relax approach for convex/concave relaxations of the solutions of parametric ODEs. Appl Numer Math, doi:10.1016/j.apnum.2011.01.009.
[6] G. P. McCormick. Math Program, 10:147–175, 1976.
[7] J. K. Scott, M. D. Stuber, and P. I. Barton. Generalized McCormick relaxations. J Global Optim, doi:10.1007/s10898-011-9664-7.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21
E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors)
© 2011 Elsevier B.V. All rights reserved.
Simulation-based dynamic optimization of discretely controlled continuous processes
Mariano De Paula, Ernesto Martínez
INGAR (Conicet-UTN), Avellaneda 3657, Santa Fe, S3002 GJC, Argentina
Abstract
Discretely controlled continuous processes (DCCPs) are a special type of hybrid dynamical system which is of great practical relevance. In this work, a novel simulation-based approach to dynamic optimization under uncertainty of DCCPs is proposed using multi-modal Gaussian Process Dynamic Programming (mGPDP). A remarkable advantage of the proposed approach is that, instead of resorting to a global metamodel, which is very inefficient, mGPDP uses probabilistic models (Gaussian Processes) to simultaneously learn the transition dynamics descriptive of mode execution and to represent the optimal control policy for mode switching. Throughput maximization and smoothness in a typical PVC production line, in the face of significant schedule variability due to resource sharing, is used as a case study.
Keywords: Gaussian process, optimization, simulation, switched systems, uncertainty.
1. Discretely controlled continuous processes (DCCPs)
Discretely controlled continuous processes (DCCPs) are found in many dynamic optimization problems, such as throughput maximization when integrating batch operations with continuous processing, optimal sequencing of changeover operations in polymer production with varying qualities, and optimal control of safety-critical physiological systems (Barton, et al., 2006). DCCPs are made up of a continuous process and a discrete-event multi-modal controller arranged in a feedback loop, as shown in Fig. 1 (Goebel, et al., 2009; Lehmann and Lunze, 2010). For dynamic optimization under uncertainty of a DCCP, the optimal operating policy must switch among different modes of operation of the plant, which are associated with distinct continuous dynamics and goal-oriented feedback laws with meaningful terminating conditions. Switching modes can be made in response to the occurrence of a disturbing event e(t) or a controlled decision (Mehta and Egerstedt, 2006). The feedback law π_σ(y) for each mode σ depends on design parameters which are the decision variables for dynamic optimization. The optimal policy defines the best mode parameters for each observable state y(t). By an observable state it is meant that, regardless of the number of internal states x(t) and the complexity of the simultaneous differential-algebraic equations in the simulation model, only the "output" variables that make up the objective function are needed to find the optimal policy for a DCCP.
Fig. 1. Discretely controlled continuous processes.
2. Methodology
2.1. Learning the mode transition dynamics
In simulation-based dynamic optimization of DCCPs under parametric uncertainty, efficient modeling of mode transitions is of paramount importance. Let us assume these transitions correspond to:

y_{t+T} = f(y_t, u_t),    (1)
where T is the time interval for the implemented mode σ, ending when its terminating conditions are activated, and u_t is the vector of design parameters which define the feedback law π_σ(·) over T. Without any loss of generality, it has been assumed in (1) that feedback laws for different modes all have the same functional form but differ in their parameterization u. At all times, the manipulated input q(t) = π_σ(y) will depend on the chosen value of the parameter u for mode σ. To avoid a comprehensive metamodel, which is costly, it is proposed here to model mode transition dynamics based on designed interactions with a simulator of the DCCP. We assume that the process dynamics evolve smoothly during a mode execution interval. Moreover, we implicitly assume that output variability is due to parametric uncertainty. We utilize a GP model, the mode dynamics GP, to describe mode transition dynamics f ~ GP_f. A GP can be considered a distribution over functions. For each output y^i, a separate GP model is trained in such a way that the effect of parametric uncertainty is modeled statistically:

y_{t+T}^i − y_t^i ~ GP_f(m_f, k_f),    (2)
where m_f(x) is the mean function and k_f(x) the covariance function, also called a kernel (for details about regression with GPs see Rasmussen and Williams, 2006). The training inputs to the dynamics GP_f are (y, u) pairs, whereas the targets are the differences in (2).
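A minimal sketch of such per-output GP training (ours, using scikit-learn rather than the authors' implementation; the synthetic data below merely stand in for simulated mode transitions):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    Y_u = rng.uniform(0.0, 1.0, size=(50, 2))    # training inputs: (y, u) pairs
    dY = 0.5 * Y_u[:, 0] - 0.2 * Y_u[:, 1] + 0.01 * rng.standard_normal(50)

    gp_f = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
    gp_f.fit(Y_u, dY)                            # targets: differences y_{t+T} - y_t
    mean, std = gp_f.predict([[0.4, 0.6]], return_std=True)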
2.2. Gaussian process dynamic programming (GPDP) using modes
GPDP is a generalization of DP/value iteration to continuous state and action spaces using fully probabilistic GP models (Deisenroth, 2009). GPDP describes the value functions V_k* directly in function space by representing them using fully probabilistic GP models, which allows accounting for uncertainty in dynamic optimization. In this section, a mode-based abstraction is incorporated into the basic GPDP. Training inputs for the involved GP models are placed only in a relevant part of the state space which is reachable using a finite number of modes. A sketch of the mGPDP algorithm using the transition dynamics GP(m_f, k_f) and mode-based active learning is given in Fig. 2. The algorithm mGPDP starts from a small set of input locations y_N. Using mode-based active learning (line 5), new locations (states) are added to the current set y_k at any stage k. The sets y_k serve as training input locations for both the dynamics GP and the value function GPs. At each stage, the dynamics model GP_f is updated (line 6) to incorporate the most recent information from simulated transitions. Furthermore, the GP models of mode transitions f and the value functions V* and Q* are updated. After each mode is executed, the function g(x) is used to reward the transition, plus some noise w_g. A key idea in the algorithm mGPDP is that the set Y₀ is a multi-modal quantization of the state space based on Lebesgue sampling. In line (5) of the algorithm in Fig. 2, this quantization is generated using mode-based active learning. As a result, Y₀ is the set of all states that are reachable from given initial states using a sequence of modes of length less than or equal to N. For example, y_{N−1} is a set of observable states from which the goal state can be achieved by executing only one mode. Lebesgue sampling is far more efficient than Riemann sampling, which uses fixed time intervals for control.
Fig. 2. Mode-based Gaussian Process Dynamic Programming (mGPDP).
3. Case study
3.1. Problem statement
Dynamic throughput optimization plays a key role in hybrid chemical plants such as the Solvay PVC production line (Melas, 2003). To maximize the average plant productivity
Simulation-based dynamic optimization of discretely controlled continuous processes 545 a buffer tank is used, as it is shown in Fig. 3, to smooth the interface between the batch and continuous processes downstream. This type of process systems are generally made up of several parallel working units (e.g., batch reactors) “sharing” common resources, typically utilities such as cold water or steam. Batch reactors often operates by cyclically passing through a sequence of phases, namely loading of raw material, cooling, heating, reaction, etc.. Resource sharing by different reactors typically increase the duration of some tasks in the recipe which will alter the whole schedule of batches which discharge downstream to continuous units such as centrifuges and dryers. Such delays in reactor outflows drastically the schedule pattern of the flow rate entering to the buffer tank which makes throughput control a challenging problem (Simeonova, 2008). As a demonstrative example, let’s assume that there exist 5 batch reactors, the buffer tank has a volume of 0.5 m3 and the maximum allowable level is 0.5 m. At any time, the number of reactors that are discharging, the tank level and current inflow rate are known. The tank outflow discharged downstream is varied using the feedback law: ~ (3) f out U ht 0.05 ~ where Uis the mode parameter and h (t ) is the exponentially smoothed level based on ~ h (t )
h̃(t) = β·h̃(t−1) + (1−β)·h(t−1),   β = 0.025.    (4)
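The mode feedback law (3)-(4) can be evaluated recursively; the sketch below is ours, with illustrative level measurements, and the offset 0.05 in (3) enters as reconstructed above:

    def smoothed_level(h_prev_smooth, h_prev, beta=0.025):
        # eq. (4): exponential smoothing of the measured tank level
        return beta * h_prev_smooth + (1.0 - beta) * h_prev

    def outflow(rho, h_smooth):
        # eq. (3) as reconstructed above; rho is the mode parameter
        return rho * h_smooth + 0.05

    h_s = 0.10                            # initial smoothed level [m] (illustrative)
    for h in [0.10, 0.15, 0.22, 0.30]:    # measured tank levels [m] (illustrative)
        h_s = smoothed_level(h_s, h)
        print(outflow(1.5, h_s))          # rho = 1.5 is an arbitrary mode parameter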
The algorithm mGPDP has been applied to determine an optimal control policy that maximizes the plant throughput by rewarding mode transitions in such a way that the average outflow rate is increased without overflowing or interrupting the discharge downstream. Also, sudden changes to the outflow rate are penalized. After each mode has been implemented, the corresponding reward r(ρ) is calculated using:
r(ρ) = α·exp(−(1/2)·ΔF²/a²) + (1−α)·exp(−(1/2)·(1/F̄)²/b²)   if 0 ≤ h(t) ≤ h_max;   otherwise r(ρ) = −10,    (5)

where F̄ is the average flow rate, ΔF is the net increment in the outflow rate, and α = 0.7, a = 1/12, b = 5. All modes are stopped whenever a discharging event occurs.
Fig. 3. Hybrid chemical plant.
Fig. 4. Nominal schedule for a production campaign (discharge times of reactors R1–R5 over 0–300 min).
3.2. Results
Fig. 5(a) shows the results obtained when the optimal control policy is used to vary the flow rate downstream, assuming that the nominal discharge schedule in Fig. 4 defines the inflow rate patterns to the buffer tank. It is remarkable that, despite the variations in the inflow rate, throughput is stable and high over the production campaign. In Fig. 5(b), the same "nominal" policy has been applied to manage a different schedule. As can be seen, the outflow rate is properly managed, which highlights the robustness of the policy.
Fig. 5. Throughput optimization (inflow, outflow and tank level trajectories): (a) nominal schedule; (b) disturbed schedule.
4. Final remarks
A novel integration of dynamic programming with Gaussian Processes using modes has been proposed to determine an optimal policy for DCCPs. Throughput optimization in hybrid batch plants was discussed to highlight the potential of the proposed approach.
References
P. Barton, C. Lee, M. Yunt, 2006, Optimization of hybrid systems, Computers chem Engng, 30, 1576-1589.
M. Deisenroth, C. Rasmussen, J. Peters, 2009, Gaussian process dynamic programming, Neurocomputing, 72, 1508–1524.
R. Goebel, R. Sanfelice, A. Teel, 2009, Hybrid dynamical systems, IEEE Control Systems Magazine, April issue, 28–93.
D. Lehmann, J. Lunze, 2010, Extension and experimental evaluation of an event-based state-feedback approach, Control Engineering Practice, in press.
T. Mehta, M. Egerstedt, 2006, An optimal control approach to mode generation in hybrid systems, Nonlinear Analysis, 65, 963–983.
S. Melas, 2003, PVC line predictive inventory control: a production rate optimization for a hybrid system, Solvay-Group report, 1–34.
C. Rasmussen, C. Williams, 2006, Gaussian processes for machine learning, Adaptive Computation and Machine Learning series, The MIT Press, Cambridge, MA, USA.
I. Simeonova, 2008, On-line periodic scheduling of hybrid chemical plants with parallel production lines and shared resources, PhD Thesis, Université Catholique de Louvain.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21
E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors)
© 2011 Elsevier B.V. All rights reserved.
Evaluation of Steady State Multiplicity for the Anaerobic Degradation of Solid Organic Waste Mihaela Sbarciog, Andres Donoso-Bravo, Alain Vande Wouwer ∗ UMONS, Automatic Control Laboratory, 31 Boulevard Dolez, Mons 7000, Belgium
Abstract This paper evaluates the number and stability of steady states of a two-population model, describing the anaerobic degradation of solid organic waste. Analytical conditions are provided, which define in the input space regions characterized either by a different number or change of stability of steady states. The qualitative results do not depend on the parameter values and allow the proper selection of inputs and initial conditions to achieve a good operation of the process. Keywords: anaerobic digestion, microbial kinetics, mathematical modelling, nonlinear dynamics
1. Introduction
Anaerobic digestion is a consolidated technology because it has remarkable advantages, such as low energy consumption, less sludge production compared to aerobic processes and direct production of biogas. The total annual potential production of this renewable energy source is estimated at around 200 billion m³ (Appels et al., 2008). One of the most important applications is the biomethanization of organic solid waste, mainly sewage sludge and municipal solid waste. Several mathematical models have been developed for the anaerobic digestion process. The most well known is the Anaerobic Digestion Model No 1 described in (Batstone et al., 2002). However, its use in industrial applications is limited, due to the high number of variables and parameters involved, which makes the identification a delicate task. Therefore, simplified models have been developed in order to ease parameter identification, model validation and the design of control strategies. Commonly, the methanogenesis reaction is described as the critical step of the process, even though the hydrolysis reaction (the first step of the process) also plays an important role in the anaerobic digestion of solid waste, as it has a limiting effect on the process (Pavlostathis and Giraldo-Gomez, 1991; Batstone et al., 2009). The steady state behaviour of biological conversion systems (such as the anaerobic digestion process) is an important aspect for process design and control. Particularly, process conditions under which steady state multiplicity occurs should be accurately identified (Volcke et al., 2010). When the process is characterized by steady state multiplicity, its time evolution is not fully determined by the choice of inputs but also by the initial reactor conditions. This paper evaluates the occurrence of multiple steady states in a two-population model, describing the anaerobic digestion of solid waste. The stability of the physical steady states is assessed. Analytical conditions are provided, which define regions in the input space characterized by a different number of steady states and/or
∗ [email protected]
change of stability of the steady states. Simulation results are presented to illustrate the importance of initial conditions for the proper operation of the process.
2 Process model
The model, developed by Vavilin and Angelidaki (2005), considers two reactions: hydrolysis and methanogenesis. The biological transformations are described by the following reaction network:

ξ₁ −r₁→ aξ₂ + ξ₃,    bξ₂ −r₂→ CH₄ + ξ₄.    (1)
In the first reaction, acidogens (ξ₃) transform the particulate organic matter (ξ₁) into volatile fatty acids (ξ₂). In the second reaction, methanogens (ξ₄) consume the volatile fatty acids (ξ₂) and produce methane. a, b > 0 are the stoichiometric coefficients. The two reactions are characterized respectively by Contois and Haldane kinetics:

r₁(ξ) = μ₁(ξ)·ξ₃,   with μ₁(ξ) = μm1 · [ξ₁/(Kx·ξ₃ + ξ₁)] · [Ki1/(Ki1 + ξ₂)],
r₂(ξ) = μ₂(ξ)·ξ₄,   with μ₂(ξ) = μm2 · [ξ₂/(Ks + ξ₂)] · [Ki2/(Ki2 + ξ₂)].    (2)
For an ideal continuous stirred tank reactor, the system dynamics described by the reaction network (1) are given by the differential equations (3). The equivalent canonical state representation (4) can be obtained (Bastin and Dochain, 1990) by considering the state transformation x₁ = ξ₁ + ξ₃, x₂ = ξ₂ − aξ₃ + bξ₄, x₃ = ξ₃, x₄ = ξ₄, with the positiveness constraints of the original states

S_x = {x ∈ R⁴ : x₁ − x₃ ≥ 0, x₂ + ax₃ − bx₄ ≥ 0, x₃ ≥ 0, x₄ ≥ 0}.

D represents the dilution rate, while ξin1 and ξin2 respectively represent the concentrations of the particulate organic matter and volatile fatty acids in the influent.
ξ̇₁ = D(ξin1 − ξ₁) − r₁(ξ),  ξ̇₂ = D(ξin2 − ξ₂) + a·r₁(ξ) − b·r₂(ξ),  ξ̇₃ = −D·ξ₃ + r₁(ξ),  ξ̇₄ = −D·ξ₄ + r₂(ξ)    (3)

ẋ₁ = D(ξin1 − x₁),  ẋ₂ = D(ξin2 − x₂),  ẋ₃ = −D·x₃ + r₁(ξ),  ẋ₄ = −D·x₄ + r₂(ξ)    (4)

All steady states lie on the plane Δ = {x ∈ R⁴ : x₁ = ξin1, x₂ = ξin2} (Sbarciog et al., 2010).
3 Calculation of steady states The steady states are calculated from (4), by setting the derivative equal to zero. Consequently, the conditions defining Sx are checked to decide whether or not the steady state is physical. All steady states satisfy x1 = ξin1 , x2 = ξin2 , (−D + μ1 (ξ )) · x3 = 0, (−D + μ2 (ξ )) · x4 = 0. The last two expressions lead to several possibilities, which are briefly discussed below. A capital letter is assigned to each steady state. The analytical expressions of all steady states and the conditions under which they are physical are listed in Table 1.
1. x₃ = 0, x₄ = 0: this generates the steady state denoted by A, which represents the total wash out state of the system;
2. x₄ = 0, μ₁(ξ) = D: this generates the steady state denoted by B, which is characterized by the wash out of methanogens. ξ2,B is determined by ξ3,B, the unique positive solution of μ₁(ξ) = D, which can be rewritten as a 2nd order equation in ξ₃;
3. x₃ = 0, μ₂(ξ) = D: this possibility leads to two steady states denoted by C and D, characterized by the wash out of acidogens. ξ2,C < ξ2,D are the two solutions of μ₂(ξ) = D, which can be rewritten as a 2nd order equation in ξ₂;
4. μ₁(ξ) = D, μ₂(ξ) = D: two steady states E and F are obtained. ξ2,E < ξ2,F are the two solutions of μ₂(ξ) = D, while ξ3,E and ξ3,F are respectively the solutions of μ₁(ξ) = D for ξ₂ = ξ2,E and for ξ₂ = ξ2,F. These steady states correspond to the coexistence of the two species, a situation arising from a syntrophic relationship (methanogens consume the volatile fatty acids, which are inhibitory for themselves and to a lesser degree for acidogens).

Table 1: Analytical expressions and conditions for the occurrence of steady states

Steady state                                                                  Condition of occurrence
x_A = [ξin1, ξin2, 0, 0]ᵀ;  ξ_A = [ξin1, ξin2, 0, 0]ᵀ                          N/A
x_B = [ξin1, ξin2, ξ3,B, 0]ᵀ;  ξ_B = [ξin1 − ξ3,B, ξin2 + aξ3,B, ξ3,B, 0]ᵀ     D ≤ μm1·Ki1/(Ki1 + ξin2)
x_C = [ξin1, ξin2, 0, (ξin2 − ξ2,C)/b]ᵀ;  ξ_C = [ξin1, ξ2,C, 0, (ξin2 − ξ2,C)/b]ᵀ    D ≤ max(μ₂(ξ)); ξ2,C ≤ ξin2
x_D = [ξin1, ξin2, 0, (ξin2 − ξ2,D)/b]ᵀ;  ξ_D = [ξin1, ξ2,D, 0, (ξin2 − ξ2,D)/b]ᵀ    D ≤ max(μ₂(ξ)); ξ2,D ≤ ξin2
x_E = [ξin1, ξin2, ξ3,E, (ξin2 − ξ2,E + aξ3,E)/b]ᵀ;  ξ_E = [ξin1 − ξ3,E, ξ2,E, ξ3,E, (ξin2 − ξ2,E + aξ3,E)/b]ᵀ    D ≤ max(μ₂(ξ)); D ≤ μm1·Ki1/(Ki1 + ξin2); ξ2,E − aξ3,E ≤ ξin2
x_F = [ξin1, ξin2, ξ3,F, (ξin2 − ξ2,F + aξ3,F)/b]ᵀ;  ξ_F = [ξin1 − ξ3,F, ξ2,F, ξ3,F, (ξin2 − ξ2,F + aξ3,F)/b]ᵀ    D ≤ max(μ₂(ξ)); D ≤ μm1·Ki1/(Ki1 + ξin2); ξ2,F − aξ3,F ≤ ξin2
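For case 3 above, μ₂(ξ₂) = D reduces to the quadratic D·(Ks + ξ₂)·(Ki2 + ξ₂) = μm2·Ki2·ξ₂. A sketch (ours, not part of the paper) computing ξ2,C and ξ2,D with the kinetic parameters of Table 3:

    import numpy as np

    def xi2_roots(D, mu_m2=1.19, Ks=0.021, Ki2=1.5):
        """Roots of D*xi2^2 + (D*(Ks+Ki2) - mu_m2*Ki2)*xi2 + D*Ks*Ki2 = 0."""
        coeffs = [D, D * (Ks + Ki2) - mu_m2 * Ki2, D * Ks * Ki2]
        roots = np.roots(coeffs)
        return np.sort(roots[np.isreal(roots)].real)   # xi2_C, xi2_D if real

    print(xi2_roots(0.85))   # two positive roots: steady states C and D exist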
Table 1 shows that, except for the total wash out state, which is independent of the input values, one or more conditions must be satisfied for a steady state to be physical. These conditions are graphically illustrated in Fig. 1 by continuous lines. They define in the space ξin2 −D various regions, characterized either by a different number or type of steady states. The correspondence is given in Table 2. Note that only the last condition for x E , respectively xF , involves ξin1 . As an example, and in order to obtain a clear representation of the various regions, this condition has been evaluated for a specific value of ξ in1 rather than an interval, and led to the curve separating regions 3 and 5 from regions 2, 4 and 6. This implies that the size of these regions will change depending on the value of ξ in1 .
Figure 1: Regions corresponding to the occurrence of various steady states, in the plane of ξin2 (abscissa) versus D (ordinate, with marks at μm1 and max(μ₂)); the regions are numbered 1–8.

Table 2: Correspondence between the regions and the physical steady states

Region    Physical steady states
1         xA
2         xA, xB
3         xA, xB, xE
4         xA, xB, xE, xF
5         xA, xB, xC, xE
6         xA, xB, xC, xE, xF
7         xA, xB, xC, xD, xE, xF
8         xA, xC, xD
4 Stability of steady states The stability of the steady states is assessed using the linearization principle. The Jacobian matrix on the plane Δ is computed and the two eigenvalues are evaluated at each steady state. A steady state is (locally asymptotically) stable if both eigenvalues have negative real parts, and unstable if at least one eigenvalue has positive real part. The index of the steady state is the number of eigenvalues with positive real part. Hence, xA is stable in regions 1 and 8, has index 1 in 2, 3, 4, 7, and index 2 in regions 5 and 6; xB is stable in regions 2, 4, 6, 7, and has index 1 in regions 3 and 5; xC is stable in region 8 and has index 1 in regions 5, 6, 7; xD is always unstable, either with index 1 in region 8 or index 2 in region 7; xE is always stable and xF has always index 1 in the regions where they occur as physical steady states.
5 Discussion
A good operation of a continuous anaerobic digestion system aims at reaching a steady state characterized by the consumption of both particulate organic matter and volatile fatty acids, which is possible only if both types of bacteria are present in the reactor. There are two steady states which fulfill this condition, xE and xF. However, only xE is stable and will be reached in open loop. If ξin1 and ξin2 are such that (ξin2, D) is in region 3 or 5, then any initial condition characterized by the presence of both populations will lead the process to xE. On the contrary, if (ξin2, D) is in region 4, 6 or 7, then some initial conditions will lead the system to xE, while the rest will determine the convergence to xB, characterized by the wash out of methanogens and consequently by the accumulation of volatile fatty acids (see Fig. 2). The two sets of initial conditions are separated by the stability boundary of xE (illustrated by a dashed line in Fig. 2), which can be estimated as described by Sbarciog et al. (2008). Fig. 2 shows the phase portrait of the system on the plane Δ for ξin1 = 25 g/L, ξin2 = 0.02 g/L and D = 0.85 d⁻¹, which correspond to region 4. The numerical values for the system parameters are given in Table 3.
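The dependence on initial conditions can be probed by direct simulation of model (3) with the Table 3 parameters and the region-4 inputs above. A sketch (ours; the two initial states are illustrative, chosen to lie on either side of the stability boundary of Fig. 2):

    import numpy as np
    from scipy.integrate import solve_ivp

    mu1, Kx, Ki1, a = 6.8, 10.8, 15.0, 0.15
    mu2, Ks, Ki2, b = 1.19, 0.021, 1.5, 0.08
    D, xi_in1, xi_in2 = 0.85, 25.0, 0.02       # region-4 operating point

    def rhs(t, xi):
        x1, x2, x3, x4 = xi
        r1 = mu1 * x1 / (Kx * x3 + x1) * Ki1 / (Ki1 + x2) * x3
        r2 = mu2 * x2 / (Ks + x2) * Ki2 / (Ki2 + x2) * x4
        return [D * (xi_in1 - x1) - r1,
                D * (xi_in2 - x2) + a * r1 - b * r2,
                -D * x3 + r1,
                -D * x4 + r2]

    for xi0 in ([5.0, 0.5, 10.0, 40.0], [5.0, 0.5, 10.0, 1.0]):
        sol = solve_ivp(rhs, (0.0, 200.0), xi0, rtol=1e-8, atol=1e-10)
        print(sol.y[2:, -1])   # final acidogen/methanogen concentrations;
                               # trajectories settle at different steady
                               # states depending on the start (cf. Fig. 2)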
6 Conclusions
In this paper the occurrence of multiple steady states in a two-population model, describing the anaerobic digestion of solid waste, has been evaluated. Analytical conditions have
Figure 2: The phase portrait of the system on the plane Δ (x₃ versus x₄, with the steady states xA, xB, xE and xF marked) for a choice of inputs in region 4.

Table 3: Numerical values of model parameters and inputs

μm1 = 6.8 d⁻¹     Kx = 10.8 g/L     Ki1 = 15 g/L     a = 0.15     ξin1 ∈ [0, 30] g/L     D ∈ [0, 5] d⁻¹
μm2 = 1.19 d⁻¹    Ks = 0.021 g/L    Ki2 = 1.5 g/L    b = 0.08     ξin2 ∈ [0, 5] g/L
been provided, which define regions in the input space characterized by a different number and/or change of stability of the steady states. The qualitative results do not depend on the parameter values. When steady state multiplicity occurs, the time evolution of the system is not fully determined by the choice of inputs but also by the initial reactor conditions, which should be carefully selected for the good operation of the process.
References
Appels, L., Baeyens, J., Degreve, J., Dewil, R., 2008. Principles and potential of the anaerobic digestion of waste-activated sludge. Progress in Energy and Combustion Science 34, 755–781.
Bastin, G., Dochain, D., 1990. On-line Estimation and Adaptive Control of Bioreactors. Elsevier, Amsterdam.
Batstone, D., Keller, J., Angelidaki, I., Kalyuzhnyi, S., Pavlostathis, S., Rozzi, A., Sanders, W., Siegrist, H., Vavilin, V., 2002. The IWA Anaerobic Digestion Model No 1 (ADM1). Water Science & Technology 45, 65–73.
Batstone, D., Tait, S., Starrenburg, D., 2009. Estimation of hydrolysis parameters in full-scale anaerobic digesters. Biotechnology & Bioengineering 102, 1513–1520.
Pavlostathis, S., Giraldo-Gomez, E., 1991. Kinetics of anaerobic treatment: A critical review. Critical Reviews in Environmental Control 21, 411–490.
Sbarciog, M., Loccufier, M., Noldus, E., 2008. The computation of stability boundaries in state space for a class of biochemical engineering systems. Journal of Computational and Applied Mathematics 215, 557–567.
Sbarciog, M., Loccufier, M., Noldus, E., 2010. Determination of appropriate operating strategies for anaerobic digestion systems. Biochemical Engineering Journal 51, 180–188.
Vavilin, V., Angelidaki, I., 2005. Anaerobic degradation of solid material: Importance of initiation centers for methanogenesis, mixing intensity, and 2D distributed model. Biotechnology & Bioengineering 89, 113–122.
Volcke, E., Sbarciog, M., Noldus, E., De Baets, B., Loccufier, M., 2010. Steady state multiplicity of two-step biological conversion systems with general kinetics. Mathematical Biosciences 228, 160–170.
Acknowledgement This paper presents research results of the Belgian Network DYSCO (Dynamical Systems, Control, and Optimization), funded by the Interuniversity Attraction Poles Programme, initiated by the Belgian State, Science Policy Office. The scientific responsibility rests with its author(s).
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Towards global optimization of combined distillation-crystallization processes for the separation of closely boiling mixtures
Martin Ballerstein,a Achim Kienle,b,c Christian Kunde,b Dennis Michaels,a Robert Weismantela
a Eidgenössische Technische Hochschule Zürich, Institut für Operations Research, Rämistrasse 101, 8092 Zürich, Switzerland
b Otto-von-Guericke-Universität Magdeburg, Universitätsplatz 2, 39106 Magdeburg, Germany
c Max-Planck-Institut für Dynamik komplexer technischer Systeme, Sandtorstraße 1, 39106 Magdeburg, Germany
Abstract
This paper deals with global optimization of hybrid distillation/melt crystallization processes. For this purpose, improved algorithms are presented that take the problem-specific structure into account. We use recent results to derive improved convex relaxations for terms occurring in the material balance equations and propose a constraint propagation strategy that enables us to exclude several process configurations very fast. Computations are presented demonstrating the usefulness of our methods. It is shown that the speed of global optimization can be improved by orders of magnitude.
Keywords: MINLP, convex relaxations, constraint propagation, distillation, crystallization.
1. Introduction
Separation of closely boiling mixtures is a challenging problem for process synthesis and process design. A typical example is the separation of mixtures of isomers like n/iso-aldehyde mixtures arising from oxo-synthesis. Standard distillation is often not favorable due to high process costs. A more energy and cost efficient separation process for such closely boiling mixtures is desirable and may be obtained by an optimal combination of distillation and melt crystallization, thus exploiting the advantages of both processes [1]. The optimal design of such chemical processes with structural and operational degrees of freedom leads to mixed-integer nonlinear programs (MINLP). Due to non-convex nonlinearities and integrality conditions on some variables, MINLP problems are usually difficult to solve. Typical approaches for this purpose are either gradient-based local optimization methods or stochastic optimization methods like genetic algorithms. However, both methods cannot guarantee to find the global optimum. In the past, several methods have been proposed in the literature to solve MINLPs globally [2]. Although these methods have recently been improved significantly, existing state-of-the-art global optimization software like BARON [3] cannot solve many real-world applications within reasonable time. In the following, we present a model for hybrid distillation/melt crystallization processes and specialized tools for its global optimization, where operating parameters
and the process structure are optimized simultaneously. We show computational results for a reduced case study.
2. Process Model
Superstructure. In general, the overall process consists of an arbitrary number of interconnected distillation columns $N_C$ and crystallizers $N_{Cr}$, with the feed of each unit and the system outputs being mixing points. The system feed flow and the output flows of the process units can be connected to any mixing point. Splitting of flows is not considered here. Additionally, the set of possible connections is limited by some constraints to reduce redundant configurations.
Distillation. A common model formulation is based on a tray-by-tray model with a variable feed position as well as a variable condenser reflux position to account for a variable column length [4]. The column model assumes steady state, simple thermodynamics with constant relative volatilities, a total condenser and reboiler, a single liquid feed flow at boiling temperature and constant molar overflows. It is therefore based on material balances only. In view of the optimization strategy, an alternative approach is developed in this paper.
Alternative formulation. In the model described above, binary decision variables occur in the material balances for every stage. Thus, global optimization solvers cannot easily transfer information derived for one configuration to another one. We propose a more static structure similar to [5]. We model the feed stage, the rectifying section and the stripping section separately, and link them by coupling conditions. Each of the three sections has a constant structure and therefore can be treated more efficiently. We introduce $x^i_{r,l}$ and $y^i_{r,l}$, and $x^i_{s,l}$ and $y^i_{s,l}$, to denote the composition variables in the rectifying and stripping section. Here $x$ and $y$ are the mole fractions in the liquid and vapor phase and $i$ stands for the species. The trays are numbered from the condenser downwards and from the reboiler upwards by $l$. The material balance equations for the three sections and the coupling conditions read

$$0 = -D x_D^i + V y^i_{r,l+1} - R x^i_{r,l}, \qquad l = 1,\dots,l_r^{\max},$$
$$0 = F x_F^i + V y^i_{n+1} - V y^i_{n} + R x^i_{n-1} - (R+F)\,x^i_{n}, \tag{1}$$
$$0 = -B x_B^i + (R+F)\,x^i_{s,l+1} - V y^i_{s,l}, \qquad l = 1,\dots,l_s^{\max},$$
$$y^i_{n} = y^i_{r,l_r+1}, \quad y^i_{n+1} = y^i_{s,l_s}, \quad x^i_{n-1} = x^i_{r,l_r}, \quad x^i_{n} = x^i_{s,l_s+1}.$$
Therein $l_r^{\max}$ and $l_s^{\max}$ are the maximum possible numbers of trays in the rectifying and the stripping section, respectively. The indices $l_r$ and $l_s$ are the actual numbers of trays in the respective section, which follow from vectors of binary decision variables. The feed location is denoted by $n$. The degrees of freedom for the column design are the feed position and the column length, given by discrete variables, as well as the reflux ratios for the reboiler and the condenser, given by continuous variables.
Crystallization. The crystallizer model incorporates a binary, eutectic system. We assume complete crystallization at the eutectic temperature and perfect crystal purity. Consequently, we have pure product crystals of a type that depends on the feed
composition $z$ and a solution of eutectic composition $x_{\text{eutectic}}$. The output flows of crystals $S$ and eutectic solution $L$ are derived from material balances for the crystallizer. The feed flow is denoted as $F$ and the molar fractions in the crystals as $w$. There are no degrees of freedom for the crystallizer design.
$$F z^i = S w^i + L x^i, \qquad w^i = \begin{cases} 1 & z^i \ge x^i_{\text{eutectic}} \\ 0 & z^i < x^i_{\text{eutectic}} \end{cases}, \qquad x^i = x^i_{\text{eutectic}}. \tag{2}$$
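The balances in Eq. (2) can be solved in closed form for the two output flows. The following is a small illustrative sketch under the stated assumptions (complete crystallization, perfect crystal purity); the function and variable names are ours.

```python
def crystallizer(F, z, x_eut):
    """Crystal flow S and eutectic-solution flow L for a feed F of composition z."""
    i = 0 if z[0] >= x_eut[0] else 1              # component with w_i = 1 in Eq. (2)
    S = F * (z[i] - x_eut[i]) / (1.0 - x_eut[i])  # from F z_i = S * 1 + (F - S) x_eut_i
    L = F - S                                     # overall balance F = S + L
    return S, L

# Example with the eutectic composition (0.25, 0.75) used later in Table 1
print(crystallizer(1.0, (0.6, 0.4), (0.25, 0.75)))  # component A crystallizes
```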
Cost function. For this case study, a simplified cost function is used which accounts for the essential cost factors. The overall annual costs of the general process consist of annualized investment costs and operating costs. The investment costs of a column are proportional to the column height and also to the vapor flow, since the column diameter must be chosen accordingly. The operating costs are proportional to the desired vapor flow. The respective cost coefficients are $k_1$ and $k_2$. For the crystallizer, investment and operating costs are proportional to the feed flow $F_{j_{Cr}}$ and both are covered by $k_3$, since the crystallizer size is fixed here.
$$\text{costs} = \sum_{j_C=1}^{N_C}\left(k_1\,(l_r+l_s+1)\,V_{j_C} + k_2\,V_{j_C}\right) + k_3\sum_{j_{Cr}=1}^{N_{Cr}} F_{j_{Cr}} \tag{3}$$
3. Global Optimization
The optimal design of hybrid distillation/melt crystallization processes leads to MINLPs. To determine a global solution for our case study, we apply a branch-and-bound based algorithm [2]. A main step in this approach is to compute strong globally valid bounds on the underlying MINLP. Those bounds can be obtained by constructing and solving tight convex relaxations.
Improved relaxations. Convex relaxations are typically built by replacing each non-convex term by a convex underestimator and a concave overestimator. Applied to our model, this means that each non-convex term occurring in the mass balance equations and in the phase equilibrium equations is relaxed separately. It can be shown that the relaxation can be improved by incorporating the phase equilibrium of a binary mixture into the mass balance equations. For this, we derive the best possible estimators for the resulting term $\varphi(x,V) := V \alpha x / ((\alpha-1)x + 1)$ restricted to $[l_x,u_x]\times[l_V,u_V] \subseteq \mathbb{R}^2_{\ge 0}$. As $\varphi$ is linear in $V$ for fixed $x$, and concave in $x$ for fixed $V$, we can use recent results [6, 7] to obtain the following description of the estimators. For a given $\alpha > 1$, we get

$$\varphi(x,V) \ge \max\Biggl\{ \frac{\alpha l_V x}{(1+(\alpha-1)l_x)(1+(\alpha-1)u_x)} + \frac{\alpha l_x V}{1+(\alpha-1)l_x} - \frac{\alpha l_V l_x}{(1+(\alpha-1)l_x)(1+(\alpha-1)u_x)},$$
$$\frac{\alpha u_V x}{(1+(\alpha-1)l_x)(1+(\alpha-1)u_x)} + \frac{\alpha u_x V}{1+(\alpha-1)u_x} - \frac{\alpha u_V u_x}{(1+(\alpha-1)l_x)(1+(\alpha-1)u_x)} \Biggr\},$$

$$\varphi(x,V) \le \frac{u_V-V}{u_V-l_V}\,\frac{\alpha l_V\, r(x,V)}{1+(\alpha-1)\,r(x,V)} + \frac{V-l_V}{u_V-l_V}\,\frac{\alpha u_V\, s(x,V)}{1+(\alpha-1)\,s(x,V)},$$

where

$$r(x,V) := \max\left\{ l_x,\; \frac{(\alpha-1)\sqrt{l_V}\,(\sqrt{l_V}+\sqrt{u_V})\,x - (V-l_V)}{(\alpha-1)\,(\sqrt{l_V u_V}+V)},\; \frac{u_V-l_V}{u_V-V}\,x - \frac{V-l_V}{u_V-V}\,u_x \right\},$$
$$s(x,V) := \min\left\{ u_x,\; \frac{(\alpha-1)\sqrt{u_V}\,(\sqrt{l_V}+\sqrt{u_V})\,x + (u_V-V)}{(\alpha-1)\,(\sqrt{l_V u_V}+V)},\; \frac{u_V-l_V}{V-l_V}\,x - \frac{u_V-V}{V-l_V}\,l_x \right\}.$$
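As a plausibility check of these estimators, the following sketch (ours) evaluates them on random points of an illustrative box and verifies the sandwich property numerically; the box bounds and α are arbitrary choices, not values from the case study.

```python
import numpy as np

alpha, lx, ux, lV, uV = 1.5, 0.1, 0.9, 0.5, 2.0
f = lambda t: alpha * t / (1.0 + (alpha - 1.0) * t)   # equilibrium line y = f(x)
phi = lambda x, V: V * f(x)                           # the relaxed term

def under(x, V):
    d = (1 + (alpha - 1) * lx) * (1 + (alpha - 1) * ux)
    return max(alpha * lV * x / d + f(lx) * V - alpha * lV * lx / d,
               alpha * uV * x / d + f(ux) * V - alpha * uV * ux / d)

def over(x, V):
    den = (alpha - 1) * (np.sqrt(lV * uV) + V)
    r = max(lx,
            ((alpha - 1) * np.sqrt(lV) * (np.sqrt(lV) + np.sqrt(uV)) * x - (V - lV)) / den,
            ((uV - lV) * x - (V - lV) * ux) / (uV - V))
    s = min(ux,
            ((alpha - 1) * np.sqrt(uV) * (np.sqrt(lV) + np.sqrt(uV)) * x + (uV - V)) / den,
            ((uV - lV) * x - (uV - V) * lx) / (V - lV))
    lam = (V - lV) / (uV - lV)
    return (1 - lam) * lV * f(r) + lam * uV * f(s)

# spot-check the sandwich under(x,V) <= phi(x,V) <= over(x,V) inside the box
rng = np.random.default_rng(1)
for x, V in zip(rng.uniform(lx, ux, 1000), rng.uniform(lV + 1e-6, uV - 1e-6, 1000)):
    assert under(x, V) - 1e-9 <= phi(x, V) <= over(x, V) + 1e-9
```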
Constraint Propagation. Initial bounds on the variables are often rather weak. As the approximation quality of the estimators depends on the size of the domain, global solvers use constraint propagation strategies to tighten the given bounds, see e.g. [8]. However, such strategies usually deal with general structures and do not take any problem-specific structure into account. Therefore, their quality is often limited. In this work, we exploit the specific structure of countercurrent separation processes. Material balance equations can be used to propagate the bounds on the composition variables from one stage to the next [9]. We can solve the material balance equations given for the stripping section in Eq. (1) for $x^i_{s,l+1}$ to obtain the following improved bounds.
$$\mathrm{lb}(x^i_{s,l+1}) = \frac{\mathrm{lb}(V)\,\mathrm{lb}(y^i_{s,l}) + \mathrm{ub}(B)\,\mathrm{lb}(x_B^i)}{\mathrm{lb}(V)+\mathrm{ub}(B)}, \qquad \mathrm{ub}(x^i_{s,l+1}) = \frac{\mathrm{ub}(V)\,\mathrm{ub}(y^i_{s,l}) + \mathrm{lb}(B)\,\mathrm{ub}(x_B^i)}{\mathrm{ub}(V)+\mathrm{lb}(B)}. \tag{4}$$
The strength of the bounds depends on the bounds on $V$, $B$ and $x_B^i$. Similar bounds can be derived for the composition variables of the rectifying section.
Preprocessing. Our constraint propagation strategy can be used to relax the problem in the following way. We replace all mass balance equations, except the ones associated with the condenser, the feed position and the reboiler, with the lower and upper bounds given by Eq. (4). In this way, we obtain a simplified problem that is much smaller than the original problem but still contains all its feasible solutions. Then we strengthen the bounds by splitting the domains of the variables $V$, $B$ and $x_B^i$ within a branch-and-bound framework. Thereby, we can detect infeasible and non-optimal subdomains very fast, which leads to a significant reduction of the interesting domain.
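A direct transcription of Eq. (4) into code is straightforward. The sketch below (ours) represents each interval as a (lb, ub) pair; the sample numbers are purely illustrative.

```python
def propagate_stripping(V, B, xB, y_sl):
    """Eq. (4): bounds (lb, ub) on x_{s,l+1} from bounds on V, B, x_B and y_{s,l}."""
    lb = (V[0] * y_sl[0] + B[1] * xB[0]) / (V[0] + B[1])
    ub = (V[1] * y_sl[1] + B[0] * xB[1]) / (V[1] + B[0])
    return lb, ub

# Example: propagate composition bounds one stage up the stripping section
x_next = propagate_stripping(V=(8.0, 12.0), B=(0.4, 0.6), xB=(0.90, 1.00), y_sl=(0.30, 0.95))
```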
4. Computations
As a first step, we investigate a binary separation model consisting of two alternative process structures. Structure 1 is a stand-alone distillation column. Structure 2 is a combination of a distillation column with a crystallizer, where the bottom flow of the column is purified by the crystallizer and the eutectic solution is fed back to the column. An output purity of at least 99% of the respective species is required. The parameters and domains for this case study are given in Table 1. The MINLP is implemented in the GAMS 23.4.3 environment. To solve the MINLP to global optimality, three approaches are applied. In approach A1 we apply the global optimization solver BARON 9.0.5 to the common tray-by-tray MINLP formulation mentioned in Section 2. In approach A2 we apply BARON to the alternative tray-by-tray MINLP formulation proposed in this work. For the third approach, A3, the preprocessing step is used to reduce the interesting domain. The simplified model considered in our preprocessing is dominated rather by the involved binary decision
variables than by the non-convex terms. We have hence chosen to implement and solve the relaxations of the simplified model in a SCIP 1.2.0 environment [10]. We then apply BARON to the complete model over the reduced domain, as it is more suitable for the large number of non-convex terms. The overall branch-and-bound framework managing the interaction between the SCIP and BARON environments is written in the programming language C. All computations are carried out on a 3 GHz AMD Opteron™ Processor 8222 SE with 64 GB RAM. We stopped the computations after 100 hours if the global optimum was not yet reached. The results are reported in Table 2.

Table 1. Parameters and constraints.
Parameter                          Value
(x_F^A, x_F^B)                     (0.5, 0.5)
(α_A, α_B)                         (1.5, 1)
(x_eutectic^A, x_eutectic^B)       (0.25, 0.75)
F_system                           1 mol s−1
l_r^max, l_s^max                   = 48
(l_r + l_s + 1)                    ≤ 50
any molar flow                     ≤ 30 · F_system
k_1                                1 mol−1
k_2                                6 mol−1
k_3                                50 mol−1
Table 2. Lower bounds for the process costs and computation time (hh:mm).

              global optimum   A1              A2              A3
Structure 1   131.2            88.5 (100:00)   131.2 (11:16)   131.2 (00:04)
Structure 2   129.2            29.9 (100:00)   129.2 (14:28)   129.2 (00:32)
The computational results for this case study illustrate that the effort for global optimization can be reduced effectively by choosing a suitable model structure. It can be reduced even by orders of magnitude if the model structure is taken into account explicitly in the optimization algorithm. In our current research we are generalizing the results presented in this paper to more complicated process models and process structures.
5. Acknowledgments This work is part of the Collaborative Research Centre "Integrated Chemical Processes in Liquid Multiphase Systems" coordinated by the Technische Universität Berlin. Financial support by the Deutsche Forschungsgemeinschaft (DFG) is gratefully acknowledged (TRR 63).
References
[1] M. B. Franke, N. Nowotny, E. N. Ndocko, A. Gorak, and J. Strube, 2008, AIChE J., 54, 2925.
[2] R. Horst and H. Tuy, Global Optimization, 3rd Ed., Springer-Verlag, 1995.
[3] M. Tawarmalani and N. V. Sahinidis, 2004, Mathematical Programming, 99, 563.
[4] J. Viswanathan and I. E. Grossmann, 1993, Ind. Eng. Chem. Res., 32, 2942.
[5] H. Yeomans and I. E. Grossmann, 2000, Ind. Eng. Chem. Res., 39, 1637.
[6] M. Jach, D. Michaels, and R. Weismantel, 2008, SIAM Journal on Optimization, 19, 1451.
[7] M. Tawarmalani and N. V. Sahinidis, 2001, Journal of Global Optimization, 20, 137.
[8] P. Belotti, J. Lee, L. Liberti, F. Margot, and A. Wächter, 2009, Optimization Methods and Software, 24, 597.
[9] M. Ballerstein, D. Michaels, A. Seidel-Morgenstern, and R. Weismantel, 2010, Comput. Chem. Eng., 34, 447.
[10] T. Achterberg, Ph.D. thesis, Technische Universität Berlin, 2007.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Time Optimal Control of Particle Size Distribution in Emulsion Polymerization
Ahmad Mansour,a Ala Eldin Bouaswaig,a Sebastian Engella
a Process Dynamics and Operations Group, Technische Universität Dortmund, 44221 Dortmund, Germany
Abstract
Emulsion polymers are used in a wide range of applications such as adhesives, inks, paints, coatings, drug delivery systems, gloves, floor polish, films and cosmetics. Because the end-use properties of the polymers strongly depend on the particle size distribution (PSD), modeling and control of the PSD is of high interest and is an active field of research. The aim of this work is to investigate the feasibility and the potential of applying nonlinear model predictive control (NMPC) to the control of the PSD in emulsion polymerization processes. Specifically, time optimal control of the PSD of a semi-batch homo-polymerization in a pilot-scale reactor is considered.
Keywords: Emulsion polymerization; particle size distribution; nonlinear model predictive control; time optimal control; extended Kalman filter.
1. Introduction The increasing environmental concerns and strict legislation are driving forces that favor the production of water-based latexes as a replacement of solvent-based polymers. The obvious advantage of using water is the low viscosity of the latex that facilitates heat removal during polymerization. Excellent flow over substrates to be coated is an additional advantage of using water that can then be evaporated without polluting the environment, leaving a thin film of polymer. Moreover, the absence of the chemical solvents in water-based emulsion polymers reduces the risk of fire. The final latex properties such as mechanical strength, adhesion, film forming, optical and rheological properties, and drying time are determined by the particle size distribution (PSD). Modeling and control of the PSD is thus of high interest in research and in industry [1]. Various control concepts have been applied to control the PSD in emulsion polymerization. For example, researchers have considered open loop control of the PSD [2], control concepts that use reduced models [3], control of the PSD characteristics instead of the full PSD [4], and multilayer control architectures in which the optimization of an economic cost function is performed offline followed by a tracking controller at the lower level [5]. Nonlinear model predictive control (NMPC) using a rigorous nonlinear emulsion polymerization model is an alternative control concept that has not been investigated yet. As discussed in [6], in comparison to other control schemes, direct optimizing control is generic and enables the direct optimization of an economic cost function and the inclusion of constraints in a straightforward fashion. In this work, time optimal control of the PSD is investigated in a pure growth problem in which the monomer flow rate is considered as the manipulated variable. As the considered process is semi-batch, a shrinking horizon NMPC scheme is employed. To complete the control loop, a state estimator using an extended Kalman filter (EKF) is
included to estimate the unmeasured states using the information from the available measurements and the inputs. This article is organized as follows: in Section 2, the important features of the model of an emulsion polymerization are briefly described. In Section 3, the formulation of the time optimal control problem is discussed. State estimation using an EKF is introduced in Section 4 and the results of time optimal control with state estimation are presented in Section 5. Finally, Section 6 gives a summary and conclusions.
2. Dynamic Model
The polymer particles are located at different positions in the reactor and have different volumes (V). For simplicity, it is assumed that the reactor is well mixed, the latex is colloidally stable, and no nucleation takes place. Hence, the population balance equation that describes the system reads:

$$\frac{1}{V_R}\,\frac{\partial\,(V_R\, n(V,t))}{\partial t} + \frac{\partial\,(G(V,t)\, n(V,t))}{\partial V} = 0, \tag{1}$$

where $V_R$ is the volume of the reactor, $n(V,t)$ is the population density function and $G(V,t)$ is the growth rate of a particle of size $V$, which can be described by:

$$G(V,t) = \frac{dV}{dt} = \frac{k_p\, MW_M}{\rho_p\, N_A}\,[M]_P\,\bar{n}(V_S,t), \tag{2}$$

where $k_p$ is the propagation rate coefficient, $MW_M$ is the monomer molecular weight, $\rho_p$ is the density of the polymer, $N_A$ is Avogadro's number, $[M]_P$ is the monomer concentration in the particles and $\bar{n}(V_S,t)$ is the average number of radicals per particle. To describe the changes in the concentrations of the substances that are present in the reactor, the discretized PBE (Equation (1)) is coupled to a set of ordinary differential and algebraic equations. This model has 103 dynamic variables and it is applied here to the homopolymerization of methyl methacrylate. The modeling details are explained for example in [7] and the values of the parameters can be found in [8]. It can be seen from Equation (2) that the growth rate depends on the monomer concentration inside the particles ($[M]_P$). Hence, by manipulating the monomer feed rate, the growth rate can be changed and consequently the PSD is modified.
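The paper does not spell out the discretization scheme, so the following is only a generic first-order upwind sketch (ours) of the growth term of Equation (1) on a uniform volume grid, for the pure-growth case with constant reactor volume.

```python
import numpy as np

def pbe_rhs(n, G, dV):
    """Right-hand side of the discretized growth PBE (first-order upwind, G >= 0)."""
    flux = G * n                        # particle number flux G(V, t) * n(V, t)
    dndt = np.empty_like(n)
    dndt[0] = -flux[0] / dV             # no nucleation: zero inflow at the smallest size
    dndt[1:] = -(flux[1:] - flux[:-1]) / dV
    return dndt
```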
3. Time Optimal Control of the PSD
As the considered process is a semi-batch operation, it is important to increase the throughput of the plant by minimizing the batch time. The objective is to find optimal input trajectories such that a desired PSD is reached from an initial seed in the shortest possible time while satisfying the process constraints and the product specifications. In the literature, there are relatively few contributions that deal with the operation of the polymerization process in a time optimal fashion (e.g., [2]; [9]; [10]). In [2] a rigorous model that describes particle growth was used and the time optimal production of a unimodal PSD in emulsion polymerization was investigated. In this contribution it is proposed to extend the work in [2] by closing the loop to handle plant-model mismatch and disturbances. The function of the controller designed here is to produce the target PSD from the seed in the shortest time possible while guaranteeing that the remaining amount of monomer in the reactor is less than 5% of the amount fed. Furthermore, the controller should ensure that the total amount of monomer fed does not differ from the nominal amount (i.e. the amount used to produce the target PSD) by more than 10% and that the heat generated by the reaction remains below the maximum cooling capacity of
the reactor. It is assumed that the cooling capacity of the pilot plant reactor is 55 J/s and the maximum capacity of the available monomer pump is 1 ml/s. In the case studied here, the flow rate of monomer used to produce the target PSD from the available seed is referred to as the nominal profile, and the original batch time is 55 minutes. One way to handle the receding horizon optimization problem is to consider the batch time as the objective function and to include the PSD, the concentration of monomer at the end of the batch run, and the heat generation as hard endpoint and path constraints [2]. However, handling hard constraints in nonlinear optimization problems may cause high computational effort. Besides, disturbances in online control can make the problem infeasible [11]. As an alternative, the state constraints can be handled as soft constraints by adding penalty terms to the objective function, as proposed for example in [12]. The amount of violation of the soft constraints obviously depends on the corresponding weights in the objective function. The constraints used in the weighting terms are therefore stricter than the real constraints. To include the batch time, the PSD, and the output constraints, the objective function is formulated as follows:

$$\min_{u,\,t_f} J = W_1\,t_f + W_2\,\frac{\sum_{i=1}^{N_{Po}}\left(n_{goal}(i)-n_{result}(i)\right)^2}{n_{scale}} + W_3\left(C_{Mon}-C_{max}\right) + W_4\,P(Q_{gen})\left(Q_{gen}-Q_{max}\right), \tag{3}$$

$$P(Q_{gen}) = \exp\!\left(N^{-1}(Q_{gen}-Q_{max})\right)\left[1+\exp\!\left(N^{-1}(Q_{gen}-Q_{max})\right)\right]^{-1}, \tag{4}$$
where $t_f$ is the batch time, $C_{Mon}$ is the concentration of monomer at the end of the batch, $C_{max} = 0.2$ mol/l is the maximum allowed concentration of monomer at the end of the batch, $Q_{gen}$ is the maximum value of the generated heat, $Q_{max} = 50$ J/s is the maximum allowed heat generation, and $W_1$, $W_2$, $W_3$ and $W_4$ are weighting factors that are assigned the values 0.01, 100, 100 and 10, respectively. When the controller reduces the batch time, the generation of heat increases due to the faster addition of the monomer. The logistic function $P$ in Equation (3) is given by Equation (4), where $N$ equals 100. By introducing $P$ into the objective function, the controller only considers the last term if the generated heat is close to or larger than $Q_{max}$. To provide initial solutions for the online optimization, an offline optimization is performed as a first step. Due to the non-convexity of the given problem, the global solver glcSolve from TOMLAB is used for that purpose to ensure that the solution is not just an arbitrary local optimum.
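The soft-constrained objective of Eqs. (3)-(4) can be written compactly as follows. This is our own sketch: in particular, the hinge max(·, 0) on the concentration term is our assumption about how that soft constraint is activated.

```python
import numpy as np

def batch_objective(t_f, n_result, n_goal, n_scale, C_mon, Q_gen,
                    C_max=0.2, Q_max=50.0, N=100.0, W=(0.01, 100.0, 100.0, 10.0)):
    """Soft-constrained cost of Eqs. (3)-(4); the hinge on C_mon is our assumption."""
    W1, W2, W3, W4 = W
    P = 1.0 / (1.0 + np.exp(-(Q_gen - Q_max) / N))   # logistic switch, Eq. (4)
    psd = np.sum((np.asarray(n_goal) - np.asarray(n_result)) ** 2) / n_scale
    return (W1 * t_f + W2 * psd
            + W3 * max(C_mon - C_max, 0.0)
            + W4 * P * (Q_gen - Q_max))
```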
4. State Estimation using an Extended Kalman Filter (EKF) In real processes, not all the states can be measured and it is important to determine the unmeasured states so that they can be employed for control and monitoring purposes. State estimators estimate the unmeasured states by using the available measurements and the manipulated variables. In this work, an EKF is considered as a state estimator because its implementation and tuning are relatively simple and it was found to satisfy the requirements. The estimator has to estimate the concentrations of monomer, initiator, and radicals in the water phase using the available measurements of the solid content, the normalized PSD that can be obtained for example from a dynamic light scattering instrument, and the volume in the reactor. The measurements are taken with an equal sampling rate of 28 seconds. The tuning parameters of this estimator are the model error covariance matrix (Q), the initial condition error covariance matrix (P0) and the measurement error covariance matrix (R) which are chosen to be diagonal matrices, i.e. the states and the
measurements are not correlated with other states and measurements. The corresponding values of the model error covariance for the PSD, the concentrations of monomer, initiator and radicals in the water phase, and the volume of the reactor are 1e-5 I99, 1e-3, 1e-8, 1e-3 and 1e-3, respectively. Moreover, the corresponding values of the initial condition covariance for these states are 1e-5 I99, 1e-4, 1e-8, 1e-3 and 1e-4, respectively. Finally, the measurement error matrix is R = 10 I100, where In is an identity matrix of size n × n.
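The EKF recursion itself is the standard predict/update cycle. The following generic sketch (ours) assumes user-supplied model and measurement functions f and h together with their Jacobians.

```python
import numpy as np

def ekf_step(x, P, u, y, f, h, F_jac, H_jac, Q, R):
    """One predict/update cycle of a discrete-time extended Kalman filter."""
    x_pred = f(x, u)                       # state prediction
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q               # covariance prediction
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (y - h(x_pred))   # measurement update
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```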
[Figure 1: Effect of plant-model mismatch and results of time optimal control and state estimation; panels (a)-(g) show the seed, plant/model and target particle size density functions versus particle radius, the optimal versus nominal monomer flow rate, the generated heat over time, and the simulated versus estimated initiator, radical and monomer concentrations over time.]
5. Results The simulation results are shown in Figure 1. The simulation was done for closed-loop time optimal control of PSD in a growth problem in which the plant-model mismatch is represented by 20% uncertainty in the average number of radicals per particle and 10% uncertainty in the initial conditions of the unmeasured states as shown in Figures 1(e), 1(f), and 1(g). For simulation the Matlab solver ode15s was used and for optimization the TOMLAB solver NPSOL was selected. The effect of the introduced plant-model mismatch without feedback control is illustrated in Figure 1(a). The estimated states are converging to the real states as depicted in Figures 1(e), 1(f), and 1(g). The generated heat is below the cooling capacity of the reactor (55 J/s) as shown in Figure 1(d). As a
result, the batch time is reduced by 33% compared to the nominal case while achieving the target PSD as shown in Figures 1(b) and 1(c).
6. Conclusion
In this work, time optimal control was considered in a homo-polymerization process to achieve the target PSD in a shorter time. Since the considered process is semi-batch, a shrinking horizon NMPC scheme was used. The physical constraints were handled as soft constraints by using a weighted sum formulation in order to reduce the computational cost and to avoid infeasibility of the problem. It was assumed that the available measurements are the solid content, the normalized PSD and the total volume in the reactor. An extended Kalman filter was therefore introduced in the control loop to estimate the unmeasured states. The simulation results showed that the estimated states converged to the real states even when the initial conditions of the unmeasured states were uncertain. Using the proposed controller it is possible to produce the target PSD despite the presence of plant-model mismatch while reducing the batch time by one third. Future work will focus on applying the control concept to emulsion co-polymerization processes and on including the nucleation stage, making it possible to investigate the control of ab-initio emulsion polymerization experiments.
References
[1] F. Doyle, C. Harrison, T. Crowley (2003). Hybrid model-based approach to batch-to-batch control of particle size distribution in emulsion polymerisation. Computers and Chemical Engineering 27, 1153–1163.
[2] M. Rajabi-Hamane, S. Engell (2007). Time optimal production of a specified particle size distribution in emulsion polymerization. Chemical Engineering Science 62, 5282–5289.
[3] M. Dokucu, M. Park, F. Doyle (2008). Reduced-order methodologies for feedback control of particle size distribution in semi-batch emulsion copolymerization. Chemical Engineering Science 63, 1230–1245.
[4] V. Liotta, C. Georgakis, M. El-Aasser (1997). Real-time estimation and control of particle size in semi-batch emulsion polymerization. Proceedings of the American Control Conference, 1172–1176.
[5] B. Alhamad, J. Romagnoli, V. Gomes (2005). Online multivariable predictive control of molar mass and particle size distribution in free-radical emulsion copolymerisation. Chemical Engineering Science 60, 6596–6606.
[6] S. Engell (2007). Feedback control for optimal process operation. Journal of Process Control 17, 203–219.
[7] J. Rawlings, W. Ray (1988). The modeling of batch and continuous emulsion polymerization reactors. Part I: Model formulation and sensitivity to parameters. Polymer Engineering and Science 28 (5), 237–255.
[8] D. Paquet, W. Ray (1994). Tubular reactors for emulsion polymerization: II. Model comparisons with experiments. AIChE Journal 40 (1), 88–96.
[9] R. Gesthuisen, S. Krämer, S. Engell (2004). Hierarchical control scheme for time-optimal operation of semibatch emulsion polymerizations. Industrial & Engineering Chemistry Research 43 (23), 7410–7427.
[10] A. Echevarria, J. Leiza, J. de la Cal, J. Asua (1998). Molecular-weight distribution control in emulsion polymerization. AIChE Journal 44 (7), 1667–1679.
[11] S. Qin, T. Badgwell (2000). An overview of nonlinear model predictive control applications. Progress in Systems and Control Theory 26, 369–392.
[12] C. Immanuel, F. Doyle (2002). Open-loop control of particle size distribution in semi-batch emulsion copolymerization using a genetic algorithm. Chemical Engineering Science 57, 4415–4427.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Multi-objective optimization of three-phase batch extractive distillation
Alien Arias Barreto,a Ivonne Rodriguez Donis,a V. Gerbaud,b,c X. Jouliab,c
a Instituto Superior de Tecnologías y Ciencias Aplicadas (InSTEC), Ave Salvador Allende y Luaces, Ciudad Habana, AP 6163, Cuba. [email protected]
b Université de Toulouse, INP, UPS, LGC (Laboratoire de Génie Chimique), 4 allée Emile Monso, F-31432 Toulouse Cedex 04, France. [email protected]
c CNRS, LGC (Laboratoire de Génie Chimique), F-31432 Toulouse Cedex 04, France
Abstract
A multi-objective genetic algorithm with constraints is used to obtain the Pareto front of heterogeneous extractive batch distillation (HEBD). The genetic algorithm (NSGA-II), real-coded in MATLAB, is coupled to the commercial simulator ProSim Batch in order to consider optimization variables related to the energy consumption of the process. The approach is illustrated by the separation of the chloroform – methanol mixture with water, considering both a constant and a two piece-wise constant policy for all optimization variables.
Keywords: batch extractive distillation, heterogeneous entrainer, genetic algorithm
1. Introduction
Optimal operation of batch distillation columns is a daily concern in industry. Extractive distillation is a common technology and the use of a heterogeneous entrainer has increased the number of feasible distillation alternatives. Continuous feeding of a heterogeneous entrainer at the top of the batch distillation column allows the substitution of the unstable ternary azeotrope by the saddle binary heteroazeotrope at the column top [1], as is the case for the separation of the azeotropic chloroform – methanol mixture with water [2]. The heterogeneous extractive batch distillation (HEBD) tasks are: (T1) steady-state start-up at infinite reflux with no entrainer feeding; (T2) continuous entrainer feeding (FE) at total reflux until the unstable ternary azeotrope is replaced by the saddle binary heterogeneous azeotrope chloroform – water; (T3) distillation of chloroform at a given RT3 together with the entrainer feeding FE; (T4) off-cut distillate product at constant reflux ratio; and (T5) separation of methanol – water at RT5 with FE = 0. Key differences arise between homogeneous and heterogeneous distillation. In the latter, the distillate composition is that of the heterogeneous-key-component-rich phase and is different from the reflux composition, which is made from the whole entrainer-rich phase along with a portion α of the distillate phase, so as to obtain high recovery and purity of the distillate product. The definition of α is not straightforward because it has a controversial influence over the economical and environmental optimization of HEBD. Previous optimization of HEBD was done by using short-cut modeling [3]. In the present paper, we couple rigorous simulation with optimization tools to handle variables related to energy consumption, such as the entrainer temperature (TFE). The other manipulated variables are FE and the reflux ratios RT3 and RT5. The multi-objective genetic algorithm NSGA-II, real-coded in MATLAB [5], was coupled with energy balances (BatchColumn® simulator [4]), and the optimization problem deals with maximizing the economical profit EP while minimizing the environmental impact Envimp. The set of optimal operating conditions is displayed by means of a Pareto front.
2. Formulation of the optimization problem
The optimization problem deals with maximizing the economical profit EP while minimizing the environmental impact Envimp. EP is the positive balance between the incomes of the product per batch ($/mole) and the pay-off given by the solvent consumption cost ($/mole) and the total operating cost ($/time). Envimp includes the CO2 emission related to the total heat duty and the hazardous properties of the liquid waste generated by the HEBD (the aqueous phase from the decanter (task T3) and the off-cut product (task T4)). Manipulation of chemicals in a particular unit operation has an important influence on human health. This effect is commonly measured by using single indices, such as the human toxicity potential by ingestion (HTPI) and by dermal exposure (HTPE). They are evaluated by means of LD50 and TLV factors, respectively [6]. The classical formulation of the multi-objective optimization with typical constraints is:

$$\min\{-EP(x)\} = \min\left\{-\left(\sum_{i=1}^{2} n_i \cdot price_i - n_{Entrainer\,H_2O} \cdot price_{H_2O} - OperationCost\right)\right\}$$
$$\min\; Env_{imp}(x) = \sum_{j} n_j(\text{moles}) \sum_{i=1}^{2} x_i\, I_i^k + Q_{tot}(\mathrm{kW}) \times \frac{n_{CO_2}}{\mathrm{kW}} \qquad \text{(Eq. 1)}$$
subject to: overall operating time (minutes): tov − 480 ≤ 0; chloroform and methanol product recovery (%): 90 − ηi ≤ 0; boiler maximal capacity (mole): U − 3Uinitial ≤ 0. Here x is the vector of the continuous variables of the HEBD, nj is the molar amount of pollutant j (aqueous phase and off-cut product), xi is the molar fraction of component i (chloroform and methanol) in nj, and Iik is the environmental impact index (EIik) for the factor k, LD50 or TLV. As the LD50 and TLV units and meanings are not similar, Iik is an average contribution of the pollutant mixture chloroform – methanol computed as:
$$I_i^k = \sum_{k=1}^{2} 100\,\beta_i^k\,\frac{EI_i^k}{EI_{CHCl_3}^k + EI_{CH_4O}^k} \qquad \text{(Eq. 2)}$$
βik represents the relative weight of impact factor k for each component; the βik are taken identical to ensure comparable results. The decision variables in the HEBD process are (i) the entrainer flowrate FE/V and (ii) the entrainer temperature TFE for tasks T2 and T3; (iii) the portion α of the distillate phase refluxed to the column during task T3; and (iv) the reflux ratio RT5 for task T5. Bounds are set based on practical operating conditions [2]: 25°C ≤ TFE ≤ 100°C, 1.4 ≤ FE/V ≤ 2, 0.4 ≤ α ≤ 0.9 and 1 ≤ RT5 ≤ 10. The multi-objective algorithm NSGA-II uses controlled elitism to favor both individuals with better fitness values and the diversity of the population; the crossover and mutation fractions are 0.80 and 0.05, respectively. Both objective functions Envimp and EP are evaluated with the results of the rigorous simulation software, which receives new operating variable values for each individual. For a non-converging simulation, the genetic algorithm allocates an infinite heat duty.
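The coupling between NSGA-II and the simulator can be summarized by a small evaluation wrapper. This is a minimal sketch under our own assumptions; run_simulation() is a hypothetical stand-in for the BatchColumn call, not an actual API.

```python
import numpy as np

class SimResult:
    def __init__(self, ep, env):
        self.economic_profit, self.environmental_impact = ep, env

def run_simulation(T_FE, FE_over_V, alpha_T3, R_T5):
    """Hypothetical stand-in for the BatchColumn/ProSim Batch call."""
    raise RuntimeError("simulator did not converge")  # placeholder behaviour

def evaluate(individual):
    """Objectives (-EP, Env_imp) for one set of operating variables (case I layout)."""
    T_FE, FE_over_V, alpha_T3, R_T5 = individual
    try:
        res = run_simulation(T_FE, FE_over_V, alpha_T3, R_T5)
    except RuntimeError:
        # non-converging simulation: an "infinite" value makes the individual
        # dominated, so NSGA-II discards it
        return np.inf, np.inf
    return -res.economic_profit, res.environmental_impact
```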
3. Separation of chloroform – methanol with water by HEBD
The ternary residue curve map is shown in Fig. 1. Calculations were done with the Simulis® Thermodynamics property server in Microsoft Excel [4] using NRTL. Binary coefficients were published elsewhere [2]. Because water is fed continuously at the column top, the saddle heteroazeotrope chloroform – water is obtained in the overhead vapor and condensed into two liquid – liquid phases [2]. The fraction (1−α) of the chloroform-rich phase (xII) is withdrawn as distillate and the portion α is refluxed to the column with the whole water-rich phase (xI). Characteristics of the real batch distillation column are given elsewhere [2]. Matching a real mixture, the initial conditions are [2]: initial charge (20 mol), charge composition (x1 = 0.2704 / x2 = 0.6714 / x3 = 0.0582), decanter holdup (1 mol), vapor flow (0.016 kmol/hr), at atmospheric pressure.
[Fig. 1. Chloroform – methanol – water residue curve map, showing the L/L envelope at 25°C, the L/L/V equilibrium at 53.3°C, the unstable separatrix, the liquid – liquid tie line, the vapour line, the distillate composition xD = xII, and the feasible top region near the chloroform vertex; singular points: chloroform [SN1] 56.3°C [S1], methanol [S3], water [SN2], heteroazeotrope [UN] 53°C.]
4. Results and discussion
Optimization of the HEBD is performed considering constant FE/V and TFE and following a two piece-wise constant policy for α and RT5. These correspond to reflux policies commonly used in industrial practice. Case I keeps all optimization variables TFE, FE, αT3, RT5 constant. Case II uses a two piece-wise reflux policy (R1,T5 and R2,T5) for task T5. Case III uses a two piece-wise reflux policy (α1,T3 and α2,T3) for task T3. Case IV combines cases II and III, with two piece-wise policies (α1,T3, α2,T3) and (R1,T5, R2,T5). Figure 2 shows the Pareto front for all studied cases.

[Fig. 2. Pareto optimal front in terms of environmental impact Envimp versus economical profit −(EP) for Cases I–IV.]
Every optimal operating condition on the Pareto front satisfies all problem constraints. The economical profit improves as the environmental impact increases; hence, the highest profit implies the most polluting alternative. Compared to the simplest operating case I, the use of two values of RT5 in case II improves both objective functions significantly. The two piece-wise policy for αT3 (case III) is an intermediate alternative between case I and case II, giving a much better improvement for Envimp than for EP. In general, the application of a two piece-wise policy for αT3 and/or RT5 improves both objective functions and, therefore, case IV is the best option. However, the implementation of a two piece-wise policy for αT3 increases EP only very slightly. Indeed, the variation range (v.r.) of the optimum of EP is less than 3% taking into account all cases. Nevertheless, Envimp exhibits a wider variation range, depending on whether α is constant (≈20%) or two piece-wise (≈4%). This behaviour can be used as a practical criterion for selecting the values of the operating variables linked to the minimal value of Envimp, together with the easier manoeuvring and control of the variables in industrial practice. Varying RT5 is more worthwhile than varying αT3. Figure 3 displays the optimal values of all operating variables TFE, FE, αT3, RT5 for each point of the Pareto front for all studied cases. The optimal values of FE, αT3 and RT5 are almost constant in each case, exhibiting a v.r. lower than 3%, 1% and 2%, respectively. TFE exhibits the greatest v.r., around 20%, along the Pareto front. However, TFE remains below 60°C, rather far from the typical practice of feeding a saturated liquid entrainer (100°C for water). This shows the significance of including TFE as an optimization variable.
[Fig. 3. Evolution of the optimization variables (FE/V, αT3 or α1,T3/α2,T3, and RT5, plotted against TFE in °C) in the multi-objective optimization of HEBD, shown for Cases I–IV.]
Table 1 displays the simulation results with ProSim Batch for the optimal conditions marked (circle) in Figure 2, giving suitable EP and excellent Envimp. The operating conditions are: case I: TFE = 29°C, FE/V = 1.58, αT3 = 0.636, RT5 = 4.06; case II: TFE = 28.4°C, FE/V = 1.63, αT3 = 0.636, R1,T5 = 1.94, R2,T5 = 4.04; case III: TFE = 28.4°C, FE/V = 1.63, α1,T3 = 0.44, α2,T3 = 0.751, RT5 = 4.04; case IV: TFE = 25.4°C, FE/V = 1.56, α1,T3 = 0.405, α2,T3 = 0.763, R1,T5 = 1.84, R2,T5 = 5.05. General remarks linked to Table 1 arise: (1) the operating time for task T2 is almost identical for all cases; (2) the off-cut operation (task T4) is not required because the recovery yield of chloroform is almost 100%; (3) methanol recovery is always close to the limit value; (4) the economical profit and the environmental impact of the HEBD are mainly determined by decreasing the operating time and, consequently, reducing the energy consumption; (5) the two piece-wise policy for αT3 becomes important only when it is combined with varying RT5, resulting in the most attractive alternative, case IV; still, case III is similar to case IV.
Table 1. Rigorous simulation of HEBD for the optimal operating conditions of each case.

                               Case I    Case II   Case III  Case IV
Task T2
  t2 (min)                     34.3      33.0      34.1      34.9
Task T3
  t3 total time (min)          62.4      60.4      49.7      46.5
  t3 switch time (min)         –         –         41.1      37.8
  Tank I (mol)                 5.44      5.44      5.45      5.45
  xDist,TankI  CHCl3           0.9930    0.9930    0.9900    0.9900
               CH3OH           0.0030    0.0030    0.0048    0.0050
               H2O             0.0040    0.0040    0.0052    0.0050
  CHCl3 recovery (%)           99.97     99.99     99.92     99.97
  aqueous phase (mol)          0.169     0.169     0.367     0.360
  entrainer amount (mol)       152.9     152.2     133.2     127.0
  Q average (kcal)             232.9     197.3     201.3     186.1
  final still (mol)            56.72     56.46     50.81     49.22
Task T5
  t5 total time (min)          209.6     147.2     229.3     159.8
  t5 switch time (min)         –         86.8      –         86.9
  Tank II (mol)                12.19     12.20     12.28     12.48
  xDist,TankII CH3OH           0.9900    0.9900    0.9900    0.9900
               H2O             0.0100    0.0100    0.0100    0.0100
  CH3OH recovery (%)           90.06     90.07     90.72     92.19
  final still (mol)            44.49     44.22     39.49     36.7
  xH2O still                   0.9710    0.9710    0.9760    0.9800
  Q average (kcal)             514.3     361.3     562.5     391.1
Total time (min)               306.3     240.6     313.1     241.2
5. Conclusions
The genetic algorithm NSGA-II, real-coded in MATLAB, was coupled to the ProSim Batch simulator. The approach maximizes the economical profit together with minimizing the environmental impact of the HEBD. A low-pollution HEBD process requires entrainer feeding at around 30°C, avoiding the typical entrainer preheating. The combination of a two piece-wise reflux policy for the product withdrawals is the most attractive alternative.
References
1. I. Rodríguez-Donis, J. Acosta-Esquijarosa, V. Gerbaud, X. Joulia. AIChE J., 49 (2003) 3074.
2. R. Van Kaam, I. Rodriguez-Donis, V. Gerbaud. Chem. Eng. Science, 63 (2008) 78.
3. A. Arias Barreto, I. Rodriguez-Donis, V. Gerbaud. Comp. Aided Chem. Eng., 28 (2010) 961.
4. ProSim SA, 2001, www.prosim.fr
5. Matlab® Global Optimization Toolbox, 2009, MathWorks Inc., http://www.mathworks.fr/
6. C. Li, X. Zhang, S. Zhang, K. Suzuki. Chem. Eng. Res. Des., 87 (2009) 233.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Integrating Graph-based Representation and Genetic Algorithm for Large-Scale Optimization: Refinery Crude Oil Scheduling
Manojkumar Ramteke,a Rajagopalan Srinivasana,b,*
a Institute of Chemical and Engineering Sciences, A*STAR (Agency for Science, Technology & Research), 1 Pesek Road, Jurong Island, Singapore 627833
b Department of Chemical & Biomolecular Engineering, National University of Singapore, 4 Engineering Drive 4, Singapore 117576
*Corresponding Author: [email protected]
Abstract
Scheduling optimization problems are often associated with a large number of variables and combinatorial constraints. These problems can be represented graphically through a network structure. This graphical representation can provide important insights for handling the combinatorial constraints. In this study, the graphical representation is incorporated into the framework of a genetic algorithm to solve large-scale refinery crude oil scheduling problems. Our results show that the use of such a graphical representation offers significant advantages when solving multi-objective, multi-solution and nonlinear formulations in reasonable computational time.
Keywords: Network, graphs, genetic algorithm, scheduling.
1. Introduction
Scheduling optimization of any real-life network is an important problem. This includes various systems such as batch process scheduling, heat exchanger networks, polymer grade scheduling, refinery crude oil scheduling, etc. These problems are modeled either by discrete or continuous time representations with the state task network (STN) (Kondili et al., 1993) or the resource task network (RTN) (Schilling and Pantelides, 1996) using mathematical programming (MP). Evolutionary algorithms (Deb, 2001) have recently been used as well, either in linear programming (LP) hybrid or heuristics-adapted forms, due to their distinct advantages. However, one of the major difficulties in using these evolutionary algorithms is handling the large number of variables and constraints commonly present in planning and scheduling problems. This difficulty originates from the fact that evolutionary algorithms such as the genetic algorithm (GA) often do not exploit the structure of the problem. In this paper, we integrate graph networks (Mah, 1983; Kim et al., 2008) with GAs to solve such problems by exploiting their structural facets. A new optimization algorithm, called the structure adapted genetic algorithm (SAGA), is developed in the framework of the binary-coded NSGA-II-JG to solve large-scale planning and scheduling problems. The proposed adaptation incorporates the structural aspects (real-coded) whereas the quantitative aspects (binary-coded) are built into the base framework of the GA. The developed algorithm is used to solve several large-scale refinery crude oil scheduling problems.
2. Problem Description
Most refineries process several classes of crudes and commonly receive them via marine transportation such as very large crude carriers (VLCCs) or small jetty ships. VLCCs can carry several parcels of crudes and have to be offloaded using a single-buoy mooring (SBM) station or a single-point mooring (SPM) station, whereas jetty ships can offload directly into storage tanks. The crude scheduling problem for a marine-access refinery includes unloading crude oil from vessels (VLCCs or jetty ships) to storage tanks and charging various mixes of crude oil to crude distillation units (CDUs) without violating limits on capacity, flow, properties, composition, etc. As noted by Reddy et al. (2004), optimal scheduling leads to increased throughput, intelligent use of less expensive crude stocks, better quality control, reduction in sea waiting cost (demurrage), improved control and predictability of downstream processing, etc. Intelligent scheduling can potentially save millions of dollars per year (Kelly and Mann, 2003).
Consider a simple illustrative refinery crude oil scheduling problem (see Figure 1). In this problem, a VLCC is carrying two crude parcels P1 (100 kbbl) and P2 (150 kbbl) to be unloaded into two tanks. The crude from these tanks is then charged to a crude distillation unit (CDU). The scheduling horizon is set to three periods (to illustrate the concepts graphically). The following are the operating rules in this problem: (a) at any time a parcel can be unloaded into at most one tank; (b) parcels are unloaded in a pre-specified sequence; (c) a tank getting filled cannot simultaneously feed a CDU in the current and next period; (d) all parcels should be unloaded by the end of the time horizon; (e) at most two parcels can be unloaded in a period; (f) the supply of material to the CDU should not stop; (g) the maximum and minimum rates of charging from a tank to the CDU are 100 and 50 kbbl/period, respectively.

[Figure 1. Network of the simple refinery crude oil scheduling problem: a VLCC with parcels P1 and P2 unloads via the SBM station into Tank 1 and Tank 2, which feed the CDU.]

[Figure 2. Super-structure of the simple refinery crude oil scheduling problem: inflows I1,1, I1,2, I2,1, I2,2 from parcels to tanks and outflows O1, O2 from the tanks to the CDU.]
3. Graphs and Genetic Algorithm
In general, planning and scheduling problems such as the one illustrated above involve a network of various streams in a graph structure. The problem described above is converted to a graphical representation as shown in Figure 2. There are four variables (I1,1, I1,2, I2,1, I2,2) representing inflows to the tanks. The first subscript represents the tank number whereas the second represents the parcel number. There are two outflows from the tanks (O1, O2). These six variables pertain to each of the three periods; thus
there are a total of eighteen variables for the scheduling problem. We call the physical network a super-structure, since it captures the structural facet of every solution to the original scheduling problem. A feasible non-zero subset of it, referred to as a sub-structure, represents the subset of a schedule's connectivity corresponding to one period. A compilation of such sub-structures over the entire scheduling time horizon is referred to as the structural schedule. A large number of feasible structural schedules can be generated from the super-structure. Associated with each of the streams in the sub-structure is a weight, which provides the quantitative value of the corresponding variable (e.g. flowrate, temperature). Separating the structural and weight aspects of the network considerably reduces the number of variables and constraints in the optimization problem.
3.1. Constraint handling
The operating rules described above act as constraints on the schedule. These constraints are either intra- or inter-period in nature. The super-structure representing each period has six flow streams (I1,1, I1,2, I2,1, I2,2, O1, O2). Due to the intra-period combinatorial constraints (a)-(f), not all six streams can be present in each period of a feasible combination, and some of the streams will necessarily be absent. The super-structure with such infeasible streams deleted results in a sub-structure. The structural schedule is a feasible combination of such sub-structures assigned to each period. In this problem, a total of nine feasible sub-structures are possible (see the enumeration sketch below). Any three of these have to be selected to form a structural schedule for the three periods. A feasible combination of sub-structures forming a structural schedule has to account for the inter-period constraints. One such structural schedule is given in Figure 3. When the structural schedule is overlaid with a set of weights, a specific solution to the scheduling problem is formed. Optimization of the structure and the weights is thus coupled. Constraints such as constraint (g) are associated with the weights, are univariate in nature, and are handled using suitable bounds. Sometimes, multi-variate constraints are also present; these are handled using the usual penalty functions.

[Figure 3. A feasible structural schedule over the three-period scheduling horizon: one sub-structure per period, in which the filling tank never feeds the CDU.]
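For concreteness, the nine feasible sub-structures can be enumerated mechanically, as in the following sketch (ours, with our own stream names), which filters the 2^6 subsets of the six streams of Figure 2 against rules (a), (c) and (f).

```python
from itertools import product

STREAMS = ("I11", "I12", "I21", "I22", "O1", "O2")

def feasible_substructures():
    subs = []
    for bits in product((0, 1), repeat=6):
        I11, I12, I21, I22, O1, O2 = bits
        if I11 and I21: continue          # (a) parcel P1 into at most one tank
        if I12 and I22: continue          # (a) parcel P2 into at most one tank
        if (I11 or I12) and O1: continue  # (c) a filling tank cannot feed the CDU
        if (I21 or I22) and O2: continue  # (c)
        if not (O1 or O2): continue       # (f) crude supply to the CDU must not stop
        subs.append(dict(zip(STREAMS, bits)))
    return subs

print(len(feasible_substructures()))  # -> 9, as noted in the text
```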
3.2. Genetic algorithm with graph representation
Genetic algorithms can take advantage of these distinct facets of scheduling problems by separating the structure and weight decisions. A structural schedule can be generated
first independently, and then optimal weights can be specified. Structural schedules are generated stochastically using the constraints and rules of the specific problem. Then the weights for each of the streams present in these structural schedules are optimized. In order to achieve this, two types of sub-chromosomes are necessary. The structural schedule is used as the structural sub-chromosome; the corresponding weight sub-chromosomes are represented using conventional GA codes. Both parts combine to form a chromosome that depicts a complete schedule. The separation of structure and weight decisions handles most of the combinatorial constraints during structural sub-chromosome generation and thus does not require the use of penalty functions. In addition, only the non-zero problem variables are represented in the chromosome, which reduces the chromosome size significantly. Np chromosomes are generated as described above. The binaries corresponding to each GA variable present in a weight sub-chromosome are then mapped to decimal values of the variable using its actual identity present in the structural sub-chromosome, so that they all lie within their bounds (decoding). The decoded values of each variable are used in the model equations to calculate all the objective functions (model evaluation). All structural constraints are thus satisfied during structural schedule generation. Similarly, all univariate weight constraints are satisfied using dynamic bounds. In our current work, multi-variate weight constraints are handled by adding penalties to the fitness function of the respective chromosomes. Chromosomes are then ranked based on the concept of non-domination and assigned a crowding distance value. These are then copied into a mating pool using tournament selection. Two chromosomes at a time are picked randomly from these Np parents in the mating pool to undergo genetic operations. The conventional GA operators can operate on weight sub-chromosomes but are not equipped to operate on their structural counterparts. Thus, weight sub-chromosomes undergo conventional crossover, mutation and the jumping gene operation, whereas their structural counterparts undergo new operators such as structure crossover (at the same site as the weight crossover), structure mutation and a structure correction operator, to produce two (initial) daughter chromosomes. These new operators for the structural sub-chromosome are conceptually similar to conventional GA operators. In structure crossover, two structural sub-chromosomes corresponding to the selected weight counterparts are crossed at the same site as the crossover of the latter. In structure mutation, a stream is selected stochastically from the structural sub-chromosome and operated on stochastically. However, it is to be noted that both these operators can lead to structurally infeasible daughter sub-chromosomes, which may need to be corrected through the correction operator. The Np daughters produced using the above procedure are mixed with the Np parents. These 2Np chromosomes are re-ranked and the best Np chromosomes are picked in the elitism operation. These Np chromosomes are then used as the starting population for the next generation. This process continues for a user-specified number of generations, maxgen, after which the Pareto front is not expected to change much.
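As an illustration of the coupled operators, the following minimal sketch (ours, not the authors' code) cuts the structural and the weight sub-chromosome at the same period site; the list-based chromosome layout is an assumption.

```python
import random

def coupled_crossover(parent1, parent2):
    """Cross structural and weight sub-chromosomes at the same (period) site."""
    s1, w1 = parent1
    s2, w2 = parent2
    site = random.randint(1, len(s1) - 1)     # common crossover site
    child1 = (s1[:site] + s2[site:], w1[:site] + w2[site:])
    child2 = (s2[:site] + s1[site:], w2[:site] + w1[site:])
    return child1, child2   # children may still need the structure correction operator

# toy usage: 3 periods, sub-structures as labels, weights as per-period flow genes
p1 = (["sub3", "sub7", "sub1"], [[55.0], [70.0], [60.0]])
p2 = (["sub2", "sub2", "sub5"], [[80.0], [65.0], [90.0]])
c1, c2 = coupled_crossover(p1, p2)
```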
4. Results and Concluding Remarks
SAGA has been applied to several refinery crude oil scheduling problems. One such problem (Li et al., 2007) consists of three VLCCs with a total of twelve parcels and fourteen jetty parcels to supply crude to six tanks and three CDUs over sixty periods. The
objective of the study is to optimally schedule the crude oil unloading and charging so as to maximize profit. For this problem, the number of variables reduces to 780 using SAGA, compared with the 74,880 required by the MILP approach. In addition, all the combinatorial structural constraints can be satisfied during structural schedule generation, which reduces the search space considerably. The CPU time required per solution is 34 s, as compared to 2988 s for the MILP approach on a Pentium 4 PC (2.99 GHz; 3 GB memory). Thus, the CPU time required by SAGA (with a population of 100) is comparable with the MILP approach for single-objective optimization (SOO). For MOO studies, however, the population size remains the same for SAGA, and thus the CPU time for a given number of generations does not grow. In contrast, hybrid MILP and evolutionary algorithm approaches require significantly higher CPU times (Naraharisetti et al., 2009). The results of SOO are slightly inferior to the MILP results obtained using GAMS. However, the importance of the algorithm lies in its ability to solve the multi-objective, multi-solution and non-linear formulation without approximation. The concepts developed here are general and can be applied to other evolutionary algorithms too.

Table 1: Results of the large-size refinery crude oil scheduling problem

Formulation                                   No. of variables   CPU time (s) per solution   Profit (k$)
MILP solved using GAMS (Li et al., 2007)      74,880             2988                        7370.13
Stochastic formulation solved using SAGA      780                34                          7340.14 (after 5000 generations)
References
E. Kondili, C. C. Pantelides, R. W. H. Sargent, 1993, A General Algorithm for Short-Term Scheduling of Batch Operations-I. MILP Formulation, Comp. Chem. Eng., 17, 211-227
G. Schilling, C. C. Pantelides, 1996, A Simple Continuous-Time Process Scheduling Formulation and a Novel Solution Algorithm, Comp. Chem. Eng., 20, S1221-S1226
K. Deb, 2001, Multi-Objective Optimization using Evolutionary Algorithms, Wiley: Chichester, UK
R. S. H. Mah, 1983, Application of Graph Theory to Process Design and Analysis, Comp. Chem. Eng., 7, 239-257
Y. Kim, L. T. Fan, C. Yun, S. B. Park, S. Park, B. Bertok, F. Friedler, 2008, Graph Theoretic Approach to Optimal Synthesis of Supply Networks: Distribution of Gasoline from a Refinery, Comp. Aided Chem. Eng., 25, 247-252
P. C. P. Reddy, I. A. Karimi, R. Srinivasan, 2004, A Novel Solution Approach for Optimizing Crude Oil Operations, AIChE J., 50, 1177-1197
J. D. Kelly, J. L. Mann, 2003, Crude Oil Blend Scheduling Optimization: An Application with Multi-Million Dollar Benefits - Part 1, Hydrocarbon Processing, 82, 47-51
J. Li, W. Li, I. A. Karimi, R. Srinivasan, 2007, Improving the Robustness and Efficiency of Crude Scheduling Algorithms, AIChE J., 53, 2659-2680
P. K. Naraharisetti, I. A. Karimi, R. Srinivasan, 2009, Supply Chain Redesign - Multimodal Optimization using a Hybrid Evolutionary Algorithm, Ind. Eng. Chem. Res., 48, 11094-11107
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Self-adaptive Differential Evolution with Taboo List for Constrained Optimization Problems and Its Application to Pooling Problems
Haibo Zhang and G. P. Rangaiah
Department of Chemical & Biomolecular Engineering, National University of Singapore, Singapore 117576, Singapore
Abstract
Differential evolution (DE), a population-based global optimization algorithm, has been gaining popularity in the recent past due to its capability to handle non-convex and non-differentiable functions. In this study, Self-adaptive Differential Evolution with Taboo List (SaDETL) with a novel constraint handling technique is proposed. It is tested on benchmark problems with equality and/or inequality constraints, and then applied to pooling problems, which are challenging due to their many constraints and important in the process industries. In SaDETL, the mutation strategy and parameter are self-adapted according to the learning experience from the previous generations, and a taboo list is used to avoid revisiting the same area, to increase the population diversity and the exploration of the search space with fewer function evaluations, and to prevent premature convergence. An efficient constraint handling technique is incorporated into SaDETL; it is based on adaptive relaxation of the constraints to improve the search, combined with the feasibility approach for selection. The results show that SaDETL with this technique is better than recent stochastic techniques for solving benchmark constrained problems, and is reliable and promising for solving pooling problems.
Keywords: Global optimization, differential evolution, self-adaptation, constraint handling, pooling problems
1. Introduction
Global optimization has been an area of active research for the past few decades due to the increasing needs of practical applications in many areas such as science, engineering and business. Global optimization methods can be classified into two broad categories: deterministic and stochastic methods (Pardalos et al., 2000). Stochastic algorithms include genetic algorithms, tabu search and differential evolution (DE). They require little or no assumption on the characteristics of the optimization problem, and yet provide a high probabilistic convergence to the global optimum. Furthermore, stochastic methods are usually simple in principle and easy to implement (Lin and Miller, 2004). The focus of this study is DE, proposed by Storn and Price (1997). It is a population-based global optimization method, its principles are easy to understand, and the number of parameters involved is small compared to other algorithms. DE has relatively fast convergence and high reliability in finding the global optimum. The objective of this work is to develop an efficient and reliable DE for practical applications with both equality and inequality constraints. This is achieved by integrating a taboo list to avoid revisits (Srinivas and Rangaiah, 2007), self-adaptation of parameters and mutation strategy (Qin et al., 2009), and a local optimizer for faster convergence to the precise optimum.
In addition, we propose an effective technique for handling both equality and inequality constraints; it incorporates adaptive relaxation of the constraints and the feasibility approach for selection. The resulting Self-adaptive DE with Taboo List (SaDETL) is tested on many constrained benchmark problems, its performance is compared with recent stochastic techniques, and it is then applied to solving the pooling problems of importance in the process industries. Pooling problems are global optimization problems to determine the optimal allocation of intermediate streams to pools and the blending of pools into final products, in order to minimize cost or maximize profit. The main difficulties in finding the global optimum of pooling problems are: (a) the pooling process introduces nonlinearities and non-convexities into the optimization problem, leading to several local optima, and (b) the large number of equality/inequality constraints (Gounaris et al., 2009). Deterministic methods have been studied for solving pooling problems (Pham et al., 2009). However, to the best of the authors' knowledge, stochastic global optimization methods have not been applied to pooling problems, probably because of the equality constraints involved. Consequently, there is a need to develop robust and easy-to-implement global optimization techniques to solve pooling problems. In this work, SaDETL together with a novel constraint handling technique is studied for solving pooling problems.
2. Self-adaptive Differential Evolution with Taboo List
The hybrid of DE with a taboo list (TL) was proposed by Srinivas and Rangaiah (2007). The taboo check is implemented after the mutation and crossover steps. It is performed by measuring the Euclidean distance between the new trial individual and each individual in the TL. If the Euclidean distance is smaller than the taboo radius, the new trial individual is rejected and another one is produced. This procedure is repeated until the Euclidean distance between the new trial individual and each individual in the TL is greater than the taboo radius. This operation largely avoids revisiting the same area, increases the diversity of the population and avoids unnecessary objective function evaluations. Thus, the ability for global exploration is greatly enhanced. Recently, Qin et al. (2009) proposed Self-adaptive DE (SaDE) and showed its good performance and high reliability by comparison with nine variants of DE and other recent self-adaptive DE methods. In brief, SaDE is as follows. The initialization step randomly generates NP individuals within the bounds of the variables. Then, mutation, crossover and selection steps are carried out until the stopping criterion is satisfied. In SaDE, the four most commonly used mutation strategies are selected as the candidate pool. The stochastic universal sampling method is used to choose a strategy for mutation based on a probability. The mutation factor (F) is randomly chosen using a normal distribution to balance exploration and exploitation. The crossover rate (Cr) is self-adapted based on the learning experience in the previous LP generations. This leads to promising Cr values for different problems and thus better reliability of the algorithm. In this study, SaDETL incorporates the adaptation of the mutation strategy and Cr, and the random selection of F, as in SaDE, as well as the taboo list/check of DETL, into DE for solving global optimization problems. After the stopping criterion of the stochastic technique is satisfied, the global search is terminated and the best solution obtained over all generations is refined using a local optimizer. SaDETL has been tested and shown to be promising for solving unconstrained problems by Zhang and Rangaiah (2011).
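The taboo check itself is simple to state in code. The following minimal sketch, assuming illustrative values for the taboo radius, list size and rejection limit and a placeholder trial-generation routine standing in for DE mutation/crossover, shows the rejection-and-regeneration loop described above.

import math
import random

TABOO_RADIUS = 0.01       # illustrative; the paper scales it with problem size
TABOO_LIST_SIZE = 30

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def generate_trial(dim):
    # Stand-in for DE mutation + crossover producing a trial individual.
    return [random.random() for _ in range(dim)]

def taboo_check(trial, taboo_list, radius=TABOO_RADIUS):
    # Accept the trial only if it lies outside the taboo ball of every stored point.
    return all(euclidean(trial, t) > radius for t in taboo_list)

def next_trial(dim, taboo_list, max_rejections=20):
    # Regenerate trials until one clears the taboo check (or give up).
    for _ in range(max_rejections):
        trial = generate_trial(dim)
        if taboo_check(trial, taboo_list):
            taboo_list.append(trial)
            del taboo_list[:-TABOO_LIST_SIZE]   # keep only the newest entries
            return trial
    return None   # triggers the stopping criterion on rejections

taboo = []
x = next_trial(dim=5, taboo_list=taboo)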
3. Handling Equality and Inequality Constraints
Many optimization problems in chemical engineering involve both equality and inequality constraints. Of these, equality constraints are more challenging to handle. Constraint handling techniques for stochastic methods have received much attention in the last decade. In this study, we propose a new constraint handling scheme for both equality and inequality constraints. It incorporates adaptive relaxation of the constraints and the feasibility approach for selection between trial and target individuals in SaDETL. In the initialization, the values of the objective function and constraints of the population are calculated. The median (ε) of the total absolute violation (TAV) of all individuals in the population is recorded. If the TAV of an individual is less than ε, then it is temporarily treated as a feasible solution; else, it is taken as an infeasible solution. Thus, the constraints are relaxed. During the generations, the ε value is gradually reduced according to the percentage of feasible individuals (PF) in the latest population. After considering several possibilities, we propose the following equation for reducing ε for the next generation:

ε(G+1) = ε(G) − (PF × ε(G)) / NP        (1)
Here, G is the generation number. The relaxation of the constraints in the initial generations allows greater exploration of the search space for locating the global optimum region. The selection between target and trial individuals in the DE algorithm is based on the feasibility approach of Deb (2000). According to this approach, (a) a feasible solution is preferred over an infeasible solution; (b) among two feasible solutions, the one with the better objective function value is preferred; and (c) among two infeasible solutions, the one with the smaller TAV is chosen. The adaptive relaxation combined with the feasibility approach forces the population towards the feasible region, but more gradually than the feasibility approach alone without constraint relaxation. Hence, it can enhance the global search, especially for problems with equality constraints, which have a very small feasible region.
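A minimal sketch of these selection rules and the ε-update of Eq. (1) is given below. The dictionary-based layout of an individual is illustrative only, and PF is interpreted here as the count of feasible individuals, so that PF/NP is the feasible fraction; this reading of the garbled original equation is an assumption.

def total_abs_violation(g_ineq, h_eq):
    # TAV = sum of inequality violations plus absolute equality residuals.
    return sum(max(0.0, g) for g in g_ineq) + sum(abs(h) for h in h_eq)

def is_better(trial, target, eps):
    # Deb's feasibility rules with the adaptive feasibility threshold eps.
    # Each individual is a dict with keys 'f' (objective) and 'tav'.
    t_feas, c_feas = trial["tav"] <= eps, target["tav"] <= eps
    if t_feas and not c_feas:
        return True                       # rule (a): feasible beats infeasible
    if not t_feas and c_feas:
        return False
    if t_feas and c_feas:
        return trial["f"] < target["f"]   # rule (b): better objective wins
    return trial["tav"] < target["tav"]   # rule (c): smaller violation wins

def update_eps(eps, population, NP):
    # Eq. (1): shrink the relaxation as the feasible fraction grows.
    PF = sum(ind["tav"] <= eps for ind in population)
    return eps - PF * eps / NP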
4. Implementation and Evaluation
4.1. Parameter Setting and Initialization
The following parameter values are used in this study: population size NP = 30, learning period LP = 10, taboo list size = 30, and taboo radius = 0.001×NV, where NV is the number of decision variables. The stopping criteria are a maximum number of function evaluations (50,000) or a maximum number of rejections (NR = 20) in generating a trial individual. We performed 30 independent runs on each benchmark function and 100 independent runs for each pooling problem. The mean and standard deviation (std) of the solutions and the (average) number of function and constraint evaluations (NFE) from these runs are summarized and compared for each problem. The proposed algorithm is implemented in MS Excel with VBA due to its wide use in engineering and industry.
4.2. Evaluation on Benchmark Problems
First, the performance of SaDETL with the proposed constraint handling technique is evaluated on 10 commonly used constrained optimization problems. It is compared with recent stochastic global optimization methods for constrained problems, namely, improved stochastic ranking, ISR (Runarsson and Yao, 2005), a genetic algorithm with an adaptive penalty function, GA-ADF (Tessema and Yen, 2009), and local exploration-based DE with parameter-free penalty, LEDE-PFP (Ali and Kajee-Bagdadi, 2009). From the results in Table 1, it is clear that SaDETL with the proposed constraint handling
technique is able to find feasible solutions for all the problems and provides better mean and std results for most of the functions. Further, SaDETL uses fewer NFE than the other algorithms; this is due to the use of the taboo list, the effective stopping criterion and the local optimizer after the global search. The proposed algorithm can also solve problems with equality constraints efficiently. For example, SaDETL obtained the best global optimum with the smallest std and the least NFE for the G10 problem. Hence, the results in Table 1 show that SaDETL with the proposed constraint handling technique is efficient and better than recent stochastic algorithms for solving constrained problems.

Table 1. Comparison of the SaDETL algorithm with other algorithms

Function: optimal value,    Measure   ISR (Runarsson   GA-ADF (Tessema   LEDE-PFP (Ali & Kajee-   SaDETL+
NV, NI, NE*                           & Yao, 2005)     & Yen, 2009)      Bagdadi, 2009)           (This Work)
G01: -15.0000, 13, 9, 0     Mean      -15.000          -14.3697          -15.000                  -15.000
                            std       5.80E-14         5.56E-01          3.40E-04                 1.67E-07
                            NFE       350000           50000             50000                    17759
G02: -1.0005, 10, 0, 1      Mean      -1.000           -0.565            -1.000                   -1.000
                            std       8.20E-09         2.11E-01          3.90E-02                 1.93E-06
                            NFE       350000           50000             50000                    31076
G03: -30665.5387, 5, 6, 0   Mean      -30665.539       -30664.453        -30665.539               -30665.539
                            std       1.10E-11         4.96E-01          5.20E-05                 2.94E-06
                            NFE       350000           50000             50000                    7597
G04: 5126.4967, 4, 2, 3     Mean      5126.497         5237.521          5126.501                 5126.498
                            std       7.20E-13         7.18E+00          4.10E-03                 3.42E-06
                            NFE       350000           50000             50000                    37313
G05: -6961.8139, 2, 2, 0    Mean      -6961.814        -3958.713         -6961.814                -6961.814
                            std       1.90E-12         3.10E+00          2.70E-07                 2.78E-12
                            NFE       350000           50000             50000                    1368
G06: 24.3062, 10, 8, 0      Mean      24.306           24.409            24.306                   24.306
                            std       6.30E-05         2.65E-01          2.10E-04                 5.31E-06
                            NFE       350000           50000             50000                    20942
G07: -0.095825, 2, 2, 0     Mean      -0.095825        -0.09563          0.0958                   -0.0958
                            std       2.70E-13         6.00E-04          7.00E-07                 1.08E-14
                            NFE       350000           50000             50000                    4209
G08: 7049.248, 8, 6, 0      Mean      7049.250         7077.6841         7150.512                 7049.248
                            std       3.20E-03         5.12E+01          3.00E+00                 5.0E-07
                            NFE       350000           50000             50000                    49864
G09: 0.749900, 2, 0, 1      Mean      0.75             0.75              0.75                     0.75
                            std       1.10E-16         1.00E-04          0.00E-00                 4.81E-07
                            NFE       350000           50000             50000                    20455
G10: 0.0539415, 5, 0, 3     Mean      0.06677          0.09025           0.07142                  0.05394
                            std       7.90E-02         1.15E-01          1.40E-01                 1.88E-08
                            NFE       350000           50000             50000                    22028

* NI – no. of inequality constraints; NE – no. of equality constraints.
+ Success rate of SaDETL in 30 trials is 100% for all problems in this table.
4.3. Application to Pooling Problems
The proposed SaDETL with the constraint handling technique is applied to solve several pooling problems taken from Gounaris et al. (2009); its performance is reported in terms of success rate (SR), NFE and feasibility rate (FR) in Table 2. Here, a run is considered successful if the best objective function value found is within 1.0E-6 of the known global optimum. The feasibility rate (FR) is the percentage of runs, out of 100, that satisfy all the constraints.
As shown in Table 2, SaDETL is able to solve the pooling problems successfully in all runs, except for the Haverly 2 problem. Finding the global optimum of this problem is relatively more difficult compared to the other problems (Adhya and Tawarmalani, 1999). The SR for Haverly 2 is the lowest (49%) and its NFE is the highest (47554). SaDETL with the constraint handling technique can find feasible solutions for all the pooling problems tested, which indicates the effectiveness of the proposed constraint handling technique.

Table 2. SaDETL results for pooling problems

Problem, optimum value, NV, NI, NE     SR    FR    NFE
Haverly 1, -400, 7, 4, 4               100   100   30121
Haverly 2, -600, 7, 4, 4               49    100   47554
Haverly 3, -750, 7, 4, 4               100   100   27473
Ben-Tal 4, -450, 8, 4, 4               98    100   7913
Ben-Tal 5, -3500, 38, 15, 12           100   100   7476
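For readers unfamiliar with the problem class, the sketch below states the classic Haverly 1 pooling model and solves it with a generic local NLP method. The data are the textbook Haverly values (an assumption; the instances above follow Gounaris et al., 2009, and may differ in detail), and SLSQP here merely stands in for any solver, local or global; the bilinear pool-quality constraint is what makes the problem nonconvex.

from scipy.optimize import minimize

# x = [xA, xB, ypx, ypy, zx, zy, p]: feeds A/B to pool, pool->products x/y,
# feed C direct to products x/y, and pool sulfur quality p (all textbook data).
def profit(x):
    xA, xB, ypx, ypy, zx, zy, p = x
    return -(9*(ypx + zx) + 15*(ypy + zy) - 6*xA - 16*xB - 10*(zx + zy))

cons = [
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - x[2] - x[3]},             # pool mass balance
    {"type": "eq",   "fun": lambda x: 3*x[0] + 1*x[1] - x[6]*(x[0] + x[1])},  # bilinear quality balance
    {"type": "ineq", "fun": lambda x: 100 - (x[2] + x[4])},                   # demand, product x
    {"type": "ineq", "fun": lambda x: 200 - (x[3] + x[5])},                   # demand, product y
    {"type": "ineq", "fun": lambda x: 2.5*(x[2] + x[4]) - (x[6]*x[2] + 2*x[4])},  # sulfur spec, x
    {"type": "ineq", "fun": lambda x: 1.5*(x[3] + x[5]) - (x[6]*x[3] + 2*x[5])},  # sulfur spec, y
]
bounds = [(0, 300)] * 6 + [(1, 3)]
res = minimize(profit, x0=[50, 50, 50, 50, 50, 50, 2.0], method="SLSQP",
               bounds=bounds, constraints=cons)
print(res.x, -res.fun)   # a local solution; the known global optimum profit is 400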
5. Conclusions
This study developed a novel constraint handling technique and incorporated it into the SaDETL algorithm, in which two control parameters (F and Cr) and the mutation strategy are adapted, and a taboo list is used to prevent revisiting the same region, to increase the diversity of the population and consequently to increase the reliability of the algorithm. The results obtained on 10 benchmark functions show that SaDETL with the proposed constraint handling technique is superior to recent stochastic algorithms. Its application to five pooling problems indicates that SaDETL with the proposed constraint handling technique is effective and promising for application problems with equality and/or inequality constraints.
References
N. Adhya and M. Tawarmalani, 1999, A Lagrangian Approach to the Pooling Problem, Ind. Eng. Chem. Res., 38, 1956-1972.
M. M. Ali and Z. Kajee-Bagdadi, 2009, A local exploration-based differential evolution algorithm for constrained global optimization, Appl. Math. Comput., 208, 31-48.
K. Deb, 2000, An efficient constraint handling method for genetic algorithms, Computer Methods in Applied Mechanics and Engineering, 186, 311-338.
C. E. Gounaris, R. Misener and C. A. Floudas, 2009, Computational Comparison of Piecewise-Linear Relaxations for Pooling Problems, Ind. Eng. Chem. Res., 48, 5742-5766.
B. Lin and D. C. Miller, 2004, Tabu search algorithm for chemical process optimization, Computers and Chemical Engineering, 28, 2287-2306.
P. M. Pardalos, H. E. Romeijn and H. Tuy, 2000, Recent developments and trends in global optimization, Journal of Computational and Applied Mathematics, 124, 209-228.
V. Pham, C. Laird and M. El-Halwagi, 2009, Convex hull discretization approach to the global optimization of pooling problems, Ind. Eng. Chem. Res., 48, 1973-1979.
A. K. Qin, V. L. Huang and P. N. Suganthan, 2009, Differential evolution algorithm with strategy adaptation for global numerical optimization, IEEE Trans. Evol. Comput., 13, 398-417.
T. P. Runarsson and X. Yao, 2005, Search biases in constrained evolutionary optimization, IEEE Trans. Syst., Man, Cybern., 35, 233-243.
M. Srinivas and G. P. Rangaiah, 2007, Differential evolution with tabu list for solving nonlinear and mixed-integer nonlinear programming problems, Ind. Eng. Chem. Res., 46, 7126-7135.
R. Storn and K. Price, 1997, Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces, Journal of Global Optimization, 11, 341-359.
B. Tessema and G. G. Yen, 2009, An adaptive penalty formulation for constrained evolutionary optimization, IEEE Trans. Syst., Man, Cybern., 39, 565-578.
H. Zhang and G. P. Rangaiah, 2011, A hybrid global optimization algorithm and its application to parameter estimation problems, Asia-Pac. J. Chem. Eng., DOI:10.1002/apj.548.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Deterministic global optimization of kinetic models of metabolic networks: outer approximation vs spatial branch-and-bound
Carlos Pozo a, Gonzalo Guillén-Gosálbez a, Albert Sorribas b, Laureano Jiménez a
a Departament d'Enginyeria Quimica, Universitat Rovira i Virgili, Tarragona, Spain
b Departament de Ciències Mèdiques Bàsiques, Universitat de Lleida, Lleida, Spain
Abstract
The biotechnological industry seeks microorganisms with enhanced phenotypes leading to large production rates of a metabolite of interest. The enzymatic profile leading to such performance can be identified by solving an optimization problem, where kinetic models such as the Generalized Mass Action (GMA) formalism allow capturing the intrinsic nonlinear behavior of metabolic processes. If there is a limit on the number of enzymatic modulations allowed, the problem can be posed as a nonconvex MINLP which contains multiple local solutions. In the present work we introduce a customized spatial branch-and-bound strategy devised to solve these particular problems efficiently to global optimality. The capabilities of the proposed strategy are compared with an outer-approximation algorithm previously introduced by the authors and also with the state-of-the-art commercial global optimization package BARON.
Keywords: global optimization, metabolic engineering, Generalized Mass Action (GMA), systems biology, sBB
1. Introduction
Recent advances in molecular biology have made it possible to modulate the expression of genes in a given organism in order to obtain strains with enhanced phenotypes [Vilaprinyo et al.; Banga]. This has renewed the interest in biotechnological applications such as the use of modified organisms in industrial scenarios. The intrinsic complexity of metabolic networks makes it difficult to infer the most promising genetic changes to be implemented in a given system. Because of this complexity, mutation and selection of new processes have traditionally been performed on a trial-and-error basis [Polisetty et al.]. This strategy has the limitation of leading in many cases to suboptimal solutions. This drawback can be overcome by using optimization tools based on mathematical programming. Different modeling approaches based on this general approach have been presented so far in the literature. Many of them rely on linear models that fail in capturing the complexity of the metabolic network. In contrast, kinetic metabolic models are more accurate but unfortunately they typically give rise to nonlinear models and hence to multimodality, i.e. the existence of multiple optimal solutions [Banga].
In this work we address the global optimization of kinetic models of metabolic networks that are modeled via the GMA formalism [Voit and Savageau; Alves et al.]. We present a novel customized spatial branch-and-bound (sBB) algorithm and compare its performance with that of an outer approximation method previously introduced by the authors, as well as with the commercial global optimizer BARON.
2. Mathematical formulation
The complete mathematical formulation of the metabolic network can be found in the work by Polisetty et al. Here, due to space limitations, we only provide a brief outline of it. We address the optimization of metabolic networks under steady-state conditions. That is, we assume that the concentration X of the n metabolites in the metabolic network does not vary with time t. Henceforth, the net balance of the processes r contributing to the production and the depletion of a metabolite i equals:

dX_i/dt = Σ_{r=1..p} μ_ir v_r = 0        (1)

Here μ_ir represents the stoichiometric coefficient of process r in the mass balance of metabolite i. The rate at which process r occurs, which is denoted by v_r, can be determined from a kinetic equation of choice, for instance the so-called power-law formalism (Eq. 2):

v_r = γ_r Π_{j=1..n+m} X_j^(f_rj)        (2)

The basal-state activity of the enzyme governing process r is represented by γ_r, whereas f_rj is used to denote the kinetic order of metabolite j in process r. If Eq. (2) is introduced into Eq. (1), we obtain a Generalized Mass Action (GMA) model as follows:

0 = Σ_{r=1..p} ( μ_ir γ_r Π_{j=1..n+m} X_j^(f_rj) )        (3)

The optimal enzymatic activity can be expressed as a fold-change continuous variable K_r over the basal-state activity parameter γ_r, as illustrated in Eq. (4):

0 = Σ_{r=1..p} ( μ_ir K_r γ_r Π_{j=1..n+m} X_j^(f_rj) )        (4)

The value of variable K_r indicates whether the gene coding a given enzyme must be overexpressed (K_r > 1), inhibited (K_r < 1) or left unmodified (K_r = 1) in order to maximize the synthesis rate of the desired product. While it is clear that modifying as many enzymes as there are in the network will lead to the best performance possible, it is also obvious that such a number of changes may be prohibitive. Henceforth, a limit must be imposed on the number of enzymes allowed for modification (Eqs. 5 and 6).
In Eq. (5), Boolean variables Y_r indicate whether the activity of enzyme r is modulated, following the standard disjunctive-programming treatment [Vecchietti et al.; Lee and Grossmann]. These variables are then used in Eq. (6) to impose an upper limit ME on the number of enzymatic modulations allowed:

Σ_{r=1..p} Y_r ≤ ME        (6)

Since we search for practical solutions, bounds are also imposed on the concentrations of the different metabolites and on the fold changes in enzyme activities:

X_i^LB ≤ X_i ≤ X_i^UB ;  K_r^LB ≤ K_r ≤ K_r^UB        (7)

Equations (1) to (7) are the key equations in the overall optimization problem (OMINLP), in which the concentrations of the different metabolites X_i and the fold changes in the enzyme activities K_r that maximize the synthesis rate of a given metabolite are the decision variables to be optimized:

(OMINLP)   min − Σ_{r∈OFR} μ_ir v_r    s.t. Eqs. (1) to (7)

Note that the constraints in OMINLP define a nonconvex search space where multiple local optimal solutions may exist. Henceforth, in order to solve OMINLP to global optimality we must resort to global optimization techniques.
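To make the GMA steady-state model concrete, the snippet below evaluates the residuals of Eq. (4) for a toy network; the stoichiometry, kinetic orders, rate constants and fold changes are invented for illustration only and do not correspond to the citric acid network studied later.

import numpy as np

# Toy network: 3 processes, 2 metabolites (illustrative numbers only).
mu = np.array([[ 1, -1,  0],       # stoichiometric coefficients mu_ir
               [ 0,  1, -1]])
gamma = np.array([2.0, 1.5, 3.0])  # basal enzyme activities gamma_r
f = np.array([[0.0, 0.0],          # kinetic orders f_rj
              [0.5, 0.0],
              [0.0, 0.8]])

def gma_residuals(X, K):
    # v_r = K_r * gamma_r * prod_j X_j^f_rj   (Eqs. 2 and 4)
    v = K * gamma * np.prod(X ** f, axis=1)
    return mu @ v                   # steady state requires mu @ v = 0

X = np.array([1.0, 1.0])            # metabolite concentrations
K = np.array([1.0, 1.2, 0.9])       # fold changes in enzyme activities
print(gma_residuals(X, K))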
3. Solution procedure
The spatial branch-and-bound algorithm we propose for OMINLP is based on sequentially solving subproblems obtained by partitioning the original domain. Each of these subproblems is associated with a node in a spatial branch-and-bound tree (i.e., the sBB tree), where the original problem OMINLP is allocated in the root node. Subproblem (CMILP), i.e. the linear relaxation of problem OMINLP, is firstly generated and solved in order to obtain a valid lower bound on the global optimum of the original problem. This solution is used as a starting point to solve the original problem OMINLP locally, which provides an upper bound on its global optimum. If the optimality gap of the node is above the tolerance, we split the domain of one of the p velocities v_r in order to generate two subproblems (branching), which are associated with the corresponding descendant nodes in the sBB tree. These subproblems are solved in exactly the same manner as OMINLP. Nodes that either provide worse bounds than the current best bound or are infeasible are fathomed. Besides, a node can also be fathomed when its optimality gap is smaller than the ε-tolerance. In this case, if the upper bound is lower than the current overall upper bound (OUB), we update it and prune all the active (i.e., unexplored) nodes with a LB > OUB. The overall lower bound (OLB) corresponds to the lowest among the lower bounds of the active nodes in the sBB tree. The algorithm terminates when the gap between OUB and OLB is reduced below the ε-tolerance.
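The node-processing loop just described can be sketched generically as follows. The relaxation solver, local solver and branching rule are placeholders for the problem-specific MILP relaxation and NLP solves, not the authors' code; only the bounding and fathoming logic of the sBB scheme is illustrated.

import heapq

def spatial_bb(root, solve_relaxation, solve_local, branch, eps=1e-3):
    # Generic sBB skeleton: root is the initial domain (node).
    OUB, best = float("inf"), None     # overall upper bound and incumbent
    heap = [(-float("inf"), 0, root)]  # (lower bound, tie-breaker, node)
    counter = 1
    while heap:
        LB, _, node = heapq.heappop(heap)
        if LB > OUB - eps:             # fathom: cannot improve the incumbent
            continue
        lb, relax_sol = solve_relaxation(node)        # lower bound from relaxation
        if relax_sol is None or lb > OUB - eps:
            continue                                  # infeasible or dominated node
        ub, local_sol = solve_local(node, start=relax_sol)   # local solve for upper bound
        if ub < OUB:
            OUB, best = ub, local_sol
        if ub - lb > eps:              # optimality gap still open: branch
            for child in branch(node, relax_sol):
                heapq.heappush(heap, (lb, counter, child))
                counter += 1
    return best, OUB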
In the proposed algorithm, the overall solution procedure is expedited through the use of bound tightening techniques, based on both optimality and feasibility criteria, and a special type of cutting planes tailored for this problem.
4. Results
The maximization of the citric acid production in Aspergillus niger is the problem of choice for testing the capabilities of our customized sBB algorithm. In particular, we compare the results of this method with those obtained by the outer-approximation (OA) technique introduced by the authors in earlier works [Guillén-Gosálbez and Sorribas; Sorribas et al.; Pozo et al.], and also with the global optimization package BARON. The instances solved are directly those presented in Polisetty et al. and Pozo et al., and differ in the number of reactions ME allowed for simultaneous modification (cases B, C, D and E). The optimization constraints are the same as in the referenced papers. CPLEX and CONOPT were used to solve the MILP and NLP subproblems, respectively. The full-space problem OMINLP was solved by means of BARON for comparison purposes. The main body of the algorithms was implemented in GAMS on an Intel machine. The same optimality tolerance was fixed in all of the cases.

Table 1. Comparison between the results obtained with the OA and the customized sBB for each instance. LB: lower bound on the global optimum in mM/min; UB: upper bound on the global optimum in mM/min; CPU: CPU time in seconds. (Columns: Case | sBB LB, UB, CPU | BARON LB, UB, CPU | OA LB, UB, CPU; rows: instances of cases B to E.)

The results in Table 1 demonstrate that the proposed methodology is able to solve all the instances within the specified tolerance, improving the results in terms of CPU time obtained by the OA for the two most complicated instances, in which a larger number of enzymes are allowed for manipulation. This is due to the ability of the sBB to tighten the relaxation of the problem without adding new binary variables to the formulation. On the other hand, BARON was not able to improve the starting point (which corresponds to the basal-state solution) even after one hour of CPU time in any of the instances, and it failed to provide a rigorous upper bound in some of the cases. This might be due to loose bounds being obtained by using generic techniques for building the relaxed problem.
5. Conclusions
In this work we have addressed the global optimization of metabolic networks described through the GMA formalism. In particular, we have presented a customized sBB algorithm that exploits the specific structure of this problem. Our strategy has been compared with an OA algorithm and the global optimization package BARON. Numerical results have shown that the tailored methods outperformed BARON in all of the instances under study. The results obtained indicate that we can tackle problems of moderate complexity when expressed as GMA models.
Acknowledgements
The authors wish to acknowledge support of this research work from the Spanish Ministry of Education and Science (DPI, PHB, BFU and CTQ projects), the Spanish Ministry of External Affairs, and the Generalitat de Catalunya (FI programs).
References
R. Alves, E. Vilaprinyo, B. Hernandez-Bermejo, A. Sorribas, Mathematical formalisms based on approximated kinetic representations for modeling genetic and metabolic pathways, Math. Biosci.
J. Banga, Optimization in computational systems biology, BMC Syst. Biol.
G. Guillén-Gosálbez, A. Sorribas, Identifying quantitative operation principles in metabolic pathways: a systematic method for searching feasible enzyme activity patterns leading to cellular adaptive responses, BMC Bioinf.
S. Lee, I. Grossmann, Global optimization of nonlinear generalized disjunctive programming with bilinear equality constraints: applications to process networks, Comput. Chem. Eng.
P. Polisetty, E. Gatzke, E. Voit
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Optimal Grade Transitions in an Industrial Slurry-Phase Catalytic Olefin Polymerization Loop-Reactor Series
Vassileios Touloupides a,b, Vassileios Kanellopoulos b, Christos Chatzidoukas a,b and Costas Kiparissides a,b
a Department of Chemical Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece
b Centre for Research and Technology Hellas, P.O. Box 60 361, 570 01 Thessaloniki, Greece
Abstract
Present market needs, combined with the broad range of polyolefin applications, have forced the polyolefin industry to operate under frequent grade transition policies. Consequently, under such market-driven operating schedules, the minimization of off-spec polymer production and grade changeover time is a prerequisite to any profitability analysis of polyolefin production processes. In the present study, the optimal grade transition problem is examined in relation to an industrial Ziegler–Natta catalytic slurry-phase ethylene-1-hexene polymerization loop-reactor series.
Keywords: Slurry-phase reactors; Optimal grade transition; Catalytic olefin polymerization; Dynamic simulation; Mathematical modeling.
1. Introduction
Polyolefins are the most widely used plastics today due to their low production cost, reduced environmental impact, and wide range of applications (e.g., packaging, building and construction, transportation, etc.). It is believed that the degree of technological and scientific sophistication in polyolefin manufacturing has no equal among other synthetic polymer production processes. Polyolefins are commonly produced in low-pressure catalytic (e.g., Ziegler-Natta, metallocenes, etc.) bulk, slurry and gas-phase reactors. Presently, the total world polyolefins capacity exceeds 120 million tons per year. Polyethylene (i.e., HDPE, LDPE and LLDPE) and polypropylene cover 60% and 40% of the total polyolefins production, respectively. The annual world-wide polyolefins market growth in the coming years is foreseen to be 4-6%, making polyolefin manufacturing a very active research area. Present market needs, combined with the broad range of polyolefin applications, have forced the polyolefin industry to operate under frequent grade transition policies. This trend has led the polyolefin industry to move away from large continuous production of a single polymer grade towards a more flexible production scheme comprising a number of polymer grades of high quality but low volume. In fact, in a polyolefin plant as many as 30-40 polymer grades can be produced. Consequently, under such market-driven operating schedules, the minimization of off-spec polymer production and grade changeover time is a prerequisite to any profitability analysis of the process. Commonly, the optimal solution to this problem is based on the minimization of a suitable objective function defined in terms of the grade changeover time, product-quality specifications, process safety constraints and the amount of off-spec polymer. However, optimal operation of a polymerization plant in terms of higher yield and better product quality at
reduced cost can only be achieved when the process is operated under well-controlled conditions (Chatzidoukas et al., 2003). In the present study, the optimal grade transition problem is examined in relation to an industrial Ziegler–Natta catalytic slurry-phase ethylene-1-hexene polymerization loop-reactor series. Transitions between the different grades are usually slow and result in the production of a considerable amount of off-spec polymer. It is often the case that the polymer produced during a grade transition meets neither the initial nor the final product specifications. It is therefore important to consider how this transition can be implemented with as little economic penalty as possible. A mathematical formulation of the optimization problem is crucial for the calculation of an optimal grade transition policy; it should fully capture both the optimization objectives and the plant constraints. Note that incorporation of the designer's intuition and knowledge of the process is necessary to constrain the process variables to lie within the plant's capabilities. In the present study, detailed simulation results on the calculation of time-optimal grade transition and start-up policies for a fixed control structure are presented. The implementation of the calculated optimal operating policies shows that significant quality and economic benefits can be achieved in a slurry-phase polyolefin loop-reactor series.
2. The slurry-phase olefin polymerization in a loop-reactor series
Fig. 1. Schematic representation of an industrial slurry-phase olefin catalytic polymerization loop-reactor series.
In Fig. 1, a schematic representation of an industrial slurry-phase olefin polymerization loop-reactor series is illustrated. The process consists of two jacketed loop reactors. The reaction mixture (i.e., consisting of monomer(s), diluent, catalyst, hydrogen and polymer) flows in the loop reactor by means of an axial-centrifugal pump placed at the bottom of the reactor. The reactor's cross-sectional area is usually uniform, and the reactor operates free of any obstruction that could interfere with the circulation of the reaction mixture. An 'O' shape or any similar arrangement (e.g., a vertical double loop) is the most commonly employed reactor design. The first reactor of the series is continuously fed with monomer and comonomer (e.g., ethylene and 1-hexene), catalyst and diluent (e.g., iso-butane, propane, n-pentane, i-pentane, neopentane or n-hexane) (Marechal, 2006). During polymerization, the polymer solids are gradually collected in the settling legs placed at the lower part of the loop reactor. The settling legs periodically open to remove the highly concentrated slurry (i.e., consisting of polymer
solids and a fraction of the liquid phase). The product stream leaving the first loop reactor is fed into the second reactor of the series together with fresh monomer(s), diluent and hydrogen. Typically, industrial slurry-phase loop reactors operate at temperatures of 70–120 °C and pressures of 30–90 bar, while the polymer solids concentration is approximately 45 %w/w. The highly concentrated slurry product leaving the second reactor is fed to a hot-flash separator, where the polymer solids are separated from the unreacted monomer(s) and diluent. The diluent is completely recovered due to the high monomer(s) conversion (i.e., 95–98%); thus there is no need for monomer(s) recovery. The polymer product is then dried and pelletized. Various polyolefin grades with broad and/or bimodal MWDs can be produced using the slurry-phase loop-reactor technology. Typically, polyolefins of high molecular weight and low density are produced in the first reactor of the series, which is commonly operated at a low hydrogen concentration and a high comonomer concentration. In the second reactor of the series, on the other hand, high hydrogen and low comonomer concentrations result in the production of low molecular weight, high density polyethylene.
2.1. Polymerization Kinetics
The polymer molecular properties (i.e., number- and weight-average molecular weights and molecular weight distribution) are determined by employing a generalized multi-site Ziegler–Natta kinetic scheme using the well-known method of moments (Table 1). The kinetic mechanism comprises a series of elementary reactions, including site activation, propagation, site deactivation and site transfer reactions.

Table 1. Kinetic mechanism of ethylene-1-hexene copolymerization over a Ziegler–Natta catalyst

Activation by co-catalyst:         Sp^k + A  --(k_aA^k)-->  P_0^k + B
Chain initiation:                  P_0^k + M_i  --(k_0i^k)-->  P_1,i^k
Chain propagation:                 P_n,i^k + M_j  --(k_pij^k)-->  P_n+1,j^k
Chain transfer by hydrogen (H2):   P_n,i^k + H2  --(k_trH^k)-->  P_0^k + D_n^k
Spontaneous chain transfer:        P_n,i^k  --(k_trSp,i^k)-->  P_0^k + D_n^k
Spontaneous deactivation:          P_n,i^k  --(k_dSp^k)-->  C_d^k + D_n^k
                                   P_0^k  --(k_dSp^k)-->  C_d^k
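As a small illustration of the method of moments referenced above, the snippet below recovers number- and weight-average molecular weights from the leading moments of the chain length distribution. These are the standard textbook definitions; the symbols and the Flory-distribution example are generic and not tied to the paper's notation or data.

def average_molecular_weights(lam0, lam1, lam2, mw_monomer=28.05):
    # Standard method-of-moments estimates for a polymer chain length distribution.
    # lam0, lam1, lam2: 0th, 1st and 2nd moments of the (live + dead) distribution;
    # mw_monomer defaults to ethylene (g/mol).
    mn = mw_monomer * lam1 / lam0   # number-average molecular weight
    mw = mw_monomer * lam2 / lam1   # weight-average molecular weight
    pdi = mw / mn                   # polydispersity index
    return mn, mw, pdi

# Example: a Flory most-probable distribution with mean chain length 1000
mn, mw, pdi = average_molecular_weights(1.0, 1000.0, 2.0e6)   # pdi is ~2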
2.2. Thermodynamic Considerations
For the calculation of the monomer(s) consumption rate(s) and the molecular weight properties of the polyolefins in the slurry-phase loop-reactor series, the concentrations of the sorbed monomers and other species in the amorphous polymer phase should be known. In the present study, the Sanchez–Lacombe equation of state (S–L EOS) was employed to calculate the solubilities of the various reaction species (i.e., ethylene, 1-hexene, isobutane, hydrogen) in semicrystalline polyolefins over a wide range of pressures and temperatures.
2.3. Modeling of the slurry-phase cascade loop-reactor series
According to the proposed modelling approach, each loop reactor (i.e., consisting of the loop reactor and its settling legs) is modelled as an ideal CSTR in series with a semi-continuous product removal unit. Dynamic macroscopic mass and energy balances are derived for each loop reactor in the series to predict the time variation of the concentrations of the various molecular species as well as the reactor and jacket temperatures in the two loop reactors. The non-continuous product withdrawal rate as
well as the outflow species concentrations from each reactor of the configuration are properly calculated via the solution of a settling-leg model (Touloupides et al., 2010).
3. The Optimal Grade Transition Problem
The numerical solution of the optimal grade transition problem is based on the minimization of a suitable objective function, via the manipulation of selected process variables that can quickly drive the process from one grade to the other, using a nonlinear dynamic optimization algorithm. For the dynamic optimization of the transient operation of a polymerization reactor between two polymer grades of different densities, the hexene feed rate (i.e., the comonomer feed rate) is chosen as the control variable because of its direct impact on the polymer crystallinity, which in turn affects the polymer density. The time-optimal sequence of hexene feed set-point changes was calculated by minimizing the following performance index, expressed in terms of the total grade-transition time and the squared deviation of the polymer density from its desired value:
min_{Fhex(t), tf}  J = wD ∫_0^tf [ (Dp(t) − Dp^d) / Dp^d ]^2 dt + wt tf        (1)
where Fhex(t) is the time-optimal control trajectory that forces the process to follow an admissible state trajectory and tf is the final polymerization time. Dp(t) and Dp^d are the time-varying polymer density and its desired value, respectively, while wD and wt are properly selected weights. A sequential optimization method was employed to minimize the objective function subject to the constraints of Eq. (2) and the system of DAEs describing the dynamics of the plant:

Fhex,l ≤ Fhex(t) ≤ Fhex,u ;   tl ≤ tf ≤ tu        (2)

where the subscripts l and u denote the lower and upper bounds, respectively, imposed on the comonomer flow rate and the final grade-transition time. The calculation of the optimal piecewise-constant trajectory of the comonomer flow rate, and of the corresponding length of each time interval, that minimizes the above objective function was carried out with the aid of a sequential quadratic programming algorithm.
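A minimal sketch of this piecewise-constant control parameterization is shown below, using scipy's SLSQP as a generic stand-in for the paper's SQP solver. The plant model, weights, target density and bounds are illustrative placeholders, not the industrial model or data.

import numpy as np
from scipy.optimize import minimize

N = 5                 # number of piecewise-constant control intervals
WD, WT = 1.0, 1e-3    # illustrative weights w_D and w_t
D_DESIRED = 935.0     # hypothetical target density (kg/m^3)

def simulate_density(f_hex, dt):
    # Stand-in for the DAE plant model: a first-order density response.
    d, out = 945.0, []
    for f, h in zip(f_hex, dt):
        for _ in range(int(h * 10)):
            d += 0.1 * (930.0 + 0.05 * (300.0 - f) - d)
            out.append(d)
    return np.array(out)

def objective(z):
    f_hex, dt = z[:N], z[N:]            # decision vars: levels + interval lengths
    dens = simulate_density(f_hex, dt)
    dev = ((dens - D_DESIRED) / D_DESIRED) ** 2
    return WD * dev.sum() * 0.1 + WT * dt.sum()   # discretized form of Eq. (1)

z0 = np.concatenate([np.full(N, 250.0), np.full(N, 1.0)])
bnds = [(100.0, 400.0)] * N + [(0.2, 3.0)] * N    # Eq. (2): bounds on F_hex and t_f
res = minimize(objective, z0, method="SLSQP", bounds=bnds)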
4. Results
In catalytic olefin polymerization processes, the polymer crystallinity decreases as the percentage of comonomer (i.e., 1-hexene) incorporated in the polymer chains increases; thus, the comonomer content is strongly related to the polymer density. In Figures 2 and 3, model-based optimization results are compared with experimental measurements obtained from the industrial plant. The off-line calculated time-optimal 1-hexene feed policy was applied to the polymerization reactor series so that a polymer grade with lower density could be produced in a minimum grade-transition time. As can be seen in Fig. 2, the optimizer dictates a sharp increase in the hexene feed rate to the first loop reactor in order to drive the process to the new grade as fast as possible. This feed rate is then properly adjusted to make up for other process variable changes (i.e., ethylene and/or hydrogen feed rates) that are required to maintain the polymer molecular weight at the desired level. In Figure 3, the response of the two loop reactors to the above change in the hexene inflow rate, in terms of the polymer density, is depicted. Note that the density drop in the polymer produced in the first loop reactor results in a proportional drop in the density of the final product (i.e., the polymer at the exit of the second loop reactor), despite the absence of hexene in the feed stream of the second loop reactor. Finally, in both loop reactors the polymer crystallinity profiles closely follow the respective density profiles during the transition.
Fig. 2. Comparison between optimal and experimentally applied 1-hexene feed policies.
Fig. 3. Calculated and measured time-optimal density trajectories in an industrial slurry-phase ethylene-1-hexene catalytic polymerization loop-reactor series.
References
C. Chatzidoukas, J. D. Perkins, E. N. Pistikopoulos and C. Kiparissides, 2003, 'Optimal grade transition and selection of closed-loop controllers in a gas-phase olefin polymerization fluidized bed reactor', Chem. Eng. Sci., 58, 16, pp. 3643-3658.
P. Marechal, 2006, 'Process for producing bimodal polyethylene resins', US Patent No. 7,034,092 B2.
V. Touloupides, V. Kanellopoulos, P. Pladis, C. Kiparissides, D. Mignon and P. Van-Grabezen, 2010, 'Modeling and simulation of an industrial slurry-phase catalytic olefin polymerization reactor series', Chem. Eng. Sci., 65, pp. 3208-3222.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Nonlinear State Estimation with Delayed Measurements. Application to Polymer Processes
Ruben Galdeano, Mariano Asteasuain, Mabel C. Sanchez
PLAPIQUI (CONICET – UNS), Camino la Carrindanga km 7, Bahia Blanca 8000, Argentina.
Abstract
The measurement and control of polymer processes are very challenging tasks due to their high nonlinearity, the strong relationship between quality variables and process conditions, and the absence of reliable on-line sensors. In these processes, many important variables cannot be measured on-line or can only be measured at low sampling frequencies. In this context, state estimation becomes an important step for the proper implementation of control systems. A relatively new method known as the Unscented Kalman Filter (UKF) has been developed for nonlinear processes. This method overcomes disadvantages of classical techniques like the extended Kalman filter. In this work, a methodology is developed for incorporating delayed measurements into the UKF framework without resorting to re-estimation steps. In this procedure, the filter algorithm is adapted to include delayed measurements by modifying the gain matrix. Very good results are obtained in terms of accuracy and computational time.
Keywords: state estimation, delayed measurements, unscented Kalman filter.
1. Introduction
The problem of state estimation in nonlinear processes has been covered extensively in the past. The most widespread approach in process control is the extended Kalman filter (EKF). However, this strategy may present problems in the case of highly nonlinear systems [Simon, 2006]. A relatively new method, known as the Unscented Kalman Filter (UKF), has been developed for this type of process. It is based on the unscented transform technique, a mechanism for propagating the mean and covariance of a random variable through a nonlinear transformation [Julier and Uhlmann, 1997]. The process measurements available for estimation purposes are usually of the following types: continuous measurements which can be obtained at a high sampling frequency, delayed measurements from different devices that are available at a fixed sampling rate, and manual laboratory measurements which are available at unequal intervals and with varying delays [Schei, 2008]. In polymer processes in particular, several critical quality variables belong to the last two of these types [Fonseca et al., 2009]. In spite of its significance, few works related to state estimation in control systems address the use of delayed measurements and how they affect the quality of the estimate. A number of methodologies have been presented for the optimal treatment of time-delayed measurements in the framework of linear Kalman filters [Bar-Shalom, 2002]. Also, some authors have applied a procedure which works on two time scales. In this strategy, the delayed measurements are stored together with the output data, and the filter is rerun from the time when the measurement sample was drawn from the process. Then, the state is updated with these new values, which become the starting point for the next estimation [Kim and Choi, 1991]. In a previous work [Galdeano et al., 2010] we
successfully applied this approach to the UKF. However, it was observed that the re-estimation step can be time-consuming in certain situations, which may cause the estimated data to be unavailable at the right time. In this paper, a methodology for the treatment of delayed measurements within the framework of the UKF is presented. The filter algorithm is modified to take this type of measurement into account while avoiding the execution of the re-estimation step. The gain matrix is changed in such a way that the update of the state vector and covariance matrix can be done in a single step. In this way, the required computational time is significantly reduced.
2. Methodology
A discrete-continuous state-space model is considered for the state estimation problem:

dx/dt = f(x, u, t) + w(t),    w(t) ~ N[0, Q(t)]        (1)
y_k = h(x_k) + v_k,           v_k ~ N[0, R_k]          (2)
where x is the n-dimensional state vector, f(x,u,t) is an n-dimensional vector-valued function, y is the m-dimensional observation vector, u is a vector of known manipulated inputs, w(t) is the n-dimensional plant noise vector, and v is the m-dimensional observation noise vector. Variable Q(t) is an n-dimensional diagonal matrix representing the covariance of the process model errors, and R is an m-dimensional diagonal matrix for the covariance of the measurement errors. Vectors v and w(t) are zero-mean white Gaussian noises assumed to be independent of each other. The actualization step of the filter with delayed measurements employs the following discrete model:

x_{k-τ} = F(x_k, k-τ) + w,                  w ~ N[0, Q_{k,k-τ}]            (3)
y_{d,k-τ} = H_d(x_{d,k-τ}) + v_{d,k-τ},     v_{d,k-τ} ~ N[0, R_{d,k-τ}]    (4)
where τ is the measurement delay, subscript d indicates a variable associated with delayed measurements, and Q_{k,k-τ} and R_{d,k-τ} are the covariance matrices of the process errors and measurement errors, respectively, at k-τ. The dimensions of R_{d,k-τ} and R_k depend on the number of variables measured at the corresponding time. Assuming that delayed measurements are present, the estimated values and the covariance are defined as:

x̂(k|k-τ) = E[ x_k | y_k, y_{k-τ} ]                                          (5)
P(k|k-τ) = E{ [x_k − x̂(k|k-τ)] [x_k − x̂(k|k-τ)]^T | y_k, y_{k-τ} }          (6)
The UKF follows the conventional structure of prediction and actualization stages. The state vector x is approximated by a matrix of sigma points χ, selected as shown in Eq. (7). In the prediction stage, the sigma points are propagated as indicated in Eq. (8):

χ_k = [ x̂_k   x̂_k + (√((n+λ) P_k))_i   x̂_k − (√((n+λ) P_k))_i ]        (7)
χ_k^− = F[χ_k, u_k]                                                     (8)

In Eq. (7), λ is a tuning parameter. The sigma points χ_i are associated with their corresponding weights W_i. The mean and covariance of x are calculated as:

x̂_k^− = Σ_{i=0..2n} W_i^m χ_{i,k}^−                                         (9)
P_k^− = Σ_{i=0..2n} W_i^c [χ_{i,k}^− − x̂_k^−] [χ_{i,k}^− − x̂_k^−]^T         (10)
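A compact numerical sketch of Eqs. (7)-(10) is given below. The weights follow the standard unscented-transform convention, which the paper does not spell out, so they are an assumption here; the trivial linear "plant" at the end is only a usage example.

import numpy as np

def sigma_points(x_hat, P, lam=1.0):
    # Eq. (7): 2n+1 sigma points from the matrix square root of (n+lam)*P.
    n = x_hat.size
    S = np.linalg.cholesky((n + lam) * P)
    pts = [x_hat] + [x_hat + S[:, i] for i in range(n)] \
                  + [x_hat - S[:, i] for i in range(n)]
    # Standard unscented-transform weights (assumed; not given in the paper).
    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wm[0] = lam / (n + lam)
    return np.array(pts), wm

def predict(chi, wm, F):
    # Eqs. (8)-(10): propagate sigma points, then recover mean and covariance.
    chi_minus = np.array([F(c) for c in chi])
    x_minus = wm @ chi_minus
    dev = chi_minus - x_minus
    P_minus = (wm[:, None] * dev).T @ dev
    return x_minus, P_minus

# Usage with a trivial linear plant F(x) = 0.9 x:
x0, P0 = np.array([1.0, 2.0]), np.eye(2)
chi, wm = sigma_points(x0, P0)
xm, Pm = predict(chi, wm, lambda x: 0.9 * x)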
In the modification of the UKF proposed in this work, the actualization stage is divided into two parts: actualization with on-line measurements and actualization with delayed measurements. The first follows the standard UKF algorithm:

x̂_k^+ = x̂_k^− + K_k [y_k − ŷ_k]          (11)
P_k^+ = P_k^− − K_k P_{yk,yk} K_k^T        (12)

where K_k is the gain matrix at t_k for on-line measurements, defined as:

K_k = P_{xk,yk} P_{yk,yk}^{-1}             (13)
The actualization with delayed measurements corrects the states updated with on-line measurements in a similar fashion:

x̂_{k,d} = x̂_k^+ + K_{k,d} (y_{k−d} − ŷ_{k−d})        (14)
P_{k,d} = P_k^+ − K_{k,d} P_{k,d,y} K_{k,d}^T          (15)

where K_{k,d} is the gain matrix at t_k for delayed measurements. For convenience we let:

K_{k,d} = P_{k,d} P_{k,d,y}^{-1}                       (16)
In this equation, P_{k,d,y} is a conditional covariance of the observation statistics and P_{k,d} represents a conditional cross covariance of the states with the delayed observations. In order to apply Eqs. (14)-(16), the covariance of the retrodicted measurements using delayed measurements has to be determined. This is carried out by considering the conditional statistics of x_k given y_{k−d}. This is a suboptimal approximation to the solution of the state estimation problem, which yields the following equations for P_{k,d,y} and P_{k,d}:

P_{k,d,y} = Σ_{i=0..2N} W_i [h_{k−d}(χ_{i,k−d}) − y_{k−d}] [h_{k−d}(χ_{i,k−d}) − y_{k−d}]^T      (17)
P_{k,d} = Σ_{i=0..2N} W_i [f_{k−d}(χ_{i,k−d}) − x̂_{k−d}] [h_{k−d}(χ_{i,k−d}) − ŷ_{k−d}]^T        (18)

where

ŷ_{k−d} = Σ_{i=0..2N} W_i h_{k−d}(χ*_{i,k−d})        (19)
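The delayed-measurement correction of Eqs. (14)-(19) can be sketched as follows. The sigma points and weights are assumed to be available from the prediction stage at sample time k-d, and the matrix shapes and function names are illustrative, not the authors' code.

import numpy as np

def delayed_update(x_plus, P_plus, chi_kd, w, f_kd, h_kd, y_kd):
    # One-step correction with a delayed measurement y_kd (Eqs. 14-19).
    # chi_kd: sigma points stored at sample time k-d; w: their weights;
    # f_kd, h_kd: state and measurement functions evaluated at k-d.
    H = np.array([h_kd(c) for c in chi_kd])        # h(chi) for every sigma point
    Fx = np.array([f_kd(c) for c in chi_kd])       # f(chi) for every sigma point
    y_hat = w @ H                                  # Eq. (19): retrodicted measurement
    x_hat = w @ Fx
    dH, dHy, dF = H - y_hat, H - y_kd, Fx - x_hat
    P_dy = (w[:, None] * dHy).T @ dHy              # Eq. (17)
    P_xd = (w[:, None] * dF).T @ dH                # Eq. (18): cross covariance
    K_d = P_xd @ np.linalg.inv(P_dy)               # Eq. (16)
    x_new = x_plus + K_d @ (y_kd - y_hat)          # Eq. (14)
    P_new = P_plus - K_d @ P_dy @ K_d.T            # Eq. (15)
    return x_new, P_new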
3. Results and Discussion
The proposed UKF (UKF-d) was applied to state estimation in a free-radical solution polymerization of styrene in a jacketed continuous stirred tank reactor (CSTR). Toluene and azo-bis-iso-butyronitrile (AIBN) are used as solvent and initiator, respectively. The mathematical model of the process involves six state variables, which include the reactants (initiator, solvent and monomer), the 1st-order moment of the polymer MWD, and the number-average (Mn) and weight-average (Mw) molecular weights of the polymer. This model has been used before as a case study in several works on state estimation [Vachhani et al., 2006]. Details about the mathematical model and operating conditions can be found elsewhere [Zambare et al., 2002]. In this study it is considered that the initiator and monomer compositions are measured on-line with a sample time of 5 min, while Mn and Mw are measured off-line with a delay of 30 min. Simulation runs were performed for a step change at time 1 h in the inlet feed of initiator, from an initial value of 15×10-5 kmol/s to 7.9×10-5 kmol/s. The process was sampled 100 times. The filter algorithm was implemented in Matlab with an embedded process model developed in gPROMS, on a PC with a Pentium 4 (3 GHz) processor and 2 GB of RAM.
The performance of the UKF-d for the estimation of Mn and Mw is shown in Fig. 1. It can be seen that the filter responds satisfactorily, in terms of accuracy and stability, to a system change. In addition, the UKF-d was compared to our previous two-time scale UKF by calculating the root mean square errors (RMS) in Mn and Mw and the computational time required for the state actualization step. The results are shown in Table 1. It can be seen that the quality of the estimations is very satisfactory, although the RMS values are slightly higher for the UKF-d. Nevertheless, the RMS values of the proposed UKF are less than 3% of the actual values of Mn and Mw, which is very reasonable for these variables. On the other hand, the computational time is 40% smaller with the UKF-d. These results ensure efficient use in on-line applications, in particular for more complex models with a larger number of states.

Table 1. Evaluation of computational time and RMS

                          UKF-d     Two-time scale UKF
Computational time (s)    0.8536    1.3560
RMS (Mw)                  175.32    143.65
RMS (Mn)                  109.75    95.73
Figure 1. Estimation of Mw and Mn with the UKF-d. — True value; × Measurements; ···· Filter.
4. Conclusions
In this work, a methodology for incorporating delayed measurements in the UKF was presented. The computational load and the root mean square error were employed to evaluate the performance of the UKF for on-line application to polymer processes. The results show that the proposed method provides quality and stability in the estimation. In addition, it reduces the computation time significantly, making it a good alternative for on-line application. These features make the proposed methodology a promising tool for state estimation when delayed measurements are present. Implementation of this approach in constrained state estimation is under investigation.
References
Y. Bar-Shalom, 2002, Update with Out-of-Sequence Measurements in Tracking: Exact Solution, IEEE Trans. Aero. Electron. Syst., 38, 3, 769-777.
G.E. Fonseca, M.A. Dubé, A. Penlidis, 2009, A Critical Overview of Sensors for Monitoring Polymerizations, Macromol. React. Eng., 3, 7, 327-373.
R. Galdeano, M. Asteasuain, M. Sánchez, 2010, Two-Time Scale Unscented Kalman Filters for Nonlinear State Estimation with Time-Delayed Measurements: Application to Polymer Processes, CD of the 2010 AIChE Annual Meeting.
K.J. Kim, K.Y. Choi, 1991, On-Line Estimation and Control of a Continuous Stirred Tank Polymerization Reactor, J. Process Control, 1, 2, 96-110.
T.S. Schei, 2008, On-Line Estimation for Process Control and Optimization Applications, J. Process Control, 18, 9, 821-828.
P. Vachhani, S. Narasimhan, R. Rengaswamy, 2006, Robust and Reliable Estimation via Unscented Recursive Nonlinear Dynamic Data Reconciliation, J. Process Control, 16, 10, 1075-1086.
N. Zambare, M. Soroush, M.C. Grady, 2002, Real-Time Multirate State Estimation in a Pilot-Scale Polymerization Reactor, AIChE J., 48, 5, 1022-1033.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Optimal controlled variable selection using a nonlinear simulation-optimization framework
Mahdi Sharifzadeh*, Nina F. Thornhill
Centre for Process Systems Engineering (CPSE), Department of Chemical Engineering, Imperial College London, South Kensington Campus, London SW7 2AZ. *Email: [email protected]
Abstract
In feedback control, controlled variables are the process variables that are measured and fed back to controllers. In the presence of disturbances, the controllers aim to maintain the controlled variables at their setpoints by manipulating the inputs. The objectives for the selection of controlled variables can be conflicting and competing. These objectives include minimization of (1) economic losses, (2) input manipulations, (3) output variations and (4) changes in the process states. This research presents a systematic framework for the optimal selection of controlled variables. Each of the above-mentioned objectives is included in a multi-objective function. In addition, the reasoning behind the selection of a nonlinear steady-state model is explained. The proposed methodology is benchmarked on an industrial distillation train. The optimization programming is presented, and the paper discusses how the size of the optimization problem can be reduced by means of engineering insights while addressing concerns regarding the feasibility of the developed control structure. The methodology is scalable to large industrial problems while maintaining its rigour. The results confirm that a very good trade-off is established between the different objectives.
Keywords: controlled variables, optimal control, simulation-optimization, profitability.
1. Introduction
Some process variables are more important than others, in the sense that if they are optimally controlled, they ensure optimal operation of the process (Morari, 1980). The decision regarding the optimal selection of controlled variables is at a higher level than the pairing of controlled and manipulated variables. Optimal controlled variables benefit an on-line optimizing control structure in that maintaining them at their setpoints maximizes profitability and minimizes the requirement for setpoint manipulation. In addition, in a control structure with a constant setpoint policy, optimal controlled variables induce a self-optimizing framework (Skogestad, 2000). This paper presents a simulation-optimization framework for the optimal selection of controlled variables. Its novel contributions are: (i) it explicitly establishes a trade-off between the conflicting and competing desirable properties of controlled variables; (ii) operability of the process is ensured through nonlinear modelling, so the method is not limited to the linearization region; and (iii) engineering insights are used to reduce the size of the problem. Thus, although the methodology addresses the problem systematically, the formulation is manageable and practicable. The paper is organized as follows. Section 2 discusses the selection of optimization variables, the mathematical statement of the problem, and the reasoning behind the selection of the type of model. The optimal selection of controlled variables for an
industrial distillation train is presented in Section 3. It is explained how engineering insights are employed to reduce the problem size. The results are discussed in Section 4.
2. Problem formulation
This section discusses the methodology for formulating the optimal selection of controlled variables as an optimization problem.
2.1. Selection of which controlled variables are going to be optimized
Two categories of controlled variables are imposed by the process and are exempt from the economic optimization. Firstly, as discussed by Luyben (1998), control structure selection must ensure consistency and feasibility of the process operation, such as setting throughput, ensuring total and component mass balances (i.e. inventory controls), and energy management. Secondly, during process optimization some inequality constraints become active. An example is the reactor temperature, limited by catalyst tolerance. Such a variable cannot be included in the economic optimization because higher values invoke technical concerns and lower values degrade profitability. Therefore, there is no choice of whether to control it or not: it must be controlled.
2.2. Mathematical statement
The following mathematical statement of the problem is adopted from Halvorsen (2003), but is not restricted to the linear case. The overall objective function can be expressed in terms of maximization of profits or minimization of losses:
$\mathbf{y} = \arg(\min Z(\mathbf{u}, \mathbf{x}, \mathbf{d}))$ subject to: $g_1(\mathbf{u}, \mathbf{x}, \mathbf{d}) = 0$, $g_2(\mathbf{u}, \mathbf{x}, \mathbf{d}) \le 0$   [1]
In the above, $\mathbf{u}$ is a vector¹ of manipulated variables (also called inputs) and $\mathbf{d}$ is a vector of disturbances that independently influence the process. Vector $\mathbf{x}$ represents the state variables. Some state variables can be measured directly, but the rest, if needed, must be estimated. Vector $\mathbf{y}$ represents the vector of controlled variables (also called outputs), which is selected from the measurable state variables. Vector $g_1$ represents equality constraints such as the process model, and vector $g_2$ represents inequality constraints, such as technical, safety or environmental concerns. The state variables can be calculated from the equality constraints, in terms of the disturbance and input variables:
$\mathbf{y} = \arg(\min Z(\mathbf{u}, \mathbf{d}))$ subject to: $g(\mathbf{u}, \mathbf{d}) \le 0$, $\mathbf{x} = g_3(\mathbf{u}, \mathbf{d})$   [2]
As discussed above, when an inequality constraint becomes active during optimization of the process, it changes to an equality and therefore removes a degree of freedom. Consequently, the initial set of manipulated variables $\mathbf{u}$ will be reduced to $\{\mathbf{u^r}\} \in \{\mathbf{u}\}$:
$\mathbf{y^r} = \arg(\min Z(\mathbf{u^r}, \mathbf{d}))$ subject to: $g^r(\mathbf{u^r}, \mathbf{d}) < 0$, $\mathbf{x} = g(\mathbf{u}, \mathbf{d})$   [3]
where superscript $r$ represents the reduced search space. More insight can be gained about problem [3] by observing that disturbances occur independently, while the inputs are determined by solving the optimization problem. Therefore, the value of the objective function depends on the value of the disturbances only:
$Z_{opt} = Z(\mathbf{d})$   [4]
2.3. Objective functions and optimization variables
This sub-section discusses the optimization objectives, i.e. which indices can measure the fitness of candidate controlled variables. The proposed objective functions are listed in Table 1 and are discussed in the following. The implication of the first objective, $f_1$, is that maintaining the candidate controlled variables at their setpoints must guarantee the specification of the products. The second objective, $f_2$, aims to minimize changes in the manipulated variables, in order
¹ In this paper, the bold case is reserved for vectors and scalar values are shown in italics.
to avoid valve saturation (thus maintaining controllability), to reduce the interaction between process variables, to minimize the consumption of inputs, and to minimize the time required for disturbance rejection (Qin, 2003; McAvoy, 1999). The third objective, $f_3$, measures the resistance of the closed-loop process against disturbances when the controlled variables are maintained at their setpoints. An example of this objective is the change in the temperature profile of a distillation column when the flow or composition of the feed is disturbed. The implication of the above three objectives is to minimize the trajectories between ultimate steady states of the process, hence minimizing the required control time and effort. The fourth objective concerns operational loss, i.e. the decrease in profitability due to disturbances. The origin of this objective is the notion of self-optimizing control, and its implication is that maintaining the optimal controlled variables at their setpoints must minimize losses in the presence of disturbances (Skogestad, 2000).

[Table 1: the objective functions $f_1$–$f_4$ for the optimal selection of controlled variables.]

2.4. The simulation-optimization framework
The equality constraint of Problem [2], in the linear form, is as below:
$\mathbf{y} = \mathbf{G_3} \times \mathbf{u} + \mathbf{G_{3,d}} \times \mathbf{d} \;\rightarrow\; \mathbf{u} = \mathbf{G_3^{-1}} \times \mathbf{y} - \mathbf{G_3^{-1}} \times \mathbf{G_{3,d}} \times \mathbf{d}$   [5]
where $\mathbf{y}$ is the subset of state variables selected as controlled variables. The implication of equation [5] for the optimal selection of controlled variables is that for each set of controlled variables a specific linearized model $\mathbf{G_3}$ must be constructed and the model matrix must be inverted, $\mathbf{G_3^{-1}}$. This model matrix, however, is not always invertible or a good approximation, resulting in inoperable designs. Therefore, a nonlinear model is employed, in which the inverse model is replaced by the simulation. Figure 1 shows the proposed simulation-optimization framework. The simulator is based on first-principles laws. The disturbance model represents the uncertainty associated with the independent exogenous variables which affect the process, such as changes in the flow or composition of the feed. When the selected controlled variables, decided by the optimizer, are set in the simulation software, the disturbance scenarios are imposed on the model and the consequence of controlling these choices of controlled variables is evaluated through the objective functions. The result is reported to the optimizer.

Figure 1. Simulation-optimization framework for optimal selection of controlled variables: the optimization (genetic) algorithm passes the structural decision regarding the selection of controlled variables to the simulation (Aspen HYSYS); the disturbance model and the evaluation of the multi-objective functions return scalar objective values to the optimizer.

Figure 2. PGH plant; the framed part of the flowsheet, including the distillation sequence, is nominated for the case study.
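To make the loop of Figure 1 concrete, a minimal sketch of the simulation-optimization interaction follows, assuming a generic black-box simulate(cv_set, disturbance) call that returns the four objective values; the GA operators, encoding and all names are illustrative assumptions, not the actual MATLAB GA/Aspen HYSYS COM interface.

    import random

    def evaluate_candidate(cv_set, simulate, scenarios, weights):
        # Hold the candidate controlled variables at their setpoints, impose
        # every disturbance scenario and scalarize the objectives f1-f4.
        total = 0.0
        for d in scenarios:
            f = simulate(cv_set, d)      # hypothetical flowsheet evaluation
            total += sum(w * fi for w, fi in zip(weights, f))
        return total / len(scenarios)

    def ga_select_cvs(candidates, simulate, scenarios, weights, pop=20, gens=50):
        population = random.sample(candidates, pop)
        score = lambda c: evaluate_candidate(c, simulate, scenarios, weights)
        for _ in range(gens):
            population.sort(key=score)
            parents = population[:pop // 2]           # keep the fittest half
            children = []
            while len(parents) + len(children) < pop:
                p1, p2 = random.sample(parents, 2)
                # uniform crossover over the structural decision (which CVs to use)
                children.append(tuple(random.choice(g) for g in zip(p1, p2)))
            population = parents + children
        return min(population, key=score)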
3. Case study
This section describes the process and the engineering insights used in the benchmark problem.
3.1. Process description of the pyrolysis gasoline hydrogenation (PGH) plant
The case study is adapted from the olefin plant of Arak Petrochemical Co. The process
description for the overall olefin plant is available in the literature (e.g. Othmer, 2007). Among the products of the olefin plant there is a blend with properties very similar to gasoline. The disadvantage of this product is that the dissolved light olefins are highly reactive, with a risk of polymerization if it is stored untreated. Therefore, this blend must be saturated by hydrogenation reactions. Then, a sequence of three distillations separates the C5, C6, C7+ and heavy-ends products. The process schematic is shown in Figure 2. The distillation train studied in this research is framed on the lower right-hand side of Figure 2. The first column, the depentanizer, has three products. The column has a partial reflux configuration and the gaseous overhead product is mostly hydrogen. The main product is the C5 cut, which is withdrawn as the side stream. The bottom stream is fed to the dehexanizer column. The C6 cut is produced as the top product and the bottom stream is fed to a vacuum (last) column to be resolved into C7+ and heavy-ends. The required computational effort of simulating the process is relatively high because the pyrolysis gasoline must be estimated by 34 components. The modified Peng-Robinson equation of state is applied for the thermodynamic calculations (Aspen-HYSYS (V7.1)). The simulation is performed using Aspen HYSYS® and the optimization algorithm is the GA toolbox of MATLAB®, which is linked to Aspen HYSYS using the COM® automation interface as the client. The feed stream to the depentanizer column is assumed as the disturbance. The feed can be interpreted as a mixture of four products: the C5, C6, C7+ and heavy-ends cuts. Assuming a ±5% disturbance in each of these cuts, there are sixteen disturbance scenarios in the flowrate and composition of the feed.
3.2. Reducing the size of the optimization problem
As discussed in Section 2.1, those controlled variables associated with the feasibility and consistency of the control structure are decided in advance (explained below). In addition, the design space is limited to controlled variables with reasonable measurement characteristics. These concerns can be employed to manage the size of the problem. Degree of freedom analysis: if the feed is assumed as disturbance, then, in a total reflux column, there are five degrees of freedom including boil-up, cooling duty, reflux, and the flowrates of the overhead and bottom products. However, controlling the overhead and bottom liquid inventories (levels) and the vapor inventory (column pressure) consumes three degrees of freedom, and two degrees of freedom remain. In a column with a side product stream, there is an extra degree of freedom. Inferential control: with the liquid level and pressure control loops closed, the distillation column is still unstable due to composition drift (Skogestad, 2007). However, direct measurement of the composition with an analyzer is expensive and involves delays. Therefore, temperatures should be measured for inferential composition control (Luyben, 2005; Luyben, 2006). Flow control: if a degree of freedom remains after the above decisions, a flow or flow-ratio variable (D, B, R, D/F, R/F, B/F) is selected.

4. Results of the case study
Table 2 presents the results of the simulation-optimization program. These include the controlled variables which are selected by the optimizer for each distillation column.

Table 2. The optimal controlled variables elected for the three distillation columns
CV    C5 Column                    C6 Column                    C7 Column
1st   Temperature of 45th tray     Temperature of 24th tray     Temperature of 6th tray
2nd   Temperature of 33rd tray     Temperature of 10th tray     none
3rd   Reflux/Feed                  Reflux                       none

Table 3 shows that a very good trade-off between the different competing objective functions is established. The implication is that while the manipulated variables are preserved from excessive movement in the different disturbance scenarios, operational costs are minimized, the product specifications are met, and minor changes in the average temperature profiles indicate short trajectories between different process steady states.
Optimal controlled variable selection using a nonlinear simulation-optimization framework Table 3. the average of optimal values of objective functions in different disturbance scenarios Changes in manipulated Changes in each tray Changes in operational Changes in product variables [%] temperature [oC] net Profit [%] molecular weight [%] 1.47 0.18 -0.041 0.81
601
Changes in product density [%] 0.28
For the case of distillation columns, process insights can be employed to partition the manipulated and controlled variables. For more complex processes (e.g. with heat integration or recycle streams), RGA or RHP-zero analysis can be applied (Pham, 2009). Assuming a decentralized control structure, Figure 3 shows the final control structure, including the controlled variables selected for optimization and the liquid/vapor inventory loops.

Figure 3. Control structure for the distillation train of the PGH process: pressure (PC) and level (LC) loops on each column, temperature controllers (TC) on trays 45 and 33 of the depentanizer, trays 24 and 10 of the dehexanizer and tray 6 of the rerun column, and a flow-ratio (R/F) controller on the depentanizer (stage numbering is bottom-up).

5. Conclusion
In this research, an optimization-simulation framework is presented for the optimal selection of controlled variables. The criteria for the selection of controlled variables are explained, and a steady-state simulation is employed for modelling in order to avoid unnecessarily increasing the size of the problem. In addition, it is explained why a nonlinear model is needed for ensuring operability of the closed-loop process. The proposed methodology is benchmarked on an industrial case study of a distillation train. Engineering insights are applied to reduce the size of the problem. The results show that a very good trade-off between the different objective functions is established, which ensures controllability and profitability of the control structure. In addition, the methodology is not limited to the linearized space and is scalable and practicable for industrial problems.
References
Aspen-HYSYS (V7.1), 2009, Simulation Basis (software documentation), Aspen Technology.
I.J. Halvorsen, S. Skogestad, 2003, Optimal selection of controlled variables, Industrial & Engineering Chemistry Research, 42(14), 3273-3284.
W.L. Luyben, 2006, Evaluation of criteria for selecting temperature control trays in distillation columns, Journal of Process Control, 16(2), 115-134.
W.L. Luyben, 2006, Distillation Design and Control using Aspen Simulation, John Wiley.
W.L. Luyben, B.D. Tyreus, 1998, Plantwide Process Control, McGraw-Hill.
T.J. McAvoy, 1999, Synthesis of plantwide control systems using optimization, Ind. Eng. Chem. Res., 38(8), 2984-2994.
M. Morari, Y. Arkun, 1980, Studies in the synthesis of control structures for chemical processes: Part I: Formulation of the problem. Process decomposition and the classification of the control tasks. Analysis of the optimizing control structures, AIChE Journal, 26(2), 220-232.
Kirk-Othmer, 2007, Encyclopedia of Chemical Technology, 5th edition, Wiley-Interscience, Volume 10, 593-631.
L.C. Pham, S. Engell, 2009, Control structure selection for optimal disturbance rejection, 18th IEEE International Conference on Control Applications, Part of 2009 IEEE Multi-conference on Systems and Control, 707-712.
S.J. Qin, T.A. Badgwell, 2003, A survey of industrial model predictive control technology, Control Engineering Practice, 11(7), 733-764.
J.B. Rawlings, D.Q. Mayne, 2009, Model Predictive Control: Theory and Design, Nob Hill Publishing, 1st edition.
V. Sakizlis, J.D. Perkins, 2004, Recent advances in optimization-based simultaneous process and control design, Computers & Chemical Engineering, 28(10), 2069-2086.
S. Skogestad, 2000, Plantwide control: the search for the self-optimizing control structure, Journal of Process Control, 10(5), 487-507.
S. Skogestad, 2007, The dos and don'ts of distillation column control, Chemical Engineering Research and Design, 85(1), 13-23.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Branch-and-Sandwich: An Algorithm for Optimistic Bi-Level Programming Problems
Polyxeni M. Kleniati, Claire S. Adjiman
Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College London, London SW7 2AZ, United Kingdom
Abstract We consider optimistic bi-level programming problems that possess a nonconvex inner program. We propose a solution strategy based on the refinement of two sets of converging lower and upper bounds. Namely, valid lower and upper bounds are computed simultaneously for the outer and inner objective values. Furthermore, appropriate fathoming tests are introduced in order to permit branching on all the variables without violating the hierarchical decision making. Examples are presented. Keywords: global optimisation, nonconvex inner problem.
1. Introduction
Bi-level programming problems (eqs. (1) and (2)) model a hierarchical decision-making process. Their structure facilitates the formulation of practical problems that involve a hierarchical system, such as network design, resource allocation and optimal design in process systems engineering.
$\min_{x,y} \{ F(x,y)\ \text{s.t.}\ G(x,y) \le 0,\ x \in X \subset R^n,\ y \in M(x) \}$,   (1)
where
$M(x) = \{ y : y \in \arg\min_y \{ f(x,y)\ \text{s.t.}\ g(x,y) \le 0,\ y \in Y \subset R^m \} \}$.   (2)
Bi-level programming problems can be interpreted as leader-follower games, where the leader chooses the outer vector x and the follower responds with an inner vector y optimising his own objective. Formulation (1) is an optimistic bi-level program, implying that if the follower has more than one global optimal solution, the leader can choose the one that minimises his own objective function. Severe implications stem from the hierarchical structure of bi-level problems, such as nonconvex and even disconnected feasible regions. Various classes of bi-level programming problems have been tackled, e.g. [1,2,3,4], while the general class of nonconvex bi-level problems was an open problem until recent works [5,6]. In this paper, we propose a novel solution strategy for the general class of nonconvex bi-level problems based on the computation of two pairs of converging bounds over refined host sets. We consider problems that satisfy the following assumptions: (i) all the functions involved are twice differentiable and continuous; (ii) the sets X and Y are compact; (iii) a constraint qualification holds for problem (4) below for all values of x; and (iv) bounds on the Lagrange multipliers of all inner constraints are known. The proposed approach can be interpreted as the exploration of two solution spaces (corresponding to the inner and outer problems) using a single branch-and-bound tree and is motivated by the following equivalent formulation of (1) [2]:
$\min_{x \in X, y \in Y} \{ F(x,y)\ \text{s.t.}\ G(x,y) \le 0,\ g(x,y) \le 0,\ f(x,y) - w(x) \le 0 \}$,   (3)
where the outer and inner problems are not coupled by the set (2) of optimal solutions of the inner problem but by its optimal value function:
$w(x) = \min_y \{ f(x,y)\ \text{s.t.}\ g(x,y) \le 0,\ y \in Y \}$.   (4)
The benefit of formulation (3) is that a restriction of the inner problem (4) yields a relaxation of the overall problem (3), and vice-versa [5]. In view of this observation, we bound the value of (4) by employing well-known convexification techniques to compute valid inner lower and upper bounds. The upper (constant) bound cut augments our proposed (nonconvex) lower bounding problem that approximates problem (3). A similar constant bound cut was also discussed in [7]. The lower bound on the inner objective value is used to perform an appropriate fathoming test that permits branching on all the variables without violating the hierarchical decision making. The three main differences of our approach compared with the algorithm of [5] are that: (i) we never solve the nonconvex inner problem globally, (ii) we permit branching on all the variables, making the use of artificial inner variables redundant, and (iii) the convergence of our approach is established within a branch-and-bound framework, while convergence in [5] is independent of branching. In Section 2, we present the bounding and branching schemes, we outline the algorithm and we also discuss convergence briefly. In Section 3, examples are presented.
2. Branch-and-Sandwich Algorithm
2.1. Bounding Scheme
2.1.1. Inner Problem Bounding Scheme
Following the ideas on constructing convex underestimators of nonconvex functions developed in [8,9] and the references therein, let $f_{x,y}^u(x,y)$ and $g_{x,y}^u(x,y)$ express the convex underestimators of the functions $f(x,y)$ and $g(x,y)$ on $X \times Y$, respectively. Then, we may construct a convex auxiliary problem whose optimal value is always a lower bound on the optimal value of (4) for all $x \in X$:
$\underline{f} = \min_{x \in X, y \in Y} \{ f_{x,y}^u(x,y)\ \text{s.t.}\ g_{x,y}^u(x,y) \le 0 \}$,   (5)
since for any feasible $(\bar{x},\bar{y})$ in (1) the following holds:
$\underline{f} \le \min_{x \in X, y \in Y} \{ f(x,y)\ \text{s.t.}\ g(x,y) \le 0 \} \le f(\bar{x},\bar{y}) = w(\bar{x})$.   (6)
In the same vein, let $f_{x,y}^o(x,y)$ and $g_{x,y}^o(x,y)$ express the convex (e.g. linear) overestimators of $f(x,y)$ and $g(x,y)$, respectively. Then, a conservative inner problem with respect to $x$ and $y$ may be written as follows:
$\overline{f} = \max_{x \in X} \min_{y \in Y} \{ f_{x,y}^o(x,y)\ \text{s.t.}\ g_{x,y}^o(x,y) \le 0 \}$.   (7)
The value $\overline{f}$ is always a valid upper bound on the optimal value of (4):
$\overline{f} \ge \max_{x \in X} \min_{y \in Y} \{ f(x,y)\ \text{s.t.}\ g(x,y) \le 0 \} \ge \min_{y \in Y} \{ f(x,y)\ \text{s.t.}\ g(x,y) \le 0 \}$;   (8)
hence, $\overline{f} \ge w(x)$ for all $x \in X$. If the overestimators are linear, the resulting inner minimisation problem is linear; hence, we can employ strong duality theory and write the dual to derive a linear maximisation problem in $x$ and the dual inner variables. An alternative valid inner upper bound may also be obtained using interval analysis [10,11]:
$\overline{f} \le \hat{f} = \min_{y \in Y} \{ [f_{x,y}^o(X,y)]^U\ \text{s.t.}\ [g_{x,y}^o(X,y)]^U \le 0 \}$,   (9)
where $[\cdot]^U$ denotes the maximum value of the interval inclusion over $X$.
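For intuition, the sketch below builds an αBB-type convex underestimator on a box, in the spirit of [8,13], and minimises it to obtain an inner lower bound such as $\underline{f}$ in Eq. (5); the inner objective of Example 1 (Eq. (14)) is used, while the α value, solver and starting point are illustrative assumptions, not the paper's implementation.

    import numpy as np
    from scipy.optimize import minimize

    def alpha_bb_underestimator(f, lb, ub, alpha):
        # f_u(z) = f(z) + alpha * sum((lb - z) * (ub - z)) is convex on [lb, ub]
        # for sufficiently large alpha (e.g. from interval Hessian bounds).
        def f_u(z):
            return f(z) + alpha * np.sum((lb - z) * (ub - z))
        return f_u

    # Inner objective of Example 1: f(y) = -y^2 on y in [-0.5, 1]
    f = lambda z: -z[0] ** 2
    lb, ub = np.array([-0.5]), np.array([1.0])
    f_u = alpha_bb_underestimator(f, lb, ub, alpha=1.0)  # alpha >= 1 convexifies -y^2
    res = minimize(f_u, x0=np.array([0.2]), bounds=[(-0.5, 1.0)])
    print(res.fun)  # a valid lower bound on the inner optimal value (here -1)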
2.1.2. Overall Problem Bounding Scheme
Assuming that a constraint qualification holds for (4) for all $x \in X$, the proposed lower bounding problem is formulated as follows, using $w = \overline{f}$ or $w = \hat{f}$:
$\underline{F} = \min_{x \in X, y \in Y} \{ F(x,y)\ \text{s.t.}\ G(x,y) \le 0,\ g(x,y) \le 0,\ f(x,y) \le w,\ (x,y) \in \Omega_{KKT} \}$,   (10)
where $\Omega_{KKT}$ is the set of all points satisfying the KKT conditions of problem (4). To show that (10) is a relaxation of (3), consider a feasible point $(\bar{x},\bar{y})$ in (3), i.e. $G(\bar{x},\bar{y}) \le 0$, $g(\bar{x},\bar{y}) \le 0$ and $f(\bar{x},\bar{y}) \le w(\bar{x}) \le w$ by (8) and (9). Finally, by the regularity assumption, $\bar{y} \in M(\bar{x})$ implies that $(\bar{x},\bar{y}) \in \Omega_{KKT}$, and we can conclude that $(\bar{x},\bar{y})$ is feasible in (10). For the upper bounding problem, we first solve the following problem:
$\underline{w}(\bar{x}) = \min_y \{ f_y^u(\bar{x},y)\ \text{s.t.}\ g_y^u(\bar{x},y) \le 0,\ y \in Y \}$,   (11)
where $\bar{x}$ is the solution of the lower bounding problem (10), and $f_y^u(\bar{x},y)$ and $g_y^u(\bar{x},y)$ denote the convex underestimators over $Y$ of, respectively, $f(\bar{x},y)$ and $g(\bar{x},y)$. The value $\underline{w}(\bar{x})$ is then used in the upper bounding problem, which follows the form of [5]:
$\overline{F} = \min_{y \in Y} \{ F(\bar{x},y)\ \text{s.t.}\ G(\bar{x},y) \le 0,\ g(\bar{x},y) \le 0,\ f(\bar{x},y) \le \underline{w}(\bar{x}) \}$.   (12)
A feasible point $(\bar{x},\bar{y})$ in (12), if it exists, satisfies $G(\bar{x},\bar{y}) \le 0$, $g(\bar{x},\bar{y}) \le 0$ and
$f(\bar{x},\bar{y}) \le \underline{w}(\bar{x}) \le \min_y \{ f(\bar{x},y)\ \text{s.t.}\ g(\bar{x},y) \le 0,\ y \in Y \} = w(\bar{x})$,   (13)
but $w(\bar{x}) \le f(\bar{x},\bar{y})$ since $g(\bar{x},\bar{y}) \le 0$; hence, $f(\bar{x},\bar{y}) = w(\bar{x})$ and $(\bar{x},\bar{y})$ is feasible in (3).
2.2. Branching Scheme & Inner Fathoming Test
We describe the branching framework in this section. Henceforth, the auxiliary inner problems (5) and (7) (or (9)), the relaxed inner problem (11) and the overall lower and upper bounding problems (10) and (12), respectively, are considered on subdomains $X^k \times Y^k \subseteq X \times Y$, where $k \in N$ denotes a partition element (node) and $N = \{\dots, n_{nodes}\}$ the list of unexplored nodes. It has been argued that branching on the inner variable may yield an invalid solution [5,6], since the hierarchical structure of bi-level problems implies consideration of the whole space $Y$. However, the authors of the previous works support branching on $y$ by introducing dummy variables $z$ in place of $y$ in the inner problem [5]. Thus, branching is permitted on $y$, the outer variables, but not permitted on $z$, the inner variables. In this work, we pursue a different strategy in order to permit branching on $y$ without the need to introduce artificial inner variables. In particular, consider a partition of $X$, i.e. $X = \cup X_p$ and $X_i \cap X_j = \partial X_i \cap \partial X_j$ for all $i, j$ ($i \ne j$), where $\partial X_i$ denotes the relative boundary of $X_i$ [12], such that the unfathomed nodes are classified into collections $N_{X_p}$ of sublists of $N$, and each collection covers a subregion of $X$, $X_p \subseteq X$, and all of $Y$, namely $X_p \times Y$. Finally, let $f_{X_p}^{LB}$ and $f_{X_p}^{UB}$ denote the best (lowest) known inner bounds over $X_p \times Y$. Then, the inner fathoming test for a node $k \in N_{X_p}$ involves two steps: (i) if $\underline{f}^k > f_{X_p}^{UB}$, fathom $k$ by removing it from $N$ and $N_{X_p}$; else (ii) if $\underline{f}^k > f_{X_p}^{LB}$, postpone $k$ and select another node for exploration. This procedure guarantees that no node holding an inner lower bound greater than the actual inner optimal value is selected prior to a node with a valid inner lower bound. Hence, an invalid outer upper bound cannot be obtained. As a result, we need not make any distinction during branching between the outer and inner instances of the inner variable. Furthermore, with regard to standard fathoming tests, a node which is known not to contain the global solution of the bi-level problem, e.g. $\underline{F} > F^{UB}$, where $F^{UB}$ is the best (lowest) known outer upper bound, may not be eligible for immediate full fathoming, because it may contain the global solution of the inner problem. In this case, the node is "outer fathomed", i.e. removed from $N$ but not from $N_{X_p}$. This node is fully fathomed eventually, as long as no node in $N$ contains its $X$ domain. As far as the domain partitioning and the branching variable selection rule are concerned, we apply standard strategies for the former, such as bisection; for the latter, the impact of existing, e.g. [13], and/or new rules remains to be investigated.
2.3. Algorithm
The Branch-and-Sandwich algorithm is outlined here: 0. Initialisation; 1a. node fathoming; if $N = \emptyset$, stop; 1b. node selection; 2. solve (5); 3. inner fathoming test; 4. solve (7) and update the inner "incumbent", if necessary; 5. solve (10) globally, fathom if infeasible; 6. if strict outer value dominance holds, outer fathom; 7. solve (11) and (12) and update the overall incumbent, if necessary; 8. $\varepsilon$-optimality test; if successful, update the inner "incumbent" if necessary, delete the node and go to 1; 9. branch and go to 1.
At initialisation, the well-known relaxation of (1) yields a point $(x^0, y^0)$. Then, the inner "incumbent" over $X \times Y$ is calculated as $f_{X_1}^{UB} = [f(X, y^0)]^U$, where $X_1 = X$. If we bisect $X$, the subdomains at the successor nodes, $X^1 \times Y$ and $X^2 \times Y$, form a partition of $X$ but cover the whole $Y$. We set $X_1 = X^1$ and $X_2 = X^2$, and two inner "incumbents" are generated, namely $f_{X_1}^{UB}$ and $f_{X_2}^{UB}$, which initially inherit the parent value; at step 4 of the algorithm none, one or both may be updated accordingly. Next, if at the left node, for example, we branch on $y$, the two successor nodes together cover the region $X_1 \times Y$ and one inner "incumbent", i.e. $f_{X_1}^{UB}$, corresponds to this region/list of nodes. On the other hand, if at the right node we branch on $x$, $f_{X_2}^{UB}$ and $f_{X_3}^{UB}$ are created and three inner "incumbents" exist in the tree, each "incumbent" covering the whole $Y$ and a subset of $X$. The creation of inner "incumbents" is managed at the branching step.
2.4. Convergence
A sketch of convergence is given based on [12, Theorem IV.3]. The bounding scheme can be shown to be consistent based on the convexification of the inner problem, the properties of the approximate overall problems, the partitioning of the inner space, as well as the fathoming rules that rule out infeasible points for both the inner and outer problems. The selection operation is customised to ensure that the node with the lowest inner lower bound and lowest outer lower bound is selected after a finite number of steps, and is thus shown to be bound improving. Therefore, the procedure is convergent.
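A schematic rendering of the node bookkeeping and the two-step inner fathoming test of Sections 2.2-2.3 is sketched below; the node record, list structures and precomputed bound fields are illustrative assumptions, with all actual bound computations abstracted away.

    from dataclasses import dataclass

    @dataclass
    class Node:
        domain: tuple    # (X_k, Y_k) subdomain of X x Y
        f_lb: float      # inner lower bound from Eq. (5)
        f_ub: float      # inner upper bound from Eq. (7) or (9)
        F_lb: float      # outer lower bound from Eq. (10)

    def inner_fathom(node, N, N_Xp, f_ub_Xp, f_lb_Xp):
        # Two-step test for a node in the sublist N_Xp covering X_p x Y.
        if node.f_lb > f_ub_Xp:          # (i) cannot contain the inner optimum
            N.remove(node); N_Xp.remove(node)
            return "fathomed"
        if node.f_lb > f_lb_Xp:          # (ii) a better node exists in N_Xp
            return "postponed"
        return "kept"

    def outer_fathom(node, N, F_UB):
        # A node with F_lb > F_UB cannot hold the bi-level optimum, but may
        # still hold the inner optimum: remove it from N only ("outer fathomed");
        # it is fully removed once no node in N covers its X domain.
        if node.F_lb > F_UB:
            N.remove(node)
            return True
        return False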
3. Examples
A few results are presented. Nonconvex constraints due to the KKT conditions are reformulated using standard integer programming techniques, e.g. [1, Ch. 3]. Nonconvex problems are solved with αBB [13]. In the tables, "−" stands for no computation.
Example 1 (branching on the y variable): Consider Example 3.4 in [14], with $y^* = 1$:
$\min_{y \in [-0.5,1]} \{ y\ \text{s.t.}\ y \in \arg\min_y \{ -y^2\ \text{s.t.}\ y \in [-0.5,1] \} \}$.   (14)
Example 2 (inner problem nonconvex and parametrised in x): Consider Example 3.11 in [14], with unique global optimal solution $(x^*, y^*) = (0, -0.8)$:
$\min_{x \in [-1,1],\, y \in [-0.8,1]} \{ y\ \text{s.t.}\ y \in \arg\min_y \{ x (16 y^4 + 2 y^3 - 8 y^2 - \tfrac{3}{2} y + \tfrac{1}{2})\ \text{s.t.}\ y \in [-0.8,1] \} \}$.   (15)
Example 3 (inner problem nonconvex and parametrised in x): Example 3.24 in [14], with global optimal solution $(x^*, y^*) = (0.2106, 1.799)$, yielding $F^* = -1.755$ and $f^* = 0$:
$\min_{x \in [0,1],\, y \in [0,3]} \{ x^2 - y\ \text{s.t.}\ y \in \arg\min_y \{ ((y - 1 - 0.1x)^2 - 0.5 - 0.5x)^2\ \text{s.t.}\ y \in [0,3] \} \}$.   (16)
[Tables for Examples 1-3: iteration-by-iteration branch-and-bound progress, listing for each node the inner bounds $\underline{f}$ and $\overline{f}$, the outer bounds $\underline{F}$ and $\overline{F}$, the incumbents $F^{UB}$ and $f_X^{UB}$, the subdomains explored (bisections of $[-0.5,1]$ for Example 1; $[-1,1] \times [-0.8,1]$ for Example 2; $[0,1] \times [0,3]$ and its refinements for Example 3) and the node decisions (postpone, fathom, outer fathom).]
Acknowledgment The authors are grateful to the Leverhulme Trust (Philip Leverhulme Prize) for support.
References
[1] J.F. Bard, 1998, Nonconvex Optimization and its Applications, 30
[2] S. Dempe, 2002, Nonconvex Optimization and its Applications, 61
[3] Z.H. Gümüs and C.A. Floudas, 2001, J. Global Optim., 20, 1, 1-31
[4] N.P. Faísca, V. Dua, B. Rustem, P.M. Saraiva and E.N. Pistikopoulos, 2007, J. Global Optim., 38, 4, 609-623
[5] A. Mitsos, P. Lemonidis and P.I. Barton, 2008, J. Global Optim., 42, 4, 475-513
[6] A. Tsoukalas, 2009, Global Optimization Algorithms for Multi-Level and Generalized Semi-Infinite Problems, PhD thesis, Imperial College London
[7] J.F. Bard, 1983, Math. Oper. Res., 8, 2, 260-272
[8] C.A. Floudas, 2000, Nonconvex Optimization and its Applications, 37
[9] M. Tawarmalani and N.V. Sahinidis, 2002, Nonconvex Optimization and its Applications, 65
[10] R.E. Moore, 1979, SIAM Studies in Applied Mathematics, 2
[11] B. Bhattacharjee, P. Lemonidis, W.H. Green Jr. and P.I. Barton, 2005, Math. Program., 103, 2, Ser. B, 283-307
[12] R. Horst and H. Tuy, 1993, Global Optimization, Springer-Verlag, 3rd Edition
[13] C.S. Adjiman, S. Dallwig, C.A. Floudas, A. Neumaier, 1998, Comp. Chem. Eng., 22, 9, 1159
[14] A. Mitsos and P.I. Barton, 2010, A Test Set for Bilevel Programs, Technical Report, Massachusetts Institute of Technology, http://yoric.mit.edu/sites/default/files/bileveltestset.pdf
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Comparison of Gradient Estimation Methods for Real-time Optimization Bala Srinivasan, Grégory François* and Dominique Bonvin* Ecole Polytechnique Montreal, Montreal, H3C 3A7 Canada *Laboratoire d’Automatique, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland. Abstract Various real-time optimization techniques proceed by controlling the gradient to zero. These methods primarily differ in the way the gradient is estimated. This paper compares various gradient estimation methods. It is argued that methods with model-based gradient estimation converge faster but can be inaccurate in the presence of plant-model mismatch. In contrast, model-free methods are accurate but typically take longer to converge. Keywords: Real-time optimization, Extremum-seeking control, Neighboring extremals, Self-optimizing control, Gradient estimation.
1. Introduction
Process optimization based on first-principles models is challenging due to the complexity of the underlying physico-chemical processes. Hence, simpler models are typically formulated and used for optimization by updating their parameters on-line using measurements. Alternatively, measurements can also be used to update the inputs without the intermediary of a physical model. Any measurement-based optimization approach will (i) select the active constraints and keep them active, and (ii) push the reduced gradients to zero. To do so, one needs to measure or estimate the constraints, the gradients of the constraints, and the gradient of the cost function. Constraints are often straightforward to measure. In contrast, gradients must be estimated since they cannot be measured directly. This paper explores various ways of estimating gradients in real time for optimizing the steady-state performance of dynamic processes. In particular, gradient estimation techniques are classified as either model-based or model-free, and each class is analyzed in terms of accuracy and convergence time.
2. Real-time Optimization using Gradient Control
2.1. Problem formulation
In this paper, the unconstrained optimization of a process at steady state is considered, which can be formulated as follows:
$\min_u J = \phi(x, u, \theta)\quad \text{s.t.}\ \dot{x} = F(x, u, \theta) \equiv 0$,   (1)
where J is the cost to be minimized, x ∈ ℜn the states (considered at equilibrium), u ∈ ℜm the inputs, and θ ∈ ℜq the uncertain parameters. In addition, it is assumed that
$y = h(x, u, \theta)$, where $y \in \Re^p$ are the measurements. $F$, $h$, and $\phi$ are smooth functions that describe the system dynamics, the outputs, and the cost function, respectively. It is assumed that the state variables can be expressed as functions of $u$ and $\theta$ by using (1), and thus can be eliminated to give the following optimization problem:
$\min_u J = \Phi(u, \theta)\quad \text{s.t.}\ y = H(u, \theta)$,   (2)
where $\Phi$ and $H$ are the corresponding lumped functions. Let $\theta_0$ be the nominal parameter values, $u_0$ the nominal optimal inputs and $y_0$ the nominal optimal outputs. The necessary conditions of optimality indicate that the derivative $g = \partial J / \partial u = 0$. All the real-time optimization methods presented in this work adapt the input to force this gradient to zero; they differ only in the way the gradient is computed. With $k$ the adaptation gain in [h⁻¹] and $P$ an approximation of the Hessian, the general adaptation law is $\dot{u}_{opt} = -k P^{-1} g$.
2.2. Model-free gradient estimation techniques
In all the model-free techniques, it is assumed that the cost is directly measured, i.e. $y = J$. Also, since no structural information regarding $F$ is available, the gradient can only be obtained by presenting the system with different input values and calculating the gradient from the corresponding output values. The presentation is for the single-input case, but it can easily be extended to the multi-input scenario.
Finite-difference gradient estimation (FD) - Kreysig (1988): Two different input values are applied, each over a period of time $T$ that allows the system to reach steady state. The gradient is computed using the finite difference, with $i$ the iteration number:
$u(t) = \begin{cases} u_{opt}(i), & 2iT \le t < (2i+1)T \\ u_{opt}(i) + \Delta, & (2i+1)T \le t < (2i+2)T \end{cases}, \qquad g = \dfrac{J((2i+2)T) - J((2i+1)T)}{\Delta}$.   (3)
Gradient estimation by excitation/correlation (EC) - Ariyur and Krstic (2003): A sinusoidal excitation is added and the gradient is estimated by correlation:
$u(t) = u_{opt}(t) + \Delta \sin(\omega t), \qquad \dfrac{d\bar{J}}{dt} = \alpha (J - \bar{J}), \qquad \dfrac{dg}{dt} = \beta \left( \dfrac{2 (J - \bar{J}) \sin(\omega t)}{\Delta} - g \right)$,   (4)
where $\alpha$ and $\beta$ represent filter coefficients. The first filter eliminates the bias and the second determines the gradient. The frequency of excitation is chosen such that the period of oscillation is slower than the system settling time.
Gradient estimation from multiple units (MU) - Srinivasan (2007): The availability of multiple process units is assumed, and the inputs to the units differ by an offset. The gradient is estimated using the finite difference between units (labeled 'a' and 'b' here):
$u_a(t) = u_{opt}(t) + \dfrac{\Delta}{2}, \qquad u_b(t) = u_{opt}(t) - \dfrac{\Delta}{2}, \qquad g(t) = \dfrac{J_a(t) - J_b(t)}{\Delta}$.   (5)
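A minimal sketch of the FD scheme (3) combined with a discretized form of the adaptation law $\dot{u}_{opt} = -k P^{-1} g$ is shown below; the plant function and tuning values are placeholders, not values from the paper.

    def fd_extremum_seeking(plant, u0, delta, k_gain, P_inv, n_iter=50):
        # plant(u) returns the measured steady-state cost J after settling.
        u_opt = u0
        for _ in range(n_iter):
            J1 = plant(u_opt)                    # hold u_opt for one period T
            J2 = plant(u_opt + delta)            # then perturb by Delta, Eq. (3)
            g = (J2 - J1) / delta                # finite-difference gradient
            u_opt = u_opt - k_gain * P_inv * g   # discrete adaptation step
        return u_opt

    # Illustrative quadratic plant with optimum at u = 2; the iterate
    # converges to the optimum up to an O(Delta) bias.
    u_star = fd_extremum_seeking(lambda u: (u - 2.0) ** 2, u0=0.0,
                                 delta=0.1, k_gain=0.5, P_inv=0.5)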
2.3. Model-based gradient estimation techniques
In model-based techniques, it is assumed that a structurally correct process model is available. However, the parameters θ are either unknown or uncertain. Furthermore, it will be assumed that there are more measurements than the number of uncertain parameters, i.e.
$p \ge q$. Since the measured information is sufficient to estimate the unknown parameters, there is no external excitation, and $u(t) = u_{opt}(t)$.
Parameter estimation - gradient calculation (PE) - Adetola and Guay (2007): This is the classical two-step method, whereby the parameters are estimated and the updated model is used for calculating the gradient. Numerical optimization is replaced by controlling the gradient to zero. With $\hat{\theta}$ the parameter estimate and $k_\theta$ the gain used in parameter estimation, one has:
$\dot{\hat{\theta}} = k_\theta \left( \dfrac{\partial H}{\partial \theta} \right)^{+} (y - H(u, \hat{\theta})), \quad \hat{\theta}(0) = \theta_0, \qquad g = \dfrac{\partial \Phi}{\partial u}(\hat{\theta})$.   (6)
Neighboring extremals (NE) - Gros et al. (2009): The parametric variation $\delta\theta = \theta - \theta_0$ is calculated from the variations $\delta u = u - u_0$ and $\delta y = y - y_0$, respectively:
$\delta y = \dfrac{\partial H}{\partial \theta}\, \delta\theta + \dfrac{\partial H}{\partial u}\, \delta u \;\rightarrow\; \delta\theta = \left( \dfrac{\partial H}{\partial \theta} \right)^{+} \left( \delta y - \dfrac{\partial H}{\partial u}\, \delta u \right)$.   (7)
Then, from variational calculations, the gradient is given by:
$g = \dfrac{\partial^2 \Phi}{\partial u^2}\, \delta u + \dfrac{\partial^2 \Phi}{\partial u \partial \theta}\, \delta\theta = \left( \dfrac{\partial^2 \Phi}{\partial u^2} - \dfrac{\partial^2 \Phi}{\partial u \partial \theta} \left( \dfrac{\partial H}{\partial \theta} \right)^{+} \dfrac{\partial H}{\partial u} \right) \delta u + \dfrac{\partial^2 \Phi}{\partial u \partial \theta} \left( \dfrac{\partial H}{\partial \theta} \right)^{+} \delta y$.   (8)
Self-optimizing control (SOC) - Alstad and Skogestad (2007): This method calculates the sensitivity of the optimal outputs and inputs with respect to parametric variations. From the variational form of the necessary conditions (8), the sensitivity matrix becomes:
$S = \begin{bmatrix} \partial y_{opt}/\partial \theta \\ \partial u_{opt}/\partial \theta \end{bmatrix} = \begin{bmatrix} \dfrac{\partial H}{\partial \theta} - \dfrac{\partial H}{\partial u} \left( \dfrac{\partial^2 \Phi}{\partial u^2} \right)^{-1} \dfrac{\partial^2 \Phi}{\partial u \partial \theta} \\ -\left( \dfrac{\partial^2 \Phi}{\partial u^2} \right)^{-1} \dfrac{\partial^2 \Phi}{\partial u \partial \theta} \end{bmatrix}, \qquad g = N \begin{bmatrix} \delta y \\ \delta u \end{bmatrix}$,   (9)
where the $m \times (p+m)$ matrix $N$ lies in the null space of $S^T$ and $I_m$ is the $m \times m$ identity matrix. The $m$ controlled variables are selected as $c = N [\delta y, \delta u]^T$, which indeed represents the gradient.
3. Comparison of Gradient Estimation Techniques
The goal of this section is to compare the various gradient estimation techniques in terms of their basic requirements as well as accuracy and convergence characteristics. Measurements: Model-free methods require only the cost to be measured, whereas model-based methods rely on output measurements. Note that the model relates the measured outputs y to the cost J, thereby making cost measurement unnecessary. Model: Among the various model-based techniques, only PE uses the model on-line, while the other two use the model off-line to design the controller. Excitation: In model-based techniques, since information regarding uncertainty can be obtained from the outputs, no temporal excitation is necessary. In contrast, in model-free techniques with only cost measurement, one needs to excite the system to estimate the gradient. Temporal excitation is provided in the FD and EC methods, while the use of multiple units provides the needed excitation in the MU method.
Accuracy: Model-based techniques work well when the model is structurally correct and the disturbances can be represented by parametric variations. When there is plant-model mismatch or when there are other variations that are not accounted for, the convergence will not be to the desired optimum. Since NE and SOC are based on linearization, they tend to give good results only for small parametric variations. Convergence time: Model-based techniques have a clear edge when it comes to convergence time. Except for the MU adaptation, model-free methods are slow since the excitation has to respect a certain time-scale separation and be slower than the system settling time. MU adaptation is faster since the excitation is not temporal. Among the model-based techniques, PE is slower due to the dynamics of the parameter adaptation. In NE and SOC, the gradient information is readily available, which makes them fast.
4. Illustrative Example
Steady-state optimization of an isothermal CSTR is investigated, with the reactions A + B → C, 2B → D. The manipulated variables are the feed rates of A and B. The following cost function is considered:
$\max_{u_A, u_B} J = \dfrac{c_C^2 (u_A + u_B)^2}{u_A\, c_{Ain}} - w (u_A^2 + u_B^2)$.   (10)
The first term of $J$ corresponds to the product of the amount of C produced, $c_C (u_A + u_B)$, and the yield factor $\dfrac{c_C (u_A + u_B)}{u_A c_{Ain}}$, while the second term penalizes the control effort, with $w = 0.004$. The model equations result from standard mass balances and read:
$\dot{c}_A = -k_1 c_A c_B + \dfrac{u_A}{V} c_{Ain} - \dfrac{u_A + u_B}{V} c_A = 0, \qquad \dot{c}_C = k_1 c_A c_B - \dfrac{u_A + u_B}{V} c_C = 0$,   (11)
$\dot{c}_B = -k_1 c_A c_B - 2 k_2 c_B^2 + \dfrac{u_B}{V} c_{Bin} - \dfrac{u_A + u_B}{V} c_B = 0, \qquad \dot{c}_D = k_2 c_B^2 - \dfrac{u_A + u_B}{V} c_D = 0$.   (12)
where $c_X$ denotes the concentration of species $X$, $V = 500$ L the reactor volume, $c_{Ain} = 2$ mol/L and $c_{Bin} = 1.5$ mol/L the inlet concentrations, and $k_1 = 0.75$ L/(mol h) and $k_2 = 1.5$ L/(mol h) the rate constants of the two chemical reactions. The parameters that are subject to change are $\theta = [k_1\ k_2]^T$, with the plant values being $k_{1,plant} = 1.4$ L/(mol h) and $k_{2,plant} = 0.4$ L/(mol h). In addition, an unmodeled disturbance, $c_{Ain,plant} = 2.5$ mol/L, is considered to study the effect of plant-model mismatch. The values of the adaptation gain $k$ are given in Table 1. All methods use the Hessian evaluated at the nominal optimum for the matrix $P$, except for the SOC method, which uses (9). All model-free methods use $\Delta = 0.4$ L h⁻¹. The EC method uses $\omega_1 = 2\pi/150$, $\omega_2 = 2\pi/200$, $\alpha = \beta_1 = \beta_2 = 1/200$ h⁻¹. The parameter estimation uses $k_\theta = 1$ h⁻¹. Table 1 also summarizes the results obtained with the different approaches, in terms of accuracy (optimality loss) and convergence time, with and without plant-model mismatch. The normalized cost is computed by dividing the actual plant cost $J(t)$ by the corresponding steady-state optimal cost $J^*$. Thus, the optimality loss upon convergence is given by $\eta = 1 - J(T_{conv})/J^*$, where $T_{conv}$ is the convergence time. In addition, Figure 1 depicts the evolution of the normalized cost for the six methods for the case of plant-model mismatch. The model-based methods are clearly less accurate. However, these methods do quite well in the case of a perfect model, for which the optimality loss is practically zero with the PE scheme, while small errors persist with NE and SOC due to the effect of
linearization. In terms of convergence time, PE is slightly inferior due to the time taken for parameter estimation. Model-free methods are able to reject the effect of both parametric uncertainty and plant-model mismatch at the price of a larger convergence time (thousands of hours for FD and EC and about 150 hours for MU).
Table 1. Convergence time and optimality loss of model-free (FD, EC and MU) and model-based (PE, NE and SOC) methods, for the cases without and with plant-model mismatch.

Strategy   k [h⁻¹]   No model mismatch (c_Ain,plant = 2 mol/L)   Model mismatch (c_Ain,plant = 2.5 mol/L)
                     η [%]      T_conv [h]                       η [%]      T_conv [h]
No adapt   -         19.06      -                                26.35      -
FD         0.003     0.22       1200                             0.07       1200
EC         0.001     0.44       3000                             0.33       4000
MU         0.02      0.03       150                              0.05       150
PE         0.1       0.001      75                               6.82       75
NE         1         0.50       45                               4.89       50
SOC        1         0.84       45                               17.39      60
Figure 1. Normalized cost for the case of plant-model mismatch (model-free methods FD, EC and MU converge to the optimal cost, whereas model-based methods do not).
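For reference, the steady-state balances (11)-(12) and the cost (10) of the illustrative example can be reproduced with a nonlinear root solve; the nominal parameter values follow the text, while the initial guess and the operating point below are illustrative assumptions, not values from the paper.

    import numpy as np
    from scipy.optimize import fsolve

    V, cAin, cBin, k1, k2, w = 500.0, 2.0, 1.5, 0.75, 1.5, 0.004

    def balances(c, uA, uB):
        cA, cB, cC, cD = c
        q = (uA + uB) / V                      # dilution rate
        return [-k1*cA*cB + uA/V*cAin - q*cA,              # Eq. (11), species A
                -k1*cA*cB - 2*k2*cB**2 + uB/V*cBin - q*cB, # Eq. (12), species B
                 k1*cA*cB - q*cC,                          # Eq. (11), species C
                 k2*cB**2 - q*cD]                          # Eq. (12), species D

    def cost(uA, uB):
        cA, cB, cC, cD = fsolve(balances, [1.0, 0.5, 0.3, 0.1], args=(uA, uB))
        return cC**2 * (uA + uB)**2 / (uA * cAin) - w*(uA**2 + uB**2)  # Eq. (10)

    print(cost(10.0, 10.0))   # cost at an arbitrary operating point (L/h assumed)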
References
Adetola, V., Guay, M., 2007. Guaranteed parameter convergence for extremum-seeking control of nonlinear systems. Automatica 43(1), 105-110.
Alstad, V., Skogestad, S., 2007. Null space method for selecting optimal measurement combinations as controlled variables. Ind. Eng. Chem. Res. 46(3), 846-853.
Ariyur, K., Krstic, M., 2003. Real-Time Optimization by Extremum-Seeking Control. John Wiley, New York.
Gros, S., Srinivasan, B., Bonvin, D., 2009. Optimizing control based on output feedback. Comput. Chem. Engng. 33(1), 191-198.
Kreysig, E., 1988. Advanced Engineering Mathematics, Sixth edition. John Wiley and Sons, Inc., New York.
Srinivasan, B., 2007. Real-time optimization of dynamic systems using multiple units. Int. J. Robust Nonlinear Control 17, 1183-1193.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Multiobjective optimization of the pulp/water storage towers in design of paper production systems Aino Ropponen, Miika Rajala, Risto Ritala Tampere University of Technology, Department of Automation Science and Engineering, P.O. BOX 692, FI-33101 Tampere, Finland
Abstract
This paper presents an optimization strategy for pulp/water flow management in a paper production system. The problem is formulated at two levels: the upper (design) level concerning the optimal volumes of the pulp/water towers, and the lower (operational) level concerning how to control the system optimally with given tower volumes. At both levels there are multiple and conflicting objectives.
Keywords: Multi-objective optimization, process design, stochastic systems
1. Introduction
Papermaking is a complex process in which paper is produced from pulp (wood), water, filler, and chemicals. The goal in papermaking is to produce paper having the specified quality while minimizing the costs. The process consists of several subprocesses in which raw materials are prepared, mixed and diluted with water, the paper web is formed and water is removed. A simplified sketch of the papermaking process is presented in Figure 1. The process is strongly affected by web breaks. If the paper web breaks during the run, all the production is discarded. The discarded production, called broke, is diluted, stored and reused as raw material for papermaking. In order to manage stochastic disturbances, the papermaking process has several storage towers for pulp and water acting as buffers. One of the key tasks when operating paper production is to manage the flows in this tower system. Rapid changes in the flows may be needed to prevent the storages from running empty or overflowing, and as the flows also cause changes in the paper quality, the goals of flow management are conflicting. Flow management becomes easier the larger the tower volumes are, but the capital cost increases as a function of the volume. This paper focuses on the flow management issues described above. We consider four storage towers and present an optimization strategy for the design and operation of these towers. The overall goal is to optimize the volumes of the towers by finding a trade-off between the capital/investment costs of the towers and the operational performance of the process. For each design candidate an optimal operational policy for the flows needs to be solved. Thus a bi-level multiobjective [1-2] stochastic optimization model is addressed, featuring at the lower level a dynamic operational optimization problem and at the upper level a design problem formulation for the optimization of the storage volumes and operational policy. At both levels the objectives are conflicting. The solution strategy presented for the bi-level problem extends that of the broke tower optimization [3] to a more complex system of four towers.
This paper is organized as follows. In Section 2, the optimization solution strategy for the stochastic operational problem is presented, while Section 3 discusses the design solution strategy. Finally, Section 4 provides a short summary and highlights ongoing research directions.
Figure 1. In papermaking, pulp, raw materials and water are first mixed, then at the paper machine the web is formed and water is removed. The water removed and the production discarded are stored into towers and reused in the process. In this study, the towers considered are: clean water, 0-water, broke and dry broke towers. The flows to be optimized are denoted by u1…u5.
2. Optimization of the operational decisions
The goal in the optimization of the process operation is to produce paper of the specified quality while maximizing the effective production time and preventing the storage towers from running empty or overflowing. In operational optimization the capacities of the towers are fixed, and the five flows, marked in Figure 1 by u1…u5, need to be dynamically optimized. We consider three quality variables: filler content, amount of material per web area (basis weight) and web strength. The squared deviation of each quality variable from its set point is minimized over an optimization horizon $K_H$. By adding a term penalizing fast flow changes, the operational optimization problem finally consists of four quality objectives and eight constraints on the tower volumes. Scalarization [2] of these four objectives leads to the following optimization problem:
$\min_{\{u(n+k)\}_{k=0}^{K_H-1}} \sum_{k=0}^{K_H-1} J(k) \left[ (Q(n+k+1) - Q_0)^T W (Q(n+k+1) - Q_0) + D^T \big(u(n+k) - u(n+k-1)\big)^2 \right]$   (1)
s.t. $V_{min,i} \le V_i(n) \le V_{max,i}$, $u_{min,j} \le u_j(n) \le u_{max,j}$
where $n$ is the current time instant; $i = 1,\dots,4$ stands for the clean water, 0-water, broke and dry broke towers, respectively; $V_i(n)$ is the volume of water/pulp in tower $i$ at time $n$; $V_{max,i}$ and $V_{min,i}$ are the maximum and minimum volumes of the towers; $u(n)$ is the vector of controls $[u_1(n)\ u_2(n)\ u_3(n)\ u_4(n)\ u_5(n)]^T$; $D$ is a scalarization vector for the controls; $J(k)$ is a time-wise weighting factor; $Q$ is the vector of quality variables; $Q_0$ includes the set points for the quality variables; and $W$ is a matrix of the scalarization weights:
$Q(n) = [q_{filler}(n)\ \ q_{bw}(n)\ \ q_{strength}(n)]^T, \quad Q_0 = [q_{0,filler}\ \ q_{0,bw}\ \ q_{0,strength}]^T, \quad W = \mathrm{diag}(w_1, w_2, w_3)$.
The paper quality can be modeled as
$Q(n+k) = Q(n) + \sum_{k'=1}^{K_m} C(k') \left[ u(n+k-k') - u(n-k') \right]$   (2)
where $C(k')$ is a matrix of coefficients obtained through step-response tests, changing one control variable at a time, and $K_m$ is the model order. Thus, the current quality is assumed to depend only on the actions taken within $K_m$ time steps. By replacing $Q(n+k)$ in Eq. (1) by Eq. (2), and denoting $u = [u(n+K_H-1)^T, \dots, u(n)^T]^T$, the problem can be reformulated in a quadratic form,
$\min_u\ u^T H u + 2 c^T u$   (3)
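To connect Eqs. (1)-(3), the sketch below assembles the quadratic form for a simplified single-input, single-quality case; treating past input moves as zero, ordering the inputs ascending in time, and all names are simplifying assumptions made for illustration.

    import numpy as np

    def build_qp(C, Q0, Qn, u_prev, KH, gamma, w, d):
        # Stack the step-response coefficients into S so that the predicted
        # quality deviation over the horizon is e0 + S u, per Eq. (2).
        Km = len(C)
        S = np.zeros((KH, KH))
        for k in range(1, KH + 1):
            for kp in range(1, min(Km, k) + 1):
                S[k - 1, k - kp] += C[kp - 1]      # coefficient of u(n+k-k')
        G = np.diag([gamma ** k for k in range(KH)]) * w   # time-wise weights
        e0 = (Qn - Q0) * np.ones(KH)
        # Move-suppression: differences u(n+k) - u(n+k-1), with u_prev handled in c
        Dm = np.eye(KH) - np.eye(KH, k=-1)
        H = S.T @ G @ S + d * Dm.T @ Dm
        c = S.T @ G @ e0 - d * np.array([u_prev] + [0.0] * (KH - 1))
        return H, c        # minimize u^T H u + 2 c^T u, as in Eq. (3)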
Web breaks affect the management of the tower volumes: during a break the broke tower fills quickly while the 0-water tower runs empty. With some simplifications, the tower dynamics at discrete time $n+1$ can be presented as
$V(n+1) = V(n) + A u + B b(n) + D$   (4)
where $A$, $B$ and $D$ are modeling matrices obtained through expert knowledge, and $b(n)$ is a binary variable indicating the break state (1 = on, 0 = off). The break state is a two-state Markov chain with state transition probabilities depending on the web strength. The probabilities $p_{br}(n)$ and $p_{rec}(n)$, for the onset and the end of a break, are assumed to be in the following relationship to the strength:
$p_{br}(n) = \min\{ a_A \exp[-a_B (s(n) - s_{0,br})],\ 1 \}$   (5)
$p_{rec}(n) = \max\{ 1 - a_C \exp[-a_D (s(n) - s_{0,rec})],\ 0 \}$
where $s_{0,br}$ and $s_{0,rec}$ are the nominal strength indices for a break and for a recovery from a break, respectively, $s(n)$ is the current strength of the paper, and $a_A$, $a_B$, $a_C$ and $a_D$ are parameters. The optimization constraints for the storage volumes are thus
$A_i \sum_{k'=0}^{k-1} u(n+k') \le V_{max,i} - V_i(n) - k D_i - B_i F^{-1}_{Z_{b(n)}(k)}\big(p_i^{(up)}(k)\big)$   (6)
$A_i \sum_{k'=0}^{k-1} u(n+k') \ge V_{min,i} - V_i(n) - k D_i - B_i F^{-1}_{Z_{b(n)}(k)}\big(1 - p_i^{(down)}(k)\big), \qquad k = 1, \dots, K_H$
where Z_{b(n)}(k) is the number of breaks between n+1 and n+k and F denotes its cumulative distribution. Here p_i^{(up/down)}(k) is a function for the accepted risk that a tower overflows or runs empty; for the exact expressions, see Table 1. As web breaks do not cause major disturbances to the clean water and dry broke towers, no risk of overflow/running empty is accepted for them, and B_i is chosen to be 0.
Table 1. Accepted risk that the 0-water and broke towers run empty or overflow.

Tower            p_i^{(up)}(k)               p_i^{(down)}(k)
0-water tower    1 - (1 - p_2^{(up)})^k      1 - (1 - p_2^{(down)})^k
Broke tower      1 - (1 - p_3^{(up)})^k      1 - (1 - p_3^{(down)})^k
As web breaks are random events depending on the future actions, the cumulative distribution is not known in advance, and the approximate method presented in [3] is used to estimate it. Since the objective function, Eq. (3), is quadratic and the constraints, Eq. (6), are linear, the operational optimization can be solved as a quadratic programming problem. By running the process model without the simplifying assumptions of Eqs. (2) and (4) in parallel with the optimization model, the performance of the process can be studied by simulation. An example of such a simulation is presented in Figure 2.
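For such simulations, the break state can be generated directly from the two-state Markov chain of Eq. (5). The following is a minimal sketch of this break-state generator; all parameter values (a_A…a_D, the nominal indices, and the strength signal) are illustrative assumptions, not the paper's identified values.

```python
import numpy as np

def break_probabilities(s, aA=0.05, aB=2.0, aC=0.9, aD=2.0, s0_br=1.0, s0_rec=1.0):
    # Eq. (5): break-onset and recovery probabilities as functions of strength s(n)
    p_br = min(aA * np.exp(-aB * (s - s0_br)), 1.0)
    p_rec = max(1.0 - aC * np.exp(-aD * (s - s0_rec)), 0.0)
    return p_br, p_rec

rng = np.random.default_rng(1)
b, states = 0, []
for step in range(200):                      # e.g. 200 steps of 10 min
    s = 1.0 + 0.05 * rng.standard_normal()   # illustrative strength signal
    p_br, p_rec = break_probabilities(s)
    # Two-state Markov chain: a break starts with p_br and ends with p_rec
    b = int(rng.random() < p_br) if b == 0 else int(rng.random() >= p_rec)
    states.append(b)
print(f"fraction of time in a break: {np.mean(states):.3f}")
```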
Figure 2. Example of the process simulation, in which the flows u1…u5 are obtained from the operational optimization. In each plot the vertical axis is time (min), with a time step of 10 min. Dashed lines are the minimum and maximum values of each variable. The parameter values used are: K_H = 50, K_m = 30, p_2^{(up)} = p_3^{(down)} = 0, p_2^{(down)} = p_3^{(up)} = 0.05, J(k) = 0.99^k, w_1 = 1/0.01^2, w_2 = 1/0.1^2, w_3 = 1/0.05^2, D_1 = D_3 = D_5 = 0.05/u_j(nom), D_2 = D_4 = 0.5/u_j(nom), where u_j(nom) is the nominal value of each control flow.
3. Optimization of the design decisions
The goal at the design level is to optimize the storage tower volumes by minimizing the investment cost related to the tower volumes and maximizing the expected process performance. The degrees of freedom (d) are the tower volumes V_max,1,…,V_max,4 and the scalarization parameters of the multiobjective operational optimization: p_2^{(up)}, p_3^{(up)}, p_2^{(down)}, p_3^{(down)}, w_1, w_2, D_1,…,D_4. The problem can be formulated as follows.
$$\min_{d}\;\begin{Bmatrix} \sum_{i=1}^{4} H(V_{\max,i}) \\ -\mathrm{E}\{T_{ps}\} \\ \mathrm{E}\{(q_{filler}(n) - q_{0,filler})^2\} \\ \mathrm{E}\{(q_{bw}(n) - q_{0,bw})^2\} \\ \mathrm{E}\{(q_{strength}(n) - q_{0,strength})^2\} \\ \mathrm{E}\{(u(n+1) - u(n))^2\} \end{Bmatrix} \tag{7}$$
where T_ps is the time until a production stop, i.e., the time until one of the towers runs empty or overflows. For a chosen set of design candidates, the expected values for each candidate are obtained through a large set of operational process model simulations with the online optimization running in parallel. An example of such simulation results is shown in Figure 3. A strategy for selecting the most preferred design solution is presented in [4].
Figure 3. Example of the design plane obtained through simulations. Left: design solutions with respect to the investment cost and the time until production stop, with the Pareto-optimal designs circled. Right: the same designs and the Pareto-optimal set in the filler-breaks plane.
4. Conclusion and discussion
In this paper, we have presented a strategy for solving the decision-making problem of pulp/water system management in papermaking. The problem is formulated at the design and operational levels, with multiple objectives at both levels. In future studies, we shall extend the strategy by including the chemical and mechanical pulp storage towers in the optimization, and we shall examine the influence of the electricity price on the decision making.
Acknowledgements The financial support provided by Forestcluster Ltd and its Effnet program is gratefully acknowledged.
References
[1] P. Clark, A. Westerberg, 1983, Optimization for design problems having more than one objective, Computers & Chemical Engineering, Vol. 7, No. 4, pp. 259-278.
[2] K. Miettinen, 1999, Nonlinear Multiobjective Optimization, Kluwer, Boston.
[3] A. Ropponen, R. Ritala, E.N. Pistikopoulos, 2010, Broke management optimization in design of paper production systems, in: S. Pierucci, G. Buzzi Ferraris (Eds.), European Symposium on Computer Aided Process Engineering – 20, Naples, Italy, June 2010, pp. 865-870.
[4] A. Ropponen, R. Ritala, E.N. Pistikopoulos, Optimization issues of the broke management system in papermaking, accepted for publication in Computers and Chemical Engineering.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Combined nonlinear model reduction and multiparametric nonlinear programming for nonlinear model predictive control
Pedro Rivotti,a Romain S.C. Lambert,a Luis Dominguez,a Efstratios N. Pistikopoulosa
Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College London, London SW7 2AZ, UK
Abstract
This work presents a methodology which combines nonlinear model reduction techniques with recent advances in multiparametric nonlinear programming (mp-NLP) to derive explicit multiparametric controllers for nonlinear MPC (NMPC). Nonlinear model order reduction (NMOR) techniques based on empirical gramians are used for the model reduction step. The approach is illustrated on a distillation column model example with 32 states.
Keywords: Multiparametric programming; Nonlinear MPC; Nonlinear model reduction
1. Introduction
The potential of using multiparametric programming in the context of model predictive control (MPC) has been widely recognised in the open literature. In this approach, the optimal control actions are computed offline, as a function of the states, and the computational burden of implementing online MPC can be effectively reduced. Recently, Dominguez & Pistikopoulos (2010) presented a novel algorithm to solve convex nonlinear MPC problems based on local sensitivity analysis results, using multiparametric programming techniques. However, the framework for the development of explicit multiparametric controllers presented by Pistikopoulos (2009) highlights the need for reducing the scale of high dimensional models; this remains one of the key challenges limiting a wider application of multiparametric MPC. For nonlinear systems, the problems arising from a high dimensional system are even more challenging. To address the issue of high dimensional linear models, Narciso & Pistikopoulos (2008) presented a combined approach of balanced truncation and multiparametric programming in which the original system is projected onto a subsystem of lower dimensionality, based on which the derivation of explicit multiparametric controllers becomes effective. In this work, this approach is extended to nonlinear systems, making use of the algorithm proposed in (Dominguez & Pistikopoulos, 2010). The approach is illustrated by revisiting an example of a distillation column system with 32 states, for which it was shown (Hahn & Edgar, 2002) that a reduced model with only 1 state may successfully be used for closed-loop control purposes. Based on the combination of an explicit nonlinear model reduction scheme and nonlinear multiparametric programming, an explicit multiparametric controller is derived for a reduced order system. The results obtained with this controller are then validated against the original, full space model.
2. Theoretical background
2.1. Nonlinear multiparametric programming and nonlinear model predictive control
A general multiparametric programming problem may be formulated as follows:

$$z(\theta) = \min_{u} f(u, \theta)$$

$$\text{s.t.}\quad g(u, \theta) \le 0, \qquad h(u, \theta) = 0, \qquad u \in U, \quad \theta \in \Theta \tag{1}$$

where, in the case of an MPC formulation, θ represents the vector of parameters, corresponding to the initial states of the system, and u the vector of control inputs for a given control horizon. Usually the inequality constraints, g, refer to operational or safety restrictions on the state variables or the control inputs, while the equality constraints, h, contain the mathematical model of the system. In the case when f is a quadratic function and the functions g and h are linear, problem (1) is an mp-QP. Bemporad et al. (2002) presented a procedure to determine the exact solutions, u(θ), as piecewise affine functions of the parameters for this class of problems, as well as the map of critical regions, i.e., the regions in the state space where these solutions are valid. However, for the case when the functions g or h are nonlinear, problem (1) is a nonlinear multiparametric programming problem, for which only approximate solutions may be obtained. Dominguez & Pistikopoulos (2010) presented an algorithm for convex nonlinear multiparametric problems, based on local sensitivity analysis results, using multiparametric programming techniques. The algorithm uses successive linearizations of the dynamic system and nonlinear constraints and provides an approximate solution u(θ) as a piecewise affine function of the parameters. The solution may be used to implement an explicit multiparametric controller in closed loop, by applying the first control input at each time step. The steps of the nonlinear multiparametric algorithm are presented in Table 1.
Table 1 – Steps of the multiparametric nonlinear programming algorithm (Dominguez & Pistikopoulos, 2010).
1. Define the list of regions to explore, R.
2. Select a region CR from R and a point θ ∈ CR.
3. Solve the NLP at θ and store the solution v*.
4. Compute the optimal solution u(θ).
5. Linearize the nonlinear constraints at v* and obtain the critical region CR where u(θ) is valid.
6. Partition CR and add the resulting regions to R.
7. Continue from step 2 until the whole region R has been explored.
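The end product of this algorithm is a piecewise-affine control law defined over polyhedral critical regions, which is evaluated online by point location followed by an affine function evaluation. Below is a minimal sketch of that online step, using the one-state critical region and affine law reconstructed in Table 2; the data layout and tolerance are our own assumptions.

```python
import numpy as np

# One critical region {theta : A theta <= b} and its affine law u = K theta + g,
# taken from the 1-state row of Table 2; a real controller stores all 11 regions.
regions = [{"A": np.array([[1.0], [-1.0]]),
            "b": np.array([-5.358, 5.563]),
            "K": np.array([49.342]), "g": 274.49}]

def explicit_control(theta, tol=1e-9):
    for r in regions:                       # point location, then affine evaluation
        if np.all(r["A"] @ theta <= r["b"] + tol):
            return float(r["K"] @ theta + r["g"])
    raise ValueError("theta lies outside the explored parameter space")

print(explicit_control(np.array([-5.45])))
```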
2.2. Nonlinear Model Reduction based on Empirical Gramians
Current multiparametric algorithms to solve problem (1), for both linear and nonlinear cases, are only available for certain classes of problems with a small number of state variables. In the case when problem (1) is a nonlinear multiparametric programming problem, the difficulties arising from a high dimensional system are even more relevant. It is therefore necessary to apply a model reduction scheme. Hahn & Edgar (2002) presented a method for nonlinear model order reduction using balancing of empirical gramians. Initially, the empirical controllability and observability gramians of the system are obtained by collecting data from simulations. The method then consists of finding the transformation matrix that balances the two empirical gramians and reducing the number of state variables either by truncation or by residualization. The resulting reduced order model is therefore projected into a space that does not retain the physical meaning of the original space. Even though the order of the system is reduced, this does not imply that the complexity of the model is equally reduced; in fact, since the dynamics of the system are projected onto only a few states, the resulting model is usually denser than the original. However, for the implementation of the nonlinear multiparametric algorithm (Dominguez & Pistikopoulos, 2010), such reduced models can be used as long as the number of reduced states is small enough and the reduced model is convex. A sketch of the balancing step for the linear case is given below.
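For a linear system, the balancing step amounts to solving two Lyapunov equations and computing a square-root balancing transformation; the empirical gramians of Hahn & Edgar (2002) generalize this by replacing the Lyapunov solutions with gramians estimated from simulation data. The system below is an illustrative assumption, not the distillation column model.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

# Illustrative stable linear system x' = Ax + Bu, y = Cx
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Gramians: A P + P A' = -B B'  and  A' Q + Q A = -C' C
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Square-root balancing: Cholesky factor of P, then SVD of R' Q R
R = cholesky(P, lower=True)
U, s, _ = svd(R.T @ Q @ R)
T = R @ U @ np.diag(s ** -0.25)          # balancing transformation
Tinv = np.diag(s ** 0.25) @ U.T @ np.linalg.inv(R)

print("Hankel singular values:", np.sqrt(s))
k = 1                                    # keep the dominant state (truncation)
Ar, Br, Cr = (Tinv @ A @ T)[:k, :k], (Tinv @ B)[:k, :], (C @ T)[:, :k]
```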
3. Results
To demonstrate the combined NMOR and mp-NMPC method, we revisit an example of a distillation column with 32 states, presented by Hahn & Edgar (2002). The mp-NMPC algorithm (Dominguez & Pistikopoulos, 2010) requires the system of ODEs describing the dynamics of the system to be discretized. The discretization was carried out using an implicit Runge-Kutta method (Zavala & Laird, 2008). The number of collocation points was set to 3, while the number of finite elements was set to 9 and 6 for the reduced order controllers with 1 state and 2 states, respectively. The mp-NMPC algorithm was then applied to two reduced models, with 1 state and 2 states, respectively. The reduced state variables, θ, were left unconstrained and the control input, u, was bounded in the interval u ∈ [0, 75]. The algorithm resulted in 11 and 49 critical regions for the controllers with 1 state and 2 states, respectively. As an example, Table 2 presents the critical regions containing the steady state point of the system and the corresponding optimal solutions. Figure 1 presents the map of control inputs for the different values of the state variables. Note that, as mentioned in Section 2.2, the reduction scheme projects the system dynamics into a different space and therefore the values of the state variables do not have physical meaning.
Table 2 – Critical regions containing the steady state point of the system for the reduced order controllers with 1 and 2 states, respectively, and corresponding optimal solutions.
Region (1 state): -5.563 ≤ θ ≤ -5.358
Optimal solution: u(θ) = 49.342 θ + 274.49

Region (2 states): θ1 + 0.033 θ2 ≤ -4.843; -θ1 - 0.02 θ2 ≤ 5.324; 4.400 ≤ θ2 ≤ 7.700
Optimal solution: u(θ) = 15.120 θ1 + 0.93231 θ2 + 80.493
Figure 1 – Control input u(θ) as a function of the states for the controllers based on the reduced models with (a) 1 state and (b) 2 states.
The closed-loop performance of the controller was assessed by applying several disturbances to the steady state output (concentration) of the system. The results below refer to a disturbance of -5%. Figure 2 shows the trajectory of the system for this disturbance using the explicit multiparametric controller with 2 states.
Figure 2 – System trajectory for a disturbance of -5%. (a) State trajectory: ○ initial point, ■ steady state point. (b) Output trajectory in time.
It may be observed from Figure 2(a) that only one critical region is visited throughout the disturbance of -5%. Figure 2(b) shows that the explicit multiparametric controller very closely approximates the performance of the NMPC controller based on the same model. It should be noted that the computational time required to compute each control action is significantly lower for the explicit multiparametric controller. While the NMPC based on the reduced model with 2 states took an average of 10.4 s to compute each control input, the explicit multiparametric controller based on the same model took less than 0.001 s (computational times refer to an Intel Core 2 Quad Q9400 @ 2.66 GHz processor). Finally, the performance of the explicit multiparametric controller based on the reduced model with 1 state was assessed against an NMPC controller based on the original full order model. The results, presented in Figure 3, suggest that, although a small offset is
detected (~0.2%), a controller based on a reduced order model with only 1 state is enough to obtain good closed-loop performance.
Figure 3 – Closed-loop controller performance for disturbance rejection: concentration versus time (min) for the full order controller (32 states) and the reduced order controller (1 state).
4. Concluding remarks
This work demonstrates the combined use of nonlinear model order reduction techniques and nonlinear multiparametric control for the design and implementation of fast responding explicit multiparametric controllers for nonlinear systems. It was shown that the multiparametric algorithm provides a very close approximation of the corresponding online control problem, while significantly reducing the required computational time. The explicit multiparametric controller also showed a good closed-loop response when compared to a full order online controller based on the original model. Ongoing research will focus on further developments of the proposed methodology towards the design of explicit multiparametric controllers for nonlinear systems.
5. Acknowledgments
The authors are thankful for the financial support from the European Research Council (MOBILE, ERC Advanced Grant, No. 226462), EPSRC (EP/I019640) and the CPSE Industrial Consortium.
References
Bemporad, A., Morari, M., Dua, V. and Pistikopoulos, E.N. (2002) 'The explicit linear quadratic regulator for constrained systems', Automatica, 38(1), pp. 3-20.
Dominguez, L.F. and Pistikopoulos, E.N. (2010) 'Recent advances in explicit multiparametric nonlinear model predictive control', Industrial & Engineering Chemistry Research.
Hahn, J. and Edgar, T.F. (2002) 'An improved method for nonlinear model reduction using balancing of empirical gramians', Computers & Chemical Engineering, 26(10), pp. 1379-1397.
Narciso, D. and Pistikopoulos, E.N. (2008) 'A combined balanced truncation and multi-parametric programming approach for linear model predictive control', in 18th European Symposium on Computer Aided Process Engineering, Elsevier, p. 405.
Pistikopoulos, E.N. (2009) 'Perspectives in multiparametric programming and explicit model predictive control', AIChE Journal, 55(8).
Zavala, V.M. and Laird, C.D. (2008) 'Fast implementations and rigorous models: Can both be accommodated in NMPC?', International Journal of Robust and Nonlinear Control, 18(8), pp. 800-815.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Multi-Model MPC for Nonlinear Systems: Case Study of a Complex pH Neutralization Process
Weiting Tang, M. Nazmul Karim
Department of Chemical Engineering, Texas Tech University, 6th Street and Canton, Lubbock, TX 79409, USA
Abstract
Implementation of a multi-model neural network based model predictive control scheme is presented for a nonlinear pH neutralization process. The process gain observed from the pH titration curves can vary a thousand-fold. In this work, control of a pH neutralization process is established by performing the following tasks: (i) development of three neural network models, (ii) tuning of the multiple neural networks using a novel real coded genetic algorithm (RCGA), and (iii) design of a multi-model predictive controller (MPC). The proposed methodology is demonstrated on a wastewater neutralization process. Simulation studies demonstrate the effectiveness of the proposed scheme for controlling nonlinear processes.
Keywords: Neural network; Multi-model predictive control; Genetic algorithm; Wastewater neutralization
1. Introduction
Control of pH is a well-known nonlinear problem in the chemical industries, wastewater treatment, polymerization reactions, and biochemical processes [12]. However, pH control is a challenging task due to the following: (i) steady state titration curves show a highly nonlinear relation between the pH and the acid/base added to the system, (ii) the process has a wide operating range, and (iii) the dynamics of the process vary drastically with changes in the feed pH. The process is further complicated by the presence of single or multiple buffers and time delays [12]. Several advanced control algorithms have been developed to control the pH process [4], [11], [12]. The complexity of the control algorithms used for the pH control problem ranges from a simple PI controller to multi-model and neural network based schemes [12], [4], [7]. Due to the varying nonlinear characteristics and dynamics, simple PI control algorithms are inefficient for rejecting feed disturbances in a typical neutralization process [12]. This led to the use of multi-model based controllers for the pH control problem. Typically, multiple linear models are constructed and adaptive control schemes are used in each region [8], [4]. Although these control algorithms are promising, the presence of external disturbances and model-plant mismatch degrades the controller performance [8]. Hence, multi-model MPCs were designed to address the above mentioned issues. In this article, a new multi-neural-network based MPC algorithm for nonlinear processes is developed. The advantages of the proposed method are: (i) a computationally tractable optimal multiple model approach and, (ii) an effective
predictive control algorithm which uses these multiple models to control the pH in a neutralization process. This paper is organized as follows. In section 2, a first principles model of the pH process is developed. In section 3, the basic concepts of fuzzy c-means and neural networks are briefly reviewed, our novel RCGA algorithm is explained in detail, and the design of the multiple neural network based MPC is delineated. A simulation study in section 4 shows the effectiveness of the proposed algorithm on a nonlinear pH process.
2. Wastewater neutralization process modeling
In industry, the inlet to wastewater treatment may contain various components, e.g., different kinds of acids and alkalis. The process studied in this work consists of a tank with a mixture of a strong base (NaOH), a weak base (NaHCO3), and a weak acid (CH3COOH). A strong acid (H2SO4) stream and/or a strong base (KOH) stream was introduced to maintain the pH of the outlet stream at 7. A schematic diagram of the process is shown in Fig. 1. A first principles model built on mass balances over acid and base in the tank was derived in the same manner as presented in [9]. A charge balance was applied to all the reactions to calculate the tank hydrogen ion concentration [H+] and the tank pH:

$$\begin{aligned}
&[\mathrm{H^+}]^5 + (X_{B1} + X_{B2} - X_A + X_B + k_1 + k_4)\,[\mathrm{H^+}]^4 \\
&\quad + \big[(X_{B1} - X_A + X_B + k_2 + k_4)k_1 + (X_{B1} + X_{B2} - X_A + X_B - X_{A1})k_4 - k_w\big]\,[\mathrm{H^+}]^3 \\
&\quad + \big[(X_{B1} - X_{B2} - X_A + X_B)k_1 k_2 + (X_{B1} - X_A + X_B - X_{A1})k_1 k_4 - (k_1 + k_4)k_w + k_1 k_2 k_4\big]\,[\mathrm{H^+}]^2 \\
&\quad + \big[(X_{B1} - X_{B2} - X_A + X_B - X_{A1})k_1 k_2 k_4 - (k_2 + k_4)k_1 k_w\big]\,[\mathrm{H^+}] - k_1 k_2 k_4 k_w = 0
\end{aligned} \tag{1}$$

$$\mathrm{pH}_{tank} = -\log([\mathrm{H^+}]) \tag{2}$$
where XA, XB, XA1, XB1 and XB2 are the H2SO4, KOH, CH3COOH, NaOH, and NaHCO3 concentrations in the outlet stream, respectively; k1, k2, k4, and kw are the equilibrium constants for H2CO3, HCO3-, CH3COOH, and H2O, respectively.
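A minimal sketch of recovering the tank pH from the quintic charge balance with numpy's polynomial root finder is given below; the coefficient signs follow Eq. (1) as written above, and the concentrations and equilibrium constants are illustrative assumptions, not the paper's operating point.

```python
import numpy as np

# Illustrative outlet concentrations (mol/L) and equilibrium constants (assumed)
XA, XB, XA1, XB1, XB2 = 0.002, 0.001, 0.003, 0.002, 0.001
k1, k2, k4, kw = 4.5e-7, 4.7e-11, 1.8e-5, 1.0e-14

# Quintic coefficients in [H+] from the charge balance, Eq. (1)
a5 = 1.0
a4 = XB1 + XB2 - XA + XB + k1 + k4
a3 = (XB1 - XA + XB + k2 + k4) * k1 + (XB1 + XB2 - XA + XB - XA1) * k4 - kw
a2 = ((XB1 - XB2 - XA + XB) * k1 * k2 + (XB1 - XA + XB - XA1) * k1 * k4
      - (k1 + k4) * kw + k1 * k2 * k4)
a1 = (XB1 - XB2 - XA + XB - XA1) * k1 * k2 * k4 - (k2 + k4) * k1 * kw
a0 = -k1 * k2 * k4 * kw

roots = np.roots([a5, a4, a3, a2, a1, a0])
# The physically meaningful root is real and positive
H = max(r.real for r in roots if r.real > 0 and abs(r.imag) < 1e-6 * abs(r))
print(-np.log10(H))                  # tank pH, Eq. (2)
```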
Fig.1. Schematic diagram for pH process
3. Methodology 3.1. Fuzzy c-means For developing multiple models for predictive control, a pre-requisite is to divide the given input-output data into several sub-regions [4]. FCM is one of the clustering
methods, which allows one piece of data to belong to more than one group, and is widely used in pattern recognition [2]. Since the objective in our method is to recognize various patterns in the given dataset, we use FCM to identify multiple regions. In this work, the number of clusters is set to 3. This is chosen by adding an intermediate region between the acid and base regions to avoid sudden jumps when switching between these two regions. With this FCM method, the given dataset is clustered into three different regions, after which three neural network models trained by a genetic algorithm are developed for predictive control.
3.2. Genetic algorithm based recurrent neural networks
Neural networks (NN) are one of the popular tools used for identification of complicated nonlinear processes [7], [4]. Neural networks can be used for modeling of static as well as dynamic processes. The recurrent neural network (RNN) is one of the most widely used NNs to model dynamic processes. An RNN has internal feedback loops and, therefore, is able to capture the process dynamics effectively. In fact, RNNs provide good control performance in the presence of unmodeled dynamics [1]. For the wastewater neutralization process, three RNNs are developed for the sub-regions obtained from FCM. A novel real coded genetic algorithm (RCGA) is proposed to tune the number of hidden nodes and the connecting weights of the RNNs simultaneously.
3.3. Genetic Algorithm (GA)
GA is a global optimization technique which is used in many industrial applications. In GA, a set of possible solutions to the problem of interest is encoded as a population of strings called chromosomes. The basic idea behind GA is to evolve the population of chromosomes toward better solutions through four basic operators: initialization, selection, crossover, and mutation. Real-coded GA (RCGA) is used in this paper because the length of a chromosome reduces to the number of variables [3]. This feature makes it extremely suitable for encoding neural networks with a large number of nodes and associated weights, which helps to reduce the computational burden to a significant extent. In this application, the proposed RCGA is accomplished by the following steps: (i) a population of chromosomes with variable length is initialized, (ii) binary tournament selection (BTS) [10] is applied for reproduction, (iii) a novel multi-parent simulated binary crossover (MPSBX) is designed, (iv) structure mutation (SM) [1] is used for tuning the structure of the NNs, and (v) a micro-genetic algorithm (μGA) [1] is implemented for local fine-tuning.
3.3.1. Simulated Binary Crossover
In this work, SBX, a self-adaptive crossover operator which uses a probability distribution to create offspring, is used. Two main properties of SBX give RCGA with the SBX operator its self-adaptive capability: (i) the spread of children solutions is proportional to the spread of parent solutions, and (ii) near-parent solutions are more favorably created than ones far from the parents [5]. A sketch of the standard operator is given below.
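The following is a minimal sketch of the standard two-parent SBX operator [5], on which MPSBX builds; the distribution index eta and the test vectors are illustrative assumptions.

```python
import numpy as np

def sbx(parent1, parent2, eta=2.0, rng=None):
    """Standard two-parent simulated binary crossover (Deb & Beyer, 2001)."""
    rng = rng or np.random.default_rng()
    u = rng.random(parent1.shape)
    # Spread factor beta drawn from the SBX probability distribution
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
    child1 = 0.5 * ((1.0 + beta) * parent1 + (1.0 - beta) * parent2)
    child2 = 0.5 * ((1.0 - beta) * parent1 + (1.0 + beta) * parent2)
    return child1, child2

c1, c2 = sbx(np.array([0.1, 0.5]), np.array([0.4, 0.2]))
print(c1, c2)
```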
A novel MPSBX technique whose properties are altered is first proposed in this paper. The MPSBX is established in two steps: (i) design of a self-defined Gaussian distribution (g) whose width decreases as the generation count increases, and (ii) selection from the mating pool generated by BTS of the two parents whose distance is closest to g. In this proposed method, children solutions farther from the parent solutions are more likely to be created at the earlier stage, while near-parent solutions are preferred at the later stage. These properties match the idea in GA that a larger search space is preferred for converging faster in the beginning, while a local search is more favorable in the late period to find a global minimum. Besides possessing this desired property, MPSBX is also more disruptive, more explorative, and less sensitive to premature convergence [6].
3.4. Multi-model Predictive Control
After three well-tuned neural network models are developed by the proposed algorithm, a multi-model predictive control strategy is applied to handle the highly nonlinear pH process. The switching scheme (see Fig. 2) among the three predictive controllers is based on FCM, as sketched below. Once a new measurement is available in the pH tank, FCM decides which sub-region it belongs to, and the corresponding MPC is activated to take action.
Fig.2. Flowchart of the multi-model MPC
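A minimal sketch of the FCM membership computation behind this switching scheme: the membership degrees of a new measurement with respect to the three cluster centres determine which MPC is activated. The centres, fuzzifier m, and test point below are illustrative assumptions, not the clusters identified in section 4.

```python
import numpy as np

def fcm_memberships(x, centers, m=2.0):
    """Fuzzy c-means membership degrees of point x with respect to each center."""
    d = np.linalg.norm(centers - x, axis=1) + 1e-12     # distances to the centers
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()                              # memberships sum to 1

# Illustrative 1-D centers for the acid / intermediate / base sub-regions
centers = np.array([[3.0], [7.0], [11.0]])
u = fcm_memberships(np.array([6.2]), centers)
active_mpc = int(np.argmax(u))          # activate the MPC of the winning sub-region
print(u, "-> activate MPC", active_mpc)
```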
4. Simulation Results
To demonstrate the effectiveness of the proposed control strategy, computer simulations were carried out for a complex pH process. A bank of titration curves was generated by varying the feed composition using the first principles model. The fuzzy c-means algorithm was applied to partition the information contained in the titration curves into three sub-regions. The resulting clusters are shown in Fig. 3. Table 1 shows the comparison between fuzzy set theory (FST) [13] and the proposed modeling method. As can be seen, the performance of the proposed RCGA method is superior in terms of both the number of hidden nodes and the prediction errors. Fig. 4 gives the disturbance rejection results of the multi-model MPC, subject to a severe disturbance pattern in the feed which varies from 3 to 13 in pH, 2% white noise in the output measurement, and a drift with a magnitude of 0.03. A comparison of the disturbance-rejection performance between the proposed control algorithm and fuzzy gain-scheduling [14-15] is shown in Fig. 5. Both methods are subjected to the same disturbances as shown in Fig. 4. The proposed MM-MPC method has a shorter settling time and smaller fluctuations compared to the fuzzy gain-scheduling method.
Table 1. Comparison results (FST: fuzzy set theory; MSE: mean squared error)

                          Model 1            Model 2            Model 3
                        FST      RCGA      FST      RCGA      FST      RCGA
Training error (MSE)    0.0198   0.0146    0.0042   0.0021    0.0084   0.0044
Testing error (MSE)     0.0348   0.0176    0.01     0.0064    0.0084   0.0074
Number of hidden nodes  50       43        81       46        53       49
Fig.3. Clusters obtained from FCM
Fig.4. Disturbance-rejection response
Fig.5. Disturbance-rejection performance of MM-MPC vs. gain scheduling
5. Conclusions
A novel multi-parent SBX with variable length chromosomes is used in RCGA to find a near-optimal solution at a very fast pace. The proposed RCGA algorithm, together with μGA, has the power to tune the number of hidden nodes and the connecting weights of the neural networks simultaneously. Compared to fuzzy set theory, the RCGA method gives better results in terms of both the number of hidden nodes and the prediction errors. Simulation
results show that the proposed control scheme can handle severe disturbances in the feed, as well as measurement noise and drift in the output, very effectively. A comparison study of the disturbance-rejection performance between the proposed MM-MPC algorithm and fuzzy gain-scheduling shows the superiority of the proposed method in terms of both settling time and robustness.
Acknowledgement Financial support from Process Control and Optimization Consortium at Texas Tech University is gratefully acknowledged. The authors are grateful to Dr. Ryan Senger and Srinivas Karra for their assistance with the modeling part of the neutralization process.
References
1. J. H. Ang, C. K. Goh, E. J. Teoh, and A. A. Mamun, 2007, Multi-objective evolutionary recurrent neural networks for system identification, IEEE CEC, 1586-1592.
2. J. C. Bezdek, 1981, Pattern Recognition with Fuzzy Objective Function Algorithms, Plenum, New York.
3. A. Blanco, M. Delgado, and M. C. Pegalajar, 2001, A real-coded genetic algorithm for training recurrent neural networks, Neural Networks, 14 (1): 93-105.
4. L. Chen and K. S. Narendra, 2001, Nonlinear adaptive control using neural networks and multiple models, Automatica, 37 (8): 1245-1255.
5. K. Deb and H. G. Beyer, 2001, Self-adaptive genetic algorithms with simulated binary crossover, Evolutionary Computation, 9 (2): 197-221.
6. A. E. Eiben, C. H. M. van Kemenade, and J. N. Kok, 1995, Orgy in the computer: multi-parent reproduction in genetic algorithms, Proc. of ECAL, 934-945.
7. B. Eikens and M. N. Karim, 1999, Process identification with multiple neural network models, Int. J. Control, 72 (7-8): 576-590.
8. B. Gu and Y. P. Gupta, 2008, Control of nonlinear processes by using linear model predictive control algorithms, ISA Transactions, 47: 211-216.
9. T. J. McAvoy, E. Hsu, and S. Lowenthal, 1972, Dynamics of pH in controlled stirred tank reactor, Ind. Eng. Chem. Process Des. Dev., 11 (1): 68-70.
10. B. L. Miller and D. E. Goldberg, 1995, Genetic algorithms, tournament selection, and the effects of noise, Complex Systems, 9: 193-212.
11. T. Proell and M. N. Karim, 1994, Model predictive pH control using real-time NARX approach, AIChE J., 40 (2): 269-282.
12. S. Syafiie, F. Tadeo, and E. Martinez, 2007, Model-free learning control of neutralization processes using reinforcement learning, Eng. Appl. Artif. Intel., 20: 767-782.
13. H. Sarimveis, A. Alexandridis, G. Tsekouras, and G. Bafas, 2002, A fast and efficient algorithm for training radial basis function neural networks based on a fuzzy partition of the input space, Ind. Eng. Chem. Res., 41: 751-759.
14. J. Zhang and A. J. Morris, 1995, Fuzzy neural networks for nonlinear systems modelling, IEE Proc. – Control Theory Appl., 142 (6): 551-561.
15. J. Zhang, 2001, A nonlinear gain scheduling control strategy based on neuro-fuzzy networks, Ind. Eng. Chem. Res., 40: 3164-3170.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Integrated Design and Control of Pressure Swing Adsorption Systems
Harish Khajuria, Efstratios N. Pistikopoulos
Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College London, SW7 2AZ, UK
Abstract
In the last few decades, pressure swing adsorption (PSA) has seen substantial growth in terms of size, versatility and complexity. In addition to handling multicomponent separation and purification, it offers tremendous flexibility at the design stage, requiring careful selection of key decision variables including the number of beds, the number of pressure equalization steps per cycle, the cycle time, the bed length and the bed diameter. Further challenges are posed by the fact that PSA operation is periodic in nature and never attains a true steady state. The integrated design and control of a PSA system, incorporating its highly nonlinear and dynamic nature, remains a challenging task and is the focus of this study. Towards this goal, a detailed first principles model is first developed for a double bed, 6 step PSA system. In the next step, the full scale integrated design and control dynamic optimization problem is formulated and solved incorporating all operational constraints. The key design objective for the PSA system separating a 70 % H2 - 30 % CH4 mixture into high purity hydrogen is to maximize the H2 recovery, while fast tracking the H2 purity to a set point value of 99.99 % for disturbances in feed temperature and feed rate. The results of detailed comparative studies show that the optimal design obtained in the integrated design and control case provides better hydrogen recovery and disturbance rejection compared to the sequential design and control case.
Keywords: Pressure swing adsorption, integrated design and control, periodic processes, dynamic optimization.
1. Introduction
Pressure swing adsorption (PSA) is at the forefront of gas separation technology. Modern PSA systems used in industry can vary from 2 adsorbent beds separating air to 16 bed systems producing pure hydrogen in excess of 100,000 Nm3/hr. In spite of receiving continuous attention from the systems engineering community, rigorous design and control of industrial scale PSA operation remains a challenging task (Nikolić et al., 2009; Nilchan and Pantelides, 1998). This is because PSA operation is not only highly nonlinear and dynamic but also poses extra challenges due to its unique property of exhibiting only a cyclic steady state (CSS). The absence of a true steady state is attributed to the fact that a PSA system comprises a network of bed interconnecting valves whose active status keeps changing over time. This study is concerned with exploring the benefits of integrating the controller design problem into the process design stage in order to obtain an improved PSA design with superior real time operability.
2. Problem Description
A detailed graphical overview of the 2 bed, 6 step PSA system under consideration is depicted in Fig. 1. The assembly contains activated carbon as the adsorbent and has 4 switch valves per bed for performing the repressurization with feed gas, depressurization/pressure equalization, blowdown, and purge with product gas operations. The design objective is to maximize the hydrogen recovery while maintaining a hydrogen purity of 99.99 % at the end of 50 PSA cycles. Furthermore, the following assumptions and restrictions are applied in evaluating the final optimal design:
1) The structural flow sheet of the PSA system and its cyclic steps are assumed to be given.
2) A PI type controller is assumed for the purity control, where purity is the controlled variable and the feed step duration is the manipulated variable (Bitzer, 2005; Khajuria and Pistikopoulos, 2010).
3) The production rate at the final time should be between 1 and 1.1 Nm3/hr. This ensures that the optimal solution is not at the lowest possible volume when the PSA feed rate itself is treated as a decision variable.
4) The hydrogen purity at the final time should be between 99.99 % and 99.993 %. The upper bound ensures that the recovery is not compromised in an effort to produce hydrogen at a purity higher than the target value.
5) The operating superficial velocity at the bed ends (z0 and zL) should be less than the minimum fluidization velocity.
6) The bed temperature should not exceed a temperature limit. Consequently, four separate variables (soft sensors) measure the bed temperature at different axial locations.
Figure 1: Two bed, six step PSA system employed for this study (feed: 70 % H2 + 30 % CH4; product: 99.99 % H2; with purge, depressurization and off gas lines, temperature sensors, and switch valves), and the underlying assumptions.
From the control perspective, the main objective is to minimize the integral square error (ISE) in the controlled variable while adjusting the manipulated variable. In addition, the controller is expected to keep the manipulated variable within practical limits: a very large feed time, for example, can lead to irreversible bed saturation causing operational shutdown, while very low values lead to a fast PSA cycle causing excessive wear of the switch valves. On similar grounds, large changes (positive or negative) in the feed time should also be avoided for smooth operation; a minimal sketch of such a limited PI update is given after this paragraph.
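The following is a minimal sketch of the discrete (velocity-form) PI update for the feed step duration used here, including clipping to practical limits (see the control law in the formulation of Section 3). The tuning values Kc and τI are taken from the integrated design column of Table 2 and the feed-time bounds from its bound columns, while the sampling time Ts and the rate limit are illustrative assumptions.

```python
def pi_feed_time(t_feed_prev, e, e_prev, Kc=-20513.1, tauI=31.3374,
                 Ts=1.0, t_lower=40.0, t_upper=400.0, dt_max=50.0):
    """Velocity-form PI: t_feed(t) = t_feed(t-Ts) + Kc[(1 + Ts/tauI) e(t) - e(t-Ts)]."""
    delta = Kc * ((1.0 + Ts / tauI) * e - e_prev)
    delta = max(-dt_max, min(dt_max, delta))            # limit the change in feed time
    return max(t_lower, min(t_upper, t_feed_prev + delta))

# e(t) = Pur - 99.99 (%); a purity slightly below target adjusts the feed duration
print(pi_feed_time(147.657, e=-0.001, e_prev=0.0))
```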
3. Problem formulation
The critical challenge in a simultaneous design and control problem is to efficiently incorporate both the design and the control objectives under one formulation (Bansal et al., 2000; Sakizlis et al., 2004). This leads to a multi-objective dynamic optimization problem where the design objective is the leader while the controller design is the
follower. The solution of these multi-objective dynamic optimization problems, where the underlying dynamic model (PSA) is a distributed system with associated nonlinearities, is a challenging task. In this work, we follow an approach where the control objective is treated as a dynamic path constraint in the outer design optimization problem, eliminating the need for an inner optimization problem. Consequently, the controller constraints also become part of the outer design problem. Therefore, instead of a unique optimal solution, a Pareto optimal chart is obtained, from which the best optimal solution is chosen based on a lower value of the ISE path constraint tolerance, as demonstrated in the next section. The dynamic optimization (DO) problem incorporating the dynamic and distributed model of the PSA, for the simultaneous solution of the optimal PSA design and the purity controller tuning variables, is summarized in Table 1 and takes the following form.

Leader (design) objective:

$$\max_{d}\; Rec_{H_2}(t_f)$$

subject to the distributed bed model, comprising the component mass balance, the Ergun equation for the pressure drop, and the linear driving force (LDF) mass transfer model with temperature dependent equilibrium:

$$\big(\varepsilon_b + (1-\varepsilon_b)\varepsilon_p\big)\frac{\partial C_i}{\partial t} + \frac{\partial (U C_i)}{\partial Z} + (1-\varepsilon_b)\,\rho_p\,\frac{\partial Q_i}{\partial t} = \varepsilon_b D_{Zi}\,\frac{\partial^2 C_i}{\partial Z^2}$$

$$\frac{\partial P}{\partial Z} = -\frac{150\,\mu\,(1-\varepsilon_b)^2}{\varepsilon_b^3\, d_p^2}\,U - \frac{1.75\,(1-\varepsilon_b)}{\varepsilon_b^3\, d_p}\Big(\sum_{i=1}^{NCOMP} C_i\,MW_i\Big)\, U\,\lvert U\rvert$$

$$\frac{\partial Q_i}{\partial t} = k_{LDF,i}\,\big(Q_i^* - Q_i\big), \qquad K_i = K_{\infty,i}\,\exp\!\Big(\frac{-\Delta H_i}{R\,T}\Big)$$

together with the adsorption isotherm relating Q*_i to C_i and T and the energy balance over the bed, and subject to the end point and path constraints

$$1 \le Prodrate(t_f) \le 1.1\ \mathrm{Nm^3/hr}, \qquad 99.99 \le Pur(t_f) \le 99.993$$

$$T_i(t) \le T_{limit},\ i = 1,\dots,4, \qquad U_{z_0}(t) \le U_{min\_fluidization}, \qquad U_{z_L}(t) \le U_{min\_fluidization}$$

$$t_{lower} \le t_{feed} \le t_{upper}, \qquad \Delta t_{lower} \le \Delta t_{feed} \le \Delta t_{upper}$$

Follower (controller) objective:

$$\min_{K_c,\,\tau_I}\; \int_{0}^{t_f} e(t)^2\, dt, \qquad e(t) = Pur - 99.99$$

with the discrete PI control law

$$t_{feed}(t) = t_{feed}(t - T_s) + K_c\Big[\Big(1 + \frac{T_s}{\tau_I}\Big)\, e(t) - e(t - T_s)\Big]$$

Table 1: Bi-level PSA integrated design and control formulation.

Here, Ci is the component molar concentration, T is the bed temperature, U is the fluid superficial velocity, and P is the bed pressure. The bed porosity and particle porosity are represented by εb and εp, respectively. Q*i is the adsorbed phase concentration in equilibrium with the gas phase, while Qmax,i is the maximum equilibrium adsorbed phase concentration. The manipulated variable is tfeed, whose values are calculated by a discrete PI controller. The decision variables of the design problem, such as the bed diameter and the valve Cvs, are represented by d, which also includes time invariant operating variables such as the time durations of the repressurization and depressurization steps (Table 2). It is important to note here that specifications 3) and 4) defined in section 2
above act as end point constraints for the dynamic optimization problem, while specifications 5) and 6) act as path constraints. In this study, we transform all path constraints into corresponding end point constraints as

$$\frac{dError}{dt} = \begin{cases} \big(Variable(t) - Variable\_limit\big)^2 & \text{if } Variable(t) > Variable\_limit \\ 0 & \text{otherwise} \end{cases}$$
A detailed first principles mathematical model of the PSA has been developed in the gPROMS modeling environment (Khajuria and Pistikopoulos, 2010), while the dynamic optimization problem is formulated in the gOPT/gPROMS (PSE Ltd.) framework. Furthermore, it is assumed that all the parameters are known with complete certainty.
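A minimal numerical sketch of this path-to-endpoint transformation is given below: the squared constraint violation is integrated over time, so the path constraint is satisfied if and only if the terminal Error value is (near) zero. The temperature trajectory here is synthetic, not a PSA simulation result.

```python
import numpy as np
from scipy.integrate import trapezoid

def endpoint_violation(t, variable, limit):
    """Accumulate dError/dt = (variable - limit)^2 wherever variable exceeds limit."""
    viol = np.where(variable > limit, (variable - limit) ** 2, 0.0)
    return trapezoid(viol, t)       # Error(tf); constrained to ~0 in the DO problem

t = np.linspace(0.0, 10.0, 501)
T_bed = 300.0 + 25.0 * np.sin(t)            # synthetic bed temperature profile (K)
print(endpoint_violation(t, T_bed, 320.0))  # > 0 here, so the path constraint is violated
```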
4. Results and discussion
The complete list of PSA design variables along with the PI controller design variables is shown in Table 2, where Cv are the valve Cvs, t is the time duration of the corresponding step in the PSA cycle, and Kc and τI are the PI tuning parameters. The fourth column shows the results of the PSA design problem where the manipulated variable, tfeed, is treated as a time invariant variable in the DO formulation. The optimal hydrogen recovery achieved in this case is around 52 %. Next, the full scale integrated design and control problem is solved, where the PSA feed temperature and feed rate are perturbed from their initial values as shown below:

$$T_{feed} = T_{feed}^{0} + 15\,\sin(2\pi t/10)\,\exp(-t/5)$$

$$Q_{feed} = Q_{feed}^{0} + 0.35$$
Table 2. Comparison of the optimal PSA design obtained from the sequential vs. the integrated design approach.

Decision variable    Lower bound  Upper bound  Sequential: optimal design  Sequential: controller design  Integrated design and control
Cvbldn               2E-3         0.8          0.03066                     -                              0.034523
Cvdep                1E-4         0.8          0.00585                     -                              0.0091995
Cvpres               1E-4         0.8          0.00657                     -                              0.008737
Cvprod               7.7E-5       5E-4         5E-04                       -                              5E-4
Diameter (m)         0.05         0.2          0.13213                     -                              0.122632
L to D ratio         1            6            6                           -                              6
Feedrate (Nm3/hr)    0.5          2            1.62634                     -                              1.24668
tdep (s)             30           150          125.853                     -                              73.9855
tfeed (s)            40           400          130.632                     -                              147.657
tpres (s)            30           150          141.002                     -                              58.8395
Kc                   -1E5         -5000        -                           -20778.1                       -20513.1
τI                   5            60           -                           30.91                          31.3374
Recovery                                       51.78 %                     48.35 %                        55.16 %

(In the sequential case, the controller design column retunes Kc and τI for the fixed optimal design of the preceding column.)

The corresponding results are shown in the sixth column of Table 2. The optimal recovery possible in this case is around 55 %, which is approximately 3 % better than
the optimal recovery achieved when no controller is integrated in the design optimization problem. Note that the value corresponding to tfeed in the sixth column is the initial value of the manipulated variable, where the PI control law is written in its discrete form. As previously discussed in section 3, the optimal value of recovery obtained is affected by the tolerance limit set for the ISE path constraint. For example, at the current optimum the tolerance limit is 2.7E-6, which, if increased to 4.53E-6, increases the hydrogen recovery to 56 %. Furthermore, to compare the real time operability of the two designs, the optimal PI tuning parameters of the design only problem (column 4) were also calculated, as shown in the fifth column of Table 2. Here, the ISE in purity is treated as the objective function, while all constraint specifications defined in the integrated design and control problem are retained except the production rate. The result of this DO problem provides an optimal ISE of 2.63E-6, which is comparable to the integrated design and control study. However, the hydrogen recovery at this optimum is only 48 %, mainly due to the effect of the disturbances and the objective being to minimize the ISE in purity rather than to maximize recovery.
5. Conclusions and ongoing work
This work presents a detailed study on the integration of design and control of a PSA system, incorporating a comprehensive treatment of important operational constraints and objectives on a rigorous mechanistic mathematical model. Work is currently under way to develop a more advanced integrated design and control formulation, wherein the PSA purity control task is performed by an explicit/multiparametric model predictive controller (mp-MPC) instead of a PI controller. The MPC problem will make the overall design problem more challenging to solve (Sakizlis et al., 2004), but it is also expected to provide further design enhancements, as the control law is not only optimal but also takes the system dynamic model into account.
6. Acknowledgements Financial support from the Royal Commission for the Exhibition of 1851, ParOS Ltd., EU project HY2SEPS (contract number 019887), and EPSRC project no EP/G059071/1 is sincerely acknowledged.
References
Process Systems Enterprise Ltd. (2008), gPROMS Model Developer Guide, http://www.psenterprise.com/gproms/index.html
Bansal, V.; Perkins, J.; Pistikopoulos, E.; Ross, R. & van Schijndel, J. (2000), 'Simultaneous design and control optimisation under uncertainty', Computers & Chemical Engineering 24(2-7), 261-266.
Bitzer, M. (2005), 'Model-based nonlinear tracking control of pressure swing adsorption plants', in T. Meurer, K. Graichen & E.-D. Gilles (eds.), Control and Observer Design for Nonlinear Finite and Infinite Dimensional Systems, Springer Berlin/Heidelberg, pp. 403-418.
Khajuria, H. & Pistikopoulos, E. N. (2010), 'An explicit/multi-parametric controller design for pressure swing adsorption system', in DYCOPS, 9th International Symposium on Dynamics and Control of Process Systems.
Nikolić, D.; Kikkinides, E. S. & Georgiadis, M. C. (2009), 'Optimization of multibed pressure swing adsorption processes', Ind. Eng. Chem. Res. 48, 5388-5398.
Nilchan, S. & Pantelides, C. (1998), 'On the optimisation of periodic adsorption processes', Adsorption 4(2), 113-147.
Sakizlis, V.; Perkins, J. D. & Pistikopoulos, E. N. (2004), 'Recent advances in optimization-based simultaneous process and control design', Computers & Chemical Engineering 28(10), 2069-2086.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A robust MILP-based approach to vehicle routing problems with uncertain demands
A. Aguirre,b M. Coccola,b M. Zamarripa,a C. Méndezb and A. Espuñaa
a Chemical Engineering Department, Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
b INTEC (UNL-CONICET), Santa Fe, Argentina
Abstract
The Vehicle Routing Problem with Stochastic Demands (VRPSD) has attracted the attention of the research community over the last decades by introducing random demand behavior into the traditional routing problem. Many related works have focused on providing suitable approaches to this large combinatorial problem for many different cases of demand uncertainty. Moreover, the exact approaches developed up to now provide reliable results for specific demand values, e.g. using the highest demand value or the most expected value, but these solutions do not consider the concurrent effect of many possible scenarios on the objective function. Thus, a clear need has appeared in recent years for more efficient and reliable approaches to this problem that provide optimal solutions for small and medium size cases in reasonable time and that respond consistently to the random behavior of the demand (Novoa and Storer, 2009). In this work a robust MILP-based formulation for the VRPSD problem is developed. The main goal of this method is to find a reliable solution that provides an optimal result considering the occurrence of many possible scenarios simultaneously.
Keywords: Vehicle routing problems, stochastic optimization, MILP models.
1. Introduction
Real-life vehicle routing problems (VRP) are characterized by being NP-hard, and most of the required information is not available beforehand (see Van Hentenryck and Bent, 2010). In this work we study one of the most popular and attractive routing problems, in which randomness is introduced in the customer demands. This problem is widely known as the vehicle routing problem with stochastic demands (VRPSD), in which a set of vehicles with equal capacity must deliver the random demands of every customer, departing from a single depot. In this problem the real value of a customer's demand remains unknown until the vehicle reaches the customer. Hence, the direct application of pure deterministic approaches may produce infeasible solutions, due to the limited capacity of the vehicles, and corrective actions must be taken in order to satisfy the restrictions imposed by the customer demands (A. Juan et al., 2010). To avoid this, the VRPSD can be modeled as a dynamic problem in which routing and timing decisions for every route and possible scenario are made simultaneously in order to minimize the total expected routing cost. The principal goal of this problem is to find a set of routes that minimizes the total delivery cost of every vehicle employed while fulfilling the restrictions imposed by the customer demands and the vehicle capacity.
2. The VRPSD problem
The problem presented here consists of a routing problem in which several customers i (Iset: i=1…N), with uncertain demand Dem(i), must be served from a single depot by a set of homogeneous vehicles j (Jset: j=1…M). Every vehicle, which has a finite load capacity Cap(j), is assigned to a subset of customers that has to be visited by following a defined route. It is assumed that every route starts and ends at the depot and that initially all vehicles are held at the depot waiting to be assigned to a route. For each vehicle employed a fixed cost (Cfix(j)) is considered, and a travel cost (C(i)) associated with the movement of the vehicle along the route is taken into account. Every customer represents a node with a specific spatial location; the distances between each pair of customers and between the depot and each customer are reported in Crout(i,i') and Cdep(i), respectively. Each customer is supplied by a single vehicle; however, vehicles can visit a customer several times at a penalty cost. At every customer a service time is considered, which depends on both the amount of demand to be delivered at the node and the unload rate (ur). The main idea of this work is to determine the set of routes that minimizes both the total cost associated with the distance traveled by every active vehicle (Ctot(j)) and the fixed cost of the vehicles employed, over multiple random scenarios s of the system (Sset: s=1…L). The best configuration of vehicles providing the cheapest set of routes for all possible scenarios simultaneously is thus sought. In the following section we introduce a novel MILP-based continuous time formulation for the VRPSD problem; its applicability is then demonstrated by solving a slightly modified version of a Solomon benchmark problem from the literature.
3. Exact MILP-based model
The exact model presented below is based on the main ideas of immediate precedence concepts (see Méndez et al., 2000). It introduces the following key decision variables: Y(j) determines the activation of a single vehicle j; W(i,j) states that node i is assigned to vehicle j; Z(i,j) denotes that node i is the first node visited on the route of vehicle j, while F(i,j) states that node i is the last one attended by vehicle j; and X(i,i',j) means that node i is visited right before node i' on the route of vehicle j.
3.1. Vehicle activation. A finite number of vehicles are employed in order to fulfill all customers' demands. The activation of a vehicle is determined by the [0-1] variable Y(j), which adopts the value 1 when a vehicle is activated. Equation (1) is imposed in order to activate vehicles in an ordered form:

$$Y(j) \ge Y(j+1) \qquad \forall j \in Jset \tag{1}$$

3.2. Assignment of customers to an active vehicle. Each customer is served by a single active vehicle j; this assignment is determined by the binary variable W(i,j), which takes the value 1 only if customer i is assigned to vehicle j (Eq. 2):

$$\sum_{j=1}^{M} W(i,j) = 1 \qquad \forall i \in Iset \tag{2}$$
3.3. Satisfaction of the maximum load capacity of every activated vehicle. When customers are assigned to a vehicle, it must be ensured that their demands do not exceed the carrying capacity of the vehicle, Cap(j). Due to the randomness of the demands, this restriction must be enforced for every possible scenario s (s=1…L) of uncertain demand. The behavior stated in equation (3) is applied for every active vehicle (Y(j)=1) and for all customers assigned to this vehicle j (W(i,j)=1):

$$\sum_{i=1}^{N} Dem(i,s)\,W(i,j) \le Cap(j)\,Y(j) \qquad \forall j \in Jset,\ \forall s \in Sset \tag{3}$$
3.4. Minimum transportation cost to every customer. The cost of a node is understood as the time required to arrive from another customer or from the depot to customer i in any possible scenario s, C(i,s). Equation (4) defines the minimum cost needed to reach node i as the distance traveled from the depot to this node, Cdep(i), in every scenario s. Equation (4) is active when the binary variable Z(i,j) adopts the value 1, meaning that customer i is the start node of the route of vehicle j; otherwise the equation is redundant because of the large value of the parameter MC.

$$C(i,s) \ge Cdep(i) - MC\,\big(1 - Z(i,j)\big) \qquad \forall i \in Iset,\ \forall j \in Jset,\ \forall s \in Sset \tag{4}$$

3.5. Sequencing decisions for customers assigned to a single vehicle. The sequencing decision between two different nodes i and i' is given by the binary variable X(i,i',j), which relates these nodes whenever both are assigned to the same vehicle j. If X(i,i',j)=1, equation (5) enforces that node i is visited right before node i' on the route of vehicle j. Equations (6-7) then allow a pair of nodes to be related to each other only if both are allocated to the same vehicle j.

$$C(i',s) \ge C(i,s) + Crout(i,i') + Dem(i,s)/ur - MC\,\big(1 - X(i,i',j)\big) \qquad \forall j \in Jset,\ i \ne i',\ \forall s \in Sset \tag{5}$$

$$X(i,i',j) \le W(i,j) \qquad \forall i, i' \in Iset,\ i \ne i',\ \forall j \in Jset \tag{6}$$

$$W(i,j) + W(i',j) - X(i,i',j) \le 1 \qquad \forall i, i' \in Iset,\ i \ne i',\ \forall j \in Jset \tag{7}$$

3.6. Total routing cost for every active vehicle. The total routing cost of an active vehicle j is defined as the maximum cost of visiting all the nodes assigned to this vehicle in every scenario of the system, Ctot(j,s). If node i is the last one on the route of vehicle j, the [0-1] variable F(i,j) takes the value 1 and equation (8) applies.

$$Ctot(j,s) \ge C(i,s) + Cdep(i) + Dem(i,s)/ur - MC\,\big(1 - F(i,j)\big) \qquad \forall i \in Iset,\ \forall j \in Jset,\ \forall s \in Sset \tag{8}$$

3.7. Assignment-sequencing decisions on a single route. Every active route j (Y(j)=1) has exactly one first customer (Z(i,j)=1) and one last customer (F(i,j)=1) (Eqs. 9-10). Likewise, only customers assigned to route j (W(i,j)=1) can be its initial or final node (Eqs. 11-12). In addition, constraint (13) forces every node i to either be the first or be directly preceded by another node i', and equation (14) enforces that node i is either the last or directly succeeded by another node i'.
$$\sum_{i=1}^{N} Z(i,j) = Y(j) \qquad \forall j \in Jset \tag{9}$$
$$\sum_{i=1}^{N} F(i,j) = Y(j) \qquad \forall j \in Jset \tag{10}$$
$$Z(i,j) \le W(i,j) \qquad \forall i \in Iset,\ \forall j \in Jset \tag{11}$$

$$F(i,j) \le W(i,j) \qquad \forall i \in Iset,\ \forall j \in Jset \tag{12}$$
$$\sum_{j=1}^{M}\Big(\sum_{\substack{i'=1 \\ i' \ne i}}^{N} X(i',i,j) + Z(i,j)\Big) = 1 \qquad \forall i \in Iset \tag{13}$$
$$\sum_{j=1}^{M}\Big(\sum_{\substack{i'=1 \\ i' \ne i}}^{N} X(i,i',j) + F(i,j)\Big) = 1 \qquad \forall i \in Iset \tag{14}$$
3.8. Expected total routing cost and fixed cost for multiple random scenarios. The objective function to be minimized is the expected total cost (ETC), composed of the expected total routing cost and the fixed cost of every active vehicle over all the random scenarios analyzed. Every possible scenario of the system has a probability of occurrence p(s), which is strongly related to the customers' demand. All scenarios can then be tested simultaneously in order to determine the problem configuration that minimizes the objective function, presented in equation (15).

Min ETC = ∑_{j=1}^{M} Cfix(j)·Y(j) + ∑_{s=1}^{L} p(s) ∑_{j=1}^{M} ∑_{i=1}^{N} [ Cdep(i)·Z(i,j) + (Cdep(i) + Dem(i,s)/ur)·F(i,j) + ∑_{i′=1, i′≠i}^{N} (Crout(i,i′) + Dem(i,s)/ur)·X(i,i′,j) ]   (15)
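To make the structure of the model concrete, the following sketch assembles equations (3)-(15) with the open-source PuLP library in Python. This is a minimal illustration under assumed placeholder data, not the authors' GAMS implementation; the symbol names (N, M, L, Dem, Ca, Cdep, Crout, Cfix, ur, MC, p) follow the paper, and the auxiliary cost variables Ctot of equation (8) are omitted because the compact objective (15) does not require them.

import pulp

N, M, L = 5, 2, 3                           # customers, vehicles, scenarios
I, J, S = range(N), range(M), range(L)
Dem = {(i, s): 20.0 for i in I for s in S}  # placeholder demands Dem(i,s)
Ca = {j: 300.0 for j in J}                  # capacities Ca(j)
Cdep = {i: 10.0 for i in I}                 # depot-to-node distances Cdep(i)
Crout = {(i, k): 5.0 for i in I for k in I if i != k}
Cfix = {j: 10.0 for j in J}
ur, MC, p = 0.2, 1e4, {s: 1.0 / L for s in S}

m = pulp.LpProblem("robust_VRP", pulp.LpMinimize)
W = pulp.LpVariable.dicts("W", (I, J), cat="Binary")     # assignment
Y = pulp.LpVariable.dicts("Y", J, cat="Binary")          # active vehicle
Z = pulp.LpVariable.dicts("Z", (I, J), cat="Binary")     # first node of route
F = pulp.LpVariable.dicts("F", (I, J), cat="Binary")     # last node of route
X = pulp.LpVariable.dicts("X", (I, I, J), cat="Binary")  # i right before k on j
C = pulp.LpVariable.dicts("C", (I, S), lowBound=0)       # node cost C(i,s)

# expected total cost, equation (15)
m += (pulp.lpSum(Cfix[j] * Y[j] for j in J)
      + pulp.lpSum(p[s] * (Cdep[i] * Z[i][j]
                           + (Cdep[i] + Dem[i, s] / ur) * F[i][j]
                           + pulp.lpSum((Crout[i, k] + Dem[i, s] / ur)
                                        * X[i][k][j] for k in I if k != i))
                   for i in I for j in J for s in S))

for j in J:
    for s in S:                                          # eq. (3)
        m += pulp.lpSum(Dem[i, s] * W[i][j] for i in I) <= Ca[j] * Y[j]
    m += pulp.lpSum(Z[i][j] for i in I) == Y[j]          # eq. (9)
    m += pulp.lpSum(F[i][j] for i in I) == Y[j]          # eq. (10)
for i in I:
    # eqs. (13)-(14): first-or-preceded and last-or-succeeded
    m += (pulp.lpSum(X[k][i][j] for j in J for k in I if k != i)
          + pulp.lpSum(Z[i][j] for j in J)) == 1
    m += (pulp.lpSum(X[i][k][j] for j in J for k in I if k != i)
          + pulp.lpSum(F[i][j] for j in J)) == 1
    for j in J:
        m += Z[i][j] <= W[i][j]                          # eq. (11)
        m += F[i][j] <= W[i][j]                          # eq. (12)
        for s in S:                                      # eq. (4)
            m += C[i][s] >= Cdep[i] - MC * (1 - Z[i][j])
        for k in I:
            if k == i:
                continue
            for s in S:                                  # eq. (5)
                m += (C[k][s] >= C[i][s] + Crout[i, k] + Dem[i, s] / ur
                      - MC * (1 - X[i][k][j]))
            m += X[i][k][j] <= W[i][j]                   # eq. (6)
            m += W[i][j] + W[k][j] - X[i][k][j] <= 1     # eq. (7)

m.solve(pulp.PULP_CBC_CMD(msg=False))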
4. Application example
The following example corresponds to a medium-size problem in which 4 vehicles with a maximum load capacity of 300 products are available to supply the uncertain demand of 25 customers from a single depot. The random behavior of the demand is represented by a uniform distribution that varies by at most thirty per cent around the expected value E[Dem(i)] of each customer. The linear distances Crout(i,i′) and Cdep(i) were obtained from the location information (Xcoord(i) and Ycoord(i)) of every node. The node set and the locations of the first 25 customers are derived from Solomon's benchmark problem R201, which can be found at http://web.cba.neu.edu/~msolomon/problems.htm. The unload rate is assumed to be the same for every customer (ur = 0.2 products/unit) and the fixed cost per vehicle employed is Cfix(j) = 10. The expected demand of every customer i (i = 1…25) is given by E[Dem(i)] = (20, 17, 23, 29, 36, 13, 15, 19, 26, 26, 22, 29, 33, 30, 18, 29, 12, 22, 27, 19, 21, 28, 39, 13, 16). The single depot has no demand and is placed at Xcoord = 35, Ycoord = 35. The example is solved for ten equally probable scenarios of uncertain demand. The solutions obtained by the MILP-based model presented above, for each particular scenario and for the simultaneous resolution of all scenarios, are summarized in Table 1, and some particular scenarios are shown in Figure 1. The results obtained for this example show the benefits of the proposed solution. Each of the reported scenarios was solved in a few seconds of CPU time using GAMS 23.4 with the solver CPLEX 12. It is worth remarking that the solution structures of scenarios 1, 2, 4, 6, 8 and 10 are infeasible for the remaining scenarios (3, 5, 7 and 9) due to the capacity limitations and the number of vehicles employed. The solution strategy that considers all scenarios simultaneously, however, provides a reliable configuration for each particular scenario, even with a lower expected cost than the solutions that used 3 vehicles to supply all customer demands.
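The demand scenarios used in this example can be reproduced along the following lines (a sketch under the stated assumption that the uniform distribution spans ±30% of the expected values given above):

import numpy as np

rng = np.random.default_rng(0)
EDem = np.array([20, 17, 23, 29, 36, 13, 15, 19, 26, 26, 22, 29, 33,
                 30, 18, 29, 12, 22, 27, 19, 21, 28, 39, 13, 16], float)
L = 10                                     # equally probable scenarios
Dem = rng.uniform(0.7 * EDem, 1.3 * EDem, size=(L, EDem.size))
p = np.full(L, 1.0 / L)                    # p(s) in equation (15)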
Table 1. Results for each possible scenario and for all scenarios considered together.
*Solution obtained using GAMS 23.4 with the solver CPLEX 12 on a PC Core 2 Quad (4 threads).
Figure 1. Solution examples for: a) the most expected demand (scenario 1); b) the best solution for all scenarios together.
5. Conclusions
This work presents a robust MILP-based approach to the VRPSD problem. The formulation developed can easily be used to solve medium-size vehicle routing problems in reasonable computational time. The solution obtained provides an optimal integrated result as well as a feasible configuration for all scenarios considered.
Acknowledgments
Financial support received from AECID under Grant PCI-D-030927/10, from CONICET under Grant PIP-2221, and from UNL under CAI+D is fully appreciated.
References
A. Juan, J. Faulin, S. Grasman, D. Riera, J. Marul, C. Méndez, 2010. Trans. Res. Part C, in press.
C. A. Méndez, G. P. Henning and J. Cerdá, 2000. Comp. and Chem. Eng., 24, 2223–2245.
C. Novoa and R. Storer, 2009. European Journal of Operational Research, 196, 509–515.
P. Van Hentenryck and R. Bent, 2010. The MIT Press, Boston, USA.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21
E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors)
© 2011 Elsevier B.V. All rights reserved.
An Improved Formulation for the Process Control Structure Selection based on Economics Problem
Andreas Psaltis, Ioannis K. Kookos, Costas Kravaris
Department of Chemical Engineering, University of Patras, Rio, Patras, Greece
Abstract
In this work a number of improvements in the mathematical formulation of the back-off idea for the selection of regulatory control structures are presented. The new formulation is shown to be an improved linear approximation of the exact nonlinear formulation, and it is believed that this improved formulation will allow the consideration of large-scale case studies, including plantwide control problems.
Keywords: control structure selection, back-off method, mathematical programming
1. Introduction
Process system synthesis, as defined by Nishida et al. (1981), is an "act of determining the optimal interconnection of processing units as well as the optimal type and design of the units within a process system". They further identify three subproblems of paramount importance in process systems synthesis: (a) the representation problem, (b) the evaluation problem and (c) the strategy problem. Following the work of Anderson (1966) and Lee and Weekman (1976), process engineers identified control systems synthesis as an important element of process synthesis, as it was realized that major contributions to effective control performance often derive from modifications of the process design. Thus, much effort has been expended towards developing systematic control systems synthesis methodologies that are applicable to isolated process units and to complete plants. However, success has been limited, mainly due to the lack of understanding of how process control systems should be evaluated and compared in a consistent and unambiguous way.
The idea of back-off from the process operational constraints as the basis for the development of a systematic methodology for control system synthesis was first proposed by Perkins et al., extended by Narraway and Perkins (1993) and refined by Heath et al. (2000). The methodology is based on the fact that different control structures have different abilities to operate a process close to active and near-active constraints, and that the dynamic economics is a strong function of this back-off from the constraint boundaries. Alternative control structures can be generated and ranked unambiguously using efficient mathematical programming techniques. It is therefore a methodology which has all the elements of a systematic approach.
In this paper a number of refinements of the back-off methodology is presented. Firstly, an efficient linearization of the nonlinear mathematical programming formulation is presented, which has the advantage of being an accurate representation of the initial formulation. Secondly, the formulation is slightly modified to account for the fact that, when multiple disturbances are acting on a plant, the worst combination might correspond to every disturbance having different frequency characteristics. The advantage of the final formulation, compared to that of Heath et al. (2000), is that the need for an iterative algorithm is eliminated, which offers the opportunity to investigate large-scale case studies including complete chemical plants.
2. The back-off methodology
The application of the back-off methodology is based on the determination of the steady-state optimal operating point. The plant dynamics around the optimal steady state can be approximated by the dynamics of the linearized plant model

ẋ(t) = A x(t) + B u(t) + E w(t)
y(t) = C x(t) + D u(t) + F w(t)      (1)
σ(t) = H x(t) + P u(t) + M w(t)

where x is the vector of state variables, u is the vector of potential manipulated variables, y is the vector of potential controlled variables and w is the vector of disturbances. All vectors are deviations from the nominal optimal operating point. σ is the vector of deviations of the inequality constraints (operational constraints) from their nominal values g^N. If the maximum deviations from the active constraints, μ_k = max_t |σ_k(t)|, for a given set of disturbances were known, then the resulting economic penalty could be approximated by the following linear programming problem:

min_{x,u} α^T x + β^T u
s.t.  0 = A x + B u      (2)
      g^N + H x + P u + μ ≤ 0

However, μ is a function of the disturbance characteristics and of the regulatory control structure employed. Narraway and Perkins (1993) propose to calculate the vector μ entirely in the frequency domain. This can be achieved by the following calculation, performed at a number of different frequencies ω_s, s = 1, …, NS, where the superscripts R and I denote the real and imaginary parts:

A X^R + B U^R + E W^R + ω_s X^I = 0
A X^I + B U^I + E W^I − ω_s X^R = 0
Y^R = C X^R + D U^R + F W^R
Y^I = C X^I + D U^I + F W^I      (3)
Σ^R = H X^R + P U^R + M W^R
Σ^I = H X^I + P U^I + M W^I
(Σ_k^R)² + (Σ_k^I)² ≤ μ_k²   ∀ k, s

where X = X^R + jX^I = L{x(t)}, etc., denotes the Laplace transform. Equations (3) are complemented with the selection of the regulatory control structure, which necessitates the introduction of the binary decision variables υ_i (= 1 if manipulated variable i is used in the control structure, 0 otherwise) and ψ_j (= 1 if controlled variable j is used in the control structure, 0 otherwise). For the special case of perfect control at all frequencies, the control law obtains the following simple form:

y^L (1 − ψ_j) ≤ Y_j^R ≤ y^U (1 − ψ_j)
y^L (1 − ψ_j) ≤ Y_j^I ≤ y^U (1 − ψ_j)   ∀ j, s      (4)
u^L υ_i ≤ U_i^R ≤ u^U υ_i
u^L υ_i ≤ U_i^I ≤ u^U υ_i   ∀ i, s
∑_i υ_i = ∑_j ψ_j ≤ NY,   υ_i, ψ_j ∈ {0,1}  ∀ i, j      (5)
where the superscripts U and L denote upper and lower bounds correspondingly, and NY is the number of potential controlled variables. The complete mathematical formulation of the back-off idea consists of equations (2)-(5); it was first proposed by Narraway and Perkins (1993) and later refined by Heath et al. (2000). This is a mixed-integer nonlinear programming problem (MINLP). Its solution, however, offers an upper bound on the actual dynamic economics of the linearised plant, as all disturbances are assumed to act at the same frequency at all times (see equations (3)). Narraway and Perkins (1993) proposed using an iterative algorithm to alleviate the latter shortcoming: the optimization is performed first, and then the actual dynamic economics is calculated exactly for the selected regulatory control structure. The algorithm terminates when no structure can be obtained from the optimization with dynamic economics better than the currently available best dynamic economics. In order to alleviate the former shortcoming, they propose the use of the following approximation, which is equivalent to replacing a circle with a linear outer approximation (a square) constructed at four points on the circumference:

Σ_k^R ≤ μ_k,  −Σ_k^R ≤ μ_k,  Σ_k^I ≤ μ_k,  −Σ_k^I ≤ μ_k   ∀ k, s      (6)
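As a concrete numerical illustration of equations (3) and (6), the short sketch below evaluates the uncontrolled constraint response Σ(jω) = (H(jωI − A)⁻¹E + M)·W(jω) of a toy linear plant on a frequency grid and records the resulting back-off bound μ; the plant matrices are invented for illustration and are not taken from this paper:

import numpy as np

A = np.array([[-1.0, 0.0], [1.0, -2.0]])
E = np.array([[1.0], [0.0]])
H = np.array([[0.5, 1.0]])
M = np.array([[0.0]])
omegas = np.logspace(-2, 2, 200)          # frequency grid w_s, s = 1..NS

mu = np.zeros(H.shape[0])
for w in omegas:
    # constraint response for a unit disturbance, W(jw) = 1
    Sigma = H @ np.linalg.solve(1j * w * np.eye(2) - A, E) + M
    # circle constraint of eq. (3): (Sigma_k^R)^2 + (Sigma_k^I)^2 <= mu_k^2
    mu = np.maximum(mu, np.abs(Sigma).ravel())
# the square of eq. (6) instead bounds |Re| and |Im| separately, which can
# over-estimate the required back-off by up to a factor of sqrt(2)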
3. Proposed formulation
The main drawback, in terms of computational efficiency, of the formulation proposed by Narraway and Perkins (1993) stems from the fact that, for realistic case studies, there is a significant number of control structure alternatives whose exact dynamic economics is significantly larger than the approximate value calculated using their approach. As a result, a large number of structures have approximate dynamic economics which is better than the exact dynamic economics of the currently available best control structure. The evaluation of structures which turn out not to be promising according to their exact dynamic economics requires significant computation time, and the examination of large case studies becomes practically impossible. In order to alleviate the deficiency that stems from the assumption that all disturbances act at the same frequency, the following modification of equations (3) is proposed: every disturbance ℓ is considered in isolation from the others, and the worst effects are then added (principle of superposition) in order to calculate the worst-case back-off vector:

A X_ℓ^R + B U_ℓ^R + E W_ℓ^R + ω_s X_ℓ^I = 0
A X_ℓ^I + B U_ℓ^I + E W_ℓ^I − ω_s X_ℓ^R = 0
Y_ℓ^R = C X_ℓ^R + D U_ℓ^R + F W_ℓ^R
Y_ℓ^I = C X_ℓ^I + D U_ℓ^I + F W_ℓ^I      (3′)
Σ_ℓ^R = H X_ℓ^R + P U_ℓ^R + M W_ℓ^R
Σ_ℓ^I = H X_ℓ^I + P U_ℓ^I + M W_ℓ^I
(Σ_{ℓ,k}^R)² + (Σ_{ℓ,k}^I)² ≤ ρ_{ℓ,k}²,   ∑_ℓ ρ_{ℓ,k} ≤ μ_k   ∀ k, ℓ, s
Figure 1. Accuracy in approximating the frequency response of a 2nd order system: approximation error versus frequency for an increasing number of polygon points NP.
In addition, in order to alleviate the shortcoming that stems from the limited accuracy of the approximation given by equations (6), the circle is approximated by a canonical polygon that circumscribes externally a circle with radius μ_k (Hovd and Kookos, 2005):

n_m^R Σ_{ℓ,k}^R + n_m^I Σ_{ℓ,k}^I ≤ ρ_{ℓ,k},   ∑_ℓ ρ_{ℓ,k} ≤ μ_k   ∀ k, ℓ, s      (6″)

where, from simple geometric arguments, it follows that n_m^R = cos(2πm/NP) and n_m^I = sin(2πm/NP), m = 1, …, NP, and NP is the number of uniformly distributed points on the circle circumference that are used to define the polygon. For NP = 4, equations (6″) simplify to equations (6). This new idea was tested by calculating the maximum of the magnitude of a 2nd order system (with fixed ζ, τ and k) using an increasing number of points, and the results are shown in Figure 1.
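The geometric content of the polygonal approximation can be checked with a few lines of code: a regular NP-gon circumscribing a circle of radius μ_k contains points up to μ_k/cos(π/NP) from the center, so the relative over-estimation decays roughly like (π/NP)²/2, consistent with the trend in Figure 1 (a sketch, not the authors' test script):

import numpy as np

for NP in (4, 8, 16, 32, 64):
    error = 1.0 / np.cos(np.pi / NP) - 1.0   # NP = 4 recovers the square
    print(f"NP = {NP:3d}: relative approximation error = {error:.2e}")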
4. Case study
The evaporation process shown in Figure 2 and examined by Heath et al. (2000) is considered. The details of the case study, including the mathematical model, the operational constraints and the objective function, are given in Heath et al. (2000). The potential manipulated variables are the steam pressure and the coolant flowrate. The potential controlled variables are the product concentration, the evaporator pressure, the product temperature, the vapor temperature and the coolant exit temperature. The aim is to examine whether the dynamic economics calculated with the proposed formulation is an exact evaluation of the dynamic economics. The proposed formulation was implemented in GAMS and solved using the GAMS interface to CPLEX.
Figure 2. The evaporator case study (feed, steam, coolant and product streams, with flow, temperature, pressure, composition and level instrumentation).
The number of points used in the approximation (6″) was fixed to NP, with the NS frequencies distributed uniformly over the range of interest. Six structures were found with zero dynamic economics, which use both manipulated variables to control perfectly any two of the controlled variables among the product concentration, the evaporator pressure and the product and vapor temperatures. In all cases the calculated dynamic economics agreed with the exact value of the dynamic economics, and therefore the iterative algorithm was unnecessary.
5. Conclusions
In this work an improved mathematical programming formulation of the back-off methodology for choosing regulatory control structures is presented. It was shown that the proposed formulation eliminates the need for the exact calculation of the dynamic economics for the structures selected by the optimization algorithm. It is therefore believed that the proposed formulation will offer the opportunity to extend the applicability of the back-off methodology to large-scale case studies, including plantwide control problems.
Acknowledgements
Financial support from the C. Karatheodory Grant of the University of Patras is gratefully acknowledged.
References
Anderson, J.S., 1966. A practical problem in dynamic heat transfer. The Chemical Engineer.
GAMS. A User's Guide. GAMS Development Corporation, Washington, DC, USA.
Heath, J.A., I.K. Kookos and J.D. Perkins, 2000. Process control structure selection based on economics. AIChE J.
Hovd, M., I.K. Kookos, 2005. Calculating dynamic disturbance rejection measures. 16th IFAC World Congress, Prague, Czech Republic.
Lee, W., V.W. Weekman, 1976. Advanced control practice in the chemical process industry: a view from industry. AIChE J.
Narraway, L.T. and J.D. Perkins, 1993. Selection of process control structure based on linear dynamic economics. Ind. Eng. Chem. Res.
Nishida, N., G. Stephanopoulos, A.W. Westerberg, 1981. A review of process synthesis. AIChE J.
Perkins, J.D., C. Gannavarapu, G.W. Barton. Choosing control structures based on economics. In: Control for Profit, Newcastle, November.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Software application for intelligent control of a bioprocess. Case study
Cristina Tănase^a, Mihai Caramihai^a, Camelia Ungureanu^a, Gheorghe Sârbu^a, Ana Aurelia Chirvase^b, Ovidiu Muntean^a
a Politehnica University of Bucharest, 1, Polizu Street, district 1, CP 011061, Bucharest, Romania
b National Research & Development Institute for Chemistry and Petrochemistry ICECHIM, 202, Splaiul Independentei, district 6, CP 060021, Bucharest, Romania
Abstract
The research described in this paper is part of a larger experimental project dealing with the optimisation of a bioprocess for the preparation of an immunomodulator product extracted from the harvested cells. The bioprocess is performed in a 100 L Bioengineering bioreactor with 42 L of cultivation medium, equipped with pH, temperature, dissolved oxygen, and agitation controllers. The main objective of this paper is to present a case study demonstrating that intelligent control, which describes the complexity of the biological process in a qualitative and subjective manner as perceived by a human operator, is an efficient control strategy for this kind of bioprocess. In order to simulate the bioprocess evolution, an intelligent control structure based on fuzzy logic has been designed. BIOSIM, an originally developed software package, implements such a control structure. The simulation study has shown that the fuzzy technique is quite appropriate for this non-linear, time-varying system compared with the classical control method based on an a priori model.
Keywords: intelligent control, fuzzy model, bioprocess optimization.
1. Introduction
The research described in this paper is part of a larger experimental project dealing with the optimisation of a bioprocess for the preparation of an immunomodulator product extracted from the harvested cells. The production of the immunomodulator is associated with the cell growth rate; hence, the main research objective was to obtain large biomass quantities. The specific objective is to present a fuzzy control approach, based on human experts' rules, versus a modeling approach of cell growth based on bioprocess experimental data. Kinetic modeling may represent only a small part of the overall biosystem behavior, while a fuzzy control system (FCS) can handle incomplete and uncertain information about the process, assuring high control performance, and provides an alternative solution for non-linear control as it is closer to the real world. Due to the high degree of non-linearity and time variance of bioprocesses, the need for such a control mechanism arises [1-3].
2. Paper approach
2.1. Methodology
The optimization of the aerobic bioprocess with Pseudomonas aeruginosa sp. is to be performed for the case of a bacterial immunomodulator preparation. As the formation of the immunomodulator product is growth-associated, the main research objective was to obtain
a high cellular concentration. The experiments were done in a bottom-driven, aerated 100 L Bioengineering® bioreactor with 42 L of aqueous Organotech® peptone solution as the main culture substrate. The reactor was equipped with pH, temperature, dissolved oxygen, air flow, foam, and agitation controllers. The controlled parameters of the bioprocess are the following: temperature 37 °C; impeller speed 250-300 rpm; air flow rate 20-40 L/min; pH 7.3. The cellular growth is determined by a standard dry-weight method (drying at 105 °C) and by off-line determination of the optical density (OD at λ = 570 nm). The substrate consumption was determined by analyzing the aminic nitrogen (Sörensen method) [4].
2.2. BIOSIM
BIOSIM, an originally developed software package, implements the developed control structure and allows the comparison between the performances of closed-loop fuzzy control and open-loop control.
2.2.1. The controller: The BIOSIM software has a fuzzy controller at its core. The FCS characteristics are:
- For the fuzzyfier, three triangular membership functions were used;
- The inference engine uses the tables of rules to decide upon the degree of membership for the fuzzyfied values;
- The defuzzyfier applies the centroid method, a convex optimization technique.
2.2.2. The controller use: The controller takes as crisp inputs the cellular concentration (X) and the substrate concentration (S). Based on the inference tables, the controller computes a value for the substrate to be added in order to obtain a high growth rate.
2.2.3. Simulation: The simulation routine uses this value to modify the current value of the substrate, which is employed in computing the new cellular concentration. The cell concentration is determined as a function of the substrate concentration by using different microbial growth kinetic models that define the specific growth rate (μ). Prior to deciding about the values of interest, a calibration step is necessary, e.g. to determine the saturation constant (K_S) and the maximum specific growth rate (μ_max).
2.2.4. Theoretical background: Several microbial growth kinetics were used to fit the experimental data: the Monod, Tessier and Moser models (without inhibition by substrate), and Andrews (considering substrate inhibition) [5].
2.2.5. The algorithm: The biomass increase and the substrate consumption are given by the following equations, used in difference form for the software implementation:

dX/dt = μ·X,   dS/dt = −(1/Y)·dX/dt

The modeling of the bioprocess was done in discrete form; the differential equations become difference equations.
Step 1. Compute: X_k = (1 + μ_{k−1})·X_{k−1}
Step 2. Compute: S_k = S_{k−1} − (1/Y)·(X_k − X_{k−1})
Step 3. Introduce S_k and X_k into the controller and receive A_j ≥ 0.
Step 4. Compute: S_k = S_k + A_j
Step 5. Compute μ_{k+1} according to the chosen model.
Step 6. Set k ← k+1 and go to Step 1. (A compact sketch of this simulation loop is given below, after the interface description.)
2.2.6. The technology: The software uses Matlab (version 2008b) in the background due to its several efficiency-boosting options.
2.2.7. The interface: The application interface provides the user with many working facilities, the most important being:
- a definable matrix used in fuzzyfying the substrate (S) and biomass (X) concentrations, and inputs for their domains;
- a definable matrix used in the functioning of the inference engine; the domain names are Z (zero), PM (positive small), PME (positive medium) and PMA (positive big);
- the initial substrate input;
- a drop-down box allowing the choice of the desired model to run the simulation;
- buttons for exporting and importing software configurations;
- graph windows for the evolutions of S and X;
- a displayed set of values used in chart plotting;
- a button for exporting the results to Excel files.
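The promised sketch of the simulation loop (Steps 1-6, here with the Monod model) follows; the kinetic constants are illustrative assumptions rather than the calibrated BIOSIM values, and fuzzy_correction stands in for the controller of Section 2.2.1:

MU_MAX, KS, Y = 0.65, 5.0, 0.5        # 1/h, mg/100 mL, yield (assumed values)

def mu_monod(S):
    return MU_MAX * S / (KS + S)

def fuzzy_correction(S, X):
    # placeholder for the fuzzy controller: returns the substrate
    # addition A_j >= 0 decided from the rule base
    return 0.0

X, S = 0.1, 115.0                     # initial biomass and substrate
mu = mu_monod(S)
for k in range(1, 7):                 # hourly difference-equation steps
    X_new = (1.0 + mu) * X                     # Step 1
    S = S - (X_new - X) / Y                    # Step 2
    S = S + fuzzy_correction(S, X_new)         # Steps 3-4
    mu = mu_monod(S)                           # Step 5 (chosen model)
    X = X_new                                  # Step 6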
Fig. 1. The interface of the BIOSIM software.
2.3. The case study
The formation of the immunomodulator product is associated with the cell growth rate. The substrate (S) and biomass (X) concentrations are considered the inputs of the proposed FCS. The output of the fuzzy system is the correction [+/−] to be applied to the substrate. The fuzzy rules, presented in Table 1, were established based on the experience of human experts.
Table 1. The rule base (rows: S_k; columns: X_k)
          X_k = S   X_k = M   X_k = L
S_k = S      Z        PM        PM
S_k = M      PM       PME       PMA
S_k = L      PM       PME       PME

Further, two experimental data sets are discussed. The maximum specific growth rate (μ_max) was calculated from the experimental data. In the first set, the cell growth duration was four hours with S starting at 115 mg/100 mL and μ_max = 0.65 h⁻¹; in the second one, the initial substrate concentration was set at 120 mg/100 mL and the cell growth was tracked over six hours, with μ_max = 0.71 h⁻¹.
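One possible coding of this rule base is sketched below; the membership domains and output centroids are assumptions for illustration, not the calibrated BIOSIM settings:

def tri(x, a, b, c):
    # triangular membership with peak at b; a == b or b == c give shoulders
    left = 1.0 if b <= a else (x - a) / (b - a)
    right = 1.0 if c <= b else (c - x) / (c - b)
    return max(0.0, min(left, right, 1.0))

S_SETS = {"S": (0, 0, 60), "M": (30, 60, 90), "L": (60, 120, 120)}
X_SETS = {"S": (0, 0, 2), "M": (1, 2, 3), "L": (2, 4, 4)}
OUT = {"Z": 0.0, "PM": 5.0, "PME": 10.0, "PMA": 15.0}    # assumed centroids
RULES = {("S", "S"): "Z",  ("S", "M"): "PM",  ("S", "L"): "PM",
         ("M", "S"): "PM", ("M", "M"): "PME", ("M", "L"): "PMA",
         ("L", "S"): "PM", ("L", "M"): "PME", ("L", "L"): "PME"}

def fuzzy_correction(S, X):
    num = den = 0.0
    for (s_lab, x_lab), out in RULES.items():
        w = min(tri(S, *S_SETS[s_lab]), tri(X, *X_SETS[x_lab]))  # rule firing
        num += w * OUT[out]       # centroid (weighted-average) defuzzyfication
        den += w
    return num / den if den > 0.0 else 0.0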
Fig. 2. Simulation results for the substrate (A) and the biomass (B), 1st experiment – 4 hours of growth ("x" – experimental data, "o" – simulation), Monod model.
Fig. 2 demonstrates that the data obtained by simulation closely follow the experimental data, both for the substrate consumption and for the biomass growth.
Fig. 3. Simulation results for the substrate (A) and the biomass (B), 1st experiment – 4 hours of growth ("x" – experimental data, "o" – simulation), Tessier model.
According to Fig. 3A, the substrate consumption is quite small compared with the initial concentration, indicating that the substrate concentration in the medium composition may be reduced. Both models represent the experimental results well. However, because the bacterial population appears to be still growing exponentially, a four-hour period is not sufficient for the bioprocess.
Fig. 4. Simulation results for the substrate (A) and the biomass (B), 2nd experiment – 6 hours of growth ("x" – experimental data, "o" – simulation), Monod model.
In Fig. 4 it can be seen that the substrate consumption is lower than that indicated by the simulation during the bioprocess evolution, while the biomass simulation results are similar to the experimental data.
Fig. 5. Simulation results for the substrate (A) and the biomass (B), 2nd experiment – 6 hours of growth ("x" – experimental data, "o" – simulation), Tessier model.
Finally, as shown in Fig. 5, the simulation results for the cellular concentration closely follow the experimental data but, as in the previous case, the measured substrate consumption is lower during the entire period. The simulated substrate consumption shows a higher deviation from the experimental decrease because BIOSIM uses a general theoretical equation that is probably not adequate enough to describe the real evolution. By contrast, the biomass growth is well represented by the simulation. BIOSIM allows the choice of the optimum initial substrate concentration, in this case 120 mg/100 mL. It determines the experimental biomass growth rate needed to establish the recommended bioprocess operating conditions, including the duration (six hours). At the same time, the software can be used to study the kinetic behaviour of the discontinuous bioprocess, in this case study the adequacy of the Tessier and Monod models.
3. Conclusions
Several sets of experimental data were used to test the proposed FCS, the original package BIOSIM, two of which are highlighted in this paper. The objective of these experiments was to reduce the bioprocess duration while obtaining a higher final cellular concentration. By selecting the optimum initial substrate concentration and taking into account the higher growth rate, the simulation data can be applied to initiate fed-batch operation. The simulation study has shown that the fuzzy technique is quite appropriate for determining both the recommended operating conditions for this non-linear, time-varying system and the cellular growth kinetics.
References
[1] H. W. Ryu, M. Kim, J. N. Kim, J. S. Zun, 2006, Appl. Biochem. Biotechnology, 10, 129-132
[2] M. Jench, S. Gnoth, M. Beck, M. Kleinschmidt, R. Simutis, A. Lubbert, 2006, J. Biotech., 127, 84
[3] R. C. Alavala, 2007, Fuzzy Set and Fuzzy Logic, Fuzzy Logic and Neural Networks, 6
[4] C. Ungureanu et al., 2008, Model and kinetic parameters identification for therapeutical product obtaining according to the GMP guidelines, Revista de chimie, 59 (7), 762-765
[5] M. Caramihai, I. Severin, A. A. Chirvase, A. Onu, C. Tanase, C. Ungureanu, 2010, Therapeutic Product Preparation. Bioprocess Modeling, World Academy of Science, Engineering and Technology, 66, 1747-1450
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) c 2011 Elsevier B.V. All rights reserved.
Integration of a multilevel control system in an ontological information environment Edrisi Muñoz, Antonio Espuña, Luis Puigjaner∗ Department of Chemical Engineering, Universitat Politècnica de Catalunya ETSEIB, Avda. Diagonal, 647, E-08028 - Barcelona, Spain.
Abstract
In the domain of chemical process engineering management, many specialists work together at different levels of decision-making. This work focuses on improving the computational tools and models required to make robust decisions based on high-quality information, with specific application to batch processes. The informatics system and the user interface developed to support robust and informed decision-making are proposed. A basic ontological framework developed previously (the Batch Process Ontology, Muñoz et al., 2010) is extended to consider additional decision levels, as well as further solution strategies involving overall objective functions and control actions. Regarding the ontological framework developed, the emphasis is on simultaneously achieving high degrees of usability and re-usability. Preliminary results are presented and discussed.
Keywords: Ontology, Knowledge sharing, Knowledge management, Decision support systems.
1. Introduction
Decision Support Systems (DSS) are directly or indirectly related to manufacturing indicators such as economic efficiency, product quality, flexibility, reliability, etc. Global competition has made the use of such DSS essential for the viability of the enterprise, and the need to develop new tools that integrate the different time and scale levels involved has been highlighted. The first requirement to achieve such integration is to define standardized information structures and more sophisticated information tools to exploit them, in order to improve the availability and communication of data between different decision levels, and also the models behind the corresponding decision support tools. In this framework, the role of infrastructures that continuously and coherently support fast and reliable decision-making activities related to the production process is now of paramount importance (Venkatasubramanian et al., 2006). The use of an ontology is proposed as a formal specification, that is, a body of formally represented knowledge based on conceptualizations, which are abstract, simplified views of the physical or procedural elements. The ontology elements should be part of the model intended to represent a system for some purpose, managing the relationships that hold among the elements of the model and allowing this model to be usable (Gruber, 2008). On the other hand, reusing ontologies is far from being an automated process; it requires consideration not only of the ontology but also of the tasks for which it is intended. The key to the reusability of the presented ontology is that it is founded on the ANSI/ISA-88 standard (International Society for Measurement
and Control, 2001). In general, this standard should facilitate building larger, better and cheaper systems. It should also lead to a greater dissemination of these systems.
2. Ontological model
Indeed, ontologies are hierarchical domain structures that provide a domain theory, have a syntactically and semantically rich language, and offer a shared and consensual terminology (Klein and Noy, 2003). As is well known, the design, construction and operation of chemical plants are considered major engineering activities (Morbach et al., 2009); accordingly, the ontological framework should represent not only the terminology but the entire domain of chemical engineering processes, with particular attention to the integration of control activities at different levels. On the one hand, the ontology intends to resolve eventual terminological confusions, since one of its commitments is to guarantee consistency with respect to queries and assertions using the vocabulary defined in the ontology. It also relates the different mathematical models within the system, showing the correspondences that exist among them; this relation allows the enrichment of models and the flow of information for future decision-making. The proposed ontological model relies on the ANSI/ISA-88 standard, which describes the entire scope of manufacturing activities. It provides a framework that an engineer may use to specify automation requirements in a modular fashion. In addition, the ontological model can be used for integrating batch-related information with Manufacturing Execution Systems and Enterprise Resource Planning systems. The description and models of the different functional decision levels that exist in companies help to diminish the hurdles present in the coordination and integration between these levels. This is accomplished by using the different models founded in the ANSI/ISA-88 standard: (i) the physical model, which defines the hierarchy of equipment used in the batch process, providing the means to organize and define the equipment used in the process; and (ii) the procedural model, which describes the strategy that enables the equipment in the physical model to perform a process task; it is defined as multi-tiered (hierarchical) and composed of different elements (Williams, 1989). The procedural and physical models are related to each other by means of recipes: a recipe consists of the set of information that uniquely defines the production requirements for a specific product. One of the most significant contributions that the standard offers to batch manufacturing is the separation of the recipe procedure from the equipment control logic.
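As a purely illustrative sketch (hypothetical class names loosely following the standard's terminology, not an excerpt of the ontology itself), the two hierarchies and their link through recipes can be rendered as follows:

from dataclasses import dataclass, field
from typing import List

@dataclass
class EquipmentModule:          # physical model: lowest tier shown here
    name: str

@dataclass
class Unit:                     # physical model: groups equipment modules
    name: str
    modules: List[EquipmentModule] = field(default_factory=list)

@dataclass
class Phase:                    # procedural model: acts on a unit
    name: str
    unit: Unit

@dataclass
class Operation:                # procedural model: ordered phases
    name: str
    phases: List[Phase] = field(default_factory=list)

@dataclass
class Recipe:                   # links the procedure to product requirements
    product: str
    procedure: List[Operation] = field(default_factory=list)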
3. Development of the software architecture
The proposed informatics system allows the utilization of the ontology as a common model between actors, thus facilitating communication and knowledge reuse among them. Moreover, given the current lack of integration between the different control levels (Purdue Reference Model), the ontology eases the decision support task by providing knowledge integration among decision layers. The architecture of the proposed ontological framework is based on BaPrOn (Batch Process Ontology), described in Muñoz et al. (2010). One of the major requirements considered for this software architecture is that the system should be constructed in an open-source, modular fashion (Horridge et al., 2007). In this way the ontology can be considered both a nominal ontology and a high ontology, taking into account the factors that influence its complexity, such as concepts, taxonomy, patterns, constraints and instances. The software Protégé was used
for the generation of the ontological model using OWL (the Web Ontology Language). The OWL language has the expressive power needed to represent the different domains of the solution we want to explore. This unifying aspect, for instance, may make it easier to establish, through collaboration and consensus, the utilitarian vocabularies (e.g. ontologies) needed for far-flung cooperative and integrative applications using the World Wide Web and internal servers.
4. Background
The main objective established for the development of this system was to promote the capacity to integrate different perspectives (i.e., different hierarchical decision levels) and the mappings between them. In this sense, the proposed ontological framework contemplates enterprise control system integration, where processes are categorized, the relationships between them are examined and imposed, and the properties that specify these relationships are introduced.
4.1. Technological architecture
The application development is based on the MVC (model-view-controller) pattern. MVC proposes the identification and classification of the application functionalities in three different layers: model, control logic and user interface. These layers are clearly separated from each other, allowing the easy implementation of new features. Besides, this architecture facilitates the scaling of the infrastructure.
4.1.1. View layer
The view layer represents the page design code and manages the interaction with the final customers, delivering the information in different formats. For the implementation of this layer, the Apache Struts framework (Holmes, 2004), an open-source framework for creating Java web applications, has been used.
4.1.2. Control layer
The control layer includes the navigational code. A controller accepts input from the user and instructs the model and view port to perform actions based on that input. The controller is thus responsible for mapping end-user actions to application responses.
4.1.3. Model layer
Business layer. In the business layer, also known as the domain layer, the programs are running. This layer communicates with the view layer, to receive requests and present the results, and with the data layer, to store or retrieve data from the database manager. Its logic usually corresponds to the software engineering practice of compartmentalizing, and it is usually one of the tiers in a multitier architecture.
Data and access layer. The data and access layer is where the data reside and is responsible for accessing them. It consists of one or more database administrators that perform all data storage, receiving requests for storing or retrieving information from the business layer. For this layer the Hibernate framework (Linwood et al., 2010) has been used. Hibernate facilitates the storage and retrieval of Java domain objects via object/relational mapping. Today, Hibernate is a collection of related projects enabling developers to utilize POJO-style domain models in their applications, extending well beyond object/relational mapping.
4.2. Software system
The Java platform has been used as the high-level programming language because it offers good versatility, efficiency and security. In addition, Java code can run on most computers because Java interpreters and runtime environments, known as Java Virtual Machines (VMs), exist for most operating systems. Another key factor in the choice of Java is the fact that the ontological classes can be easily adapted (translated) into Java classes for the client's web use through the interface, as shown in Figure 1. In this way, the software can be developed and propagated as open source, meaning that it can be easily adapted to any process plant's requirements. The application of the ontological model takes place inside the business layer.
Figure 1. Software system architecture
5. Results and Performance
Within the domain of manufacturing activities modelling, a business layer has been successfully implemented for the first time to encompass the integration of planning and control tasks by means of the ANSI/ISA-88 standard embedded in an ontological infrastructure platform. Each task processes an XML recipe which contains the necessary information and data for the process performance. Once these recipes have been processed by an external application, they are sent to the next process stage, acting as a path for the flow of information. Finally, the control and monitoring data are saved in a database and, at the same time, sent to the planning level if required. The planning and control applications include a Case-Based Reasoning (CBR) method to assist in the decision-making task. At the same time, this CBR helps in information recovery, acting as a cache and reusing the data, and significantly facilitates the handling of constraints. The use of this innovative software system improved the communication and coordination procedures at the planning-scheduling level and the control-monitoring level. Besides, the system's reactivity to incidences from different sources and levels was enhanced. This can be observed in more detail in Muñoz et al. (2011).
6. Conclusions
A new supporting tool to coordinate and optimize the information flow among decision levels has been developed in order to facilitate the decision-making task. It makes use of a previously developed ontological framework through a friendly user interface. Specifically, the ontological framework is adopted to represent the reality of the different decision levels and to establish a common model. As a result, several challenges prompted
by integration are addressed: modularity, data availability, standardization, a more effective handling of disturbances and an efficient information flow. Jointly, they enhance the access to information, enriching its meaning by incorporating knowledge description. Furthermore, the benefits of the implementation of the framework (software system) include improvements in the way that data and information are managed at the different decision levels. It is well known that it is difficult to simultaneously achieve high degrees of usability and reusability: usability implies specialization to a particular task, whereas reusability requires genericity in order to be applicable in different contexts. However, this work builds on the ANSI/ISA-88 standard as the basis of the domain representation, moving the usability-reusability trade-off towards new opportunities in its application. What is more, the easy adaptation to process plant requirements proved the framework's usability. The improvement in systems integration is achieved through enhanced communication and coordination procedures. Reactivity to incidences from different sources and decision levels was also achieved.
7. Acknowledgments
Financial support from the Direccion General de Educacion Superior Tecnologica (DGEST) Academy Excellence Program (072007004-E.A) of México and from the research project EHMAN (DPI200909386), funded by the European Union (European Regional Development Fund, ERDF) and the Spanish "Ministerio de Ciencia e Innovación", is fully appreciated.
References
Gruber, T., 2008. Ontology. In: Ling Liu and M. Tamer Özsu (Eds.).
Holmes, J., 2004. Struts: The Complete Reference (Osborne Complete Reference Series). McGraw-Hill Osborne Media.
Horridge, M., Jupp, S., Moulton, G., Rector, A., Stevens, R., Wroe, C., 2007. A practical guide to building OWL ontologies using Protégé 4 and CO-ODE tools. Tech. rep., The University of Manchester.
International Society for Measurement and Control, February 2001. Data structures and guidelines for languages.
Klein, M., Noy, N. F., 2003. A component-based framework for ontology evolution.
Linwood, J., Minter, D., 2010. Integrating and configuring Hibernate. In: Beginning Hibernate. Apress, pp. 9–25.
Morbach, J., Wiesner, A., Marquardt, W., 2009. OntoCAPE 2.0: a (re)usable ontology for computer-aided process engineering. Computers & Chemical Engineering 33, 1546–1556.
Muñoz, E., Capón-García, E., Moreno-Benito, M., Espuña, A., Puigjaner, L., 2011. Scheduling and control decision-making under an integrated information environment. Computers & Chemical Engineering, in press.
Muñoz, E., Espuña, A., Puigjaner, L., 2010. Towards an ontological infrastructure for chemical batch process management. Computers & Chemical Engineering 34 (5), 668–682. Selected paper of Symposium ESCAPE 19, June 14-17, 2009, Kraków, Poland.
Venkatasubramanian, V., Zhao, C., Joglekar, G., Jain, A., Hailemariam, L., Suresh, P., Akkisetty, P., Morris, K., Reklaitis, G., July 2006. Ontological informatics infrastructure for pharmaceutical product development and manufacturing. Computers and Chemical Engineering 30, 1482–1496.
Williams, T. J., 1989. A reference model for computer integrated manufacturing (CIM). ISA, Research Triangle Park.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21
E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors)
© 2011 Elsevier B.V. All rights reserved.
Control Structure Selection with Regard to Stationary and Dynamic Performance with Application to a Ternary Distillation Column
Le Chi Pham, Sebastian Engell
Process Dynamics and Operations Group, Department of Biochemical and Chemical Engineering, Technische Universität Dortmund, Emil-Figge-Str., Dortmund, Germany
E-mail: lechi.pham | s.engell@bci.tu-dortmund.de
Abstract
In previous work we extended the analysis of control structures based upon full process models to include the dynamic performance in a consistent manner. In our approach, NMPC controllers are assumed in order to avoid the problem of comparing control structures when the dynamic performance depends on the type and the parameterization of the controllers that are used. The weights of these controllers are optimized for each structure to yield an optimal economic performance for the disturbance scenarios considered. In order to avoid basing the decision on the worst-case disturbances only, the probability of the occurrence of the disturbances is taken into account, considering also small disturbances which happen more frequently. Thus the result is a good approximation of the performance that is attained by each structure in reality, and the structures are compared on equal grounds. The approach is demonstrated for a ternary distillation problem.
Keywords: control structure selection, economic performance, distillation control
1. Introduction
Control structure selection deals with the choice of manipulated and measured variables in feedback control. The performance of the closed-loop system is more strongly affected by the choice of the control structure than by the choice of the control algorithm. Most papers from the control community deal only with the resulting tracking and regulation performance rather than with the resulting performance of the plant in the presence of disturbances and parameter variations.
Several important aspects of the control of chemical processes from the point of view of plant performance were discussed by Morari et al. (1980). Morari's idea was followed by the so-called "self-optimizing" control introduced by Skogestad (2000), meaning that in the presence of disturbances a well-chosen control structure could be able to maintain the process at a close-to-optimal operating point. A step-wise approach based on rigorous stationary analysis was proposed. Engell et al. (2005) refined this approach using objective criteria and optimization to replace informed judgements, taking into account the effect of measurement errors. Static performance indicators were used to assess the promising control structures. Pham and Engell extended the work to consider also the dynamic performance for time-varying disturbances, to better judge the performance of the controlled plants. In this paper, results for a new case study of a ternary distillation column are shown, for which realistic static and dynamic disturbances are defined. The control structure selection is performed based upon a full nonlinear dynamic plant model.
2. Control Structure Selection Procedure
The purpose of automatic feedback control, from a process engineering point of view, is to establish close-to-optimal process operation in the presence of disturbances and plant-model mismatch. The effect of feedback control on the profit function J is the difference between the profit from keeping the manipulated variables at the nominal values with no disturbances affecting the plant and the profit from regulating the manipulated variables by the controller with disturbances affecting the plant. This can be expressed as (Engell et al. 2005, Engell 2007):

ΔJ = [J(u_nom) − J(u_nom, d_i)] + [J(u_nom, d_i) − J(u_opt, d_i)] + [J(u_opt, d_i) − J(u_con, d_i)]

The first term is the loss if disturbances occur and the manipulated variables are fixed at their nominal values. The second term is the effect of the optimal adaptation of the manipulated variables in the presence of disturbances. The third term is the difference between the optimal adaptation and the one realized by the chosen feedback control structure in the presence of disturbances. The overall performance of a control structure should be evaluated by the expected loss of profit (Engell et al. 2005):

ΔJ = ∫_{−d_{1,max}}^{d_{1,max}} … ∫_{−d_{n,max}}^{d_{n,max}} w(d)·[J(u_nom, d) − J(u_con, d)] dd_1 … dd_n

where w(d) denotes the probability of the occurrence of the disturbance d; the integral is usually approximated by a weighted sum over a set of disturbance scenarios. Here we assume that the process is affected by two kinds of disturbances, slowly and fast varying ones. The optimization of the steady-state performance is used to treat the former, analyzing the worst-case performance of the regulatory control when keeping the controlled variables within a range around the setpoints defined by the measurement errors (Engell et al. 2005). The latter are incorporated in a consistent manner by introducing optimization-based control using nonlinear dynamic models, which avoids the problem of comparing control structures whose dynamic performance depends on the type and the parameterization of the controllers that are used. The weights used in the cost functions of the NMPC controllers are optimized to obtain an optimal economic performance. In this paper, realistic disturbance cases are considered by defining both the size and the frequency content of the disturbances. Thus the result is a good approximation of the real performance of each structure, and the best structure is found by evaluating all promising candidates. The proposed control structure selection procedure consists of the following steps.
1. Define the optimization problem.
The available degrees of freedom of the process are determined and the manipulated variables are chosen. A profit function J to be maximized and the constraints that need to be fulfilled during the operation are defined:

max_u J(x, u, d_i)
s.t. f(x, u, d) = 0   (plant model)
     h(x, u) ≤ 0   (constraints)

The output mapping is given by y = m(x).
2. Choose the disturbances.
Two types of disturbances are assumed: measurement errors and external disturbances. While the former can be found in the instrument data sheets, the latter are caused by errors in the assumed model, external disturbances, etc.
3. Preselection of the control structures.
The number of possible control structures is a function of the number of available measurements and controlled variables, and it grows quickly with the number of measurements. Hence, for large problems, unpromising structures should be pre-screened beforehand. Many indices can be utilized, e.g. RHP zeros, the generalized non-negative RGA, etc.
4. Selection of the setpoints for regulatory control.
The optimal setpoints are determined by solving

max_{y_set} ∑_{i=1}^{n} J(x, u_i, d_i)
s.t. ∀ d_i:  f(x, u_i, d_i) = 0,  h(x, u) ≤ 0,  y_set = m(x)

The idea is to find setpoints that satisfy the constraints for all disturbances. The optimization problem can be infeasible, meaning that for the given constraints and disturbances there is no common setpoint which can be attained.
5. Quantitative evaluation of the benefits of the control structures with constant disturbances.
For all disturbances d_i, the following optimization problem is solved to obtain the worst-case control performance for regulation of the controlled variables to values in the range around the nominal setpoint y_set defined by the measurement errors:

min J(x, u_i, d_i)
s.t. f(x, u_i, d_i) = 0,  h(x, u) ≤ 0,  y = m(x),
     y_set − ε_sensor ≤ y ≤ y_set + ε_sensor

If the value of the maximum loss is large, it means that in the presence of the measurement errors the corresponding control structure is not able to ensure a good stationary performance and should be excluded.
6. Quantitative evaluation of the benefits of the control structures with dynamic disturbances.
In this step, the dynamic performance which can be achieved if disturbances occur and the controlled variables are kept as close to the setpoints as possible is computed. This is done by employing a simulation of nonlinear model predictive control for tracking the setpoints. The objective function is defined by

P(t_k):  min_u ∫_{t_k}^{t_k + H_P} ( ‖y(t) − y_set‖²_P + ‖Δu‖²_Q ) dt
s.t. ẋ(t) = f(x(t), u(t), d(t)),  h(x, u) ≤ 0,  y(t) = m(x(t))

where ‖·‖_X denotes the norm defined by ‖u‖²_X = u^T X u and X is a positive semidefinite matrix. The controlled variables are steered towards the setpoints while the change of the manipulated variables is minimized, which is a requirement in practice. P and Q are degrees of freedom and are chosen such that the economic profit function J is maximized by an upper-layer optimization:

max_{P,Q} ∫_{0}^{t_end} ∑_{i=1}^{n} J(x, u, d_i) dt
s.t. P, Q ⪰ 0

The results are compared and the structure which yields the best performance is chosen.
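The two-layer logic of step 6 can be sketched as follows (assumed interfaces: simulate_mpc is a stub standing in for the closed-loop NMPC simulation over the disturbance scenarios, and Nelder-Mead is one possible gradient-free search of the kind used later in this paper):

import numpy as np
from scipy.optimize import minimize

def simulate_mpc(P, Q):
    # stub: run the tracking NMPC with weights P, Q over the disturbance
    # scenarios and return the integrated economic profit J
    return -float(np.trace(P @ P) + np.trace(Q @ Q))    # placeholder value

def neg_profit(theta):
    P = np.diag(np.exp(theta[:2]))   # diagonal, positive, so P, Q >= 0 holds
    Q = np.diag(np.exp(theta[2:]))
    return -simulate_mpc(P, Q)

res = minimize(neg_profit, x0=np.zeros(4), method="Nelder-Mead")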
3. Case study
The methodology described above is applied to a ternary distillation column model due to Skogestad, shown in Fig. 1. The column, whose stages include the condenser and the reboiler, is used to separate a mixture of methanol, ethanol and propanol. The non-ideal vapor-liquid equilibrium (VLE) is modeled by the Wilson equation, and the liquid holdup by the Francis weir formula. The system is described by a large nonlinear DAE system. The reboiler and condenser levels are assumed to be perfectly controlled using the distillate and bottom flow rates. The reflux ratio and the boiler heat duty are left as the degrees of freedom and are chosen as the manipulated variables. We assume that methanol is the desired product in the distillate.

Fig. 1. Ternary distillation column.

The profit function is chosen as

J = c_Methanol · H(x_Methanol − x_min) · n_Methanol − c_heat · (heat input) − c_Feed · F

where H is the Heaviside step function and x_min the required distillate purity, which implies that the purity of the distillate product should satisfy the requirement. To avoid numerical problems when working with a discontinuous function in an optimization solver, H is approximated by the logistic function H(x) ≈ ½(1 + tanh(kx)) with a large value of k. The possible controlled variables are the temperatures on the trays as well as LT, VB, LT/D, LT/F, D/F, VB/B, B/F, VB/F and the concentration of methanol in the distillate, where LT denotes the reflux flow, VB the heat duty, D and B the distillate and bottom flow rates and F the feed flow rate. Together these define the set of possible controlled variables.
&RQWURO6WUXFWXUH6HOHFWLRQZLWK5HJDUGWR6WDWLRQDU\DQG'\QDPLF 3HIRUPDQFHZLWK$SSOLFDWLRQWR$7HUQDU\'LVWLOODWLRQ&ROXPQ 657 QRLVHZLWKDPDJQLWXGHRIRIWKHPD[LPXPYDOXHLVDGGHGWRWKHGLVWXUEDQFHV7KH VHQVRUHUURULV .IRU WKHWHPSHUDWXUH VHQVRU IRU WKHFRQFHQWUDWLRQ PHDVXUH PHQWDQGIRURWKHUV 7KH QRPLQDO SURILW LV $ SUHVFUHHQLQJ ZDV SHUIRUPHG EDVHG XSRQ JHQHUDOL]HG QRQVTXDUH 5*$ DQG 5+3 ]HURV UHPRYLQJ XQSURPLVLQJ VWUXFWXUHV OHDYLQJ VWUXFWXUHV ,Q VWHS FRPPRQ VHW SRLQWVFRXOGEHIRXQGIRUVWUXF WXUHV VWUXFWXUHV DUH H[FOXGHG IURP IXUWKHU FRQVLGHUDWLRQ ,Q VWHS ZKHUH WKH ZRUVW FDVH DQDO\VLV LV SHUIRUPHG VWUXFWXUHV WKDW \LHOG WKH VPDOOHVW ORVVHV DUH VHOHFWHG IRU WKH ILQDO VWHS ,Q WKLV VWHS ILUVW D OLQHDU 03& VLPXODWLRQ ZDV HP SOR\HG ZLWK WKH QRQOLQHDU ULJRURXV SURFHVV PRGHO )URP WKH UHVXOW RI WKHOLQHDU 03& VLPXODWLRQ VWUXF WXUHV ZLWK EHVW SHUIRUPDQFH ZHUH FKRVHQ )RU WKH RSWLPL]DWLRQ RI WKH ZHLJKWV RI WKH 03& FRQWUROOHUV D JUDGLHQWIUHH RSWLPL]DWLRQ SURFH GXUH ZDV XVHG 7KH SHUIRUPDQFH LQGLFHV RI VHYHUDO VWUXFWXUHV DUH JLYHQ LQ 7DE ,W FDQ EH REVHUYHG WKDWVWUXFWXUH79%) ZKLFKFRQ )LJ103&VLPXODWLRQRI79%) FRQWUROVWUXFWXUH WUROVWKHWHPSHUDWXUHRIWUD\ DQG ZLWKGLVWXUEDQFHLQWKHIHHGIORZUDWHDQGFRPSRVLWLRQ WKH UDWLR 9%) \LHOGV WKH EHVW SHU IRUPDQFH 7KH UHVXOW RI WKLV VWUXFWXUH ZLWK GLVWXUEDQFHV LQ WKH IHHG IORZ UDWH DQG WKH IHHGFRPSRVLWLRQDUHJLYHQLQ)LJ,WLVLQWHUHVWLQJWKDWDPHDVXUHPHQWRIWKHWRSFRP SRVLWLRQJLYHVQRDGYDQWDJHKHUH
UV%QPVTQNNGF8CT
4GHNWZ
VKOG O
VKOG O
VKOG O
VKOG O
PF%QPVTQNNGF8CT
$QKNWR
VKOG O
VKOG O
(GGFEQORQUKVKQP
2WTVQR
(GGFHNQY
&RQFOXVLRQ $PHWKRGRORJ\IRUFRQWUROVWUXFWXUHVHOHFWLRQZDVSUHVHQWHGZLWKWKHDLPRIRSWLPL]LQJ WKHSODQWSHUIRUPDQFHWDNLQJLQWRDFFRXQWWKHSUHVHQFHRIERWKVWHDG\VWDWHDQGG\QDPLF GLVWXUEDQFHV7KHPHWKRGZDVDSSOLHGVXFFHVVIXOO\WRWKHH[DPSOHRIDWHUQDU\GLVWLOOD WLRQFROXPQ
References
M. Morari, Y. Arkun, G. Stephanopoulos, 1980. Studies in the synthesis of control structures for chemical processes. AIChE Journal.
S. Engell, T. Scharf, M. Völker, 2005. A methodology for control structure selection based on rigorous process models. Proc. 16th IFAC World Congress.
S. Engell, 2007. Feedback control for optimal process operation. Journal of Process Control, 17(3).
L.C. Pham, S. Engell. Control structure selection for optimal disturbance rejection. IEEE Multi-Conference on Systems and Control (MSC).
S. Skogestad, 2000. Plantwide control: the search for the self-optimizing control structure. Journal of Process Control, 10(5).
S. Skogestad. Model of a ternary distillation column. http://www.nt.ntnu.no/users/skoge/distillation/
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
The Coulomb Glass – Modeling and Computational Experience with a Large Scale 0-1 QP Problem*
Ray Pörn^a, Otto Nissfolk^b, Fredrik Jansson^c and Tapio Westerlund^b
a Novia University of Applied Sciences, Wolffskavägen 33, 65200 Vaasa, Finland
b Process Design and Systems Engineering Laboratory, Åbo Akademi University, Biskopsgatan 8, 20500 Turku, Finland
c Department of Physics and Center for Functional Materials, Åbo Akademi University, Porthansgatan 3, 20500 Turku, Finland
Abstract
In this paper we model and solve a large linear equality constrained 0-1 quadratic programming (QP) problem arising in theoretical physics, more precisely in the study of lightly doped semiconductors: finding the ground state of a Coulomb glass. Semidefinite programming is used for relaxation and then for reformulation. The relaxation gives a lower bound, and we devise a fast upper-bound heuristic based on randomization and a local search for feasibility. The reformulation is used as a preprocessing step to construct a convexified 0-1 QP with good relaxation characteristics. We experiment with large-scale instances and solve them to very near global optimality within approximately one hour of CPU time.
Keywords: 0-1 quadratic program, convexification, Coulomb glass, quadratic convex reformulation, semidefinite programming.
1. Introduction and background
The general form of a 0-1 QP with linear equality constraints is given by

min_{x ∈ {0,1}^n}  x^T Q x + c^T x
s.t.  A x = a      (QP)
where Q is a symmetric n×n matrix, A an m×n matrix, c an n-dimensional vector, a an m-dimensional vector and x an n-dimensional vector of binary variables. Many different solution approaches have been suggested for (QP). Commonly used approaches are linearization procedures [4] and convexification techniques [1]. In this work we apply the quadratic convex reformulation method recently developed by Billionnet and co-workers [1]. The Coulomb glass [7,8] is a model for a lightly doped semiconductor at low temperature (a few K), where the conduction electrons are localized to impurity sites and the electrons strongly interact with each other. One interesting problem in this model is finding the ground state, i.e. the configuration of electrons that minimizes the total energy of the system. This problem is important since, at low temperatures, the Coulomb interaction has a strong influence on properties of the semiconductor such as the conductivity, via the creation of the so-called Coulomb gap [8] in the density of impurity energies. In the physics community this minimization problem has traditionally been solved by different heuristic approaches [6]. These methods are fast,*
Email: [email protected], [email protected], [email protected], [email protected]
The Coulomb glass
659
but they cannot guarantee global optimality nor do they produce valid lower bounds for the ground state energy. The specific problem is described in section 2. Section 3 focuses on relaxation and reformulation and section 4 on the solution of large scale instances of the Coulomb glass. The paper is concluded in section 5.
2. Problem description We assume that the system consists of n impurity sites randomly and uniformly distributed in a square [0,L]x[0,L] (or cube) with side length L and that k of these are occupied by electrons and are charged negatively. The remaining n-k sites are empty and neutral. Let the vector vi=(vxi,vyi) i=1,…,n represent the location of the sites within the square and rij=||vi-vj|| the Euclidean distance between sites i and j. To avoid edge effects, we assume periodic boundary conditions and use minimum image convention, so that rij gives the distance between site i and the nearest image of site j [6]. The total energy of the system can now be written as E ( x) =
n n q2 x x e i j ke ¦ ¦ + rij i =1 j =i +1
energy from Coulomb interactio n
n
¦ε x
i i
i =1
site specific energy
where ke is the Coulomb constant, qe the electron charge, İi~U(-1,1) the site specific energy and we set xi=1 for filled sites and xi=0 for empty ones. After normalization [6] the problem of minimizing the total energy of the system can be expressed as min E ( x) = xT Qx + cT x
(CG-QP)
x
eT x = k , x ∈{0,1}n
where e is an n-dimensional vector with each component equal to one, c is the vector with elements ci=İi and Q is the matrix defined element-wise as Qij=1/(2rij) for ij and Qii=0. In this paper we always assume that k=n/2 (n even), i.e. exactly half of the sites are occupied. The matrix Q is, by construction, indefinite and fully dense, that is the quadratic function is nonconvex and contains a maximal number of bilinear terms.
3. SDP relaxation and reformulation It is well-known that semidefinite programming can be used to derive tight relaxations for 0-1 quadratic programs [2,5]. By defining the matrix variable X=xxT, the semidefinite relaxation of (CG-QP) can be stated as min E( x) = Q • X + cT x x, X
eT x = n / 2, diag( X ) = x ª1 xT º « » 0 ¬x X ¼
min Y
Q 1ª 4 «¬(c + Qe)T 1 ªΟ 2 «¬eT Y
0
c + Qeº •Y 0 »¼
eº ªe º • Y = 0, diag (Y ) = « » 0 »¼ ¬1¼
(CG-SDR)
where Q • X = ¦ ¦ Qij X ij . The equality constraint can be lifted into the (x,X)-space i j by considering the squared norm constraint || eT x − n/ 2 ||2 = (eeT ) • X − neT x + n2 / 4 = 0 . The formulation to the right is the equivalent homogeneous variant for {-1,1} discrete variables obtained by transformation y=2x-e.
660
R. Pörn et al.
3.1. Lower bounding by SDR and upper bounding by randomization In this section we solve 30 instances of the 2-dimensional homogeneous (CG-SDR) with different sizes. The first experiment studies small to medium size problems with n=50, 100 and 150. The results are given in table 1. The SDPs were solved in matlab 2007b with Sedumi 1.21 (default settings) using the CVX package [3] on a Dell Latitude Laptop with a 2,26 GHz processor and 3,45 GB of RAM. The SDP provides a lower bound (LB) to problem (CG-QP). An upper bound (UB) can be produced by considering the SDP solution matrix Y* as a covariance matrix and sample random vectors from the distribution N(0,Y*). The elements of the random vectors are rounded to {-1,1} by signs and then projected onto the hyper plane eT y = 0 ⇔ eT x = n / 2 by a local axial search. The conclusion is now that LB v(CG-QP) UB. Table 1. Solution results for 10 instances of CG-SDR (n=50,100,150) with 200 randomizations. (n; cpu(s); mean of Sedumi iterations; variables; constraints) = (50; 0,5; 15; 1326; 52), (100; 1,3; 17; 5151; 102), (150; 3,1; 18; 11476; 152). instance
n=50 LB 111,015 114,762 116,734 115,937 114,636 115,328 114,862 121,055 116,956 112,298
1 2 3 4 5 6 7 8 9 10 mean
UB 117,410 116,483 121,064 117,346 118,856 120,648 116,974 122,473 120,860 116,310
gap
n=100
% 5,45 1,48 3,58 1,20 3,55 4,41 1,81 1,16 3,23 3,45 2,93
LB 362,186 363,163 359,391 363,895 370,075 364,915 363,182 362,282 355,044 364,853
UB 369,067 369,853 367,150 367,847 373,205 370,330 366,032 366,343 366,362 371,466
gap
n=150
% 1,86 1,81 2,11 1,07 0,84 1,46 0,78 1,09 2,93 1,36 1,53
LB 698,296 692,320 691,742 692,439 694,263 689,303 689,102 682,926 678,391 683,102
gap UB 706,740 699,132 702,781 694,594 699,080 696,653 701,978 696,293 683,890 689,111
% 1,18 0,97 1,57 0,31 0,69 1,06 1,83 1,92 0,80 0,87 1,12
The gap between the best lower and upper bound is consistent and small. In figure 1 the solutions Y* to (CG-SDR) are examined and an outcome of a single randomization for instance 100-1 is presented. A rank-1 solution for problem (CG-SDR) corresponds to a global optimum in (CG-QP). In figure 1a we observe that all Y*-solutions has a high rank1-content. The effective rank of Y* is about 4-6 and quite constant over size (n). Thus, the randomization and rounding procedure is likely to result in high quality solutions when a rank-1 (binary) solution is constructed. This is clearly seen in figure 1b and 1c as about 50% of the generated upper bounds are very low. Figure 1. a) Content in rank-1 approximations of Y*. b) Function values of 200 randomized solutions for problem 100-1. c) Histogram of 200 randomized solutions for problem 100-1. Content in rank-1 approximation
1
200 randomized solutions
550
90
Histogram of randomized solution
80
0.95
70
0.9
frequency
function value
500
450
400
100
0
2
4
6 instance
8
40
20
150 0.8
50
30
50
0.85
60
10
10
350
0
50
100 iteration
150
200
0 350
400
450 function value
500
550
The Coulomb glass
661
3.2. Reformulation by SDP The QCR-method [1] is a procedure to reformulate problem (QP) in an optimal way. The Lagrangian of QP (with the squared-norm constraints included) is n
Eu ,v ,w ( x ) = x T Qx + cT x + ¦ ui ( xi2 − xi ) + v T ( Ax − a ) + w Ax − a
2
i =1
2
= xT (Q + Diag (u ) + wAT A) x + (c − u − 2 wAT a + AT v )T x + w a − v T a
Multipliers u and w perturbs the quadratic part and multiplier v the linear part. We have only interest in u and w and we also omit w since problem (CG-QP) only contains a single equality. According to [1] a perturbation vector u can be obtained as the optimal Lagrange multiplicators of constraints diag(X)=x from the solution to problem (CGSDR). The perturbed objective Eu ( x ) = x T (Q + Diag (u )) x + ( c − u )T x is then convex and has a maximal minimal value over Rn. This property implies that the perturbed objective has good relaxation characteristics. We now reformulate and solve the same 30 instances of problem (CG-QP) from section 3.1. The objective is convexified using diagonal perturbations u from the CG-SDR solution and then solved with CPLEX 12.2 on a Windows7 PC with 2,8 GHz CPU (Intel Core i7 930) and 6 GB of RAM. All instances are solved to global optimality within 10 minutes. The global optimum (OPT) can now be compared with the upper bound (UB) from the randomization procedure from previous section. From table 2 we conclude that the convexified QPs are solved fast and that the upper bounds from the randomization return near optimal solutions. Table 2. Comparison of randomized upper bound and global solution. (n; min cpu(s); max cpu(s); mean cpu(s)) = (50; 0,17; 0,28; 0,23), (100; 0,8; 14,3; 3,7), (150; 2,3; 521; 186,4). instance 1 2 3 4 5 6 7 8 9 10 mean
n=50 OPT 116,986 116,483 120,925 117,291 118,702 120,613 116,974 122,411 120,781 116,310
UB 117,410 116,483 121,064 117,346 118,856 120,648 116,974 122,473 120,860 116,310
gap% 0,36 0,00 0,11 0,05 0,13 0,03 0,00 0,05 0,07 0,00 0,08
n=100 OPT 368,878 368,924 366,702 367,242 372,843 370,044 366,032 366,244 365,730 369,736
UB 369,067 369,853 367,150 367,847 373,205 370,330 366,032 366,343 366,362 371,466
gap% 0,05 0,25 0,12 0,16 0,10 0,08 0,00 0,01 0,01 0,04 0,08
n=150 OPT 706,279 698,707 701,798 694,545 698,331 695,910 701,669 695,642 683,801 688,784
UB 706,740 699,132 702,781 694,594 699,080 696,653 701,978 696,293 683,890 689,111
gap% 0,07 0,06 0,14 0,01 0,11 0,11 0,04 0,09 0,01 0,05 0,07
4. Experiments with large scale instances In this section we turn to larger instances. We study the solution of 10 instances of a 2dimensional Coulomb glass with sizes n=500 and n=1000, respectively. The results from the experiment are summarized in table 3. A tight gap is obtained from the semidefinite relaxation followed by the randomization procedure. The solutions are obtained in about 2 minutes for n=500 instances. The gap is about 0,61% in this case. The gap can be reduced by applying the convexification procedure from section 3.2. In this case the gap decreases to 0,31% when the branch and bound search was terminated after 1 hour. The corresponding gaps for n=1000 instances are 0,46% and 0,40%.
662
R. Pörn et al.
Table 3. Results from solutions of large scale instances. Comparison of LB-sdr (lower bound from CG-SDR problem), UB-r (upper bound from randomization and feasibility search), LB-cQP (lower bound from CPLEX after 1 hour) and UB-cQP (upper bound from CPLEX after 1 hour). instance 1 2 3 4 5 6 7 8 9 10 mean gap% mean cpu(s)
n=500 LB-sdr UB-r 4540,6 4568,1 4520,2 4549,3 4535,9 4566,5 4529,6 4560,4 4535,9 4562,8 4532,6 4566,4 4533,4 4558,2 4547,7 4579,3 4531,6 4559,7 4543,4 4560,4 0,61 91 11
LB-cQP UB-cQP 4554,7 4566,0 4535,7 4548,3 4551,3 4564,1 4545,2 4557,8 4544,8 4560,3 4545,3 4564,0 4542,0 4555,6 4557,4 4575,5 4540,7 4557,9 4549,3 4558,5 0,31 3600 3600
n=1000 LB-sdr UB-r 13138,2 13195,1 13091,6 13196,2 13121,1 13186,7 13163,7 13201,9 13096,3 13174,2 13139,5 13177,7 13139,9 13199,2 13109,0 13173,2 13123,9 13175,8 13167,2 13218,1 0,46 969 38
LB-cQP UB-cQP 13140,2 13187,6 13094,7 13191,6 13122,4 13177,9 13166,1 13198,5 13100,2 13170,7 13142,0 13174,9 13142,5 13189,6 13113,1 13167,5 13125,4 13168,8 13169,6 13211,4 0,40 3600 3600
5. Conclusions The formulation and solution of the Coulomb glass, a semiconductor related problem from material physics, was studied in this paper. The problem could be formulated as a 0-1 nonconvex QP problem and the ground state of the electron configuration corresponds to its global minimum. The problem was solved by two techniques: (i) a procedure that was based on semidefinite relaxation followed by a randomization procedure and a feasibility search and (ii) a reformulation technique that was based on optimal convexification using semidefinite programming. Medium sized instances were solved quickly to global optimality using the reformulation technique. Large scale instances were solved to very near global optimality in about one hour. Acknowledgements: The financial support from the Academy of Finland projects 116995 (Fredrik Jansson), 127992 (Otto Nissfolk) and the Centre of Excellence in Optimization and Systems Engineering at Åbo Akademi University are gratefully acknowledged.
References [1] A. Billionnet, S. Elloumi and M-C. Plateau, 2009, Improving the performance of standard solvers for quadratic 0-1 programs by a tight convex reformulation: the QCR method, Discrete Applied Mathematics, 157, 1185-1197. [2] M.X. Goemans and D.P. Williamson, 1995, Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming, Journal of Global Optimization ,10, 367-380. [3] M. Grant and S. Boyd, 2010, CVX: Matlab software for disciplined convex programming, version 1.21. http://cvxr.com/cvx. [4] P. Hansen and C. Meyer, 2009, Improved compact linearizations for the unconstrained quadratic 0-1 minimization problem, Discrete Applied Mathematics, 157, 1267-1290. [5] C. Helmberg and F. Rendl, 1998, Solving quadratic (0,1)-problems by semidefinite programs and cutting planes, Mathematical Programming, 82, 291-315. [6] A. Möbius, M. Richter and B. Drittler, 1992, The Coulomb gap in two- and three-dimensional systems: Simulation results for large samples, Physical Review B, 45, 11568-11579. [7] M. Pollak, 1970, Effect on carrier-carrier interactions on some transport properties in disordered semiconductors, Discussions of the Faraday Society, 50, 13-19. [8] B.I. Shklovskii and A.L. Efros, 1984, Electronic properties of doped semiconductors, Springer-Verlag.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Reliable optimal control of a fed-batch fermentation process using ant colony optimisation and bootstrap aggregated neural network models Jie Zhang, Yiting Feng, Mahmood Hilal Al-Mahrouqi School of Chemical Engineering and Advanced Materials, Newcastle University, Newcastle upon Tyne NE1 7RU, UK
Abstract This paper presents a reliable optimal control strategy for a fed-batch fermentation process using ant colony optimisation and bootstrap aggregated neural network models. Bootstrap aggregated neural networks are used to enhance model accuracy and reliability. A further advantage of bootstrap aggregated neural network is that model prediction confidence bounds can be calculated from individual network predictions. The objective function of fed-batch fermentation process optimisation based on neural network models typically contains multiple local minima and traditional gradient based optimisation may be trapped in a local minimum. In order to overcome this problem, ant colony optimisation is used. The optimisation objective function is modified to incorporate model prediction confidence in order to enhance the reliability of the calculated “optimal” control policy. Application results on a simulated fed-batch fermentation process demonstrate that the proposed strategy is very effective. Keywords: Neural networks, batch processes, optimisation, ant colony optimisation, fermentation, reliability.
1. Introduction Batch and fed-batch processes are routinely used for the manufacturing of high value added products, such as specialty polymers and fine chemicals (Bonvin, 1998). Optimal control of batch processes is very important because, in the face of growing competition and stringent environmental regulations, it represents a natural way for reducing production costs and improving product quality. Mechanistic models have been utilized for many years for optimal control studies (Park and Ramirez, 1988; Luus, 1991). However, developing full phenomenological models for complex processes is usually very difficult and time consuming. To circumvent these difficulties, neural network models can be developed (Morris et al., 1994; Zhang, 2005). However, the use of neural network model based optimal control strategy is faced with two major challenges. The first challenge is the non-robust performance of neural networks when they are applied to unseen data and the second challenge is the need for powerful global optimisation method that can effectively overcome the conventional problem of falling into local minima. Neural network models are highly nonlinear and thus are rich in sub-optimal traps that can lock in the traditional gradient-based optimisation methods. Therefore, population-based optimisation methods such as genetic algorithms and ant colony optimisation (ACO) should be used to overcome this problem (Dorigo and Gambardella, 1997). A modified ACO algorithm for continuous and
664
J. Zhang et al.
mixed variable domain developed by Al-Mahrouqi and Zhang (2008) is used in this study. This paper uses bootstrap aggregated neural networks to enhance model robustness and accuracy. A further benefit of using bootstrap aggregated neural networks is that model prediction confidence bound can be calculated from individual network predictions. Due to the inevitable model plant mismatches, optimisation results are only optimal on the model and may not be optimal on the plant. To address this issue, model prediction confidence bound is incorporated in the optimisation objective function. In addition to optimising process operation objectives, the proposed strategy also searches for solutions that lead to narrow prediction confidence bounds (i.e. reliable model predictions) under the calculated control policy.
Fig. 1. A bootstrap aggregated neural network
2. Bootstrap aggregated neural networks Fig. 1 shows a bootstrap aggregated neural network model, where several neural network models are developed to model the same relationship. Instead of selecting a “best” single neural network model, these individual neural networks are combined together to improve model accuracy and robustness. The overall output of the aggregated neural network is a weighted combination of the individual neural network outputs: n
f ( X)
¦ w f ( X) i
i
(1)
i 1
where f(X) is the aggregated neural network predictor, fi(X) is the ith neural network, wi is the aggregating weight for combining the ith neural network, n is the number of neural networks, and X is a vector of neural network inputs. Since the individual neural networks are highly correlated, appropriate stacking weights could be obtained through principal component regression (Zhang et al., 1997). Instead of using constant stacking weights, the stacking weights can also dynamically change with the model inputs (Ahmad and Zhang, 2005; 2006). Another advantage of bootstrap aggregated neural network is that model prediction confidence bounds can be calculated from individual network predictions (Zhang, 1999). The standard error of the ith predicted value is estimated as
Ve
{
1 n [ y ( xi ;W b ) y(xi ;)]2 }1 / 2 ¦ n 1 b 1
(2)
Reliable optimisation control of a fed-batch fermentation process
where y(xi; .) =
¦
n
b 1
665
y( xi ;W b ) / n and n is the number of neural networks in an
aggregated neural network. Assuming that the individual network prediction errors are normally distributed, the 95% prediction confidence bounds can be calculated as y(xi; .) r 1.96Ve. A narrower confidence bound, i.e. smaller Ve, indicates that the associated model prediction is more reliable.
3. Optimal control of a fed-batch fermentation process 3.1. Neural network modelling of a fed-batch fermentation process The process considered in this paper is a fed-batch yeast fermentation process taken from Yuzgec et al. (2009), where detailed mechanistic model is presented. The operation objective is to produce maximum amount of biomass by adjusting the glucose feed rate subject to operation constraints. The neural network model is of the following form, Y = f(U), where Y is the final biomass concentration and U = [F1 F2 … Fn] is a vector of glucose feed rates over a batch. For ease of implementation, the batch duration is divided into n = 10 stages and the feed rate is kept content in each stage. Training data: -:y; ..:yp 4 2 0 -2
0
5
10
15
20
25
30
35
40
45
50
Testing data: -:y; ..:yp 3 2 1 0 -1
0
2
4
6
8
10
12
14
16
Fig. 2. Performance of the aggregated neural network on training and testing data
Simulated process operation data from 50 batches were produced from simulation. Bootstrap re-sampling with replacement (Efron, 1982) was used to generate 30 replications of the data. The data were scaled to zero mean and unit variance before they were used for neural network training. A neural network model was developed on each replication of the data. In developing a single neural network model, 60% of the data is selected as the training set and the rest as the validation set. The validation data set is for cross validation based network structure determination and training termination. Single hidden layer neural networks are considered in this study and the number of hidden neurons is determined based on the network performance on the validation data. The networks were trained by using Levenberg-Marquardt optimisation algorithm with early stopping. Then the 30 individual neural networks were combined together to give the final model predictions. A further 16 batches of data were produced as unseen testing data to evaluate the developed neural network model. Fig. 2 shows the predictions (scaled) of the bootstrap aggregated neural network on the training and
666
J. Zhang et al.
unseen testing data. It can be seen that the bootstrap aggregated neural network gives excellent prediction performance on the unseen testing data. 3.2. Optimising control In order to improve the reliability of the calculated optimal control policy, the following modified optimisation objective function is considered in this study: min J = f (U ) + OV
(3)
U
where f(U) is the bootstrap aggregated neural network output, i.e. the predicted biomass concentration at the end of a batch, U is a vector of glucose feed rates over a batch, i.e. the control policy, ı is the standard error of model prediction from Eq(2) and is indicative of the model prediction confidence, and Ois a weighting parameter for the standard error. Operational constraints on feed rate and volume are imposed. This objective function intends to maximise the amount of product and also minimise the model prediction confidence bounds leading to a reliable control policy. For the purpose of comparison, the optimisation problem is also solved using the sequential quadratic programming (SQP) implemented in the MATLAB Optimisation Toolbox. The SQP optimisation was run three times with different initial values and three different results were obtained indicating local minima being reached. ACO optimisation with a single neural network model is then carried out. Under this control policy, the neural network prediction is f(x) = 67.6598 g/L. When apply this control policy to the mechanistic model, the biomass concentration at the end of the batch is 47.2216 g/L, which is quite different from the single neural network prediction. This indicates that a single neural network is not reliable and it can give large errors when applied to data not used in network training. Optimisation performance can be improved by using bootstrap aggregated neural networks with model prediction confidence incorporated in the objective function. Table 1 shows the results of ACO optimisation with a bootstrap aggregated neural network model and the modified objective function given by Eq(3). The results in Table 1 indicate that the modified objective function given by Eq(3) does improve the reliability of the calculated optimal control policy. When O=0, the difference between neural network prediction and the actual (mechanistic model simulated) final biomass concentration under the calculated optimal control policy is larger than when O is increased away from 0. When O is 0.01, the actual final biomass concentration under the calculated optimal control policy is very close to the neural network prediction. This demonstrates that the proposed strategy can improve the reliability of the optimal control. Table 1. Final biomass concentrations of different cases under ACO optimisation
O Neural network Mechanistic model
0 59.3799 48.6181
0.0001 58.3773 48.7180
0.005 59.4043 51.1513
0.01 60.8908 56.9135
4. Conclusions A reliable optimal control strategy for a fed-batch fermentation process is developed by using ant colony optimisation and bootstrap aggregated neural networks. It is shown that bootstrap aggregated neural networks can enhance model accuracy and reliability
Reliable optimisation control of a fed-batch fermentation process
667
when the model is applied to unseen data. Furthermore, model prediction confidence bounds can be calculated from individual network predictions. Model prediction confidence bound is incorporated in the optimisation objective function so that wide prediction confidence bounds are penalised. It is shown that this can enhance the reliability of the calculated optimal control policy. Ant colony optimisation can generally find the global optimal solution. Application results demonstrate that the proposed strategy is very effective.
Acknowledgement The work is supported by the EU through the project iREMO – intelligent reactive polymer composite moulding (grant No. NMP2-SL-2009-228662).
References Z. Ahmad and J. Zhang, 2005, Bayesian selective combination of multiple neural networks for improving long range predictions in nonlinear process modelling. Neural Computing & Applications, 14, 78-87. Z. Ahmad and J. Zhang, 2006, Combination of multiple neural networks using data fusion techniques for enhanced nonlinear process modelling. Computers & Chemical Engineering, 30, 295-308. M. Al-Mahrouqi and J. Zhang, 2008, Reliable Optimal Control of a Fed-Batch Bio-Reactor Using Ant Colony Optimization and Bootstrap Aggregated Neural Networks, Proceedings of the 17th IFAC World Congress, 6 – 11 July, 2008, Seoul, Korea, 8407-8412. D. Bonvin, 1998, Optimal operation of batch reactors--a personal view. Journal of Process Control, 8(5-6), 355-368. M. Dorigo and L. M. Gambardella, 1997, Ant colonies for the travelling salesman problem. BioSystems, 43, 73-81. B. Efron, 1982, The Jacknife, the Bootstrap and Other Resampling Plans. Philadelphia: Society for Industrial and Applied Mathematics. R. Luus, 1991, Effect of the choices of the final time in optimal control of non-linear systems. Can. J. Chem. Eng. Res., 30, 1525-1530. A. J. Morris, G. A. Montague, and M. J. Willis, 1994, Artificial neural networks: studies in process modelling and control. Trans. IChemE, 72, 3-19. S. Park, and W. F. Ramirez, 1988, Optimal production of secreted protein in fed-batch reactors. AIChE J., 34, 1550-1558. U. Yüzgeç, M. Türker, and A. Hocalar, 2009, On-line evolutionary optimization of an industrial fed-batch yeast fermentation process. ISA Transactions, 48, 79-92. J. Zhang, A. J. Morris, E. B. Martin, C. Kiparissides, 1997, Inferential estimation of polymer quality using stacked neural networks. Computers & Chemical Engineering, 21, s1025s1030. J. Zhang, 1999, Developing robust non-linear models through bootstrap aggregated neural networks. Neurocomputing, 25, 93-113. J. Zhang, 2005, Modelling and optimal control of batch processes using recurrent neuro-fuzzy networks. IEEE Transactions on Fuzzy Systems, 13(4), 417-427.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N.. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved..
Integrated process and control design by the normal vector approach: Application to the Tennessee-Eastman process* Diego A. Muñoz,a,b Johannes Gerhard,a Ralf Hannemann,a Wolfgang Marquardta a
Aachener Verfahrenstechnik, RWTH Aachen University, 52064 Aachen, Germany.
b
Research Group on Mathematics. Universidad Pontificia Bolivariana, Med.-Colombia.
Abstract This paper presents a large-scale application of the normal vector approach to demonstrate that the complexity of robust dynamic optimization with application to the integration of process and control design can be treated successfully for complex nonlinear systems. The case study further demonstrates that our approach can deal with a multi-dimensional uncertainty space. The normal vector approach is able to automatically identify the worst-case scenarios and find a solution that is optimal with respect to the cost function and robust with respect to path constraints on inputs and states in the presence of parameterized disturbances. The tedious analysis of a large number of different disturbance realizations is not required. Keywords: grazing bifurcation, robust optimal design, disturbances, normal vector, Tennessee-Eastman process.
1. Introduction The Tennessee-Eastman process was presented by Downs and Vogel [1] as an industrially relevant case study for testing plant wide process control design methods. There are a number of papers where different plant-wide control structure for this process are developed, ranging from decentralized control structures [1-3] to model predictive control [4-5]. In contrast, the aim of this work is the treatment of the integrated process and control system design using the normal vector method for transient behavior [6] to demonstrate that this approach can handle dynamic models of industrially relevant processes and that it is especially suited for a high-dimensional uncertainty space. Our approach considers maximization of an objective function, e.g. economic profit, while at the same time desired properties in the time domain of the system are guaranteed despite unknown time-varying disturbances. The time-domain constraints may represent safety constraints, such as an upper temperature limit in a *
This work has been supported by the German Academic Exchange Service (DAAD-ALECOL), and by the German Research Foundation under grant MA 1188/28-1.
Integrated process and control design by the normal vector approach
669
chemical reactor, or quality constraints such as a bound on product properties. For safe and economical operation it is therefore important to ensure that these constraints hold despite unknown disturbances d (D , t ) , with D 4 , 4 R nD denoting the robustness region. We use this class of parameterized disturbances to represent a family of bounded disturbances. It is important to keep in mind that no disturbance model is able to encompass all possible disturbances, but in principle, any complex disturbance signal can be modeled by some carefully chosen functions d (D , t ) . Note that this integrated design problem with uncertain parameters D can be framed as nonlinear semi-infinite optimization problem. In the following, the normal vector approach is presented as an alternative to worst-case (min-max) formulation to address such robust optimal design problems.
2. Normal Vector Approach We consider systems of nonlinear differential-algebraic equation (DAEs) of index at most one that can be written as
x
f ( x(t ), y (t ), p, d (D , t )), 0
g ( x(t ), y (t ), p, d (D , t )), x(t0 )
x0
(1)
with differential state variables x R nx , corresponding time derivatives x R x , n consistent initial conditions x0 , algebraic variables y R y , system parameters np p R , disturbances d (D , t ) R nd parameterized by a set of parameters D R nD , and time t R . The disturbance functions d (D , t ) thus map from R nD u R to R nd . The functions f and g are assumed to be sufficiently smooth with respect to x, y, p and d. f n n n n and g map from some subset U R nx u R y u R p u R nd into R x and R y , respectively. To consider integrated process and control design, Eqs. (1) represent the closed-loop systems with fixed control and process structure, where both, the process and control parameters, concatenated in p, are understood to be degrees of freedom for optimization. The fundamental idea of the so-called normal vector approach [7] is to consider the parametric distance between a candidate operating point and a closest point located on the so-called critical manifold. These manifolds are defined by a set of points at which a property of interest changes qualitatively. In this work, we are interested in critical manifolds related to transient process behavior [6]. In particular, we want to robustly optimize a controlled process such that time-domain inequalities are satisfied despite D-parameterized disturbances. Particularly, we assume that there is a set of inequality constraints n
0 d h( x(t ), y (t ), p, d (D , t ), t )
(2) n
n
with h a sufficiently smooth mapping from U R nx u R y u R p u R nD u R into R nh . The critical manifold is characterized by a set of time responses that tangentially touch the hypersurface spanned by an active time-domain constraint, i.e. h = 0, in order to prevent violations of the transient behavior for all parameters D 4 . These kinds of points are so-called grazing points [8]. Fig. 1(a) shows an illustrative example of the time response of state x1 after a step disturbance parameterized by D1. Note that the projection of the manifold of grazing points in Fig 1(c) separates the parameter plane
D.A. Muñoz et al.
670
Fig. 1. (a) Profile of x1 after step disturbance parameterized by D1. The grazing point is located at the extremum of the curve. (b) A fold-like surface of points in case of two parameters D1 and D2. (c) Projection of the manifold of grazing points separates the parameter plane (D1, D2), where r represents the common normal vector.
(D1, D2). Sketches of the time responses show on which side the bound h is not violated. The idea is to estimate the locally closest distance between the boundary of the uncertain region w4 and the projection of the critical manifold M(c) onto the D-space, which occurs along the direction normal to M(c). All the details about the augmented system and the derivation of the normal vector on a critical manifold constraining the transient behavior can be found in [6].
3. Robust Optimization of the Tennessee-Eastman Process The Tennessee-Eastman process produces two liquid products (G, H) and one byproduct (F) from four gaseous reactants (A, C, D, E) and one inert (B) using a twophase exothermic reactor, a flash separator, a stripper, a recycle compressor and a condenser. The model equations of the process derived in [10] are used in this work. Here the so-called base case with a mass ratio of G/H=50/50 and a total production rate of 14076 kg/h for products (G) and (H) is considered [1]. Inequality constraints are imposed on the liquid levels of the reactor, separator, and stripper, on the reactor temperature and on the reactor pressure, i.e., 50% d Li d 100%; Treactor d 150qC ;
preactor d 29.96 bar
(3)
with Li being the liquid level in unit i in % and i {reactor, separator, stripper}. Further, the positions of all valves are restricted to values between fully closed (0%) and fully opened (100%). The present study used the decentralized control structure of the liquid levels of the reactor, the separator, and the stripper and control of the reactor temperature to stabilize the open-loop unstable process at the desired operating point. In comparison to the original control structure proposed in [2] inner cascade control loops are introduced for all valves, where the tuning parameters are taken from [3]. In all loops PI control is employed. The aim of the optimization is to find a process and control design and a corresponding steady state operating point that is optimal with respect to the total annualized cost defined by )
¦ CCi / T i
¦j
N q reactants
payback
feed j cost OCC CWC
(4)
Integrated process and control design by the normal vector approach
671
where CCi refers to capital cost of unit i, i {reactor, separator stripper, mixing unit, compressor, condenser, reactor cooler}, OCC refers to compressor operating cost and CWC is the cooling water cost. A payback period of Tpayback = 3 is assumed. The process constraints that must robustly hold despite disturbances are defined by Eqs. (3) and all the bounds on the valve positions. Altogether there are 26 constraints that must hold in the presence of disturbances. Sinusoidal disturbances with uncertain amplitude and uncertain frequency are considered to approximate the random disturbances for the composition and temperature of the feed stream 4. For the temperature of the feed stream 2, for the inlet temperature of the cooling water of the reactor and of the condenser, stepwise disturbances of uncertain amplitud are used. Further, a stepwise change of the desired production rate of ±5% is included to ensure not only a robust but also a flexible process design and optimal operating point. The chosen disturbances are inspired by the disturbance scenarios proposed in [1]. The uncertainties of all parameters are defined by upper and lower bounds. We want to stress that the complexity of this problem is by far larger than the dynamical systems used in our previous work [6]. It considers a complete nonlinear process with 67 ordinary differential equations, over 450 algebraic equations and ten uncertain parameters to parameterize seven different input disturbances. There are over 130 degrees of freedom in the optimization problem, which encompass both design parameters, such as the reactor volume and the operating point, as well as control parameters. As no critical points are known in advance, the optimization is solved without any normal vector constraints. At the optimal solution, three time-domain constraints are active, Lreactor=50%, preactor=29.69, vcooling-water,r=100%. The total annual cost according to Eq. (4) is 50.633 million dollars per year. Clearly, the solution is not robust, as any disturbance will lead to violations of the active constraints mentioned before. Therefore normal vector constraints for the critical manifolds of grazing points have to be employed to ensure that all constraints hold despite disturbances. The search for other critical points employed at the corners of the uncertainty hypercube further reveals a critical point for the upper bound of the condenser cooling water valve. Thus there are four critical points for the active constraints, which are used to obtain a robust solution. The robust solution is found with four active normal vector constraints where the total annual cost increase to 50.645 million dollars per year. The time responses of the four critical points at the robust solution are shown in Fig. 2.
Fig. 2. Critical time responses: (1) reactor pressure; (2) reactor level; (3) reactor cooling water valve position; (4) condenser cooling water valve position.
D.A. Muñoz et al.
672
The values of some design parameters p and the steady-state of some manipulated variable are shown in Table 1, where for brevity the control parameters are omitted. The SQP-solver NPSOL is used for the solution of the involved NLP. Because the system of equations defining the normal vector for the critical manifold of grazing points involve sensitivities of the state variables [6], a gradient based optimization requires the computation of second-order sensitivities. For this purpose, the integrator NIXE [9] is used, which is part of a new platform for source-level manipulation of mathematical models*. This solver uses first- and second-order tangent-linear and adjoint models of the residual of linear implicit autonomous differential algebraic systems in the context of an extrapolated linearly-implicit Euler scheme. A single integration run of the Tennesse-Eastman process model together with first and secondorder takes roughly 13 seconds on a PC with 3 GHz and 2GB RAM. Therefore the integrated process and control system design problem could be solved . Table 1. Parameters p and manipulated variables at the optimal operating point. Parameter
Value
Parameter
Value
Parameter
A feed (vA), %
22.984
D feed (vD), %
62.786
E feed (vE), %
purge (vpurge), %
14.516
A and C feed (vC), %
60.349
Value 52.172 3
80
3
reactor vol., m
reac. heat exch. area
254.04
sep. underflow (vsep), %
35.712
sepator vol., m
40
sep. heat exch. area
67.831
product flow (vprod), %
46.387
reac. cool flow (vwr)
67.265
4. Conclusions The application to the robust process and control design of the Tennessee-Eastman process is a proof of concept that the normal vector approach for transient systems can handle model sizes of a few hundred equations. The case study further demonstrates the ability of our method to deal with a multidimensional uncertainty space. Additionally, our approach is able to automatically identify the worst-case scenarios and find a solution that is optimal with respect to the cost function and robust with respect to the constraints in the presence of disturbances.
References [1] [2] [3] [4] [5] [6] [7] [8] [9] *
J. J. Downs, E. F. Vogel, Comput. Chem. Eng. 17 (3) (1993) 245. M. L. Luyben, B. D. Tyreus, W. L. Luyben, AIChE J. 43 (12) (1997) 3161. T. J. McAvoy, Ind. Eng. Chem. Res. 38 (8) (1999) 2984. P. Wang, T. McAvoy, Ind. Eng. Chem. Res. 40 (24) (2001) 5732. Z. Tian, K. A. Hoo, Ind. Eng. Chem. Res. 44 (9) (2005) 3187. J. Gerhard, W. Marquardt, M. Mönnigmann, SIAM J. Appl. Dyn. Syst. 7 (2) (2008)461. M. Mönnigmann, W. Marquardt, Ind. Eng. Chem. Res. 44 (8) (2005) 2737. A. B. Nordmark, J. Sound Vib. 145 (2) (1991) 279. R. Hannemann, W. Marquardt, U. Naumann, and B. Gendler, ICCS(2010) 297.
http://wiki.stce.rwth-aachen.de/content/research/index.html
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Calibration of a polyethylene plant model for grade change optimisations Niklas Andersson,a Per-Ola Larssonb, Johan Åkessonb, Staffan Haugwitzc, Bernt Nilssona a
Dept. of Chemical Engineering, Lund University, Box 124, SE 221 00 Lund, Sweden Dept. of Automatic Control, Lund University, Box 118, SE 221 00 Lund, Sweden c Borealis AB, SE-444 86, Stenungsund, Sweden b
Abstract A polyethylene plant model coded in Modelica and based on a nonlinear MPC model currently used at Borealis AB is considered for calibration. A case study of model calibration at steady-state for four different operating points are analysed, both when looking at one operating point separately, but also to calibrate several simultaneously. Both model parameters and reactor inputs are calibrated for true plant measurement data. To solve the parameter estimation problem, the JModelica.org platform is used, offering tools to express and solve calibration problems. Calibration was obtained with narrow confidence intervals and shows a potential to improve the model accuracy by changing the parameter values. The results will be used for dynamic optimisations of grade changes.* Keywords: calibration, Modelica, grade change, chemical industry, polymerisation
1. Introduction Polyethylene manufacturers are facing a market that is constantly changing, which create a demand to move between different grades cost-efficiently by manipulating the feeding of raw materials to the reactors. An existing Borstar® polyethylene plant at Borealis AB that produces bimodal polyethylene will be considered. Bimodal polyethylene products are polymerised in three cascaded reactors, pre-polymerisation, loop and gas phase reactors (GPR). The first and smallest reactor is the prepolymerisation reactor, whose main purpose is to gently polymerise the surface of the catalyst particles since a fast reaction may damage the particles. In the subsequent loop reactor the first peak of the bimodal molecular weight distribution is formed. The last reactor in the chain, GPR, is a fluidised bed reactor wherein the second peak is mainly formed. Accurate modelling of advanced chemical reactors is a difficult task, which if successful may help to cut expenses of raw materials. This requires calibration of the model to make the differences to the real process dynamics as small as possible. The main purpose of the calibration is to obtain valid model parameters for a model suitable for optimisation of grade changes, which has previously shown promising results in (Larsson, Andersson et al. 2010). Calibration of the model at four different steady-state operating points is shown; both when looking at one operating point separately, but also to calibrate several simultaneously. *
Sponsored by Borealis AB and the Swedish Foundation of Strategic Research in the framework of Process Industry Centre at Lund University (PICLU).
N. Andersson et al.
674
2. Modelling Languages and Tools The modelling language used to express the mathematical model is Modelica, which is a high-level language for encoding of complex physical systems, supporting objectoriented concepts such as classes, components and inheritance. In addition, textbook style declarative equations can be expressed. This modelling paradigm has significant advantages over the block-based paradigm in the context of physical modelling. In particular, acausal modelling systems do not require the user to solve for the derivatives of a mathematical model. Instead, differential and algebraic equations may be mixed, which then typically results in a differential algebraic equation (DAE). In order to strengthen the optimisation capabilities of Modelica, the Optimica extension has been proposed which adds a small number of constructs, enabling the user to conveniently specify optimisation problems based on Modelica models, see (Åkesson 2008). The calibrations in this paper has been performed using JModelica.org, which is a Modelica-based open source platform targeted at dynamic optimisation, see (Åkesson, Bergdahl et al. 2009). Calibration in JModelica.org relies on an interior point algorithm, called IPOPT(Wächter and Biegler 2006).
3. Mathematical Plant Model Modelling a series of reactors is a task including theoretical and empirical challenges. A resulting model of such work at Borealis AB for the Borstar® process is today used onsite in a non-linear Model Predictive Control (MPC) software, OnSpot, see e.g. (Saarinen and Andersen 2003), which is the same model used in this paper. The model is described with more details in (Larsson, Andersson et al. 2010). Each reactor is modelled from material balances, where either the inflow comes from a previous reactor or a fresh feed, and outflows go to subsequent reactor, a bleed, recycle, or product outlet. The reactions are modelled using extended Arrhenius expressions, depending on temperature, pressure, reactant concentrations and catalyst activity. The catalyst activity varies throughout the reactor series and demands careful modelling. Some assumptions are made to simplify modelling. Firstly, the reactor pressures are controlled by outlet valves holding the pressures constant. It is also assumed that the polymer and the fluids are well mixed and the temperatures are uniform in the reactors. Inputs used in the model are measured flows of ethylene, hydrogen and propane, but also comonomer and catalyst flows. Several outputs are available including substance masses, mass ratios, mass flows, concentrations, pressures, densities, production rates and split factor. The model contains, apart from the mentioned equations additional algebraic equations. If the inputs and outputs of the model are denoted ݑand ݕ, respectively, and the dynamic and algebraic variables are denoted ݔand ݓ, the model can be written in the general non-linear differential algebraic equation (DAE) form Ͳ ൌ ܨሺݖǡ ݑǡ ሻ ݕൌ ݃ሺݖǡ ݑǡ ሻ ் ݖൌ ሾݔሶ ் ் ݓ ் ݔሿ
(1)
in the optimisation problems. Here ݖdenotes the ܰ௭ DAE variables. The parameters to calibrate are both the model parameters and the inputs ݑ, for which measurements are available. There are many parameters to calibrate in the model. Most parameters are kinetic parameters in reaction and catalyst deactivation rates, but also controlling parameters
Calibration of a polyethylene plant for grade change optimisations
675
affecting flows, pressures and levels. In this paper five parameters are chosen, namely ሺ݅ሻ a settling leg parameter in the loop ݈ݏଶ , ሺ݅݅ሻ a reference value for the fluidised bed level in the GPR ܾ݈ଷ , (݅݅݅)-( )ݒpre-exponential factors in the Arrhenius equations for ethylene and hydrogen in the loop and butylene in the GPR, ݇ଶ , ݇ଶ and ݇ଷ . The settling legs are designed to transport the slurry to the next reactor, and its parameter rules the solid ratio of the reactor outflow. The nominal model is calibrated in an ad hoc manner using process know-how, experiments and/or by trial and error. This is satisfactory for a model when used in a model predictive controller that can correct any discrepancies between actual and estimated output measurements by updating states or parameters. However, in offline grade change optimisation, there is no corrector, and model errors will immediately be penalised by taking unrealistic optimal paths, and therefore calibration is necessary.
4. Process Model Calibration A grade change is accomplished by transferring the process from producing a product A to producing another product B, denoted transfer A-B. In the sessions between grade changes, where the process operates in steady-state, four data sets for the transfers A-B and C-D have been averaged and measurement noise covariance have been computed. The scaled measurements ݕො and their standard deviations ߪ for some of the outputs can be seen in Table 1. The outputs are divided into input and state measurements, denoted ݕො௨ and ݕො௭ with 12 and 13 values respectively. Reactor hold-up masses are denoted ݉ and molar fractions ݔ , where the indexing ݅ denotes the components propane (), hydrogen (݄) , ethylene (݁), butylene (ܾ) and nitrogen (݊) while ݆ denotes prepolymerisation (ͳ), loop (ʹ) and GPR (͵) reactors. Table 1. Measurements (ݕො) and standard deviations (ı) for all data sets together with calibrated model outputs כ ݕand 95% confidence interval for all inputs ݕ௨ . Above the dots are 4 of 12 inputs shown and below 4 of the 13 states. All values are scaled to measurements of data set A. A
ෝ ࢟ ࢛ ࢛ࢎ ࢛ࢋ ࢛࢈ ǥ ࢙ ࢞ࢋ ࢞࢈ ࢞ࢋ
(ı) 1.00 (0.002) 1.00 (0.041) 1.00 (0.005) 1.00 (0.015) … 1.00 (0.011) 1.00 (0.033) 1.00 (0.032) 1.00 (0.031)
࢟ כേ ࢌ 1.00± 0.002 1.00± 0.047 0.83± 0.004 0.99± 0.017 … 1.02 1.02 1.02 1.10
B
ෝ ࢟ (ı) 0.97 (0.002) 2.51 (0.052) 0.89 (0.004) 1.18 (0.016) … 1.00 (0.014) 1.19 (0.015) 1.04 (0.013) 0.82 (0.008)
࢟ כേ ࢌ 0.97± 0.003 2.51± 0.060 0.79± 0.003 1.17± 0.018 … 1.03 1.24 1.06 0.97
C
ෝ ࢟
(ı) 0.97 (0.002) 3.54 (0.043) 1.31 (0.006) 0.78 (0.009) … 1.37 (0.008) 1.27 (0.009) 0.53 (0.008) 0.92 (0.013)
࢟ כേ ࢌ 0.97± 0.002 3.53± 0.050 1.05± 0.004 0.77± 0.011 … 1.37 1.28 0.54 1.08
D
ෝ ࢟
(ı) 0.97 (0.001) 3.67 (0.050) 1.25 (0.005) 0.70 (0.006) … 1.37 (0.011) 1.43 (0.008) 0.41 (0.002) 0.76 (0.005)
࢟ כേ ࢌ 0.97± 0.002 3.67± 0.058 0.97± 0.003 0.69± 0.006 … 1.41
AB-A ࢟ כേ ࢌ 0.98± 0.004 1.01± 0.060 0.83± 0.005 0.97± 0.029 … 0.95
AB-B ࢟ כേ ࢌ 0.99± 0.005 2.47± 0.097 0.78± 0.004 1.19± 0.032 … 1.08
1.48
1.05
1.20
0.42
1.05
1.03
0.89
1.10
0.97
676
N. Andersson et al.
The calibration of the system (1) is formulated as an optimisation problem ܳ௭ ܳ௨ ൌ ሺݕො௭ െ ݕ௭ ሻ் ܹ௭ ሺݕො௭ െ ݕ௭ ሻ ሺݕො௨ െ ݕ௨ ሻ் ܹ௨ ሺݕො௨ െ ݕ௨ ሻ ǡ௨
ǡ௨
݆ܾݑݏǤܨݐ ሺ ݖ ǡ ǡ ݑ ሻ ൌ Ͳ ݔሶ ൌ Ͳ ݕ௭ ൌ ݃௭ ሺ ݖ ሻǡݕ௨ ൌ ݃௨ ሺݑ ሻ
(2)
where ° denotes a steady-state solution for the system and the weighting matrices ܹ௨ and ܹ௭ is defined accordingly as diagonal matrices scaled with corresponding ଶ measurements as ͳΤݕො . An investigation of two calibration cases follows where the first is a calibration of each data set separately while the second case looks at simultaneously calibration of data sets A and B, called AB. When one model instance is calibrated the degrees of freedom is ܰ௨ ܰ , where ܰ௨ and ܰ is the number of inputs and parameters respectively. For calibration of two model instances simultaneously, the degrees of freedom is ʹܰ௨ ܰ . In order to assess the quality of the parameter estimates, confidence regions have been computed. An 1-Į marginal confidence interval means that there is 1-Į probability that the true parameter is within the estimated interval, which is derived from the parameter Jacobian that is obtained at steady-state as ܬൌ
݀ି ܨߜ ݃ߜ ݃ߜ ݕଵ ߜܨ ൌ െ ڄ൬ ൰ Ǥ ݀ݖߜ ݖߜ ߜ ߜ
(3)
The standard deviations ı of the measured outputs are needed to compute the covariance matrix ȱ ൌ ሺܹ ்ܬఙ ܬሻିଵ where ܹఙ is the diagonal weighting matrix with each output weighted with ͳȀߪ ଶ . Now, the standard deviation of the parameters can be estimated by ݏ ൌ ඥ݀݅ܽ݃ሺȱሻ and henceforth a parameter 1-Į marginal confidence interval can be estimated with ǡ௧ േ ݏ ܶ௩ ሺߙΤʹ ǡ ݊ െ ሻ
(4)
where ܶ௩ is Student’ T-distribution (Englezos and Kalogerakis 2001). 4.1. Case 1 – Calibration based on a single data set The calibration results for A, B, C, D are shown in Table 1 and Table 2. When comparing model output ݕto the output measurements a good agreement is noticed for the pre-polymerisation reactor. This is Table 2. The calibrated parameters כ with a 95% probably because there are only input confidence interval for all calibrations. All values signals in the objective function, while are scaled to the original parameter values. there are bigger differences in the loop A B C D AB and GPR reactors, where a trade-off between input signals and states 1.02± 1.18± 0.90± 0.91± 1.21± ࢙ 0.030 0.033 0.014 0.015 0.025 prevails. The calibration of ݔଶ and ݔଷ 0.99± 0.95± 0.73± 0.68± 1.00± is better than ݔଷ for all calibrations ࢋ 0.095 0.043 0.018 0.019 0.051 because they are directly affected by 1.43± 1.76± 1.31± 1.41± 1.60± their respective kinetic parameter ݇ଶ ࢎ 0.248 0.119 0.058 0.065 0.163 and ݇ଷ . In addition, ݉௦ଶ which are 0.88± 0.83± 0.68± 0.78± 0.86± ࢈ directly affected by ݈ݏଶ is nicely fitted. 0.068 0.026 0.023 0.018 0.040 Table 3 shows how the optimal cost is 1.29± 1.17± 1.53± 1.78± 1.22± ࢈ distributed between inputs and states, 0.119 0.041 0.071 0.033 0.040
Calibration of a polyethylene plant for grade change optimisations
677
where the inputs part is much smaller for all calibrations. This is probably due to the fact that inputs are easier to calibrate than for instance concentrations that depend on the other components. 4.2. Case 2 – Calibration of multiple data sets simultaneously When comparing the single data set calibrations A and B to the calibration of multiple data sets AB-A and AB-B, in Table 1, all optimised output values have good agreement. The parameters ݇ଶ ܾ݈ଷ and ݇ଷ in Table 2 shows calibrated values in between those of the single calibrations for A and B, while ݈ݏଶ and ݇ଶ values lies somewhat above, probably due to model non-linearity. The total Table 3. The part of objective function optimal cost of A, B and AB is 1.12, 0.97 and that is ݑand ݖfor all calibration sets. 2.46. The sum of the optimal costs in A and B A B C D AB (2.09) is as expected less than that of AB because the number of freedoms are higher ࡽ࢛ 10% 7% 20% 20% 9% when separately calibrated, because can ࡽࢠ 90% 93% 80% 80% 91% obtain different values.
5. Summary and Conclusions The paper shows an application of calibrating a Modelica model, of an existing Borstar® plant used at Borealis AB, with the Optimica extension in JModelica.org platform. The calibration results show a huge reduction of the optimal cost compared to that obtained with nominal parameter values and the model accuracy could be improved by applying the calibrated parameters. It also shows narrow confidence intervals both for parameters and inputs which is comparable to the standard deviation of the measurements. A comparison between the optimised parameters for the different data sets shows that at least ݇ଶ , ܾ݈ଷ and ݇ଷ have values distinctly different from the nominal parameter values (1) and a parameter adjustment should be beneficial for the model accuracy at all studied operating points. Some measurements have not equally good agreement, which may be explained by model errors or measurement sensors of various qualities, which need to be followed up. More work is to be done in the future by extending the model to also consider the recycle part of the plant, including distillation towers. Also, there are more parameter sets to examine and a single value decomposition analysis of the model parameters to additional investigate their identifiability, remains to be done. While the focus is to optimise grade transitions, it would be interesting to make an offline calibration on dynamic data for different cases.
References Englezos, P. and N. Kalogerakis (2001). Applied parameter estimation for chemical engineers. Larsson, P.-O., N. Andersson, et al. (2010). Modeling and Optimization of Grade Changes for a Polyethylene Reactor. 9th International Symposium on Dynamics and Control of Process Systems. Leuven, Belgium. Saarinen, M. and K. S. Andersen (2003). Applying Model Predictive Control in a Borstar Pilot Plant Polymerization Process. Wächter, A. and L. T. Biegler (2006). "On the implementation of an interior-point filter linesearch algorithm for large-scale nonlinear programming." Math. Programming 106(1): 25-58. Åkesson, J. (2008). Optimica--An Extension of Modelica Supporting Dynamic Optimization. In 6th International Modelica Conference 2008, Modelica Association. Åkesson, J., T. Bergdahl, et al. (2009). Modeling and Optimization with Modelica and Optimica Using the JModelica.org Open Source Platform. Proceedings of the 7th International Modelica Conference 2009, Modelica Association.
VW(XURSHDQ6\PSRVLXPRQ&RPSXWHU$LGHG3URFHVV(QJLQHHULQJ±(6&$3( (13LVWLNRSRXORV0&*HRUJLDGLVDQG$&.RNRVVLV(GLWRUV (OVHYLHU%9$OOULJKWVUHVHUYHG
0HPEUDQHSURFHVVRSWLPL]DWLRQIRUK\GURJHQ SHUR[LGHXOWUDSXULILFDWLRQ 5LFDUGR$EHMyQ$XURUD*DUHD$QJHO,UDELHQ 'HSDUWDPHQWRGH,QJHQLHUtD4XtPLFD\4XtPLFD,QRUJiQLFD8QLYHUVLGDGGH&DQWDEULD $Y/RV&DVWURV6DQWDQGHU6SDLQ
$EVWUDFW 7KH DSSOLFDWLRQ RI UHYHUVH RVPRVLV ZLWKRXW DQ\ DX[LOLDU\ WHFKQRORJLHV WR WKH XOWUDSXULILFDWLRQRIWHFKQLFDOJUDGHK\GURJHQSHUR[LGHWRREWDLQDTXDOLW\HQRXJKIRULWV XVH E\ WKH VHPLFRQGXFWRU LQGXVWU\ LV D JUHDW FKDOOHQJH 7KURXJK PRGHOLQJ EDVHG RQ PHPEUDQHWUDQVSRUWHTXDWLRQVDQG PDVV EDODQFHVGLIIHUHQWLQWHJUDWHGUHYHUVHRVPRVLV FDVFDGHV KDYH EHHQ SODQQHG DQG VLPXODWHG WR SURGXFH WKH VHYHUDO HOHFWURQLF JUDGHV FKHPLFDO7KHVHQVLWLYLW\RIGLIIHUHQWGHVLJQDQGRSHUDWLRQYDULDEOHVUHFRYHU\UDWHDQG DSSOLHGSUHVVXUH RQWKHV\VWHPSHUIRUPDQFHZDVDOVRLQYHVWLJDWHG)LQDOO\DQRQOLQHDU RSWLPL]DWLRQDSSURDFKWRPD[LPL]HHFRQRPLFSURILWZKLOHRSWLPL]LQJUHFRYHU\UDWHVDQG DSSOLHGSUHVVXUHVZDVFDUULHGRXW
Keywords: ultrapurification, reverse osmosis, hydrogen peroxide, process integration
1. Introduction
The semiconductor industry requires the highest standard of purity for hydrogen peroxide (H2O2), because its use for cleaning wafer surfaces of foreign contaminants or removing photoresists implies direct contact between the silicon and the chemical [1]. Among all the ultrapurification alternatives for this chemical, reverse osmosis emerges as the most desirable technology according to environmentally friendly criteria, as auxiliary chemicals are not needed and virtually zero effluent generation is achieved. The achievement of exigent electronic grades [2] by reverse osmosis without any auxiliary technique is a highly stimulating target, and a multistage process becomes indispensable. To combine a high removal of metallic impurities with a high recovery rate, integration of the several stages by recirculation of retentate streams is proposed. Through modeling based on transport equations and mass balances, different integrated reverse osmosis membrane cascades have been simulated to choose the most appropriate design for the production of each electronic grade of hydrogen peroxide. In addition, the effect of design and operation variables such as recovery rate and applied pressure upon the performance of the process is studied [3]. An optimization problem incorporating an economic model is formulated to assess the optimal values of the design and operation variables in order to maximize the profit of an installation.
2. Mathematical model
The simulation model was based on overall and component mass balances and the Kedem-Katchalsky equations for solvent and solute transport through reverse osmosis membranes [4]. To illustrate the model, the simulation of the integrated two-stage process (as shown in Fig. 1) is detailed.
Figure 1. General scheme of a two-stage integrated reverse osmosis process.
Firstly, the overall and metallic component mass balances for both modules are composed:

$F + R_2 = P_1 + R_1$  (1)
$F\,C_{F,metal} + R_2\,C_{R2,metal} = P_1\,C_{P1,metal} + R_1\,C_{R1,metal}$  (2)
$P_1 = P_2 + R_2$  (3)
$P_1\,C_{P1,metal} = P_2\,C_{P2,metal} + R_2\,C_{R2,metal}$  (4)

where $P_i$ and $R_i$ are the permeate and retentate volume flows, respectively, of the corresponding membrane module $i$, $F$ is the initial feed flow and $C_{y,metal}$ the metal concentration of the corresponding stream $y$. Secondly, the transport equations based on the simplified Kedem-Katchalsky model, which was previously validated [4], define the characteristic variables of the reverse osmosis membrane behavior: the specific permeate flux $J_{P,i}$ and the retention coefficient of each metal $R_{i,metal}$:

$J_{P,i} = L_P\,\Delta P_i$  (5)
$R_{i,metal} = \dfrac{J_{P,i}\,\sigma_{metal}}{J_{P,i} + \omega'_{metal}}$  (6)

where $\Delta P_i$ is the applied pressure in each module and $L_P$, $\sigma$ and $\omega'$ are the Kedem-Katchalsky parameters. Once the membrane transport model is described, the characteristics of the permeate streams (flow and metal concentrations) can be calculated:

$P_i = A_i\,J_{P,i}$  (7)
$C_{Pi,metal} = C_{INi,metal}\,(1 - R_{i,metal})$  (8)

where $A_i$ is the membrane area of the corresponding module and $C_{INi}$ the metal concentration of the inlet stream of the corresponding module. Finally, the recovery rate of each module ($Rec_i$) is defined:

$Rec_i = \dfrac{P_i}{P_i + R_i}$  (9)

For optimization purposes, the economic profit ($Z$) of the process is formulated as follows:

$Z = Revenue - \sum CapitalCosts - \sum OperationCosts$  (10)
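As an illustration of how Eqs. (1)-(9) close the two-stage cascade, the following Python sketch iterates the recycle loop until the concentrations converge. It is a minimal sketch, not the authors' Aspen Custom Modeler implementation; the feed values below come from Table 2, while the membrane areas and the Kedem-Katchalsky parameters (L_p, sigma, omega) are illustrative placeholders.

    def two_stage_cascade(F, C_F, A, dP, L_p, sigma, omega, n_iter=100):
        """Permeate of stage 1 feeds stage 2; retentate of stage 2 is
        recycled to the stage 1 inlet (Eqs. 1-4)."""
        J = [L_p * dP[0], L_p * dP[1]]                        # Eq. (5)
        R = [J[i] * sigma / (J[i] + omega) for i in (0, 1)]   # Eq. (6)
        P = [A[0] * J[0], A[1] * J[1]]                        # Eq. (7)
        R2, C_R2 = P[0] - P[1], C_F           # Eq. (3) and an initial guess
        for _ in range(n_iter):
            C_in1 = (F * C_F + R2 * C_R2) / (F + R2)  # mixed stage 1 inlet
            C_P1 = C_in1 * (1.0 - R[0])               # Eq. (8), stage 1
            C_P2 = C_P1 * (1.0 - R[1])                # Eq. (8), stage 2
            C_R2 = (P[0] * C_P1 - P[1] * C_P2) / R2   # Eq. (4) rearranged
        return P[1], C_P2                    # product flow and its quality

    # Feed data from Table 2; the remaining values are placeholders only.
    P2, C_Na = two_stage_cascade(F=24.2, C_F=25000.0, A=(15.0, 12.0),
                                 dP=(40.0, 40.0), L_p=0.05, sigma=0.99,
                                 omega=1e-3)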
3. Results and discussion
The Aspen Custom Modeler software was employed to simulate the behavior of the designed membrane process. Table 1 shows the number of stages of the most appropriate simulated membrane cascades for the production of each electronic grade of hydrogen peroxide, considering Na as the limiting impurity [5]. It can be observed that a simple installation with just a few stages is enough for obtaining the least strict grade. On the other hand, the integration of more stages becomes necessary when the most exigent grade is desired.

Table 1. Number of stages required for the production of each electronic grade of H2O2 (columns: hydrogen peroxide quality, Na limit in ppb, number of stages, product Na concentration in ppb, and total membrane area in m2, for Grades 1 to 5).
The results of Table 1 are calculated for fixed recovery rates and applied pressures. The effect of varying each variable, keeping the other constant, on the performance of a two-stage installation is graphed in Figs. 2 and 3.
Figure 2. Dependence of the applied pressures over the two reverse osmosis stages upon (a) the total membrane area (AT) and (b) the concentration of Na in the second stage permeate (CP2,Na).
High applied pressures are beneficial for the process in both equipment and product quality terms: the total membrane area required for a constant recovery rate is smaller when high pressures are chosen, and a final product stream with lower metallic content is obtained. The counterbalance to these advantages is a higher energy consumption. The design decision about the recovery rates should balance the conflicting effects those variables produce on the process performance. Obviously, high recovery rates imply a high product stream flow (graphically expressed as total recovery), but at the expense of greater membrane area and worse product quality. The total membrane area of the system can be reduced mainly by diminishing the recovery rate of the first stage, while the same action on the second stage is more efficient to improve the product quality.
The optimization strategy was formulated as follows (FO):
Maximize Z (Eq. 10) with respect to Reci and ΔPi,
subject to: the process model (Eqs. 1-9),
and the variable limits Reci ≤ Recupper and ΔPi ≤ ΔPupper.
Figure 3. Dependence of the recovery rates of the two reverse osmosis stages upon (a) the total membrane area (AT), (b) the concentration of Na in the second stage permeate (CP2,Na) and (c) the total recovery rate (RecT).
The feed stream characteristics, the membrane properties and the product specifications must be given as model parameters [5]. The optimal values assessed by the GAMS software (NLP algorithm CONOPT) for the design and operation variables of a two-stage installation are shown in Table 2. For all the variables, the optimums coincide with the upper limits of each defined range. This behaviour is maintained when optimizing installations with a bigger number of stages. High recovery rates and applied pressures signify high production, which compensates the greater costs in membrane equipment or energy. As the margin to fulfill the specifications is wide enough, the quality of the product stream is not a limiting constraint of the system. The profits (objective function) corresponding to each optimized multistage process are summarized in Figure 4.
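A rough stand-in for the GAMS/CONOPT solution, assuming a SciPy NLP solver and a toy profit surrogate in place of the full economic model (which is not reproduced here):

    from scipy.optimize import minimize

    def profit(v):
        rec1, rec2, dp1, dp2 = v
        # Toy surrogate only: higher recovery raises production revenue,
        # higher pressure shrinks the required membrane area but costs energy.
        revenue = 2400.0 * rec1 * rec2
        membranes = 6000.0 * (rec1 / dp1 + rec2 / dp2)
        energy = 4.0 * (dp1 + dp2)
        return revenue - membranes - energy

    # Bounds play the role of the Rec_upper / dP_upper limits above.
    bounds = [(0.1, 0.9), (0.1, 0.9), (5.0, 40.0), (5.0, 40.0)]
    res = minimize(lambda v: -profit(v), x0=[0.5, 0.5, 20.0, 20.0],
                   bounds=bounds, method="SLSQP")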
4. Conclusions
This work has expounded a model to describe the performance of a reverse osmosis installation for the ultrapurification of hydrogen peroxide. The number of stages required for each electronic grade was determined by means of simulation, with the stricter grades resulting in installations with a larger number of stages.
Table 2. Optimization results of a two-stage installation.

Feed conditions: F = 24.2 m3/d; CF(Na) = 25000 ppb
Optimal values: Rec1 = 0.9; Rec2 = 0.9; ΔP1 = 40 bar; ΔP2 = 40 bar
Technical results: AT = 26.8 m2; P2 = 21.5 m3/d; CP2(Na) = 182 ppb

Cost indicators ($/d):
Profit 34928; Revenue 57405; Costs 22477
Capital costs 3041 (attributable to membranes 446; to the rest of the installation 2595)
Operating costs 19436 (attributable to raw material 19110; to labor 168; to energy 6; to maintenance 152)
The effect of the design and operation variables (recovery rates and applied pressures) on the performance of the system was studied by a sensitivity analysis, as a previous step before formulation of the optimization of the same variables. It was found that the recovery rates and applied pressures should be as high as possible to maximize the economic profit of the process.
Figure 4. Maximum profit for each electronic grade.
Acknowledgements
This research has been financially supported by the Ministry of Science and Innovation of Spain (MICINN) through the CTM Project. R. Abejón also acknowledges the assistance of MICINN for the award of an FPI grant (BES).
References
[1] J. Atsumi, S. Ohtsuka, S. Munehira, K. Kajiyama, Proc. Electrochem. Soc.
[2] SEMI Document C.
[3] K.M. Sassi, I.M. Mujtaba, ESCAPE.
[4] M. Soltanieh, W.N. Gill, Chem. Eng. Commun.
[5] R. Abejón, A. Garea, A. Irabien, Separ. Purif. Technol.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Dynamic optimization of a porous media combustor using a grey-box neural model and NMPC technique
Luis Henríquez-Vargas, Valeri Bubnovich and Francisco Cubillos
Chemical Engineering Department, Universidad de Santiago de Chile, Casilla, Santiago, Chile
Abstract
In this work we study, by simulations, the dynamic optimization of the burner power and the positioning of the combustion front in a uniform porous media combustor using a nonlinear model predictive control (NMPC) scheme. In this approach, the combustion front can be fixed to a desired value by changing the velocity of the combustion wave through the manipulation of the operational variables: filtration velocity and composition of the gas mixture. The NMPC control strategy was a receding horizon control scheme. A grey-box neural type model (GNM) was used to describe the combustion wave dynamics as the inner model in the NMPC strategy. The GNM is based on fundamental conservation laws associated with neural networks (NN) used to model uncertain parameters. In this case, the NN was used to estimate the gain factor in a second order prediction model. The study was carried out by means of simulations considering an experimental filtration combustion (FC) burner (CH4/air mixtures), manipulating the gas filtration velocity and the fuel equivalence ratio. The controlled variables were the flame position along the burner and the firing rate. Results show that confinement of the combustion front and the firing rate set point can be achieved by manipulating the operational variables, giving two operation modes for the combustion front: cyclic and set point tracking schemes.
Keywords: Combustor, Control, NMPC, neural networks
1. Introduction
Filtration combustion (FC) is generated when an incoming fuel-oxidizer mixture flows and reacts in the interstitial space of a porous matrix. Due to its better thermal properties, the porous material allows efficient redistribution of the energy released in the gas phase chemical reaction (Barra and Ellzey). The simplest reactor where FC can develop is a tube filled with a uniform inert porous material through which a fuel-oxidizer mixture is flowing. In terms of operations, this reactor presents an operation time determined by the combustion wave speed and the distance left for travel towards the reactor's outlet, for which the combustion wave is headed. Due to the unsteady nature of the FC phenomenon, confinement methods for the combustion wave have been developed (Kennedy et al.; Smucker and Ellzey). A way to optimize the combustion process is the positioning of the combustion front and the combustor power by means of advanced control algorithms, specifically a Nonlinear Model Predictive Control (NMPC) framework. In NMPC, a nonlinear predictive model of the process is used to compute the future control movements in order to obtain a defined trajectory of the system states. A grey-box neural type model (GNM) was used to describe the combustion wave
dynamics as the inner model in the NMPC strategy. The GNM is based on fundamental conservation laws associated with neural networks (NN) used to model uncertain parameters. In this case, the NN was used to estimate the gain factor in a second order prediction model.
2. Model and simulation
2.1. Experimental burner
The system to be controlled is a porous media burner, represented in Figure 1. It consists of a quartz cylinder of length L with inner diameter dt and a wall thickness of a few millimetres. The porous media consists of randomly arranged alumina spheres with diameter dm, resulting in a volumetric porosity m. The CH4/air gas mixture enters at the bottom of the burner at interstitial velocity ug and temperature T0. The composition of the mixture is characterized by the equivalence ratio Φ.
2.2. First Principles Model
The physical system will be represented by a first principles model. The one-dimensional set of equations describing the combustion waves in inert porous media with one-step kinetics has the form (Bubnovich et al.):

$\dfrac{\partial(\rho_g v_g)}{\partial z} = 0$  (1)
$\rho_g \dfrac{\partial w}{\partial t} + \rho_g v_g \dfrac{\partial w}{\partial z} = \dfrac{\partial}{\partial z}\left[\rho_g (D_m + D_d) \dfrac{\partial w}{\partial z}\right] + W$  (2)
$W = -K\, w\, \rho_g \exp\!\left(-E/(R\,T_g)\right)$  (3)
$\rho_g c_g \dfrac{\partial T_g}{\partial t} + c_g \rho_g v_g \dfrac{\partial T_g}{\partial z} - \dfrac{\partial}{\partial z}\left[(\lambda_g + D_d \rho_g c_g)\dfrac{\partial T_g}{\partial z}\right] = \alpha_{vol}(T_s - T_g) - W\,Q$  (4)
$(1-m)\,\rho_s c_s \dfrac{\partial T_s}{\partial t} - \dfrac{\partial}{\partial z}\left[\lambda_{eff}\dfrac{\partial T_s}{\partial z}\right] = \alpha_{vol}(T_g - T_s) - \beta(T_s - T_{ext})$  (5)

Equations (1)-(5) have the boundary and initial conditions:
$\left.\dfrac{\partial T_g}{\partial z}\right|_{z=L} = \left.\dfrac{\partial w}{\partial z}\right|_{z=L} = \left.\dfrac{\partial T_s}{\partial z}\right|_{z=L} = 0$, with $w = w_0$ and $T_g = T_s = T_0$ at $t = 0$.

The governing equations are finite-differenced, treating the convective terms with an up-winded scheme, while the diffusive terms are discretized using a second order technique. The solution of the system is performed via the Thomas algorithm.
2.3. Open loop results
Figure 2 presents the simulation of the burner temperature profiles with time in open loop operation, i.e. without a feedback control action. The combustion front position zw is defined in this work as the position of the solid temperature peak.
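The Thomas algorithm mentioned above is the standard O(n) forward-elimination/back-substitution solver for the tridiagonal systems that the finite-difference discretization produces at each time step; a generic sketch (not the authors' code) is:

    def thomas(a, b, c, d):
        """Solve a tridiagonal system with sub-diagonal a, diagonal b,
        super-diagonal c and right-hand side d (length-n lists; a[0]
        and c[-1] are unused)."""
        n = len(d)
        cp, dp = [0.0] * n, [0.0] * n
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                      # forward elimination
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = [0.0] * n
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):             # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x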
Figure 1. Experimental setup: packed bed of Al2O3 spheres inside a quartz tube with external heat insulation (bed length L; tube inner and outer diameters as described in Section 2.1).
Figure 2. Combustion fronts: gas (Tg) and solid (Ts) temperature profiles versus axial distance (cm); the numbers indicate seconds passed from ignition.
3. Control Problem
3.1. Control formulation
Due to the combustion front mobility, the porous media combustor has a limited operation time, given by the combustion wave speed and the distance left to travel towards the reactor outlet. The flame position inside the reactor is determined by the combustion wave velocity, which can be changed through manipulation of the operational variables. The control strategy to be implemented will be a receding horizon control scheme (see Alamir and Bornard). The objective function formulated penalizes the deviations of the controlled variables (x, FR) from their reference trajectories (xd, FRd) and the control effort Δu:

$\min_{u}\; V(x(k),u) = \sum_{i=1}^{N}\left[\delta_{x,i}\left(x^{u}_{k+i}(x(k)) - x^{d}_{k+i}\right)^2 + \delta_{u,i}\left(\Delta u_{k+i}\right)^2 + \delta_{FR,i}\left(FR^{u}_{k+i} - FR^{d}_{k+i}\right)^2\right]$

subject to $x_{min} \le x^{u} \le x_{max}$, $u_{min} \le u \le u_{max}$ and $\Delta u_{min} \le \Delta u \le \Delta u_{max}$,
with $x = [x_1\; x_2\; x_3\; x_4]^T = [z_w\; v_w\; v_g\; \Phi]^T$ and $u = [u_1\; u_2]^T = [v_g\; \Phi]^T$.
3.2. GNM prediction model
It is observed that the combustion wave dynamics can be modeled by a second order underdamped model:

$v_w = v_{w0} + X(\Delta u)\left[1 - e^{-p\,t}\left(\cos(q\,t) + \dfrac{\xi}{\sqrt{1-\xi^2}}\sin(q\,t)\right)\right]$

$p = \omega\,\xi, \qquad q = \omega\sqrt{1-\xi^2}$

where $\xi$ and $\omega$ are the damping coefficient and the natural frequency of the system. The variable gain $X(\cdot)$ represents the distance between two stationary values of
successive changes in the inputs $\Delta u = [\Delta v_g\; \Delta\Phi]^T$, and can be estimated by a neural network with the system manipulated variables as input:

$X(\Delta u) = 0$ if $\Delta u = 0$; $X(\Delta u) = NN(\Delta u)$ if $\Delta u \neq 0$

where $NN(\Delta u)$ represents a traditional feedforward neural network. The best topology obtained for evaluating the variable gain parameter was a net with hyperbolic tangent sigmoid transfer functions for the hidden layer and a linear (purelin) transfer function for the output layer. Training of the neurons was carried out with data obtained from the simulated burner.
3.3. Results
The constrained nonlinear optimization problem was solved with the sequential quadratic programming algorithm implementation presented in the Matlab Optimization Toolbox. Figure 3a shows tracking of an imposed set point profile for the front position at fixed time step increments of the control period; the firing rate must be kept constant. Another alternative for the confinement of the combustion front, apart from the set point tracking scheme, is accomplished by periodically reversing the set point values when the combustion front reaches a tolerance distance from them. Figure 3b illustrates this idea with two alternating set point values and a fixed tolerance imposed for alternating the set points; again, the firing rate must be kept constant. The combustion front evolution is then periodically reversed, consequently maintaining both the sub- and super-adiabatic effects along the up- and down-stream directions, respectively. In both operation schemes the NMPC approach is able to keep the power requirements constant. The total operation time is approximately the same for both configurations.
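A minimal sketch of the grey-box prediction, assuming a hypothetical trained network nn_gain in place of the real one; the damping coefficient and natural frequency values below are placeholders:

    import math

    def nn_gain(d_vg, d_phi):
        # Stand-in for the trained feedforward NN(du); zero when du = 0.
        if d_vg == 0.0 and d_phi == 0.0:
            return 0.0
        return 0.1 * d_vg + 0.05 * d_phi          # illustrative surrogate

    def wave_speed(t, vw0, d_vg, d_phi, xi=0.5, omega=1.0):
        # Second order underdamped step response with variable gain X(du)
        p = omega * xi
        q = omega * math.sqrt(1.0 - xi ** 2)
        step = 1.0 - math.exp(-p * t) * (math.cos(q * t)
               + xi / math.sqrt(1.0 - xi ** 2) * math.sin(q * t))
        return vw0 + nn_gain(d_vg, d_phi) * step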
4. Conclusions
An NMPC algorithm with a second order underdamped inner model was used as a method for positioning the combustion wave at constant power in a FC burner by changing the inlet mixture conditions. The prediction model in the NMPC was a GNM model where the neural network was used to estimate the gain factor. The GNM gives better flexibility and robustness in the prediction of the extreme operation points. Several tests of the NMPC-GNM controller for the set point position tracking scheme and for periodically reversible operation were conducted. Results showed excellent stability, fast response and zero offset for the position of the combustion front. In both operation schemes, the NMPC-GNM approach is able to keep the desired power requirements constant. The presented approach allows the user to decide the combustion region position in the reactor, giving flexibility for applications, for instance, in energy conversion and fuel reforming. It may also be seen as a way to optimize the combustion process by means of an increase in the life cycle of the burner, the planning of the combustion profiles in the burner and a better control of the combustion process, fixing its power with a view to reducing pollutant formation.
Figure 3. Closed loop results for the chosen control parameters (sampling period τc, horizon N and weights δ): (a) front position tracking; (b) cyclic operation. Both panels show the front position zw (cm) and the firing rate FR (kW/m2), together with their set points zwd and FRd, versus the indexation k.
Acknowledgements
The support of CONICYT Fondecyt Grants is greatly acknowledged.
References
A.J. Barra and J.L. Ellzey. Heat recirculation and heat transfer in porous burners. Combust. Flame.
L.A. Kennedy, F. Contarin, A.V. Saveliev and A.A. Fridman. A reciprocal flow filtration combustor with embedded heat exchangers: numerical study. Int. J. Heat Mass Transfer.
M.T. Smucker and J.L. Ellzey. Computational and experimental study of a two-section porous burner. Combust. Sci. Tech.
V. Bubnovich, L. Henríquez and N. Gnesdilov. Numerical study of the effects of the diameter of alumina balls on flame stabilization in a porous medium burner. Numerical Heat Transfer, Part A: Applications.
M. Alamir and G. Bornard. On the stability of receding horizon control of nonlinear discrete systems. Systems & Control Letters.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Monte Carlo Assessment of the Arrival Cost Evaluation Method in Moving Horizon Estimation for Chemical Processes Rincón Cuellar, F.D.,a Hirota, W.H.,a Giudici, R.,a Le Roux, G.A.C.a a
Department of Chemical Engineering, Polytechnic School of the University of São Paulo, Av. Luciano Gualberto trav. 3, 380, São Paulo, 05508-900, Brazil.
Abstract The aim of this work is to compare the performance of online estimators through a Monte Carlo study. The online estimators compared are the Extended Kalman Filter (EKF), the Unscented Kalman Filter (UKF), and the moving horizon estimator (MHE) with the arrival cost estimated by EKF and UKF. These filters were applied to two different systems: a benchmark batch fermentation reactor simulation and an experimental calorimetric reactor. It is shown that the MHE has a better performance in terms of bias and standard deviation, and that it did not present any divergence in all the simulations performed. Keywords: Monte Carlo, Extended Kalman Filter, Unscented Kalman Filter, Moving Horizon Estimator, Arrival cost.
1. Introduction In spite of the advances in on-line process analyzers, some measurements, like composition, are still expensive and difficult to maintain. On-line state and parameter estimation is a technical alternative for monitoring and controlling those variables that are difficult to measure. At present, the most widely used method for online estimation in chemical processes is the Extended Kalman Filter (EKF). However, the EKF presents several flaws that may seriously affect its performance. In addition, it is deduced from an intricate mathematical formalism that makes it difficult to generalize, and it depends on some non-intuitive parameters. On the other hand, the Moving Horizon Estimation (MHE) approach may prove to be a better suited alternative to the EKF. It has the ability to handle constraints on state and parameter estimates in an intuitive way. The basic idea of MHE is to minimize the weighted sum of the squared errors between the measurement data and the model predictions available in a time window. As soon as a new measurement becomes available, the window slides one step ahead. The weighting matrices are updated and the optimization problem is repeated with this new data set. The past information is referred to as the arrival cost and is condensed into a matrix that is used to weight the initial points. The choice of the arrival cost remains an open issue in MHE research. The conventional
method is to use the EKF in order to estimate the arrival cost, but other techniques have been developed over the recent years. The Particle Filter (PF) and the Unscented Kalman Filter (UKF) are promising alternatives for the approximation of the arrival cost for MHE. In the literature, in general, the performance of online estimators is assessed by simply comparing results obtained from arbitrary realizations of the random variables of the model, thus not taking into account the intrinsic stochastic interpretation. In this work we propose to evaluate the performance of the estimators by Monte Carlo simulations, thus obtaining an estimate of the distribution of the observer, which is a better way to describe the performance of the different estimators. Two cases are studied: first, a simulation example of a batch fermentation reactor, and second, an experimental device corresponding to a non-adiabatic calorimetric reactor for which several data sets are available.
2. State estimation for nonlinear systems
State estimation is applied assuming that the system is described by a set of nonlinear stochastic state space equations:

$\dot{x}(t) = f[x(t), p(t), u(t), w(t)], \quad y(k) = h[x(k)] + v(k), \quad x(t_0) = x_0 + e$  (1)

where x is the state vector with initial value x(t0), u(t) is a known input vector, y is the measurement vector, p is the vector of unknown system parameters, and f[.] and h[.] are nonlinear functions. The measurement noise vector v and the state noise vector w are assumed to be independent zero-mean white Gaussian noise. The index k represents the sampling time.
2.1. The Extended Kalman Filter
The equations defining the discrete-time form of the EKF are well-known and can be found in, for example, Walter and Pronzato (1997). For this estimator the covariance matrix of the state estimate is given by:
$P_{k/k-1} = A_{k-1}\,P_{k-1/k-1}\,A_{k-1}^T + G_{k-1}\,Q\,G_{k-1}^T$  (2)
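In matrix form, the covariance time update of Eq. (2) is a one-liner; a small NumPy sketch, with A taken from the sensitivity equations and G the state-noise input matrix (all values below are placeholders):

    import numpy as np

    def ekf_time_update(P, A, G, Q):
        # P_{k/k-1} = A_{k-1} P_{k-1/k-1} A_{k-1}^T + G_{k-1} Q G_{k-1}^T
        return A @ P @ A.T + G @ Q @ G.T

    P0 = 0.1 * np.eye(3)
    A = np.eye(3)            # sensitivities over one sampling interval
    G = np.eye(3)
    Q = 1e-4 * np.eye(3)
    P_pred = ekf_time_update(P0, A, G, Q)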
where A, the sensitivities matrix, is normally approximated by the first-order expansion of the Taylor series, which can be dangerous for highly nonlinear systems. In this work we have solved the sensitivity equations in order to obtain a better approximation for A.
2.2. Unscented Kalman Filter (UKF)
The UKF uses symmetrical sigma points in order to represent the evolution of the covariance:
$\chi^{a}_{k-1} = \hat{x}_{k-1}\,\mathbf{1}_{1\times(2n_a+1)} + \left[\,0,\; +\sqrt{(n_a+\kappa)\,P^{a}_{k-1/k-1}},\; -\sqrt{(n_a+\kappa)\,P^{a}_{k-1/k-1}}\,\right]$  (3)
Each of the sigma points is propagated through the nonlinear model functions f[.] and h[.]. Weighted means and covariances are then computed from the transformed set of points, as presented in Teixeira et al. (2010) and used in this work:
$P_{k/k-1} = \sum_{i=1}^{2n_a+1} W_i\,\left[\chi^{x}_{i,k} - \hat{x}_{k/k-1}\right]\left[\chi^{x}_{i,k} - \hat{x}_{k/k-1}\right]^T$  (4)
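Equations (3) and (4) translate directly into code; the sketch below uses a Cholesky factor as the matrix square root and uniform weights, which is an assumption rather than the exact weighting of Teixeira et al. (2010):

    import numpy as np

    def sigma_points(x, P, kappa=0.0):
        # Symmetric sigma points around x (Eq. 3)
        n = len(x)
        S = np.linalg.cholesky((n + kappa) * P)
        return np.vstack([x, x + S.T, x - S.T])       # shape (2n+1, n)

    def predicted_covariance(chi, x_pred):
        # Weighted outer products of the deviations (Eq. 4)
        W = np.full(len(chi), 1.0 / len(chi))         # uniform weights
        d = chi - x_pred
        return (W[:, None] * d).T @ d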
2.3. Moving Horizon Estimation
The basic idea of MHE is to minimize the weighted sum of the squared errors between the measurement data and the model predictions available in a time window, comprising a given number of sampling times (N). As soon as a new measurement is available, the window slides one step ahead. The weighting matrices are updated and the optimization problem is repeated with this new data set. The past information is referred to as the arrival cost and is condensed into a matrix that is used to weight the initial points. The choice of the arrival cost remains an open issue in MHE research. The conventional approach is to use the EKF in order to approximate the arrival cost, but other techniques have been suggested over the recent years. Arrival costs, in this work, are computed by two different methods, based on the EKF (called eMHE) and based on an UKF filter (called uMHE) with interval constraints, as proposed by Teixeira et al. (2010).
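A compact sketch of the window optimization just described, with hypothetical model f, measurement map h and weight matrices; the arrival cost enters as a quadratic penalty on the first state of the window:

    import numpy as np
    from scipy.optimize import minimize

    def mhe_cost(z, y_win, f, h, Pi_inv, R_inv, Q_inv, x_prior, n):
        X = z.reshape(-1, n)                  # N stacked window states
        e0 = X[0] - x_prior
        cost = e0 @ Pi_inv @ e0               # arrival cost term
        for k, y in enumerate(y_win):
            r = y - h(X[k])
            cost += r @ R_inv @ r             # measurement residuals
            if k + 1 < len(X):
                w = X[k + 1] - f(X[k])
                cost += w @ Q_inv @ w         # state-noise residuals
        return cost

    # res = minimize(mhe_cost, z0, args=(y_win, f, h, Pi_inv, R_inv,
    #                Q_inv, x_prior, n), method="SLSQP")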
3. Case studies
3.1. Systems
The first system studied corresponds to a benchmark model of a batch fermentation reactor, presented by Robertson et al. (1996). Five state variables (cell, gluconolactone, gluconic acid, glucose and dissolved oxygen concentration) and the maximum specific growth rate are estimated. It is assumed that only measurements of CA, CG and CO are available. The second system corresponds to an experimental non-adiabatic calorimetric reactor where the hydrolysis of acetic anhydride takes place (Hirota et al., 2010). Four state variables (acetic anhydride, water and acetic acid concentrations, and temperature) and the heat transfer coefficient are estimated. Only the measurement of Tr is available.
3.2. Monte Carlo Studies
For the first system, Monte Carlo simulations are performed to evaluate the filtering techniques under consideration. The system with the filter is simulated by introducing random noise in the three random vectors of the system (v, w and x(t0)) following their known probability distribution. 600 runs were carried out for EKF and UKF, and 100 for eMHE and uMHE with N=2. The trajectories were recorded, and their distribution and moments can be analysed, as will be shown further. For the second system, 19 experimental data sets were analyzed. The data were obtained from experimental teaching activities and the results will be presented in the next section.
4. Results
4.1. Batch Fermentation Reactor
In figure 1, the distributions of the μm estimate obtained by EKF and UKF are presented. At each time instant a histogram of the Monte Carlo distribution is built and represented by means of colours. The colour approaches red for high densities and goes to dark blue for low densities. In figure 1 it can be noted that EKF and UKF may diverge and that the distribution is rather wide. In figure 2 the distributions of the estimates from eMHE and uMHE are presented. Both distributions are narrower and no divergence was observed. In figure 3 the mean and standard deviation of these distributions are presented for the four filters. The difference between the variances of the MHE and Kalman filters is noticeable. Other interesting results concerning measured and estimated states cannot be shown here because of the article size constraint.
Figure 1. μm estimated by EKF and UKF.
Figure 2. μm estimated by eMHE and uMHE.
Figure 3. Mean (left) and variance (right) of the μm estimate by EKF, UKF, eMHE and uMHE.
4.2. Calorimetric Reaction
The mean and standard deviation of the error of the local estimates, compared to the estimate obtained by considering the whole experimental data set ("optimal" parameters), are presented in figure 4. It can be noted that eMHE presents a bias trend and a standard deviation which is much larger than that of uMHE.
Figure 4. Error of UA estimated by eMHE and uMHE.
5. Conclusions
The performance of EKF, UKF, eMHE and uMHE was assessed by applying the Monte Carlo method to a simulation benchmark and an experimental system. The methodology is useful in describing the performance of the filters in a stochastic framework (the estimators are random variables). It can be concluded that the MHE has a better performance in terms of bias and standard deviation, and it did not present any divergence in this study, where it was applied more than a hundred times.
6. Acknowledgements We acknowledge the financial support received from the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES - Brazil) and CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico - Brazil).
References
Hirota, W.H. et al., 2010. Hydrolysis of acetic anhydride: Non-adiabatic calorimetric determination of kinetics and heat exchange. Chemical Engineering Science, 65(12), 3849-3858.
Robertson, D.G., Lee, J.H., Rawlings, J.B., 1996. A moving horizon-based approach for least-squares estimation. AIChE J., 42(8), 2209-2224.
Walter, E. and Pronzato, L., 1997. Identification of Parametric Models from Experimental Data. Springer, Berlin.
Teixeira, B.O. et al., 2010. On unscented Kalman filtering with state interval constraints. J. Proc. Con., 20(1), 45-57.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Adaptive Advanced Control of a Copolymerization System Nádson M. N. Lima,a Lamia Zuñiga Liñan,a Flavio Manenti,b Rubens Maciel Filho,a Marcelo Embiruçu,c Maria R. Wolf Maciel,a a
University of Campinas (UNICAMP), Department of Chemical Processes, Barão Geraldo, 13083-970, Campinas-SP, Brazil, [email protected] b Politecnico di Milano, CMIC dept. “Giulio Natta”, Piazza Leonardo da Vinci 32, 20133, Milano, Italy, [email protected] c Federal University of Bahia, Polytechnic Institute, Federação, 40210-630, SalvadorBA, Brazil, [email protected]
Abstract The development and implementation of two multivariable nonlinear fuzzy model-based adaptive predictive control schemes for a copolymerization process is described in this paper. Multi-input/single-output models are developed using fuzzy logic and combined to form a parallel system model for simulation and on-line prediction. The behavior of the outlined controllers was compared to the dynamic matrix control (DMC) and to a typical nonlinear model-based predictive control (typical NMPC) for the regulatory problem. The obtained results showed the effectiveness of the proposed structures. Keywords: multivariable nonlinear model-based predictive control, adaptive control, nonlinear system identification, nonlinear dynamic fuzzy model, copolymerization.
1. Introduction Polymerization processes are multivariable and nonlinear in nature, making the performance of conventional controllers poor or requiring considerable effort in controller tuning (Lima et al., 2009). To cope with this fact, model-based predictive control (MPC) has been the most successful advanced control technique applied in the process industries. This formulation naturally handles time-delays, multivariable interactions and constraints (Manenti et al., 2009). Also, the dynamic behaviour often changes in such systems. Thus, the incorporation of adaptive methodologies in the formulation of predictive control can be very useful; in addition, if well designed, adaptive formulations require almost the same computational effort. MPC uses a process model to predict future outputs. However, such a model is often very hard to obtain through a first-principles approach. The difficulties are related to the large number of differential algebraic equations, and the solution of these models involves a considerable mathematical effort, requiring simplifications which create uncertainties about the quality of the solutions obtained (Lima et al., 2010). This paper presents the development of two multivariable nonlinear adaptive predictive controllers based on nonlinear fuzzy models, with linear and exponential functional structures for each rule of the model (in the following, L-ANFMPC and E-ANFMPC, respectively), for polymerization processes. So, internal models for the controllers are developed through fuzzy logic, taking into account process restrictions and nonlinearities. Such an approach has the great advantage of not requiring system
fundamental knowledge, which makes it widely applicable to complex systems. The copolymerization of methyl methacrylate with vinyl acetate was adopted to validate the proposed approaches and to compare them to two well-established optimal control methodologies [Dynamic Matrix Control (DMC) and typical Nonlinear Model-based Predictive Control (typical NMPC)]. Four output variables were analyzed for the regulatory problem. The obtained results showed that the designed control structures are robust, of simple implementation, and appear to hold considerable promise for such a reaction system. A detailed differential and algebraic mathematical model consists of 53 equations and was implemented in Fortran 90/95 to simulate the plant and set up the typical NMPC. The numerical solution was performed by using the IMSL library.
2. Identification of Dynamical Fuzzy Models A fuzzy implication is defined by expressions like: IF premise (antecedent), THEN conclusion (consequent). This logical structure is commonly referred to as the IF-THEN rule-based form. Thus, according to Lima et al. (2007), the first point to be considered in fuzzy modeling is the definition of the fuzzy model structure that composes the system base of rules. Takagi and Sugeno (1985) proposed a design and analysis scheme for overall fuzzy systems, where the qualitative knowledge of a process is first represented by a set of local Takagi-Sugeno fuzzy models. This approach involves fuzzy sets for the premise portion and a linear equation of the input variables for the conclusion in each rule. A complex, high-dimensional and nonlinear modeling problem is decomposed into a set of simpler linear models valid within certain operating regimes defined by fuzzy boundaries. Fuzzy inference is hence used to interpolate the outputs of the local models in a smooth fashion to get a global model. For the system analyzed in this paper, the Takagi-Sugeno structure is used for the nonlinear fuzzy model with a linear structure for each rule of the model, and it is made exponential for the nonlinear fuzzy model with an exponential structure. The subtractive clustering method is employed for the determination of the number of rules and the parameters of the Gaussian membership functions. The consequent function parameters are obtained by solving a least squares optimization problem. The next step is the data generation for the identification of the models. At first, the training data set is generated and the model parameters are evaluated. These models are then validated through the test data, observing the average quadratic error between the predicted outputs and the real outputs (differential and algebraic mathematical model).
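A compact sketch of the resulting Takagi-Sugeno prediction: Gaussian memberships fire each rule, and a normalized weighted sum interpolates the per-rule linear consequents. The centers, widths and consequent parameters (placeholders below) would come from the subtractive clustering and least squares steps described above:

    import numpy as np

    def ts_predict(x, centers, widths, theta):
        # One Gaussian per rule and input; theta row = [bias, coefficients]
        w = np.exp(-np.sum(((x - centers) / widths) ** 2, axis=1))
        w = w / w.sum()                            # normalized firing strengths
        y_rules = theta[:, 0] + theta[:, 1:] @ x   # linear consequents
        return w @ y_rules                         # smooth interpolation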
3. Adaptive Fuzzy Model-Based Predictive Controllers Predictive control generates profiles of the manipulated variables by means of the minimization of some performance indexes over time. A feedback loop is incorporated in the control structure because the measurements are used to update the optimization problem for the next time step. The method is aimed at calculating a set of CH (Control Horizon) future input movements such that the sum of the squared deviations between the output projections, over PH (Prediction Horizon) future time intervals, and the desired values is minimized, using a moving horizon methodology. Thus, future outputs are driven close to the reference trajectory. The basic idea of the multivariable predictive algorithm is to: I. Calculate the reference trajectory for each output variable Io (y_Io^d); II. Estimate the closed-loop output predictions (ŷ_Io^CLpred) using the process prediction models. In this paper, these models are formulated in the form of nonlinear functional fuzzy models with linear and exponential
structures, for each rule of the model, for the two nonlinear adaptive fuzzy model-based predictive controllers proposed. The adaptive methodology is based on multiple fuzzy models for each controller (Dougherty and Cooper, 2003). The used model in a sampling instant is selected by the current measured value for each controlled variable; III. Compute the errors between predicted and reference trajectories; IV. Estimate the sequence of future controls (movements) of each manipulated variable Ii ( u ) through the minimization of the objective function J, expressed by Eq. (1):
$J = \sum_{Io=1}^{NOV}\sum_{n=1}^{PH} w_{Io}\left(y^{d}_{Io,n} - \hat{y}^{CLpred}_{Io,n}\right)^2 + \sum_{Ii=1}^{NIV}\sum_{k=1}^{CH}\left[f_{Ii}\left(\Delta u_{Ii,k}\right)_{new}\right]^2$  (1)

subject to: $u_{Ii,min} \le u_{Ii,k} \le u_{Ii,max}$, $\Delta u_{Ii,min} \le (\Delta u_{Ii,k} = u_{Ii,k} - u_{Ii,k-1}) \le \Delta u_{Ii,max}$ and $y_{Io,min} \le y_{Io,n} \le y_{Io,max}$. In Eq. (1), NOV = number of output variables; NIV = number of input variables; f = suppression factor for the movements of the manipulated variables; and w = weighting factor.
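Step II's adaptive ingredient, selecting among multiple fuzzy models by the current measured value of each controlled variable (after Dougherty and Cooper, 2003), reduces to a nearest-region lookup; the region centers below are hypothetical:

    def select_model(y_measured, region_models):
        """region_models: list of (region_center, fuzzy_model) pairs."""
        center, model = min(region_models,
                            key=lambda cm: abs(cm[0] - y_measured))
        return model

    # e.g. for the reactor temperature loop (hypothetical regions):
    # model = select_model(Tr_now, [(351.5, m_low), (352.5, m_mid),
    #                               (353.5, m_high)])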
4. Case Study Figure 1 reports a flow-sheet of the copolymerization reactor with a recycle loop (Congalidis et al., 1989), which is assumed to be a jacketed, well-mixed tank. Monomer A is methyl methacrylate (MMA); monomer B is vinyl acetate (VAc); the solvent is benzene; the initiator is azobisisobutyronitrile (AIBN); and the chain transfer agent is acetaldehyde. The monomer stream may also contain inhibitors such as mdinitrobenzene (m-DNB). Relevant output variables to match the product quality are: copolymer production rate (Gpi), mole fraction of monomer A in the copolymer (Yap), weight average molecular weight (Mpw), and reactor temperature (Tr). The presence of the recycle stream introduces disturbances in the reactor feed and a feedforward controller was implemented to compensate for them by manipulating fresh feeds in order to preserve feed composition and flow rate to the reactor. Feedforward control of the recycle stream allows the designer to separate reactor control from the rest of the process. Details on the feedforward control are given in Congalidis et al. (1989).
Figure 1. Process layout.
696
N. M. N. Lima et al.
4.1. Selection of control loop Lima et al. (2007) developed a factorial planning to discriminate the variables with higher impact on the process performance. The selected control loop resulting from this analysis is shown in Table 1.
Table 1. Control loop.
Manipulated variables: Gbf (VAc feed rate), Gaf/Gbf (MMA feed rate / VAc feed rate), Gif (AIBN feed rate), and Tj (reactor jacket temperature)
Controlled variables: Gpi, Yap, Mpw, and Tr
4.2. Results The closed-loop performance of the multivariable nonlinear adaptive fuzzy-predictive controllers was analyzed for the rejection of unmeasured disturbances (regulatory problem). The disturbance considered was the presence of an inhibitor in the fresh feed. This disturbance inhibits the polymerization reaction. Also, as the reaction is exothermic, less polymerization results in less heat being generated, and the reactor temperature decreases as well. Here, an inhibitor disturbance of 4 parts per 1,000 (mole basis) in the fresh feed was applied. In order to assess the performance of the adaptive controllers, a comparative study with DMC and typical NMPC was carried out. The DMC methodology uses a linear (step response) prediction model in Eq. (1), and the typical NMPC algorithm utilizes a prediction model in the form of nonlinear differential equations. The controllers were tuned by the IAE (Integral of the Absolute value of the Error) criterion. Figure 2 illustrates the closed-loop performance comparison of the four control strategies applied to the four controlled variables subject to the selected disturbance. The control system parameters and IAE errors are summarized in Table 2.
Figure 2. Closed-loop and open-loop simulations for the inhibitor disturbance: Gpi (kg/h), Yap, Mpw (kg/kmol) and Tr (K) versus time (h) for the open loop, L-ANFMPC, E-ANFMPC, DMC and typical NMPC responses, together with the set points.
Table 2. Tuning parameters and IAE errors for the adaptive predictive control structures.

Parameters                L-ANFMPC              E-ANFMPC              DMC                   Typical NMPC
PH                        3                     3                     7                     5
CH                        1                     1                     1                     1
f (Gbf;Gaf/Gbf;Gif;Tj)    (0.1;0.1;0.2;0.1)     (0.1;0.2;0.3;0.1)     (0.3;0.5;0.5;0.1)     (0.2;0.3;1.3;0.2)
w (Gpi;Yap;Mpw;Tr)        (9.8;3.0;4.0;2.3)     (9.9;2.7;4.7;2.4)     (3.2;0.8;2.1;2.0)     (0.5;0.7;1.7;0.7)
IAE (Gpi[kg/h];Yap[-];
Mpw[kg/kmol];Tr[K])       (24.1;2.9;46,744;1.1) (21.4;3.1;42,554;0.8) (64.9;3.0;82,380;5.4) (40.3;2.2;54,378;1.7)
4.3. Discussions As can be observed in Figure 2 and Table 2, E-ANFMPC and L-ANFMPC perform better than DMC and typical NMPC, with lower IAE values and smaller overshoots for Gpi, Mpw, and Tr. Regarding Yap, the IAE value for the typical NMPC control is the smallest, with similar outcomes for the other controllers.
5. Conclusions In the present paper the problem of multivariable nonlinear fuzzy model-based adaptive predictive control for complex processes was tackled. In particular, two adaptive predictive controllers based on nonlinear fuzzy models were developed for a copolymerization process. Copolymer production rate, mole fraction of monomer in the copolymer, molecular weight, and reactor temperature were analyzed for regulatory problem and compared against DMC and typical NMPC controllers. The simulation results showed good performance for the proposed structures and confirm the potential and robustness of these techniques to reduce off-specifications due to disturbances in nonlinear systems.
6. Acknowledgements The authors acknowledge the financial support of FAPESP, CAPES and CNPq.
References J.P. Congalidis, J.R. Richards, W.H. Ray. (1989). Feedforward and feedback control of a solution copolymerization reactor. AIChE Journal, 35(6), 891-907. D. Dougherty, D. Cooper. (2003). A practical multiple model adaptive strategy for multivariable model predictive control. Control Engineering Practice, 11, 649-664. N.M.N. Lima, L. Zuñiga Liñan, R. Maciel Filho, M. Embiruçu, M.R. Wolf Maciel, F. Grácio. (2010). Modeling and predictive control using fuzzy logic: application for a polymerization system. AIChE Journal, 56(4), 965-978. N.M.N. Lima, F. Manenti, R. Maciel Filho, M. Embiruçu, M.R. Wolf Maciel. (2009). Fuzzy model-based predictive hybrid control of polymerization processes. Industrial & Engineering Chemistry Research, 48(18), 8542–8550. N.M.N. Lima, R. Maciel Filho, M. Embiruçu, M.R. Wolf Maciel. (2007). A cognitive approach to develop dynamic models: application to polymerization systems. Journal of Applied Polymer Science, 106, 981-992. F. Manenti, I. Dones, G. Buzzi-Ferraris, H.A. Preisig. (2009). Efficient numerical solver for partially structured DAE systems. Industrial & Engineering Chemistry Research, 48(22), 9979-9984. T. Takagi, M. Sugeno. (1985). Fuzzy identification of systems and its applications to modeling and control. IEEE Transactions on Systems Man. and Cybernetics, 15(1), 116-133.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Control of processes with multiple steady states using MPC and RBF neural networks Alex Alexandridisa, Haralambos Sarimveisb a
Department of Electronics, Technological Educational Institute of Athens, Agiou Spyridonos, Aigaleo, 12210, Greece b School of Chemical Engineering, National Technical University of Athens, 9 Heroon Polytechniou street, Zografou campus, Athens, 15780, Greece
Abstract This work presents a new methodology for controlling processes that exhibit multiple steady states. The proposed approach is based on a Model Predictive Control (MPC) framework, where the dynamics of the process are modeled by a Radial Basis Function (RBF) neural network. The innovative non-symmetric fuzzy means algorithm is employed in order to train the RBF network. The proposed methodology is applied to the control of a non-isothermal Continuous Stirred Tank Reactor (CSTR) that exhibits three steady state points. The results show that the proposed controller can drive the CSTR through the entire operating region, including the unstable steady state point, around which the control task is rather challenging. Keywords: Model Predictive Control, Radial Basis Function, Neural Networks, Multiple Steady States
1. Introduction Model Predictive Control (Prett and Garcia, 1988) has emerged during the last few years as a very attractive method for controlling chemical processes. It is based on the online solution of an optimization problem which calculates the optimum sequence of inputs that drive the process to the desired set point. MPC makes use of a model that correlates the manipulated variables with the control variables. The model should adequately approximate the process in order to yield satisfactory performance. Unfortunately, many processes in the chemical industry are nonlinear, so that the performance of traditional MPC methods based on linear models may be inadequate. Radial Basis Function neural networks constitute a special neural network architecture that has received much attention from the academic community (Darken and Moody, 1990). A typical RBF network comprises only one hidden layer of neurons, which results in fewer synaptic weights and in general a less complex structure compared to other neural network architectures. The most commonly used approach for training an RBF network is to separate the problem of identifying the network parameters into two steps: the first step aims at finding the number and locations of the hidden node RBF centers, while in the second step the synaptic weights are determined. As the second step is performed trivially by linear regression between the outputs of the hidden nodes and the real output data, this two-step procedure is usually faster than optimizing all the RBF network parameters at the same time. Still, determining the number and the locations of the hidden node centers is not an easy task, and numerous attempts to solve this problem have been presented in the literature. Sarimveis et al. (2002) proposed the fuzzy means algorithm, which determines the RBF centers of the network based on a fuzzy partition of the input space. The main
advantage of the algorithm is that it has the ability to determine both the centers and the structure of the network in very short computational times, while comparisons with other training methodologies show that the prediction capabilities of the produced models are similar or superior. The results of the fuzzy means algorithm could be further improved in terms of prediction accuracy and/or network size by assigning a different number of fuzzy sets to each input variable. The modification of the original method in order to take into account non-symmetric fuzzy partitions of the input space uses hyper-ellipsoid fuzzy subspaces instead of the hyper-spherical shapes on which the original algorithm was based, thus resulting in the non-symmetric fuzzy means algorithm. The algorithm improves the prediction accuracy of the produced models, while at the same time a significant reduction of the number of hidden nodes can be achieved. In this paper we integrate the non-symmetric fuzzy means algorithm in an MPC framework and apply the resulting controller to the control of a CSTR that exhibits multiple steady states. The objective is to effectively control the CSTR through its entire operating region, which also includes an unstable steady state point. The rest of this article is organized as follows: a short introduction to the general concept of the fuzzy means algorithm is given in the next section. Next follows a description of how RBF models can be integrated within the MPC configuration. Then, the case study is presented, where the proposed methodology is applied to a nonlinear CSTR. In the final section we draw conclusions and set some directions for future research.
2. The fuzzy means algorithm The fuzzy means algorithm has been proposed as an alternative to classical methodologies for selecting the RBF network hidden node centers, like the k-means algorithm (Darken and Moody, 1990). In contrast to the traditional methodologies, the fuzzy means algorithm has the ability to determine automatically the size of the network, i.e. the number of RBF centers, while it proves to be orders of magnitude faster. The algorithm is based on a fuzzy partition of the input space, which is produced by defining a number of triangular fuzzy sets on the domain of each input variable. This number is the only tuning parameter of the method. The centers of these fuzzy sets produce a multidimensional grid on the input space. A rigorous selection algorithm chooses the most appropriate knots of the grid, which are used as hidden node centers in the produced RBF network model. The idea behind the selection algorithm is to place the centers in the multidimensional input space, so that there is a minimum distance between the center locations. At the same time the algorithm assures that for any input example in the training set there is at least one selected hidden node that is close enough according to a distance criterion. In case each input variable is partitioned into an equal number of one-dimensional triangular fuzzy sets, then the resulting grid is symmetric and this distance criterion corresponds to a hyper-sphere. However this restricts the flexibility of the algorithm, since a different partitioning for each input variable might result in a better network in terms of accuracy and/or complexity of the model. The replacement of the original spherical relative distance equation with an ellipsoidal one creates a non-symmetric version of the algorithm which is more flexible, as it gives the user the opportunity to partition the domain of each input variable in a different way. Thus the resulting radial basis functions cover more efficiently the regions of the input space where input data are available and produce networks with more accuracy (smaller modeling error) and lower complexity.
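One simplified reading of the selection rule just described, with the non-symmetric (per-input) partition widths defining an ellipsoidal relative distance; this is a sketch under those assumptions, not the published implementation:

    import numpy as np

    def select_centers(X, knots, widths):
        """Keep a grid knot as an RBF center if it covers at least one
        training point and is not inside an already selected ellipsoid."""
        centers = []
        for g in knots:
            rel = np.sqrt((((X - g) / widths) ** 2).sum(axis=1))
            if rel.min() > 1.0:        # no training data nearby: skip
                continue
            if centers:
                d = np.sqrt((((np.array(centers) - g) / widths) ** 2)
                            .sum(axis=1))
                if d.min() < 1.0:      # too close to an existing center
                    continue
            centers.append(g)
        return np.array(centers)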
3. Model Predictive Control using RBF models In an MPC implementation, the objective function is a combination of two targets: a) minimization of the distances between the predicted output values and the set point, and b) minimization of the control moves:

$\min_{\Delta v(k),\,\Delta v(k+1),\,\ldots,\,\Delta v(k+h_c)} \;\sum_{i=1}^{h_p}\left\|\Theta\left(\hat{y}(k+i) - y^{sp}\right)\right\|_2^2 + \sum_{i=0}^{h_c}\left\|\Omega\,\Delta v(k+i)\right\|_2^2$  (1)

where Δv(k) is the vector containing the control moves at time step k; Θ and Ω are the error and move suppression weights; hc and hp are the control and prediction horizons respectively; and y^sp is the set point value. Typically, the minimization problem is solved subject to a number of constraints, posing upper and lower bounds on the values of the manipulated variables and the control moves. RBF models are integrated in the MPC approach by adding an extra constraint:

$\hat{y}(k+i) = NN(k+i) + E(k), \qquad 1 \le i \le h_p$  (2)
where NN(k ) is the RBF network prediction for the time step k and E(k ) is the current error between the actual output measurement and the model prediction, which is assumed constant throughout the prediction horizon.
4. MPC of a CSTR exhibiting multiple steady states In this section, an example is presented where the proposed approach is used to control a simulated nonisothermal CSTR. The CSTR is described by a set of nonlinear ODEs which can be found in Kazantzis and Kravaris (2000). For certain values of the CSTR operational parameters, the process is exhibiting multiple steady states: An upper and a lower one which are stable, and a medium one which is unstable. Though it is relatively easy to control the CSTR around the individual upper or lower steady state point, controlling the CSTR over the entire operating range which includes the unstable steady state point is a rather challenging task. The CSTR was simulated by solving the system of ODEs. The objective was to apply an MPC configuration in order to control the output concentration CA using the temperature of the coolant Tj as the manipulated variable. The MPC configuration requires a dynamic model that correlates the input variable Tj with the output variable CA. The presence of multiple steady states makes it impossible to approximate the system dynamics over the whole operating region using a model of the type: CA(k)=RBF( Tj(k-1), Tj(k-2), …, Tj(k-i) )
(3)
i.e. a model using as inputs only past values of Tj. The reason is that for the same sequence of inputs Tj(k-1), Tj(k-2), …, Tj(k-i), there are more than one possible values of the output variable CA(k), depending on which steady state the CSTR is nearest at that time point. In order to circumvent this problem, an ARX (AutoRegressive with eXogenous inputs) type of model was chosen, where the current concentration CA(k) is correlated with the previous value of the coolant temperature Tj(k-1), but also with the previous values of the two state variables, i.e. the concentration CA(k-1) and the temperature inside the reactor T(k-1): CA(k)=RBF( Tj(k-1), CA (k-1), T(k-1) )
(4)
The introduction of such a model - using the previous values of the two state variables as inputs – complicates the calculation of the future model predictions throughout the
Control of nonlinear multiple steady state processes using MPC and RBF neural networks
701
prediction horizon. At each discrete time step k, where the optimization problem (Eq. 1) is formulated and solved, the model (Eq. 4) is used to generate predictions for the next hp time steps. However in order to calculate the predictions beyond the first future time instance, e.g. CA(k+2), the values of both state variables during the previous time step CA(k+1) and T(k+1) are needed as inputs to the model. Therefore, an additional model is needed to predict the dynamic evolution of the second state variable: T(k)=RBF( Tj(k-1), CA (k-1), T(k-1) )
(5)
In order to generate data for training the two RBF models, the CSTR was excited by changing the coolant temperature Tj every 1 second, within the limits 0 – 500. Using the described configuration, 50000 data points were collected from the CSTR. The data points were split into a training dataset of 35000 data points and a validation one of 15000 data points. For comparison purposes, both the original version of the symmetric fuzzy means and the non-symmetric extension were applied. The original symmetric fuzzy means algorithm was applied to partitions from 4 to 20 fuzzy sets for all input variables, while for the non-symmetric algorithm, an exhaustive search was performed testing all combinations of partitions ranging from 4 to 20 fuzzy sets for each input variable. The best networks produced by the non-symmetric algorithm outperformed the ones generated by the symmetric fuzzy partition in terms of prediction accuracy. Moreover, the lower modeling error was accompanied by a significant decrease in the number of RBF centers. To be more specific, the RBF networks produced by the nonsymmetric algorithm contain only 37 and 46 centers for the CA and T models respectively (which corresponds to a reduction of 51% and 39% respectively, compared to the best networks trained with a symmetric fuzzy partition). The models trained with the two methodologies were incorporated into the MPC configuration described in the previous section, resulting in two different control schemes. The controllers were then applied to two test cases. In the first case, the objective was to drive the CSTR to the three steady state points, thus testing whether the controllers are capable of controlling the reactor throughout its operating range. The CSTR is initiated at a concentration CA equal to 0.2 which corresponds to the lower steady state point and then a step change in the set-point from 0.2 to 0.95, i.e. the upper stead state point is introduced. After the CSTR reaches the new set point, at time point 100 a new step change is applied, requiring the process to be driven to the middle unstable steady state (0.6). The responses are depicted in Fig. 1. The controller using the non-symmetric algorithm manages to reach the set-point, taking advantage of the more accurate model it uses for prediction, while the controller using the symmetric algorithm totally misses the set point, as it gets stuck to the lower stable steady state point. The failure of the symmetric controller to reach the set point can be explained by noticing that the initial modeling error for the two state variables is propagated throughout the prediction horizon. The initial error is amplified because the model predictions at each time step are used for calculating the predictions in the subsequent time instance. In the second case the set point was fixed at 0.20 and unknown disturbances were introduced to the inlet flow F. The disturbances were assumed to follow a normal distribution with mean equal to 20 and standard deviation equal to 3.5. The results are shown in Fig. 2. Once again, the MPC scheme based on the non-symmetric fuzzy means algorithm proves more accurate. This is also supported by calculating the Sums of Squared Errors for the two MPC schemes. The non-symmetric controller features an SSE that is 39% lower compared to the controller based on the symmetric algorithm.
Figure 1. First case: responses for the two MPC schemes (CA vs. time, s; symmetrical FM, non-symmetrical FM, set point).

Figure 2. Second case: responses for the two MPC schemes (CA vs. time, s; symmetrical FM, non-symmetrical FM, set point).
5. Conclusions
This work presents an MPC framework for controlling nonlinear processes with multiple steady states, employing RBF networks as dynamic models of the system. The non-symmetric fuzzy means algorithm is used to train the RBF networks, which not only improves the prediction accuracy but also produces networks of lower complexity. The resulting controller is implemented successfully for the control of a non-isothermal CSTR exhibiting multiple steady states.
References
C. Darken, J. Moody, 1990, Fast Adaptive K-Means Clustering: Some Empirical Results, IEEE INNS International Joint Conference on Neural Networks, Proceedings 2, 233-238.
N. Kazantzis, C. Kravaris, 2000, Synthesis of State Feedback Regulators for Nonlinear Processes, Chemical Engineering Science, 55, 3437.
D.M. Prett, C.E. Garcia, 1988, Fundamental Process Control, Butterworths, Stoneham, Mass.
H. Sarimveis, A. Alexandridis, G. Tsekouras, G. Bafas, 2002, A fast and efficient algorithm for training radial basis function neural networks based on a fuzzy partition of the input space, Industrial and Engineering Chemistry Research, 41, 751-759.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Simulation Optimization of Cost, Safety and Displacements in a Construction Design
Eleftherios-Stamatios Telis,a,b George Besseris,a,b Constantinos Stergioua,b
a Department of Mechanical Engineering, Technological and Educational Institute of Piraeus, P. Ralli & Thivon Avenue 250, Aigaleo 12224, Athens, Greece
b Kingston University, Faculty of Engineering, Penrhyn Road, Kingston upon Thames, KT1 2EE, London, England
Abstract This paper presents an innovative method for the simultaneous optimization of the critical design characteristics in an earth retaining structure design: cost, safety factor and total displacements. The approach is based on the simultaneous tuning of the most important design variables. Civil and geotechnical engineers have no tool to determine the design characteristics for the optimum set of design variables, but only for one set at a time. The methodology presented in this study uses desirability analysis, an adaptation of proven quality tools to engineering, while the needed 'experimental' measurements were performed via finite element analysis (FEA).
Keywords: Multi-response optimization, Earth Retaining Structure, Safety Factor, Total Displacements, Cost
1. Introduction
Construction engineering is the field that assures the sustainability and robustness of constructions worldwide. The correct implementation of quality methods can assist engineers in designing better, cheaper and environmentally friendly processes and products. Moreover, numerical methods such as finite element analysis assist engineers in simulating and optimizing most engineering processes.
2. State of the art
The field of analyzing fractional factorials in robust design methodology has been reviewed extensively by Arvidsson and Gremyr (2008). An updated review on the current utilization of design of experiments in the field of engineering has been published by Ilzarbe et al. (2008). Improving product traits and characteristics, as well as processes, will demand increasingly sophisticated statistical tools in the future (Hoerl and Snee, 2010). This paper presents an innovative method which can guide civil and geotechnical engineers who need to simultaneously optimize the critical design characteristics of an earth retaining structure design. The design characteristics chosen are the safety factor, the total displacements and the construction cost of the earth retaining structure. Civil and geotechnical engineers determine the design characteristics only for one set of design variables at a time, not for their best combination. The multi-response optimization method presented in this paper uses desirability analysis; an important aspect of this methodology is that the optimization is developed through different approaches for the design characteristics (maximization of the safety factor and minimization of displacements and cost). Furthermore, this method is
able to provide the safest possible earth retaining structure while at the same time being cost effective. The designs used in this paper are taken from a real-life case study of an earth retaining project.
3. Construction Procedure
The case study analyzed in this work is a real-life bracing project, assigned to Geostirixis Co., with the following title: "Revised geotechnical study of temporary bracing slope excavation project, for a six-floor building with ground stores and a three-floor underground garage in Athens, Greece". The earth retaining structure methodology performed in this case study is of the Berlin wall type. This type of bracing procedure assures the robustness and sustainability of the excavation works on a site, keeping the examined and neighbouring properties safe until the final excavation depth is reached.
4. Design Variables
Proven statistical tools and techniques can provide those combinations of the design variables that minimize the number of experimental runs needed. These quality variables, also called control factors, have a direct relation to the design characteristics. The seven active control factors found and used in the optimization are shown in Table 1 and Figure 1.
Figure 1. Design variables in the construction procedure.
Factor   Description                    Units   Level 1   Level 2
A        Standardized Steel Sections    HEB     120       180
B        Pile's Depth                   m       10.50     12.00
C        Anchor Depth                   m       -4.00     -3.00
D        Distance Between Piles         m       1.00      1.50
E        Angle of the Excavation        -       3:4       4:3
F        Anchor Loading                 kN/m    100       200
G        Anchor Length                  m       12.0      14.0

Table 1. Design variable levels.
Those design variables have a linear response, so only two levels are needed for the appropriate tuning. The values for the two levels of the design variables were chosen according to the subsoil analysis of the investigated site, the load of the neighboring property and the experience of the engineers of the foundation construction company. The precise identification and prediction of the subsoil layers beneath is the most
important issue for the most accurate results of the optimization. Those two levels of the design variables are shown in Table 1.
5. Quality Method
The quality technique in this case study is used for the simultaneous optimization of cost, safety factor and total displacements. Desirability analysis is based on the weight and the importance of the responses. The weights and importances of the three design characteristics were based on the significance of these responses and customer needs (Table 2). The overall or composite desirability D is calculated by the following equation:
D = \left[ f_1(y)^{W_1 I_1} \cdot f_2(y)^{W_2 I_2} \cdots f_n(y)^{W_n I_n} \right]^{1/(I_1 + I_2 + \cdots + I_n)}    (1)

where W_i and I_i are the weight and the importance assigned to response i (i = 1, 2, ..., n), and f_i(y) is the individual desirability function describing the approach (maximization or minimization) for response y.
Response              Goal   Lower Value   Target Value   Upper Value   Weight   Importance
Cost (€/m2)           min    -             94.07          141.41        1        2
Safety Factor         max    1.50          1.72           -             1        1
Displacements (mm)    min    -             8.23           50.00         1        3

Table 2. Hierarchy of the design characteristics.
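To make Eq. (1) concrete, the sketch below recomputes the individual and composite desirabilities from the bounds, targets, weights and importances of Table 2. The individual desirability functions are assumed to be the standard Derringer-Suich one-sided forms; the paper does not print them explicitly, so they are an assumption here.

```python
import numpy as np

def d_min(y, target, upper, weight=1.0):
    """Smaller-is-better desirability: 1 at/below target, 0 at/above upper."""
    if y <= target:
        return 1.0
    if y >= upper:
        return 0.0
    return ((upper - y) / (upper - target)) ** weight

def d_max(y, lower, target, weight=1.0):
    """Larger-is-better desirability: 0 at/below lower, 1 at/above target."""
    if y <= lower:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - lower) / (target - lower)) ** weight

def composite(ds, importances):
    """Importance-weighted geometric mean, as in Eq. (1)."""
    ds = np.asarray(ds, dtype=float)
    imp = np.asarray(importances, dtype=float)
    return float(np.prod(ds ** imp) ** (1.0 / imp.sum()))

# Bounds/targets from Table 2 (weights all 1; importances 2, 1, 3)
cost, sf, disp = 101.62, 1.723, 8.335
d = [d_min(cost, 94.07, 141.41), d_max(sf, 1.50, 1.72), d_min(disp, 8.23, 50.0)]
print(d, composite(d, [2, 1, 3]))
```

Run with the predicted responses reported later in Figure 2 (cost 101.62 €/m2, safety factor 1.723, displacements 8.335 mm), this reproduces the individual desirabilities 0.8405, 1.0000 and 0.9975 and the composite value 0.9425.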
For the design of experiments, the design variable data were taken into account. In this case study there are seven (7) design variables with two (2) levels each. Thus, the matrix used for this combination of design variables is the L8(2^7) orthogonal array presented in Table 3.
Exp.   A   B   C   D   E   F   G   Cost (€/m2)   Safety Factor   Displacements (mm)
1      1   1   1   1   1   1   1   140.00        1.2136          83.27
2      1   1   1   2   2   2   2   94.07         0.2406          12.60
3      1   2   2   1   1   2   2   178.38        0.7603          19.09
4      1   2   2   2   2   1   1   105.14        1.5348          11.57
5      2   1   2   1   2   1   2   174.44        1.5268          10.86
6      2   1   2   2   1   2   1   105.00        1.2367          12.65
7      2   2   1   1   2   2   1   202.62        1.7209          8.23
8      2   2   1   2   1   1   2   131.64        1.2159          36.94

Table 3. Experimental runs and measurements.
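As a quick illustration of why the L8(2^7) array supports this screening with only eight runs, the sketch below programmatically checks the balance and pairwise orthogonality of the design matrix of Table 3; the array is taken directly from the table.

```python
from itertools import combinations, product
import numpy as np

# L8(2^7) array as reconstructed from Table 3 (levels coded 1/2)
L8 = np.array([
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
])

# Each column is balanced: both levels appear four times
assert all((L8 == 1).sum(axis=0)[j] == 4 for j in range(7))

# Pairwise orthogonality: every pair of columns shows each level combination twice
for a, b in combinations(range(7), 2):
    counts = {lv: 0 for lv in product((1, 2), repeat=2)}
    for row in L8:
        counts[(row[a], row[b])] += 1
    assert all(c == 2 for c in counts.values())

print("L8(2^7) is balanced and pairwise orthogonal")
```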
6. Experiments
The finite element analysis was performed using Computer-Aided Engineering (CAE) software, which gives measurements accurate to the fourth decimal digit. Thus, each experimental run was performed only once. Consequently, the quality method of this project was based on means only, and not on a signal-to-noise framework. The finite element analysis measurements for the safety factor and total displacements, together with the cost calculated from the materials used and the work-hours, are shown for each control-factor combination in the final three columns of Table 3.
7. Findings
The results taken from the finite element analysis were statistically appraised through statistical software to perform the desirability analysis. The desirability for each response and the composite desirability, as well as the global solution given by the statistical software, are shown in Figure 2.

Global Solution: A = 1.92929, B = 1, C = 2, D = 2, E = 2, F = 1.03030, G = 1
Predicted Responses: Cs = 101.620 (desirability = 0.840523); SF = 1.723 (desirability = 1.000000); TD = 8.335 (desirability = 0.997480)
Composite Desirability = 0.942545

Figure 2. Desirabilities and Global Solution.
From the above figure it can be seen that the composite desirability is 94% of the value desired from this methodology, while the safety factor and total displacements are at almost 100% of their demanded values. Finally, cost reaches 84% of its desirability. The most important values taken from the figure are the following:
• Cost = 101.62 €/m2, a value between the first and the second lowest of the eight (8) measured values of the analysis.
• Safety Factor = 1.723, which corresponds to the target value used in the desirability analysis for the maximization of safety, as shown in Table 2.
• Total Displacements = 8.33 mm, which corresponds to the target value used in the desirability analysis for the minimization of displacements, as shown in Table 2.
The statistical software also produced the optimization plot (Figure 3) of the design variables. In this figure, the factors chiefly responsible for this multi-response optimization are: factor D at level 2, factor A at level 2 and factor B at level 1.
Figure 3. Optimization Plot.
8. Confirmation Experiment
The active control factors, tuned to the levels obtained from the desirability analysis, together with the most economical levels of the other three variables, were used for the confirmation experiment. This combination, A2, B1, C1, D2, E2, F2 and G1, gave the following responses:
• Cost = 105.00 €/m2,
• Safety Factor = 1.727, and
• Total Displacements = 8.49 mm.
These results show the high accuracy of the performed desirability analysis in an earth retaining structure design.
9. Critical Analysis
The methodology, as the confirmation experiment proved, converged to optimal and acceptable values for the critical design characteristics considered. The values computed by the foundation engineering construction company, using FEA and its most experienced civil and geotechnical engineers, were (compared to the values of the confirmation experiment):
• Cost = 110.83 €/m2 > 105.00 €/m2
• Safety Factor = 1.60 < 1.727
• Total Displacements = 14.10 mm > 8.49 mm.
This comparison shows that civil and geotechnical engineers, relying on their experience and FEA tools, can determine the values of the most important design characteristics for only one set of design variables at a time. The methodology used can provide engineers with a cheaper and safer earth retaining structure through the usage of proven quality techniques such as desirability analysis.
10. Conclusion
To sum up, the method presented in this paper is a valuable guidance tool for civil and geotechnical engineers to predict the best combination of the active control factors for the simultaneous optimization of the design characteristics of an earth retaining structure design. The most interesting part of this approach is that the optimization is performed through proven statistical and quality tools, rather than empirically, as has been the practice until now.
References
Arvidsson, M., Gremyr, I., 2008, Principles of robust design methodology, Quality and Reliability Engineering International, 24, 23-35.
Berni, R., Gonnelli, C., 2006, Planning and optimization of a numerical control machine in a multiple response case, Quality and Reliability Engineering International, 22, 517-526.
Besseris, G.J., 2009, Multi-response robust screening in quality construction blue-printing, International Journal of Quality and Reliability Management, 26(6), 583-613.
Derringer, G., Suich, R., 1980, Simultaneous optimization of several response variables, Journal of Quality Technology, 12, 214-219.
Harrington, E.C., 1965, The desirability function, Industrial Quality Control, 21, 494-498.
Hoerl, R.W., Snee, R., 2010, Statistical thinking and methods in quality improvement: A look to the future, Quality Engineering, 22, 119-129.
Ilzarbe, L., Álvarez, M.J., Viles, E., Tanco, M., 2008, Practical applications of design of experiments in the field of engineering: a bibliographical review, Quality and Reliability Engineering International, 24, 417-428.
Kim, K.-J., Lin, D.K.J., 2000, Simultaneous optimization of mechanical properties of steel by maximizing exponential desirability functions, Applied Statistics, 49(3), 311-325.
Montgomery, D.C., 2004, Design and Analysis of Experiments, 6th ed., John Wiley & Sons, New York, NY.
Telis, E.S., Besseris, G.J., Stergiou, C., 2008, Desirability analysis in construction design quality improvement, Proceedings of the European Network for Business and Industrial Statistics 8, September 2008, Athens, Greece, Book of abstracts, p. 48.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Methodologies for input-output data exchange between LabVIEW and MATLAB/Simulink software for Real Time Control of a Pilot Scale Distillation Process
Alexandre J.S. Chambel, Carla I.C. Pinheiro, José Borges, João M. Silva
a IBB – Instituto Superior Técnico/UTL, Av. Rovisco Pais, 1, 1049-001 Lisboa, Portugal
b IDMEC – Instituto Superior Técnico/UTL, Av. Rovisco Pais, 1, 1049-001 Lisboa, Portugal
c Instituto Superior de Engenharia de Lisboa, Rua Conselheiro Emídio Navarro, 1, 1950-062 Lisboa, Portugal
Abstract The use of different control applications in automated processes poses the problem of how to exchange data in Real-Time (RT) between different software platforms. This paper presents a new methodology developed for input and output data exchange between LabVIEW® and MATLAB®/Simulink, and provides an overview of available methodologies for data exchange, comparing their advantages and disadvantages. Finally, a case study is presented in detail in which a method based on TCP/IP plus the Simulation Interface Toolkit (SIT) is implemented for experimental data exchange between a pilot-scale continuous distillation plant acquisition/control and supervision system, implemented with a FieldPoint system from National Instruments (NI) integrated with LabVIEW® software, and a high-level Model Based Predictive Control (MPC) algorithm developed on the MATLAB®/Simulink® platform.
Keywords: Continuous Distillation Process, MATLAB®/Simulink, LabVIEW®, MATLAB® and LabVIEW® data exchange, Model Predictive Control.
1. Introduction
The demand for increased efficiency in various branches of industry, such as the chemical industry, leads to an increased degree of automation of the production process. In order to meet this demand, while at the same time maintaining or improving quality and safety, there is a need for optimal and robust control strategies, such as MPC and Fault Tolerant Control (FTC), new real-time optimization strategies and new Fault Detection and Isolation (FDI) methods. Many industrial processes and products now achieve their competitive edge through the complex functionality which can be provided by control software. Distillation is the most common unit operation in Chemical Engineering used to separate two or more components from a homogeneous fluid mixture, being widely used in the chemical and petroleum industries. Effective control of the product compositions can reduce energy consumption, increase capacity and improve process safety. In order to develop and test new methodologies, a medium-sized pilot-scale continuous distillation column process was built at ISEL – Instituto Superior de Engenharia de
Lisboa, and the ethanol-water binary mixture separation has been used as a case study. Barroso et al. (2010) developed detailed nonlinear mathematical models based on Fuzzy Logic and Composition of Local Linear State-Space Models (Borges et al., 2004) that were integrated into an MPC scheme implemented in MATLAB/Simulink. FDI was also implemented, based on a threshold applied to the difference between the distillation column and model outputs. These models were able to predict both the steady-state and the dynamic behaviour of this pilot-scale distillation column and were validated against experimental data, showing good agreement. However, testing the different control applications with experimental data from the pilot plant posed the problem of how to exchange data in RT between the MATLAB and LabVIEW software platforms, in order to merge and integrate the previously developed models and control algorithms (Barroso, 2009). The simultaneous use of MATLAB and LabVIEW is a problem that, although already reported in the literature, still lacks a formal treatment that also considers the control perspective (Canete et al., 2007). The major requisites for this integration are: bi-directional communication; real-time or near real-time data exchange; time synchronization between both programs; on-the-fly control model swap; a single user interface to control all software; and the ability to independently start/stop/restart the Simulink model from LabVIEW.
2. Experimental Pilot Plant
A medium-size pilot-scale distillation column built at ISEL (Figure 1) is used to separate a water-ethanol liquid binary mixture. The column structure is composed of several glass sections, with structured packing sections above and below the feeding zone, coated with an insulation material to prevent heat loss and supported by an external steel framework (Oliveira et al., 2008; Barroso et al., 2010). In the top section there is a glass condenser with a reflux line and a pressure relief system. The liquid feed mixture is pre-heated using an electrical heater. For the reboiler heating, there are two electrical heaters disposed in a V-shape, associated with a glass condenser. The column measurement instrumentation consists of type-K thermocouples and pressure sensors at different points. The actuators consist of heating elements, valves and peristaltic pumps. Both sensors and actuators are connected to a NI FieldPoint system providing on-line information and RT control. The pilot-plant low-level control and supervision, as well as data acquisition, are carried out using a NI FieldPoint system with LabVIEW v.8.5 software on a networked PC. The developed Virtual Instrument (VI) application ensures data acquisition and low-level control of the distillation column.
Figure 1 – Scheme of the pilot scale distillation column at ISEL.
3. Methodology
In order to solve the data exchange issue, different strategies are available: Mathscript, SIT-DLL, SIT-TCP/IP, command-line use, file read/write and ActiveX. Each one was compared and, briefly, the conclusions are: Mathscript is not an option as it only supports core MATLAB commands; command-line use is a one-way communication method; file read/write does not allow simultaneous access to the file from different applications; and ActiveX requires considerable additional programming effort. The available SIT options are further discussed below.
3.1. Simulation Interface Toolkit - Dynamic Link Library (DLL) model
The LabVIEW Simulation Interface Toolkit (SIT) is used to integrate LabVIEW with Simulink and the Real-Time Workshop toolbox, allowing data exchange and/or the creation of Simulink model interfaces that can be used to manipulate model parameters and view input/output data. SIT can be used to integrate a Simulink model previously converted into a Dynamic Link Library (DLL) with Real-Time Workshop. The DLL behaves the same as if it were running in Simulink, but Simulink itself is no longer necessary at run time (National Instruments, 2007, 2009). This solution overcomes the data exchange problem, but it is Operating System (OS)-dependent (SIT is only available for Windows and the DLL is compiled for a specific architecture), small changes in Simulink models can be time consuming, and all code runs on the same computer, which can exceed the available computational resources.
3.2. Simulation Interface Toolkit - TCP/IP or UDP
The SIT can be used not only to integrate compiled DLL models, as seen before, but can also be combined with the available TCP/IP nodes to exchange data with Simulink. LabVIEW already includes several nodes that allow communication across a TCP/IP network in a client/server scheme (National Instruments, 2007, 2009). This solution overcomes the data exchange problem, provides a transparent error-checked connection, is OS-independent, and keeps the models running in Simulink, allowing fast modifications. The UDP protocol can also be used to stream data from MATLAB to LabVIEW through the DataSocket Server. However, the UDP protocol does not have integrated error-correction mechanisms (requiring additional programming effort) and requires the use of the Instrument Toolbox or Data Acquisition Toolbox in parallel with Simulink.
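As an illustration of the client/server exchange pattern described above, the sketch below shows a minimal framed exchange of fixed-length vectors of doubles over TCP. It is written in Python rather than LabVIEW/MATLAB, and the framing (a fixed count of big-endian float64 values per message) and the variable count N_VARS are assumptions for illustration, not the toolkit's actual protocol.

```python
import socket
import struct

N_VARS = 5  # hypothetical: e.g. reboiler T, vapour T, flow, reboiler power, reflux

def send_vector(sock, values):
    # Pack the vector as big-endian doubles (an assumed wire format)
    sock.sendall(struct.pack(f">{len(values)}d", *values))

def recv_vector(sock, n=N_VARS):
    buf, need = b"", 8 * n
    while len(buf) < need:
        chunk = sock.recv(need - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return struct.unpack(f">{n}d", buf)

def serve(port=6340):
    """Controller side: receive measurements, reply with a control action each cycle."""
    with socket.create_server(("", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            while True:
                measurements = recv_vector(conn)   # plant -> controller
                action = [0.0] * N_VARS            # placeholder control law
                send_vector(conn, action)          # controller -> plant
```

A fixed-length framing of this kind is what makes the per-iteration read/write inside a control loop deterministic, since each side always knows exactly how many bytes to expect per cycle.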
4. RT Data Exchange Case Study
The distillation pilot-plant at ISEL was used for the development and validation of high-level control algorithms using MATLAB/Simulink. However, the pilot-plant lower-level control and data acquisition were developed in LabVIEW. In order to integrate the control algorithms developed in MATLAB/Simulink and LabVIEW, the Simulation Interface Toolkit (SIT) was used. The option of a TCP/IP connection to Simulink, instead of compiling the model into a DLL, allows easy and fast modifications, on-the-fly changes during execution or controller
model swap without stopping the column control. It also allows the distribution of the computational effort, as Simulink can run on another computer. In order to interconnect Simulink and LabVIEW, it is necessary to insert test points, or operations, that can be mapped in SIT. In this case, the output of a constant block was used as the Simulink input variable (LabVIEW output), and the output of a dummy gain block was used as the Simulink output (LabVIEW input). A section of the LabVIEW block diagram is presented in Figure 2a. Outside the main while structure, there are SIT sub-VIs related to model initialization, the model path and the LabVIEW output variable (i.e., the Simulink variable to which the LabVIEW output is written). The path to the Simulink model is automatically determined based on the path to the LabVIEW model. Inside the while structure there are SIT sub-VIs which define the model status (run/stop/pause) and the LabVIEW input variables (i.e., the Simulink variables that LabVIEW reads), and which allow the read/write from/to Simulink variables. The lower section of the block diagram refers to the assembly of the vector with all variables that are exchanged, while the top section is the decomposition/reading of the input vector from Simulink. This cycle runs while the model status is set to "Start", exchanging values on each iteration with an update period of 10 ms. MATLAB and Simulink are automatically launched with the start of the LabVIEW VI, through a command-line exec node. Figure 2b presents the Simulink model. LabVIEW data is received in a subsystem and the vector is decomposed into the controlled variables (reboiler temperature, vapour temperature) used in the controller block. The block outputs, i.e. the manipulated variables (flow, reboiler power, reflux), are merged into a vector and sent to LabVIEW.
Figure 2 - (a) Data Exchange section from LabVIEW block diagram. (b) Simulink controller block diagram.
The controller consists of a composition of MPCs whose overall control action results from a weighted combination, using a scheduling vector, based on the previous control action (Barroso, 2009). Comparing the time between LabVIEW and Simulink, one can observe a time divergence at a rate that depends on the processor load. This divergence is mainly on the Simulink side, as its time is emulated. To overcome this problem, an RT Block was introduced in the Simulink diagram to enforce uniform time. The integration of the communication section in VIs is straightforward, since it is an independent block, and it is only necessary to map the variables from the main cycle to this one, as local variables.
5. Conclusions
A methodology for data exchange between LabVIEW and MATLAB/Simulink was developed, allowing the experimental use and validation of previously developed MPC control algorithms in Simulink. A single LabVIEW interface can be used to control the Simulink models, and changes of control modules or parameters can be performed on-the-fly during column operation. The time between applications is synchronized and registered for further event analysis. A strategy was also found to ensure that time evolves at the same rate, despite the time in Simulink being emulated and dependent on CPU load.
6. Acknowledgements
The authors gratefully acknowledge the financial support of project POCI/EME/59522/2004 from FCT, granted by the Programa Operacional Ciência e Inovação 2010 (POCI 2010) with the co-participation of the European fund FEDER.
References
Barroso, J.R.C., 2009. Modelling and Intelligent Control of a Distillation Column, MSc Thesis, Instituto Superior Técnico, Lisboa, Portugal.
Barroso, J., Borges, J., Oliveira, P., Pinheiro, C.C., Pires, A.C., Silva, J.M., 2010. Nonlinear Modeling of a Real Pilot Scale Continuous Distillation Process, 20th European Symposium on Computer Aided Process Engineering – ESCAPE20.
Borges, J., Verdult, V., Verhaegen, M., Botto, M.A., 2004. Separable least squares for projected gradient identification of composite local linear state-space models, 16th International Symposium on Mathematical Theory of Networks and Systems (MTNS), Belgium.
Canete, J.F., Orozco, P., Gonzales-Perez, S., 2007. Distillation Monitoring and Control using LabVIEW and Simulink Tools, International Journal of Computer, Information, and Systems Science and Engineering, 1(4).
National Instruments, 2009. Connecting LabVIEW to 3rd Party Software Packages, available from http://zone.ni.com/devzone/cda/tut/p/id/10060
National Instruments, 2003. LabVIEW Simulation Interface Toolkit User Guide.
Oliveira, P., Batalha, N., Pinheiro, C., Borges, J., Silva, J.M., 2008. Nonlinear dynamic modelling of a real pilot scale continuous distillation column for fault tolerant control purposes, 10th International Chemical Engineering Conference – CHEMPOR'2008, Braga, Portugal.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Plant-wide optimisation and control of a multi-scale pharmaceutical process
Mayank P. Patel,a Nilay Shah,a Robert Asheb
a CPSE, Dept. Chemical Engineering, Imperial College, London, SW7 2AZ, UK
b AM Technology, The Heath Bus. & Tech. Park, Runcorn, Cheshire, WA7 4QX, UK
Abstract We are in an era where the objectives of process systems engineering (PSE) and process intensification (PI) are aligning. Micro-to-millimetre range reactors are well known in the literature for their very efficient heat exchange and control; the design concept for such reactors thus allows great flexibility in adapting the process to new reaction schemes. The novel 'green' synthetic route to the ibuprofen compound developed by BHC in the mid-1980s requires efficient temperature control and thus lends itself to continuous processing. However, the stages downstream of the reaction phase are rarely considered for further intensification, despite the plethora of proven continuous downstream technologies. Thus, in this work we consider the entire primary manufacturing stage, from reaction to crystallisation, with the intent of introducing intensification where needed, be it via a continuous option or innovation. Novel pharmaceutical production lines are thus ideal for exploring this paradigm shift, by introducing systematic process design, modern analytics and advanced control.
Keywords: Process intensification, optimization, control, continuous processing
1. Introduction
Continuous manufacturing is an attractive option for the pharmaceutical industries with respect to reducing R&D as well as operating costs, thus leading to lower drug prices for the consumer. The FDA, despite regulating towards a batch processing environment in the past, included within its 2004 guidelines for the pharma industries the adoption of innovation, i.e. devices and processes at smaller length and time scales [FDA, 2004]. Hessel et al. [2003] have published an extensive review on the suitability of continuous devices with dimensions at the microscale. However, direct comparative studies are rare at best. The FDA is aiming to bring the pharmaceutical industry into the 21st century by using modern process analytical techniques (PAT) for improved process monitoring and control, for example. Some resistance is detected towards application to currently approved and marketed active pharmaceutical ingredients (APIs), due to the intensity of existing batch production processes. There is scope, however, for the paradigm shift with the introduction of new APIs, especially now that new molecules target thousands of cases rather than millions. Nevertheless, opportunities exist in addressing the need for systematic design, operation and control of multi-scale production lines. Intensified reactors, by their small dimensions, provide very efficient heat transfer. This characteristic, together with the relative thermal inertia of the reactor, diminishes the risk of thermal runaway. On the other hand, the more "efficient" heat transfer makes it possible to run the process at higher temperatures and pressures that are not possible in batch reactors. These new conditions bring the operating point closer to the runaway
condition. The inclusion of PI at the manufacturing stage evolves the traditional concept of 'unit operations' into integrated processes, which significantly impacts the choice of solvents, i.e. their compatibility with upstream and downstream processes. Here a flowsheet is designed for a process consisting of a three-step organic synthesis reaction and six separations, comprising liquid-liquid extractions and filtrations. A crystallisation operation concludes the process. This conceptual synthesis allows for efficient reaction, separation and solvent integration to improve the process economics. We also describe the application of control to this large-scale process via the reaction stage. The idea is to select controlled variables which, when kept constant, lead to minimum economic loss. A steady-state model is sufficient for selecting the controlled variables; however, a dynamic model is used to design and test the complete control system, which includes regulatory and supervisory control. The final control structure is robust and yields good dynamic performance in the nonlinear dynamic simulations.
2. Ibuprofen synthesis
The discovery of ibuprofen in the 1960s by the Boots company led to a six-stage synthesis route which starts with isobutylbenzene (IBB) as the main raw ingredient. Nowadays, the common approach is to apply the three-step BHC process route: a Friedel-Crafts acetylation leads on to a hydrogenation with Raney nickel to give an alcohol, which undergoes a palladium-catalyzed carbonylation. A recent publication successfully adapts the BHC route and explores the application of flow chemistry to this synthesis (Figure 1).
Figure 1. Continuous synthetic route to ibuprofen [Bogdan et al., 2009].
This synthesis was carried out within a tubular reactor of modular construction [Patel et al., 2009]. The subsequent process (up to crystallisation), as traditionally performed, is conceptualised into 12 streams, as shown in Figure 2. These streams are summarised by the following descriptions:
S1: Comprises all the inlet reactant streams to the modular tubular reactor. The reactor is optimally configured and operates in the micrometre-to-millimetre range [Patel et al., 2011]. The pre-production of the reactants is not considered here.
S2: The tubular reactor does not provide enough residence time to complete the reaction. The material is sent to an agitated vessel, i.e. a CSTR. Composition analysis with GC can also be performed here.
S3: Excess methanol (MeOH) is evaporated in an agitated vessel. Vapours can be captured, cooled and recycled to the production of reactants.
S4: The aqueous phase is washed with ether (Et2O) to transfer the API to the organic phase through countercurrent liquid-liquid extraction with continual agitation.
S5: To aid in extracting the API, the aqueous phase is acidified and simultaneously washed with concentrated HCl and Et2O.
S6: The organic phase is finally washed with a weak alkaline solution to neutralise the pH of the aqueous phase.
S7: The stream is filtered through a 3Å molecular sieve with sodium sulfate to remove traces of water.
S8: The stream has a light orange colour, and is thus sent through a membrane of activated carbon to decolourise the liquor.
S9: The activated carbon is subsequently removed by filtering the suspension through a plug of anhydrous MgSO4, to yield an off-white solid.
S10: The organic material now begins the crystallisation phase. A suitable polar solvent is chosen (for the preferred crystal morphology) and mixed at a raised temperature in an agitated vessel.
S11: The stream is salted out with multiple injections of a suitable anti-solvent within a continuous drowning-out crystallisation stage, explained further below.
S12: Crystals are yielded along with the excess reagents from the synthesis. The crystals can be drawn off and dried in vacuo to afford a white solid.
Figure 2. Multi-scale conceptual process flow sheet.
2.1. Reaction stage
The reaction stage is where it all begins. The modular tubular reactor indicated earlier was optimally configured to deliver the required local temperature demands for the three-step reaction, producing a yield close to 90% [Patel et al., 2011]. However, severe process nonlinearities, particularly in the reaction kinetics, are the main challenge for control. A model predictive control (MPC) structure is designed, as the feedback control offered by the multivariable process model is able to handle all cross-coupling effects implicitly [Maciejowski, 2002]. The following cost function is to be minimised:

V(k) = \sum_{i=1}^{H_p} \left\| \hat{z}(k+i\,|\,k) - r(k+i\,|\,k) \right\|^{2}_{Q(i)} + \sum_{i=0}^{H_u - 1} \left\| \Delta\hat{u}(k+i\,|\,k) \right\|^{2}_{R(i)}    (1)
The residence time within the reactor is around 33 seconds, thus the prediction horizon is chosen as Hp = 100 and the control horizon Hu = 10. For temperature control, the cost function is suitably transformed to equation (2) where the location of these controlled temperatures is highlighted in Figure 3.
V(k) = \sum_{i=1}^{100} \left\| \begin{bmatrix} \hat{T}_1(k+i\,|\,k) \\ \hat{T}_2(k+i\,|\,k) \\ \hat{T}_3(k+i\,|\,k) \\ \hat{T}_4(k+i\,|\,k) \end{bmatrix} - \begin{bmatrix} 125\,^{\circ}\mathrm{C} \\ 25\,^{\circ}\mathrm{C} \\ 50\,^{\circ}\mathrm{C} \\ 150\,^{\circ}\mathrm{C} \end{bmatrix} \right\|^{2}_{Q} + \sum_{i=0}^{9} \left\| \begin{bmatrix} \Delta\hat{T}^{ref}_{1,utility}(k+i\,|\,k) \\ \Delta\hat{T}^{ref}_{2,in}(k+i\,|\,k) \\ \Delta\hat{T}^{ref}_{2,utility}(k+i\,|\,k) \\ \Delta\hat{T}^{ref}_{3,utility}(k+i\,|\,k) \end{bmatrix} \right\|^{2}_{R}, \quad Q = I_{4\times4}, \; R = I_{4\times4}    (2)
Figure 3. Control structure with controlled and manipulated variables.
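To illustrate how the cost of Eq. (2) is evaluated over the two horizons, the sketch below computes V(k) for a candidate sequence of input moves. The one-step predictor is a placeholder for the reactor model, not the authors' model; the references and identity weights follow Eq. (2).

```python
import numpy as np

Hp, Hu = 100, 10                              # prediction and control horizons
r = np.array([125.0, 25.0, 50.0, 150.0])      # temperature references (deg C)
Q = np.eye(4)                                 # Q = I (Eq. 2)
R = np.eye(4)                                 # R = I (Eq. 2)

def mpc_cost(du_seq, predict, x0):
    """du_seq: (Hu, 4) input moves; predict(x, du) -> next four temperatures."""
    cost, x = 0.0, x0
    for i in range(Hp):
        du = du_seq[i] if i < Hu else np.zeros(4)  # inputs held after Hu moves
        x = predict(x, du)                         # propagate the placeholder model
        e = x - r
        cost += e @ Q @ e                          # tracking penalty
        if i < Hu:
            cost += du @ R @ du                    # move-suppression penalty
    return cost
```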
2.2. Crystallisation stage
Crystallisation processes within the pharmaceutical industry are usually designed to obtain crystals with controlled size, shape, purity and polymorphic form. The design task of selecting suitable solvents and anti-solvents is thus non-trivial. Inherent control by design offers greater flexibility in manipulating process conditions. Continuous crystallisation within a Kenics static mixer has recently been investigated and yielded promising results [Alvarez and Myerson, 2010]. Supersaturation is generated by adding the anti-solvent at multiple points, which reduces the solubility of the solute in the mixture and controls the size of the crystals. For practical purposes, if a good anti-solvent is available, the potential recovery will be very high. Consideration must then be given to identifying the optimal solvent/anti-solvent pair and composition. Karunanithi et al. [2006] undertook a molecular systems approach to maximising the potential recovery of the solute and indicate the pair in Table 1, which optimally gives a recovery of 69% ibuprofen. The physical properties of these optimal compounds can be related to known solvents on the market and thus experimentally validated, as shown for cooling crystallisation using ethyl glycol acetate [Karunanithi et al., 2007]. The plate-shaped crystal morphology would thus readily facilitate further downstream processing, as opposed to the current commercial route of using non-polar solvents like heptane or hexane, which yields crystal-like structures [Gordon and Amin, 1984].
Solvent (21%): [chemical structure]; Anti-solvent (89%): [chemical structure]
Table 1. Optimal solvent/anti-solvent pair [Karunanithi et al., 2006].
3. Material and population balance
A first-principles approach was used for each of the distributed systems, and subsequently discretised into length steps for each device, to represent the mass and population balances of the entire process. They are expressed as:

Material balance:
\frac{\partial C_i}{\partial t} = -v\,\frac{1}{L}\,\frac{\partial C_i}{\partial \tilde{z}} + D\,\frac{1}{L^2}\,\frac{\partial^2 C_i}{\partial \tilde{z}^2} + \sum_{j=1}^{NoReac} \nu_{i,j}\, r_j , \quad i = 1, \ldots, NoComp

where the reaction kinetics are r_j = A_j \exp\!\left(\frac{-Ea_j}{R\,T}\right) \prod_{i=1}^{NoComp} C_i^{\,a_{i,j}} , \quad j = 1, \ldots, NoReac

Population balance:
v\,\frac{\partial n}{\partial \tilde{z}} + G\,\frac{\partial n}{\partial L} = 0

where the crystal growth rate is G = k_g \left( C_{ibuprofen} - C_S \right)^{g}
The necessary assumptions governing the mass balance are the use of the Arrhenius law for the reaction kinetics and equi-molar stoichiometric inlet conditions. The mass balances for the washing and filtration stages incorporate mass transfer and adsorption coefficients. For the population balance, no radial or axial dispersion, a growth rate independent of crystal size, and no significant agglomeration or breakage are assumed. The 'reaction' stage yields a conversion of 90% (at a flow of 180 g·h-1), which is further improved to 96% in the 'mixing and reaction' stage. As the stream progresses, negligible product is lost through the various extraction and filtration stages. The crystallisation phase gives a potential recovery of 66% of the solute, ibuprofen, as crystals.
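A minimal sketch of how the material balance can be discretised over the dimensionless axial coordinate (method of lines) is given below; the grid size, transport parameters and kinetics are illustrative placeholders, not the values used in this work.

```python
import numpy as np

Nz, L, v, D = 100, 1.0, 0.01, 1e-5   # illustrative grid and transport parameters
dz = 1.0 / Nz                        # dimensionless axial step, z~ in [0, 1]

def rhs(C, rates):
    """C: (Nz, NoComp) concentrations; rates(C) -> (Nz, NoComp) net reaction terms
    (i.e. sum over j of nu_ij * r_j). Returns dC/dt at the interior nodes."""
    dCdt = np.zeros_like(C)
    # first-order upwind convection and central diffusion (interior nodes only)
    dCdt[1:-1] = (-v / L * (C[1:-1] - C[:-2]) / dz
                  + D / L**2 * (C[2:] - 2 * C[1:-1] + C[:-2]) / dz**2)
    return dCdt + rates(C)

def arrhenius(A, Ea, T, R=8.314):
    """Arrhenius pre-factor for the kinetic rate expression."""
    return A * np.exp(-Ea / (R * T))
```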
4. Conclusions
A conceptual continuous primary manufacturing stage indicates that ibuprofen can be successfully manufactured in a multi-scale environment. Introducing small-scale continuous technologies has a clear impact in reducing the number of operations, the size of the flowsheet, the solvent duties and the overall energy requirements. A noteworthy increase in conversion results from accurately monitoring the chemistry at the reaction phase, through early considerations in the process design, and is thereafter maintained through novel model-based control structures. Crystallisation has been shown to perform well using a non-conventional static mixer. Solvent selection has played an important role with regard to compatibility with downstream operations, as well as influencing the size and shape of the final product. Novel pharmaceutical production lines are thus ideal for exploring a paradigm shift and aligning the goals of the process systems engineering and process intensification communities.
References
U.S. Food and Drug Administration (FDA), 2004, Pharmaceutical cGMPs for the 21st Century – A Risk-Based Approach. (Web: http://www.fda.gov/cder/gmp/gmp2004/manufSciWp.pdf)
V. Hessel, H. Lowe, 2003, Microchemical Engineering: Components, Plant Concepts, User Acceptance – Part II, Chem. Eng. Technol., 26, 391-408.
A. Bogdan, S. Poe, D. Kubis, S. Broadwater, D. McQuade, 2009, The Continuous-Flow Synthesis of Ibuprofen, Angew. Chem. Int. Ed., 48, 8547-8550.
M. Patel, N. Shah, R. Ashe, 2011, Robust optimisation methodology for the process synthesis of small-scale continuous technologies, 21st European Symp. on Comp. Aided Process Eng.
J. Maciejowski, 2002, Predictive Control with Constraints, Pearson Education Ltd.
A. Alvarez, A. Myerson, 2010, Continuous Plug Flow Crystallization of Pharmaceutical Compounds, Crystal Growth and Design, 10, 2219-2228.
A. Karunanithi, L. Achenie, R. Gani, 2006, A computer-aided molecular design framework for crystallization solvent design, Chem. Eng. Sci., 61, 1247-1260.
A. Karunanithi, C. Acquah, L. Achenie, S. Sithambaram, S. Suib, R. Gani, 2007, An experimental verification of morphology of ibuprofen crystals from CAMD designed solvent, Chem. Eng. Sci., 62, 3276-3281.
R. Gordon, S. Amin, 1984, Crystallization of Ibuprofen, US Patent 4,476,248.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Optimization of Hybrid Reactive Distillation-Pervaporation System
Vinay Amte
Department of Chemical Engineering, Indian Institute of Technology, Bombay 400076, India
Abstract This work demonstrates the potential of a hybrid reactive distillation-pervaporation (RD-PV) system for enhancing chemical conversion and selectivity, reducing by-product formation and overcoming azeotropic limitations. The structural and parametric optimization of the hybrid RD-PV system is studied, and the superstructure problem is modeled and optimized using an MINLP formulation. The application is illustrated by the synthesis of tert-amyl ethyl ether (TAEE) from ethanol and tert-amyl alcohol (TAA). The proposed hybrid system effectively removes the water from the system, which increases the conversion of TAA and enhances the selectivity towards the TAEE product. The hybrid RD-PV system offers economically attractive alternatives due to its simple process structure and better performance. The results were validated by independent simulation.
Keywords: Reactive Distillation, Pervaporation, MINLP, Optimization.
1. Introduction
Reactive distillation (RD) is a classical example of a multi-functional reactor, due to its integrated functionality of reaction and separation. It facilitates reaction conversion by continuous removal of the byproduct and recycling of reactants in the reactive zone, which results in enhanced conversion of the limiting reactant in equilibrium-limited reactions. In numerous cases, non-ideal aqueous-organic mixtures are formed which tend to be azeotropic in nature. This can be overcome by using membrane separations such as pervaporation, vapor permeation, etc., since they are highly selective and not limited by vapor-liquid equilibria (VLE). Substantial work on commercial hybrid processes has already been reported in various reviews (Lipnizki et al., 1999 and references therein), but hybrid RD-PV systems have received little attention. The main motivation behind the application of the RD-PV system in this work is due to the following attributes: (i) overcoming the inhibitions of the chemical equilibrium, (ii) surpassing the thermodynamic limitations. Some of the notable efforts in the past on RD-PV systems are mentioned in Aiouache and Goto (2003) and Buchaly et al. (2006). Consequently, this work focuses on the structural and parametric optimization of the hybrid RD-PV system for different configurations with series, parallel and series-parallel arrangements of PV modules.
2. Superstructure of Hybrid RD-PV System
The superstructure of the hybrid RD-PV system can be represented by combining the superstructures of the RD column and the PV network, as shown in Fig. 1a. The stage model (see Fig. 1b) consists of four streams in addition to the usual liquid and vapor streams. The above superstructure makes it possible to purify the distillate from the RD column in the PV network.
Fig. 1. (a) Proposed superstructure for the hybrid RD-PV system; (b) generalized stage model for the superstructure.

The logic for the optimization of the RD column and the PV network is based on the methodologies of Gangadwala (2007) and Kookos (2003), respectively.
2.1. Mathematical model formulation for hybrid RD-PV system
The mathematical model for the hybrid RD-PV system aims at the structural and parametric optimization of the overall system, so as to minimize the total annual cost (TAC) while achieving the required separation. The model results in an MINLP problem that can be written as:

minimize  TAC = ROI \cdot TCI + OP
subject to
  model equations:            f(x) = 0
  purity constraints:         x_i \geq x_i^{min}, i = 1, 2, \ldots, NC, x_i \in X
  design variable bounds:     x_d^{min} \leq x_d \leq x_d^{max}, x_d \in X
  operating variable bounds:  x_o^{min} \leq x_o \leq x_o^{max}, x_o \in X
  logical constraints:        x_j - z_j x_j^{max} \leq 0, z_j \in \{0, 1\}, x_j \in X    (1)

where NC is the number of components, j indexes the RD column stages, ROI is the rate of return on investment, TCI is the total capital investment and OP is the operating cost. The non-linear mathematical model for RD comprises the MESH equations, purity constraints, and bounds on design and operating variables such as the number of stages, temperatures and component mole fractions. The PV model is based on the solution and diffusion of liquid-phase species on the feed side of the membrane, followed by vaporization on the permeate side of the membrane, where the sensible heat of the liquid feed is used for the phase change. The required retentate stream temperature is calculated using an energy balance around the PV unit, and the permeate temperature is its dew point evaluated at the composition and pressure of the permeate stream. The logical constraints, containing the binary variables z_j, decide on the existence of stages (feed stages, reactive stages, reflux and reboil stages) in the RD column. For the hybrid RD-PV system, TCI comprises the bare-module costs of the RD column, trays, catalyst and heat exchangers (reboiler and condenser), and the costs of the reflux drum, the membrane modules (bare module), the intermediate heat exchangers, and the pumps and compressors (Douglas, 1988). The operating cost involves the cost of cooling water, the cost of steam and the membrane replacement cost.
MINLP model is implemented in GAMS 22.8 using solvers CONOPT/SBB (Brooke et al, 2008) on a 3 GHz Pentium 4 PC with 1 GB of main memory.
3. Case Study: Synthesis of tert-amyl ethyl ether
The optimization of the hybrid RD-PV system is studied for the etherification of tert-amyl alcohol (TAA) with ethanol (EtOH) to form tert-amyl ethyl ether (TAEE). The synthesis of TAEE, a potential source of alkyl ether, is a promising reaction since both reactants are derived from natural resources (Aiouache and Goto, 2003). The main liquid-phase reaction is accompanied by two side reactions, the dehydration of TAA to isoamylene (IA) and the etherification of the IA formed to TAEE. The reactions are as follows:

TAA + EtOH ⇌ TAEE + H2O
TAA ⇌ IA + H2O                    (2)
IA + EtOH ⇌ TAEE

The kinetic rate expressions and other thermodynamic properties are taken from Aiouache and Goto (2003). For the base case study, the simulation of the RD column was performed using the RADFRAC module of the Aspen Plus process simulator, while the PV model was implemented in the equation-oriented simulation environment of Aspen Custom Modeler and exported to Aspen Plus. The VLE is described by the UNIQUAC model with the Soave-Redlich-Kwong equation of state. The specifications considered for the hybrid RD-PV system are given in Table 1.

Table 1. Feed conditions and specification of the hybrid RD-PV system

Design Parameter                                        Value
Feed
  Flow rate (kmol/hr) [EtOH:TAA = 1:1]                  10
  Feed temperature (K)                                  298
  Pressure (bar)                                        1.0
RD column (total condenser and partial reboiler)
  Maximum number of stages                              20
  Distillate rate (kmol/hr)                             0.652
  Column pressure (bar)                                 1.0
Pervaporation unit
  Maximum number of pervaporation modules               30
  Number of divisions in each module                    10
  Thermodynamic model                                   UNIQUAC
  Permeate pressure (mbar)                              20
  Membrane feed rate (kmol/hr)                          0.652
  Permeability coefficient (mol/s·m2)                   from Aiouache and Goto (2003)

3.1. Optimization results
The proposed hybrid RD-PV system with its model equations was coded in the GAMS modeling environment. The optimized results for the given system are shown in Table 2. The optimization is performed for different numbers of rows and columns of membrane modules. The design and operating parameters optimized by minimizing TAC in the structural and parametric optimization of the hybrid RD-PV system are the number of stages in the RD column, the feed and reactive stage locations, the number of pervaporation modules required, the membrane area, the permeate recycle location, etc. The optimal configuration for the hybrid RD-PV system is found to be a series-parallel arrangement of membrane modules where the distillate stream from the RD column is evenly
distributed across each parallel section of the membrane network. It is important to note that the use of just one large membrane unit is unwelcome in practice for several reasons: firstly, the large temperature drop of the retentate reduces the permeate flux drastically, and hence the retentate has to be reheated repeatedly, which requires a series arrangement of membranes. In addition, a single membrane cannot process a high feed rate with acceptable purification, so a parallel connection of membranes is also required. Therefore, a series-parallel membrane arrangement with the RD column meets our objective.

Table 2. Optimization results for the hybrid RD-PV system with series-parallel arrangement of membrane modules
Variable                                              Optimal Solution    Bounds
Number of stages                                      14                  [1, 20]
Diameter of column (cm)                               32.72               -
Reactive stage location and amount (kg per stage)     2-8 (0.75)          [2, 20]
Condenser duty (watts)                                -18.35              -
Reboiler duty (watts)                                 55                  -
Feed stage location                                   2                   [2, 20]
Permeate recycle location                             3                   [2, 20]
Reboil location                                       13                  [2, 20]
Reflux location                                       2                   [2, 20]
Row = 1
  Membrane area (m2)                                  0.025               -
  Number of modules                                   6                   [1, 50]
  Investment cost x 10^-4 ($)                         57.62               -
  Operating cost x 10^-4 ($/year)                     7.66                -
  TAC x 10^-4 ($/year)                                19.19               -
Row = 2
  Membrane area (m2)                                  0.025               -
  Number of modules                                   9                   [1, 50]
  Investment cost x 10^-4 ($)                         65.02               -
  Operating cost x 10^-4 ($/year)                     7.66                -
  TAC x 10^-4 ($/year)                                20.66               -
Row = 3
  Membrane area (m2)                                  0.025               -
  Number of modules                                   12                  [1, 50]
  Investment cost x 10^-4 ($)                         72.95               -
  Operating cost x 10^-4 ($/year)                     7.66                -
  TAC x 10^-4 ($/year)                                22.25               -
As shown in Table 2, the optimal location of the reactive stages is found to be in the top section. The permeate recycle stream, consisting mainly of EtOH, and the feed stream located at the top stage offer maximum selectivity towards TAEE and the highest conversion of TAA. This also suppresses the side reaction forming IA. In this work, the feed to the membrane network is intentionally the distillate stream from the RD column, due to convergence issues. If the stream is taken from stages below the reactive zone, poor selectivity towards TAEE is observed. The composition and temperature profiles for both the RD and the hybrid RD-PV system are shown in Fig. 2. The TAA conversion and the selectivity towards TAEE in the case of RD without the PV network are nearly 70% and 80.5%, respectively.
Fig. 2. Composition and temperature profiles for (a) the base-case RD without PV network and (b) the optimal hybrid RD-PV network system (shaded regions indicate the reactive zone).

With the introduction of the PV network, water is removed from the reactive stages (preferably the top section of the reactive zone); hence, the forward reaction is favored and the equilibrium is shifted to the product side. As shown in Fig. 2b, the hybrid RD-PV system promotes TAEE formation. The TAA conversion and TAEE selectivity in the hybrid RD-PV system are found to be 73.2% and 83.7%, respectively. Other configurations were also generated, with a PV network on each reactive stage of the RD column for effective water removal.
4. Conclusions
This study focused on the structural and parametric optimization of a hybrid RD-PV system for the synthesis of TAEE. The configuration of RD with a series-parallel network of PV units is found to be appropriate for the given system, effectively removing water from the RD column and surpassing the thermodynamic limitations. The hybrid RD-PV system offers economically attractive alternatives due to its simple process structure. The optimized results were verified by independent simulations.
References
A. Brooke, D. Kendrick, A. Meerhaus, R. Raman, 2008, GAMS Release 22.8 – A User's Guide, GAMS Development Corporation, Washington.
Aspen Plus User Manual, Aspen Plus version 2004.1, Aspen Technologies Inc., Cambridge, MA, 2004.
C. Buchaly, P. Kreis, A. Gorak, 2006, Experimental investigation of reactive distillation in combination with membrane separation, IChemE Symp. Ser., 152, 373-383.
F. Aiouache, S. Goto, 2003, Reactive distillation-pervaporation hybrid column for tert-amyl alcohol etherification with ethanol, Chem. Eng. Sci., 58, 2465-2477.
F. Lipnizki, R.W. Field, P.K. Ten, 1999, Pervaporation-based hybrid process: a review of process design, applications and economics, J. Membr. Sci., 153, 183-210.
I.K. Kookos, 2003, Optimal design of membrane/distillation column hybrid processes, Ind. Eng. Chem. Res., 42, 1731-1738.
J. Gangadwala, 2007, Optimal design of combined reaction distillation processes, PhD thesis, Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg.
J.M. Douglas, 1988, Conceptual Design of Chemical Processes, McGraw-Hill, New York.
J. Viswanathan, I. Grossmann, 1993, An alternative MINLP model for finding the number of trays required for a specified separation objective, Comput. Chem. Eng., 17, 949-955.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Dynamic Modeling and Optimization of Flash Separators for Highly-Viscous Polymerization Processes
Prokopis Pladis,b Vassileios Kanellopoulos,b Apostolos Baltsasb and Costas Kiparissidesa,b
a Department of Chemical Engineering, AUTH, Thessaloniki, Greece
b Centre for Research and Technology Hellas, Thessaloniki, Greece
Abstract In the present study, a multi-phase, multi-zone mathematical model is developed to describe the dynamic operation of industrial high-pressure separators (HPSs) for highly-viscous polymer systems. The proposed multi-phase, multi-zone description of the high-pressure separator takes into account the complex gas carry-under and liquid-droplet carry-over phenomena. Moreover, the model takes into account the mass transfer rate from the liquid droplets to the gas phase as well as the bubble formation in the liquid zone. Extensive numerical simulations are carried out to determine the effect of the operating conditions (i.e., temperature, pressure, feed composition, mass flowrate, etc.) on the dynamic performance and the separation efficiency of the HPS for highly-viscous fluids. It is shown that the proposed model is capable of simulating the dynamic operation of industrial-scale HPSs over a wide range of operating conditions (i.e., pressures of 200-260 bar and temperatures of 220-260 °C) and copolymers of different composition and viscoelastic properties (i.e., melt index in the range of 2-50 g/10 min). Finally, it is shown that industrial HPSs do not operate near thermodynamic equilibrium conditions; therefore, their non-ideal behaviour should be taken into account when simulating their dynamic operation. Subsequently, model-based optimization and control studies are carried out to optimize the dynamic operation and performance of an industrial HPS.
Keywords: Dynamic modeling, high pressure separator, multi-phase multi-zone model, LDPE.
1. Introduction The high pressure ethylene polymerization is an industrial process of significant economic importance. The annual production of low-density polyethylene (LDPE), that includes homopolymer and copolymer grades with densities ranging from 0.91 to 0.93 g/cm3, is estimated to be over 19 Mt/yr, a figure that represents almost 25 % of all manufactured polyethylene. The large number of applications of low-density polyethylene and its copolymers (e.g., in packaging, adhesives, coatings, films, etc.) is the result of the wide range of molecular and end-use properties of the various homopolymer and copolymer grades. LDPE and its copolymers are commercially produced in high-pressure reactors (i.e., autoclaves and tubular reactors) for more than four decades. The unique end-properties of LDPE and its copolymers ensure that the demand for these specialty grades will remain strong in the near future. A schematic diagram of a high pressure autoclave process is shown in Figure 1. Fresh ethylene, after its primary compression, is mixed with the recycled ethylene coming from
724
33ODGLVHWDO
the HPS and, optionally, with a comonomer stream (e.g., vinyl acetate, butyl acrylate, etc.). The mixture is then compressed to the desired reactor pressure by the hyper-compressor. The freeradical copolymerization of ethylene with a comonomer is initiated via the thermal decomposition of chemical initiators (e.g., organic peroxides) which are fed into the reactor at different side reactor locations. The polymerization Figure 1. Schematic representation of a high temperature in an autoclave is pressure autoclave LDPE process. controlled by manipulating the initiator(s) addition rates into the different reaction zones. The polymer molecular weight is controlled by adjusting the reaction temperature and pressure and, optionally, by adding a chain transfer agent. It should be pointed out that ethylene plays a dual role both as a reactant and solvent for its polymer. Due to the short mean residence time of the reaction mixture in the reactor (i.e., 30-90 s), the monomer conversion in autoclaves is relatively low (i.e., 10-25 %wt). The separation of unreacted dissolved gases from the polymer melts is carried out in two successive separation stages. Initially, via the operation of the letdown valve at the autoclave outlet, the reaction pressure decreases from its operating value of 1700 bar to 150-300 bar. Because of this expansion process, a temperature increase of the outlet reaction stream (i.e., reverse Joule-Thompson effect) occurs (Orbey et al., 1998, Buchelli et al., 2004).The product stream is then directed to the high pressure separator (HPS). The monomer/comonomer/polymer mixture entering the HPS is split into a polymer-rich liquid phase (containing 70-80% per weight polymer) and a monomer-rich gas phase. The polymer-rich liquid phase from the bottom of the HPS is directed to the low-pressure separator. In the second flash separator, the pressure is further reduced to about 1.5 atm. The gases and waxes stream leaving the low-pressure separator (i.e., off gas) is fed to the primary compressor while the liquid bottom stream, is sent to the extruder where the polymer is pelletized.
2. Development of the HPS Dynamic Model &DOFXODWLRQRI7KHUPRG\QDPLF3URSHUWLHVDQG(TXLOLEULXP&RQFHQWUDWLRQV One of the most important aspects in the development of a dynamic model for the HPS, is the accurate description of the thermodynamic equilibrium concentrations of the various species in the multi-phase system. Therefore, the use of a suitable and robust thermodynamic model (i.e., EOS) to predict the phase equilibrium of the multi-component system of interest (i.e., EVAethylene-vinyl acetate-solvent) is of paramount importance. Since it is very difficult from an experimental point of view to measure experimentally the solubilities of all sorbed species in different copolymers, various thermodynamic models, in the form of an EOS, are often employed to predict the phase behavior of a multi-component system. In this work, the Sanchez – Lacombe EOS (S-L EOS) was employed to study the phase behavior of the (EVAethylene-vinyl acetate) multi-component system under different operating conditions (i.e., temperatures, pressures, polymer concentrations and EVA copolymer composition).
'\QDPLF0RGHOLQJDQG2SWLPL]DWLRQRI)ODVK6HSDUDWRUVLQ+LJKO\9LVFRXV 3RO\PHUL]DWLRQ3URFHVVHV
725
'HVFULSWLRQRIWKH'\QDPLF0RGHO In the present study, a multi-phase, multi-zone mathematical model was developed to describe the dynamic operation of an industrial high-pressure separator for highlyviscous ternary polymer solutions. According to the proposed multi-zone, multi-phase model, the separator is divided into two zones, namely, the gas-rich (i.e., G) and the liquid-rich (i.e., L) zone (see Figure 1). The gas-rich P1 T zone is assumed P T to comprise two x sub-zones, m Gas-liquid x namely, (i) the Phase sub-zone, GL x l G G consisting of m P T liquid droplets m m x m (i.e., formed via T Q T Q the breakage of Ll Lg the liquid feed T P stream fed into x the separator) m and a gas phase, Figure 1. Schematic representation of the proposed multi-phase, (ii) the GG submulti-zone mathematical model. zone, consisting of a gaseous mixture with entrained liquid. Accordingly, the liquid-rich zone is assumed to comprise two sub-zones, namely, (i) the LL sub-zone containing a mixture of liquid and dissolved gases and (ii) the LG sub-zone, comprising gaseous bubbles. 1
2 x SEP f ,i
2
m SEP F
SEP G ,i
mG
SEP G
g
Gg i
K iGG
SEP L ,i
g
L
m Gent
SEP L
3
3
out
in
x Gi
in
l
mG
Lg
Lg i
l
out
in
out
K iLL
' 3
3
Ll i
Ll
Sub-zones name GL GG LL LG
Description Mixture of highly-viscous droplets and evaporated gases Gaseous mixture and liquid droplets entrained by the outlet gas stream Mixture of liquid and dissolved gases Gaseous bubbles (bubbles formed in the liquid-rich zone)
Table 1. Description of the various HPS sub-zones Note that the proposed multi-phase, multi-zone description of the high-pressure separator takes into account the complex gas carry-under and liquid droplets carry-over phenomena. Moreover, the model takes into account the mass transfer rate from the liquid droplets to the gas phase in the GL sub-zone as well as the bubble formation in the liquid zone. Based on the postulated modeling assumptions, the following dynamic mass and energy balances can be derived for the various molecular species (e.g., monomer, comonomer, solvent, polymer, etc.) to calculate the time variation of the total mass and molecular species concentrations in each sub-zone of the HPS: GL sub-zone dM iG (1) m s x s M G k G x G x G m G x G m G x G L
L
l
dt
l ,i
L
m ,i
GG sub-zone dM iG dt
G L
L
L
i ,eq
L
L
L
e
L
i
L
L
i
L
G
G
G
msg x sg ,i M G k Gm ,i x iG x iG, eq m Ge x iG m L x iL m G x iG
LL sub-zone L
dM iL dt
i
L
L
L
L
L
L
L
L
L
mG x iG - M L k Lm ,i x iL x iL,eq m L x iL MWi riL V L
G
(2)
(3)
726
LG sub-zone
G
dM iL dt
L
L
L
G
33ODGLVHWDO
(4)
G
M L k mL ,i x iL x iL, eq m L x iL
Overall mass balances for the HPS dM s dM si ; msf m L mG msf x si m L x iL m G x iG L
G
L
dt
L
G
(5)
G
dt
Overall energy balance for the G zone x s
dH G dt
x s
G x L
L x G
G x G
Hl Hg H H H
x G
x
(6)
Qst ' H evap
Overall energy balance for the L zone L x G
dH L dt
H
L x L
G x L
x
x L
x
(7)
H H Q st ' H rxn ' H evap
The dynamic model also includes all the necessary equations regarding the operation of the different feedback controllers as well as the calculation of thermodynamic and transport properties of the various phases. 5HVXOWVDQG'LVFXVVLRQ The multi-phase, multi-zone dynamic model described above was employed to assess the effects of various operating conditions (i.e., pressure, temperature, mass flow rate and composition of the inlet stream into the separator, MFI of the copolymer, copolymer composition, etc.) on the dynamic operation and separation efficiency of a high pressure separator for an EVA-ethylene-vinyl acetate mixture. 80
Polymer Mass Fraction
1.0
% Liquid Level
60
40
20
0
0.8
0.6
0.4
(a) 0
2
4
6
8
10
Time, min
Figure 3a. Dynamic evolution of the liquid level in a HPS.
0.2
EVA in the Liquid Phase (Simulation) EVA in the Liquid Phase (Equilibrium)
(b) 0
2
4
6
8
10
Time, min
Figure 3b. Effect of set-point changes of the LL controller on the polymer mass fraction of the liquid stream.
In Figure 3a, the dynamic evolution of the liquid level (LL) in the HPS is depicted under real industrial operating conditions. The set-point of the LL controller in the HPS changed from a value of 50%, at time t = 1 min, to a value of 40%, at time t = 3min, and, finally to a value of 60%, at time t = 6 min. The effect of set-point changes of the LL controller on the polymer mass fraction in the liquid polymer-rich phase is shown in Figure 3b. It can be seen that as the liquid level in the HPS increases the polymer mass fraction in the liquid polymer-rich phase increases, resulting in an increase of the separation efficiency. This means that the removal of the dissolved gases (i.e., ethylene and vinyl acetate) from the highly-viscous solution increases due to the higher residence time of the liquid polymer-rich phase in the HPS.
'\QDPLF0RGHOLQJDQG2SWLPL]DWLRQRI)ODVK6HSDUDWRUVLQ+LJKO\9LVFRXV 3RO\PHUL]DWLRQ3URFHVVHV 0.6
1.0
Polymer Mass Fraction
Polymer Mass Fraction
EVA in the Liquid Phase (Simulation)
0.5
% VA = 8 % VA = 24 % VA = 33
0.4
0.3
0
2
4
6
Time, min
8
727
10
0.8
Feed Flowrate = 10,150 kg/hr (Simulation) Feed Flowrate = 14,500 kg/hr (Simulation) Feed Flowrate = 18,850 kg/hr (Simulation) (Equilibrium)
0.6
0.4
0.2
0
1
2
3
4
Time, min
Equilibrium Polymer Fraction
Figure 4. Effect of the copolymer Figure 5. Effect of the inlet mass flow composition of EVA on the polymer mass rate on the polymer mass fraction in the fraction in the liquid phase. liquid phase. Figure 4 shows the effect of the copolymer composition of EVA on the polymer mass fraction in the liquid polymer-rich phase of the HPS. It is apparent that the separation efficiency decreases as the VA content in EVA increases. This can be attributed to the fact that for high values of the VA, 1.0 monomer(s) mass transfer rate(s) from the liquid polymer-rich phase to the gas Separation Efficiency 0.8 MFI = 3, 5, 7, 25 phase decreases. Moreover, the effect of the feed mass flow rate on the polymer 0.6 mass fraction of the liquid polymer-rich phase in the HPS is illustrated in Figure 0.4 5. It should be pointed out that as the inlet mass flow rate increases the 0.2 separation efficiency decreases due to the 0.0 lower residence time of the polymer-rich 0.0 0.2 0.4 0.6 0.8 1.0 phase in the HPS. Finally, Figure 6 Real Polymer Fraction depicts the effect of the copolymer melt Figure 6. Effect of MFI on separation flow index on the separation efficiency of efficiency of HPS. the HPS. It is apparent that the separation efficiency increases (i.e., it shifts closer to the thermodynamic value) as the MFI increases (i.e., the MW of the EVA decreases). This can be explained by the fact that the mass transfer rate of the unreacted monomers (i.e., ethylene, vinyl acetate) increases as the MFI increases due to the decrease in the melt viscosity of EVA.
References P. Pladis, V. Kanellopoulos, A. Baltsas and C. Kiparissides, 2011, Development of a Multi-phase, Multi-compartment Dynamic Model for High-Pressure Separators in LDPE Plants, Submitted to Ind. Eng. Chem. Res. H. Orbey, C. P. Bokis and C.-C. Chen, 1998, Equation of State Modeling of Phase Equilibrium in the Low-Density Polyethylene Process: The S-L, SAFT, and Polymer-Soave-Redlich-Kwong Equations of State, Ind. Eng. Chem. Res., 37, 11, 4481-4491. C. Kiparissides, A. Baltsas, S. Papadopoulos, J.P. Congalidis, J.R. Richards, M.B. Kelly and Y. Ye, 2005, Mathematical Modeling of Free-Radical Ethylene Copolymerization in HighPressure Tubular Reactors, Ind. Eng. Chem. Res., 44, 8, 2592-2605. A. Buchelli, M.L. Call, A.L. Brown, C.P. Bokis, S. Ramanathan and J. Franjione, 2004, Nonequilibrium Behavior in Ethylene/Polyethylene Flash Separators, Ind. Eng. Chem. Res., 43, 7, 1768-1778.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) c 2011 Elsevier B.V. All rights reserved.
Role of MPC in Building Climate Control Samuel Prívara1 , Zdenˇek Váˇna1 , Jiˇrí Cigler1 , Frauke Oldewurtel2 and Josef Komárek3∗ 1 2 3
Department of Control Engineering, CTU in Prague, 121 35 Prague,Czech Republic Automatic Control Laboratory, ETH Zrich, 8006, Switzerland Technofiber s.r.o, Lazaretn 7, 615 00 Brno, Czech Republic
Abstract Low energy buildings have attracted a lot of attention in past decades. Recent research is dedicated mainly to optimization of building construction and alternative energy sources. We provide a different approach to the energy-consumption and energy-cost optimization. A generic concept of minimizing energy consumption using current energy sources making use of advanced control techniques is presented. Model Predictive Controller (MPC) presented in this article makes use of both weather forecast and thermal model of a building to control inside temperature. This, by sharp contrast to conventional control strategies such as heating-curve (HC) or rule-based controllers (RBC), enables utilization of thermal capacity of the building. The inside temperature can be maintained at desired levels independent of the outside weather conditions using modified formulation of MPC. Keywords: model predictive control, costs effectiveness, energy time-management
1
Introduction
Buildings account approximately for 39% of total energy usage Perez-Lombard et al. (2008). Although the energy efficiency of systems and components for heating, ventilating, and air conditioning (HVAC) has improved considerably, there is still potential for substantial improvements. This article provides a comparison of traditional nonpredictive techniques and deals with an advanced predictive control technique applied on real building. Widely used control strategy of water heating systems is the weathercompensated control (WCC). This often leads to poor energy management or reduced thermal comfort even if properly set up, because it utilizes current outside temperatures only. Weather conditions, however, can change dramatically in few hours; and due to the heat accumulation in large buildings, it can lead to underheating or overheating of the building easily. The Model Predictive Controller (see Richalet et al. (1976); Kwon et al. (1983); Rawlings and Muske (1993); Zheng and Morari (1994); Campo and Morari (1987)) (MPC) presented in this article introduces a different approach to the heating system control design. As the outside temperature is one of the most influential quantities for the building heating system, weather forecast is employed in the predictive controller. It enables to predict inside temperature trends according to the selected control strategy. The aims of the control can be expressed in natural form as thermal comfort and economic trade off. The paper is organized as follows. Next section briefly discusses currently used strategies, section 3 introduces MPC approach and section 4 provides the case study setup as well as results. The last section concludes the paper.
2
Current Heating Strategies
Let us briefly compare the current major control techniques: on-off room temperature control, weather-compensated control, and PID (Underwood (1999)) with the proposed MPC. The on-off temperature control is the simplest type of control; the heating devices in a room are switched on and off (device state S) according to some room temperature error threshold, often with hysteresis as S = fon−o f f (et ). This is a very simple feedback control, which does not contain any information about the dynamics. On the contrary, ∗1
[email protected]
Role of MPC in Building Climate Control
729
the WCC is a feedforward control, which also does not contain any information about the building dynamics. The temperature of the heating medium tw is set according to the outside temperature tout by means of predetermined HCs fw−c . Its advantage is robustness and simple tuning. PID control is one of the most favorite strategies of control engineers (Ang et al. (2005); Li et al. (2006)). The heating water temperature tw is then determined as tw = fPID (et , his), where his is a ”history". PID controllers are robust and allow accurate tuning, but they cannot reflect the outside temperature effects. This is the reason why PIDs in HVAC control are not as common as in other control applications. Even though all the above controllers are easy to tune for single-input, single-output (SISO) systems, their tuning for multi-variable (MIMO) systems becomes very difficult or even impossible. The PID control can be applied to MIMO systems only in case of specially structured systems. We would therefore appreciate control strategy, which has a feedback, use as much information as possible (tout , the weather forecast t pred , and others x) and include system dynamics as well. These requirements are satisfied by a MPC.
3
Model Predictive Control
MPC is a method for constrained optimal control (Underwood (1999)). In buildings one would aim at optimizing the energy use or costs subject to comfort constraints. Predictions of any disturbances (e.g. internal gains), time-dependencies of the control costs (e.g. dynamic electricity prices), or of the constraints (e.g., thermal comfort range) can be readily included in the optimization. The MPC strategy comprises two basic steps. Firstly, the future control signals are calculated by optimizing the objective function. Secondly, the first component of the control sequence u(k) is sent to the system, whilst the rest is disposed. At the next time instant, new control sequence is calculated (receding horizon). The standard formulation of criterion for MPC can be written as T −1 J = ∑ q(k) y(k)2 − yr (k)2 + r(k)u(k)2 , (1) k=0
where q(k) is a weight for difference of the system output y(k) and reference yr (k), whilst r(k) is a weight of the control signal u(k). By this, the area delimited by the system output below desired value is same as the area above it. This is depicted in Figure 4 by a red line. Such a behavior is not suitable for temperature control of a building. Resulting behavior of the output is delineated by a blue line. This problem can be solved as: a) The intuitive method is to use dynamic weights q(k) and r(k). The complexity of this procedure grows with more reference trajectory levels, but in case of 2 levels, it is the simplest solution. b) In the minimization of the standard criterion, the reference yr (k) can be substituted with ”artificial" reference w. This can be done using following convex combination (Camacho and Bordons (1999)): w(t + k) = αw(t + k − 1) + (1 − α)yr (t + k), (2) where w(t) = y(t), k = 1, . . . , T and α ∈ 0; 1 is a parameter, that determines the smoothness (and speed) of the approaching of the real output to the real reference. c) Reformulation of the part of 1, which refers to the desired value error. If y(k) < yr (k) then weight the square of this difference using q(k), otherwise the error is not weighted. This problem can be solved using concept of zone control (also called funnel MPC, Maciejowski (2002)), where the reference error is not weighted in a specified interval while the weighting out is made in a common way. 3.1 Extensions to MPC In this section we address particular problems in building climate control. As a first example we consider the uncertainty in weather predictions, then we introduce an incorporation of maintenance cost, dynamic electricity prices or contract optimization into the MPC problem. Yet another extension deals with an approximation of MPC for more cost-effective implementation. • Stochastic Model Predictive Control. The main challenge of the overall control problem lies in the uncertainty due to the use of weather predictions. This can be
Samuel Prívara et al.
730
formulated in two different ways. Either we do not require constraints to be satisfied at all times, but only with a predefined probability (so-called chance constraints), or we explicitly account for the uncertainty in the controller by formulating the future control inputs as functions of future past disturbances Oldewurtel et al. (2008, 2010a). • Maintenance cost, dynamic electricity prices and contract optimization. Besides the energy cost, other criteria could be used when formulating the MPC problem Maintenance costs. When the lifetime of the appliances is considered, the maintenance cost lkm (xk , uk ) can be added into the cost function. Three general cases can be distinguished: a) Linear dependence on usage, where cost is the sum of energy cost and maintenance cost multiplied by the usage of the appliance, i.e. ml lkm (xk , uk ) = cl uml k , where cl is maintenance cost and uk is linearly dependent on usage. b) Life-time dependence which leads to a Mixed-Integer-Problem. Let us demo mo note umo k an integer input variable expressing if a plant is on uk = 1 or off uk = 0. m mo Life-time dependent cost can be expressed as lk (xk , uk ) = co uk , where is co is life time cost. c) Switching dependence, where the maintenance cost depends on the mo number of on-off switches, i.e. lkm (xk , uk ) = cs |umo k − uk−1 |, where cs is a switching cost. Dynamic electricity prices and reduction of peak electricity demand The minimization of total electricity consumption seems can lead to the undesirable peaks in electricity demand. The building envelope constitutes a thermal storage which poses possibility to shift electricity demand from high price to low price times or from high loading to low loading times, respectively. This is readily possible, when the electricity prices of the day-ahead spot market price are used in the MPC problem Oldewurtel et al. (2010b). • Approximations and explicit MPC. The real implementation involves having appropriate hard-and software at the building site for solving the optimization problems at each time step. In order to keep investment costs low, one could think of a prior store of the control law„ i.e. in real operation the controller only needs to evaluate the current state and weather and internal gain prediction and from this decide which control input to apply. This control law could be determined by using a model of the real building and controlling it in simulation with an MPC controller and storing the computed control inputs for different weather and occupancy regimes. In a second step, learning techniques can be used in order to distinguish different regimes of operation that result in different control laws. Depending on how many regimes you allow, the solution will be an approximation to the original online-optimization and thus degrade in performance.
4
Case Study
The presented MPC scheme of Problem 1 was applied to the building of the Czech Technical University (CTU) in Prague (see Figure 4), which is composed of several blocks with the same construction and way of use. The heating system scheme of one building block is depicted in Figure 4. Detailed description of the heating system and modeling can be found in Široký et al. (2010); Cigler and Prívara (2010). Description of the controller. There are several requirements to be fulfilled: Reference tracking. The reference trajectory yr,k (room temperature) is known as a schedule. The major advantage of MPC is the ability of computing the outputs and corresponding input signals in advance, that is, it is possible to avoid sudden changes in the control signal and undesired effects of delays in the system response. The schedule defines two levels of the room temperature: 22◦C during the day and 19◦C at night and over the weekends. The reference tracking should be perfect for the upper level from its beginning to ending edge, while the lower reference level should represent a temperature below which the temperature should not fall. Thus, we proposed an alternative MPC problem formulation - the displacement is penalized only below the reference trajectory, see Figure 4. 2-norm for
Role of MPC in Building Climate Control
731
(a) CTU building structure. Ref
Classical
(b) Block B2 Supply - scheme of heating. water temperature
Zone control
No penalization
50
MPC WCC
45 Tsw[°C]
High penalization
40 35 30 25 22.1.09
23.1.09
24.1.09
(c) Comparison between classical and zone predictive strategy.
25.1.09
26.1.09
time [days]
t
(d) HC and MPC energy requirements profile.
accurate performance is used. Minimization of energy consumption. As the return water ϑrw circulates in the heating system (Figure 4), the energy consumed by the heating-up of the building is linearly dependent on the positive difference between ϑsw and ϑrw entering/exiting the three port valve. Thus, the 1-norm of weighted inputs is to be minimized. MPC problem formulation. The standard state space system is partitioned as follows: xk+1 = Axk + Buk , yk = Cxk + Duk , zk = V xk +Wuk , where yk stands for outputs with reference signal (e.g. ϑin,k ), zk are input-output differences, i.e. zk = ϑsw,k − ϑrw,k . The weighting of the particular variables is carried out by adding the slack variables ak and bk . The resulting optimization problem can be written as: N−1 J = min ∑ aTk Qak + Rbk , ak ,bk ,uk k=0
k−1
k−1
i=0
i=0
yk = CAk−1 x0 + ∑ CAk−i−1 Bui + Duk , zk = VAk−1 x0 + ∑ VAk−i−1 Bui +Wuk ,
(3)
yr,k − yk − ak ≤ 0, ak ≥ 0, zk − bk ≤ 0, bk ≥ 0, umin ≤ uk ≤ umax , |uk − uk−1 | ≤ Δumax . Q and R stand for the weighting matrices, umin and umax represent the bounds and Δumax is the maximum rate of change. Setup and results. Several comparisons of the real building experiments have been performed. The first comparison (cross comparison) using B1 and B2 blocks had two phases, each lasted for a week. In the first week, B1 was controlled by the HC and block B2 by MPC. The other week, the control strategies were switched. The second comparison uses heating degree days (HDD) for the normalization of the building energy consumption Tend yr,k − ϑo,k , where Tbegin , Tend denote the beginning which is defined as HDD = ∑k=T begin and the end of the measured period, respectively. To minimize the negative effect of different weather conditions, time periods with similar average temperature were used. Due to constant heating water flow, the energy consumption measure (denoted as ECM ) is: Tend
ECM =
∑
(ϑsws,k − ϑrws,k ) + (ϑswn,k − ϑrwn,k ).
(4)
k=Tbegin
The Crittall heating system utilizes the building mass as a thermal storage, which enabled MPC to preheat the concrete mainly at night. The beneficial side effect of MPC strategy
Samuel Prívara et al.
732
Table 1. Comparison of HC and MPC strategies using similar building blocks B1 and B2 . mean B1 B1 mean B2 B2 mean MPC ϑo [◦C] control ϑs , ϑn [◦C] control ϑs , ϑn [◦C] savings 1st week −3.4 HC 21.4 MPC 21.1 15.54% 2nd week −1.3 MPC 21.4 HC 20.9 16.94%
was a significant energy peak reduction as can be seen in Figure 4. The cross comparison results are summarized in Table 1 with MPC savings approximately 16% of energy in both weeks. The results from HDD based comparison are in Table 2. The relative savings were more significant at insulated building blocks B1 and B2 . Table 2. Heating degree days based comparison. The ratio ECM /HDD expresses normalized energy demands for heating. mean mean days relative block - control ECM /HDD ϑo [◦C] ϑs , ϑn [◦C] compared MPC savings B1 - HC 0.906 3.8 21.6 84 28.74 % B1 - MPC 0.645 3.2 21.8 49 B2 - HC 0.813 4.0 21.7 85 26.83 % B2 - MPC 0.595 3.0 21.7 49
5
Conclusion and Acknowledgements
MPC was operational Jan-March 2010 and has proven significant savings. This work has been supported in the scope of grant No. FR-TI1/517, "Control systems for energy consumption optimization in low-energy and passive houses".
References Ang, K., Chong, G., Li, Y., JUL 2005. PID control system analysis, design, and technology. IEEE Transactions on Control Systems Technology 13 (4), 559–576. Camacho, E. F., Bordons, C., 1999. Model Predictive Control. Springer, London. Campo, P. J., Morari, M., 1987. Robust model predictive control. American Control Conference. Cigler, J., Prívara, S., 2010. Subspace identification and model predictive control for buildings. In: Proceedings of 11th International Conference on Control, Automation, Robotics and Vision. Kwon, W. H., Bruckstein, A. M., Kailath, T., 1983. Stabilizing state feedback design via the moving horizon method. International Journal of Control 37, 631–643. Li, Y., Ang, K., Chong, C., FEB 2006. PID control system analysis and design - Problems, remedies, and future directions. IEEE Control Systems Magazine 26 (1), 32–41. Maciejowski, J. M., 2002. Predictive control with constraints. Prentice Hall, Essex, England. Oldewurtel, F., Jones, C., Morari, M., 2008. A Tractable Approximation of Chance Constrained Stochastic MPC based on Affine Disturbance Feedback. In: Conference on Decision and Control, CDC. Oldewurtel, F., Parisio, A., Jones, C., Morari, M., Gyalistras, D., Gwerder, M., Stauch, V., Lehmann, B., Wirth, K., 2010a. Energy Efficient Building Climate Control using Stochastic Model Predictive Control and Weather Predictions. In: American Control Conference. Oldewurtel, F., Ulbig, A., Parisio, A., Andersson, G., Morari, M., 2010b. Reducing Peak Electricity Demand in Building Climate Control using Real-Time Pricing and Model Predictive Control. In: Conference on Decision and Control, CDC. Perez-Lombard, L., Ortiz, J., Pout, C., 2008. A review on buildings energy consumption information. Energy and buildings 40 (3), 394–398. Rawlings, J., Muske, K., 1993. Stability of constrained receding horizon. IEEE Transaction on Automatic Control 38. Richalet, J., Rault, A., Testud, J. L., Papon, J., 1976. Algoritmic control of industrial process. In: Proceedings: Symposium on Identification and System Parameter Estimation. IFAC, Tbilisi. Underwood, C. P., 1999. HVAC Control Systems: Modelling, Analysis and Design. E & FN Spon, London, Great Britain. Široký, J., Prívara, S., Ferkl, L., 2010. Model predictive control of building heating system. In: Proceedings of 10th Rehva World Congress, Clima 2010. Zheng, A., Morari, M., 1994. Stability of model predictive control with soft constraints. Internal Report.Californian Institute of Technology.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) c 2011 Elsevier B.V. All rights reserved.
Efficient Computation of First- and Second-Order Sensitivities Using an Internal Forward Differentiation Scheme T. Barz, L. Zhu, G. Wozny, H. Arellano-Garcia∗ Chair of Process Dynamics and Operation, Sekr. KWT-9, Berlin Institute of Technology, D-10623 Berlin, Germany
Abstract In this work, an approach for the forward integration of first and second order sensitivities for dynamic simulation and optimization problems is presented. The implementation is done following the idea of internal numeric differentiation. The integration of the dynamic system is done by an implicit Runge-Kutta discretization of the state equations, namely Orthogonal Collocation on Finite Elements. Based on the discrete solution of each integration step, sensitivities are generated using the chain rule and the implicit function theorem. In doing so, parts of the information produced in the integration step can be reused, and thus, the cost for the generation of sensitivities is minimized. The presented approach is applied to the solution of a dynamic moel-based optimal experimental design problem for the parameter determination in a gas desulfurizing process. Keywords: sensitivity generation, dynamic optimization, differential algebraic equation systems
1. Introduction Single and multiple shooting techniques used in the sequential optimization approach (also known as partial discretization techniques, see Biegler and Grossmann (2004)) are general accepted methods for the efficient solution of dynamic optimization problems. In contrast to the simultaneous optimization approach (also known as full discretization technique), where all optimization problem functions exist in an explicit form, in the partial discretization technique, a large amount of computation time is spent on the gradient generation of the objective and constraint functions. Classical solution approaches are numeric calculation by finite differences (Støren and Hertzberg (1999)) and numerical integration of the augmented system (Vassiliadis et al. (1999)), which then contains differential equations for the model and corresponding sensitivities. In the former method, a repeated integration of high accuracy is needed for the evaluation of partial derivatives. The latter method suffers from the high dimension of the resulting differential equation system. A different approach represents the forward integration of the state equations and the backward integration of the so called dual equations (Cao et al. (2002)). Finally, the gradients can be computed based on the discretized problem (also known as internal numerical differentiation, see Bock (1981)). This last idea is the most promising in terms of efficiency since it leads to a drastic reduction of computation time. By this means ∗ [email protected]
734
T. Barz et al.
that information from the simulation step is re-used for the generation of sensitivities. Moreover, recent advances have shown that it can be implemented as forward and reverse mode of automatic differentiation algorithms, which is similar to the use of dual equations (Albersmeyer and Bock (2009); Hannemann et al. (2010)). In this work, an algorithm for the computation of first and second order sensitivities is derived, which can be applied to a general differential algebraic (DAE) system of the general fully implicit type: 0 = g (x(t), ˙ x(t), u(t), p,t) ,
x(t0 ) = x0
(1)
where x ∈ RNx denotes the dependent states, u ∈ RNu the independent controls or decisions, p ∈ RN p the independent parameters and t the time. In doing so, the idea of internal numeric differentiation is pursued, and an algorithm is derived which generates first and second order sensitivities using a forward integration method (Barz et al. (2010)). For the solution of the simulation problem, orthogonal collocation on finite elements is used which corresponds to an implicit Runge-Kutta discretization of the model equations. Based on the step-wise solution of the discrete equation system, sensitivities are then generated by formulae that result from the chain rule and the implicit function theorem.
2. Generation of first and second order sensitivities For the sake of briefness, the discussion is restricted to the generation of sensitivities w.r.t. p, and not u(t). All adaptive elements from the integration are frozen by using the internal numeric differentiation method after each successful step. Based on the available frozen information, the sensitivities are then generated. For the recursive implementation of one step-integration methods, the approximate solution x+ is obtained by a computation Ψ based on the known initial value x− (solution of the previous step) and the parameter values p in each step. To account for the step-wise nature of this procedure, the parameters p are formally decomposed into piece-wise constant contributions p+ , p− , p= , · · · . In doing so, formally independent parameter are introduced, which have the same value. The repeated recursive computation for the last and current step reads: x+ = Ψ+ (x− , p+ ),
x− = Ψ− (x= , p− )
(2)
For the implicit one-step integration method, x+ in the current element is calculated by the solution of an implicit equation system 0Nx = g+ (x+ ). With Eq. (2), the dependencies of the solution x+ on the contributions p− and p+ read: 0Nx = g+ ( x− (p− ), x+ (x− (p− ), p+ ), p+ )
(3)
Adopting the rule for differential calculus of functions of several variables (here the independent contributions p− , p+ ), we get: n ∂ ∂ n d p− + d p+ x + (4) d x+ = ∂ p− ∂ p+ With d p = d p− = d p+ , the general expression for the sensitivities of the discrete states in the current element w.r.t. to both contributions of the decomposed parameter (in the last
Efficient Computation of First- and Second-Order Sensitivities Using an Internal Forward Differentiation Scheme and current element) p is obtained: n ∂ d n x+ ∂ Sn = = + x+ d pn ∂ p− ∂ p+
735
(5)
A derivation of the discretized equation system g+ w.r.t. the independent contributions p− , p+ and the consideration of Eq. (5) yields: ∂ g+ ∂ g+ ∂ g+ S + S + = 0Nx×N p ∂ x+ + ∂ x− − ∂ p+
(6)
= ∂ x /∂ p denotes the first order sensitivities of the discrete states w.r.t. all prior where S+ + = ∂ 2 x /∂ p2 , the same notations as contributions p. For the second order sensitivities S+ + in Vassiliadis et al. (1999) regarding second order matrix derivatives are used. Derivation of Eq. (6) w.r.t. the independent contributions p− , p+ and the consideration of Eq. (5) yields: ! 2 ! " " T ∂ g+ ∂ g+ ∂ 2 g+ ∂ 2 g+ ⊗ IN p · S+ + INx ⊗ S+ + ·S + · S+ · 2 x+ ∂ x+ ∂ p+ ∂ x+ ∂ x− − ∂ x+ ! ! " " T ∂ 2 g+ ∂ g+ ∂ 2 g+ ∂ 2 g+ · ⊗ IN p · S− + INx ⊗ S− + · S− + ·S + 2 ∂ x− ∂ x− ∂ p+ ∂ x− ∂ x+ + ∂ x− ! 2 " ∂ g+ ∂ 2 g+ ∂ 2 g+ + + · S + · S = 0Nx·N p×N p ∂ p+ ∂ x− − ∂ p+ ∂ x+ + ∂ p2+
(7) From Eqs. (6) and (7) it can be seen, that the first and second order sensitivities are obtained by the solution of a linear matrix equation system with the coefficient matrix J, which has already been used for the solution of the simulation problem (solution of the implicit equation system 0Nx = g+ (x+ )).
3. Application example The developed approach for sensitivity generation is applied to a dynamic control problem for the computation of optimally designed experiments for parameter determination. The process model is taken from a gas desulfurizing process (Claus process) with the following simplified reaction: 3 2H2 S + SO2 S2 + 2H2 O 2 3 i ∈ {A, B, C, D} 2A + B C + 2D 2 With the assumptions of ideal gas, equilibrium reaction, and using the stoichiometric coefficients: νA = −2, νB = −1, νC = 32 , νD = 2, the process model is given in Eq. (8). dnR = n˙ F − n˙ out + ∑ νi · r˙ dt i
(8)
736
T. Barz et al.
d(nR · xiR ) = n˙ F · xiF − n˙ out · xiR + νi · r˙ ; ∀ i = A, B,C, D dt d(nR · hR (T R , xiR )) = n˙ F · hF (T F , xiF ) − n˙ out · hR (T R , xiR ) dt ∑i νi p r G0 (T R ) νi R R R p ·V = n · R · T ; = −ln ∏ xi s RT R p i The functions used in Eq. (8) are taken from the NIST Chemistry Webbook and defined as follows: Δr G0 (T ) = Δr H 0 − T · Δr S0 = ∑ (νi (hi − T · si )) hi (T ) = Δ f H 0 +
T Ts
i
c pi (τ)dτ ;
si (T ) = S0 +
T c pi (τ) Ts
τ
c pi (T = T /1000) = ai + bi · T + ci · T 2 + di · T 3 +
(9)
dτ
ei T2
T2 T3 T4 e +c· +d · − + f −h 2 3 4 T 2 3 T T e +g +d · − S0 (T = T /1000) = a · ln(T ) + b · T + c · 2 3 2·T2 h(T, xi ) = ∑ xi · hi
Δ f H 0 (T = T /1000) = a · T + b ·
i
3.1. Optimum experiment design for parameter determination The general unconstrained optimum experiment design problem formulation reads: T
∂ y(p, u) ∂ y(p, u) min Φ(C p ); with C p ≥ F −1 = ·Q· (10) u ∂p ∂p s.t. umin ≤ u ≤ umax In Eq. (10), p = [Δ f HA0 , SA0 ]T are the parameter to be determined, u = [n˙ F , T F ]T are the control variables (solution of the optimization problem), and y = [T R , xAR ]T are the measured variables. Moreover, C p represents the covariance matrix of the parameters and Φ = 1/N p · trace(Cp ) the scalar A-criterion which represents the parameter accuracy. The obtained values u represent optimal experimental conditions, which provide a maximum information content for the determination of the parameter values p (Franceschini and Macchietto (2007)). Using sequential quadratic programming (SQP) solvers for the solution of Eq. (10), the provision of exact gradient information of the objective function is crucial for an efficient computation and reliable results. It can be seen, that the objective function already includes first order sensitivity information Sp = ∂ y/∂ p which is a subset of the sensitivities S computed by Eq. (6), since y ⊂ x. Accordingly, in order to provide first order gradients ∂ Φ/∂ u, second order sensitivities are needed Sp u = ∂ 2 y/∂ p∂ u. The solution of Eq. (10) using piecewise constant control profiles with 10 elements is shown in Fig. 1.
4. Conclusions An algorithm for the efficient generation of first and second order sensitivities and its application to the solution of an OED problem has been presented. The generation of sec-
490
140 T R [K ]
n˙ F [mol/min]
Efficient Computation of First- and Second-Order Sensitivities Using an Internal Forward Differentiation Scheme
100 60 20
410 370
xR 1 [mol/mol]
T F [K ]
measurement
450
330
600 500 400 300 0
737
5
10 time: t[min]
15
20
0.9 0.8 0.7 0.6 0.5 0.4 0
measurement
5
10 time: t[min]
15
20
Figure 1. Left: Optimized control trajectory; Right: Measured variables.
ond order sensitivities allows the exact computation of gradients of the objective function and thus, is crucial for an efficient solution and reliable results when using SQP methods. Quantitative results regarding the efficiency of computation are given in the presentation.
Acknowledgement The authors acknowledge the support from the Collaborative Research Centre SFB/TR 63 "InPROMPT- Integrated Chemical Processes in Liquid Multiphase Systems" coordinated by the Berlin Institute of Technology - Technische Universität Berlin and funded by the German Research Foundation.
References Albersmeyer, J., Bock, H. G., January 2009. Efficient sensitivity generation for large scale dynamic optimization. Tech. Rep. SPP1253-01-02, Interdisciplinary Center for Scientific Computing, Heidelberg, Germany, http://www.am.uni-erlangen.de/home/spp1253. Barz, T., Kuntsche, S., Arellano-Garcia, H., Wozny, G., 2010. An efficient sparse approach to sensitivity generation for large-scale dynamic optimization. Computers and Chemical Engineering, Article in Press, doi:10.1016/j.compchemeng.2010.10.008. Biegler, L. T., Grossmann, I. E., 2004. Retrospective on optimization. Computers & Chemical Engineering 28 (8), 1169–1192. Bock, H., 1981. Numerical treatment of inverse problems in chemical reaction kinetics. In: Ebert, K., Deuflhard, P., Jäger, W. (Eds.), Modelling of Chemical Reaction Systems. Vol. 18 of Springer Series in Chemical Physics. Springer, Heidelberg, pp. 102–125. Cao, Y., Li, S., Petzold, L., 2002. Adjoint sensitivity analysis for differential-algebraic equations: algorithms and software. J. of Computational and Applied Mathematics 149 (1), 171–191. Franceschini, G., Macchietto, S., 2007. Model-based design of experiments for parameter precision: State of the art. Chemical Engineering Science 63 (19), 4846–4872. Hannemann, R., Marquardt, W., Naumann, U., Gendler, B., 2010. Discrete first-and second-order adjoints and automatic differentiation for the sensitivity analysis of dynamic models. Procedia Computer Science 1 (1), 297–305. Støren, S., Hertzberg, T., 1999. Obtaining sensitivity information in dynamic optimization problems solved by the sequential approach. Computers & Chemical Engineering 23 (6), 807–819. Vassiliadis, V. S., Canto, E. B., Banga, J. R., 1999. Second-order sensitivities of general dynamic systems with application to optimal control problems. Chemical Engineering Science 54 (17), 3851–3860.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A novel approximation technique for online and multi-parametric model predictive control Romain S.C. Lambert, Pedro Rivotti, E.N. Pistikopoulos a
Centre for Process Systems Engineering, Imperial College, London, SW7 2AZ, UK
Abstract Multi-parametric model predictive control has been widely recognized in the control literature. The objective of explicit MPC is to solve the constrained optimal control problem and derive the control variables as explicit functions of the states. Explicit MPC is particularly relevant for systems in which classical real time MPC implementation is impractical; In effect, the computations to derive the optimal control moves are performed offline. A framework for the development of such multiparametric/explicit controllers has been presented in [1]. The framework emphasizes the need for model approximation as a key challenge for a wider use of multiparametric/explicit MPC. We propose an approach that uses an interpolation method employed in a receding horizon fashion as a transient system identification technique to derive linear explicit algebraic expressions of the dynamics of the system under the form of linear expressions in the state parameter and controls. A major advantage of the approach is the availability of an a priori global error bound for the model mismatch due to the approximation. Linear dependency on the state parameters and controls enables to recast nonlinear and non convex MPC problems, into mp-QP optimization problems. The approach is demonstrated on a nonlinear benchmark model example of a 30 stages distillation column. Keywords: Model-Predictive Control, Model Approximation, transient system identification, Multi-Parametric Control 1. Introduction The main difficulty in the implementation of online and explicit model predictive control is the capability of handling computationally expensive optimisation problems for large scale complex nonlinear dynamical systems. Two issues may occur, generally simultaneously: a.
b.
The control problem may be computationally prohibitive due to the high number of variables (This is the case if the order of the system i.e. number of states is high). In the case of online control this translates in increased computational time necessary to derive the solution. As for explicit control, this generally results in an explosion of the number of critical regions because of the curse of dimensionality. Besides the number of variables, the control optimisation problem is often nonlinear and non-convex. The only way to solve it at presents is either to use approximation techniques or resort to global optimisation which is very impractical.
These problems highlight a major trade-off between accuracy versus speed and optimality. A suitable adequate research direction is therefore the development of
A novel approximation technique for online and multi-parametric model predictive control
739
approximation techniques which, as well as reducing the order of a model, can offer the possibility to apply well established quadratic programming (QP) algorithms (mp-QP in the case of explicit/multiparametric model predictive control). In the next paragraph we present an approximation technique which can achieve this requirement. 2. Metamodel-based Control 2.1. Problem formulation Consider a continuous MISO dynamic system of the form: ݔሶ ൌ ݂ሺݔǡ ݑሻ ݕൌ ݃ሺݔǡ ݑሻ (1) Where ݔrepresents the vector of states א ݔԹ , א ݕԹ that of the output of the model and א ݑԹ of control inputs. ݂ and ݃ are square-integrable functions on the space of states and controls. ݂ǣ Թ ൈ Թ ՜ Թ and ݃ǣ Թ ൈ Թ ՜ Թ. The aim is to be able to formulate a convex control problem that enables the use of “state of the practice” i.e. linear online and explicit MPC. The reference tracking problem, where the output ݕis to be driven to the set-point ݕ௦ , over a time horizon ܰ, is formulated as follows: ேିଵ
்כ ்כ כ ் ܳݕ௧ା ߜݑ௧ା ܴߜݑ௧ା ൡ ൝ ்כ ݕܲ ݕ ݕ௧ା ߂ ݑ௧ାே ௧ାே st:
ୀ
כ ൌ ݕ௧ା െ ݕ௦ , ݇ ൌ ͳ ǥ ܰ ݕ௧ା ݕ௧ା ൌ ܣ ݔ௧ ܤ ܷ ܥ , ݇ ൌ ͳ ǥ ܰ ݑ௧ା ൌ ݑ௧ାିଵ ߜݑ௧ା , ݇ ൌ ͳ ǥ ܰ ݕ ࣳ א, ݇ ൌ Ͳ ǥ ܰ ݑ ࣯ א, ݇ ൌ Ͳ ǥ ܰ
(2) where ܴ א ܣଵൈ , Bܴ אଵൈ , ܥ are constants, οܷ ൌ ሾߜݑ௧ ǡ ǥ ǡ ߜݑ௧ାே ሿ் is the vector of input increments. ܷ ൌ ሾݑ௧ ǡ ǥ ǡ ݑ௧ାே ሿ் is the vector of control inputs. ܴ א ݕis the output vector, ܴ א ݑ is the state vector, ࣳ and ࣯ are subsets referring to the input and output constraints, respectively ܲ, ܳ, ܴ are positive semi-definite and positive definite matrices of appropriate dimensions, corresponding to the tuning parameters of the controller. The online MPC problem (2) may be rewritten as a multiparametric problem by considering the vector ߠ אൣݔ௧ ǡ ݑ௧ ǡ ݕ௦ ൧ as the parameter vector, as presented in [1]. The multiparametric problem can be solved to obtain the sequence of inputs ܷ as an explicit function of ߠ. 2.2. Receding-Horizon-Metamodelling Control If we note ܰ the prediction time horizon for the control problem for a specific sampling time ȟ ݐand ݑଵ ǡ ݑଶ ǥݑே the values of controls for each time interval it is possible to construct ܰ static mappings, i.e. algebraic expressions of the form:
Romain S.C. Lambert et al.
740
ͳۤ א ݆ǡ ܰۥǡ ݕ ൌ ݂ ሺݔ௧ ǡ ݑଵ ǡ ݑଶ ǥݑ ሻ (3) where ݔ௧ is the initial condition as in (2). The nonlinear and continuous dynamical system in (1) is thus discretized by replacing it by a set of algebraic functions. This actually allows transient system identification. This set of algebraic functions is used as a ‘surrogate model’ or ‘meta-model’. The procedure to compute such expansion is as follows. The expressions are linear expressions of the initial states and control parameters expressed as follows:
ݕ ൌ ݂ ൫ݔ௧ ǡ ݔ௧ ǡ ǥ ǡ ݔ௧ ǡ ݑଵ ǡ ݑଶ ǡ ǥ ǡ ݑ ൯ ൎ ߙ ߙ ݔ௧ ߙ ݑ ଵ
ଶ
ୀଵ
ୀଵ ୀଵ
(4) ߙ is the average value for ݕ ଵ
ߙ ൌ න ݂ ሺݔ௧ ሻ߮ଵ ሺ ݔ௧ ሻ݀ݔ௧
ଵ
ߙ ൌ න ݂ ሺݑ ሻ߮ଵ ሺݑ ሻ݀ݑ
(5) Analytical integration is not possible so we need to resort to Monte-Carlo integration. To uniformly sample the space of states and controls it is advised to use low discrepancy sequences such as Sobol’ [3].߮ଵ is the first order scaled Legendre polynomial. If we note ൌ ݔ௧ ଵ ǡ ݔ௧ ଶ ǡ ǥ ǡ ݔ௧ ǡ ݑଵ ǡ ݑଶ ǡ ǥ ǡ ݑ we use data collected from the simulation for each time point of the time horizon and sampling the space of all parameters ݕ ሺ ሻǡ ݅ ൌ ͳ ǥs, j=1...N (6) Using these data as the input-output samples, train the meta-models. ݕ ሺሻǡ ݆ ൌ ͳǤ Ǥ Ǥ ܰ, (7) As a result, an algebraic model is available for each time point over the time horizon. These predictions of the state of the system for each time are independent from other time points, preventing the approximation from suffering error accumulation. The results above can easily translate and be applicable to a MIMO dynamical system as not much extra cost. n.b.: The approximation technique only requires the data of the initial condition and the control input sequence, hence the possibility to obtain a N-step-ahead dynamical system for different regions without the harassment of switches. The state space is divided into
A novel approximation technique for online and multi-parametric model predictive control
741
regions for which a single linear MPC controller is used, to create a switchless multimodel predictive controller. The approach is demonstrated in the next paragraph. 3. Example The model is that of a distillation column for separation of cyclohexane and n-heptane [2]. The two components are separated over 30 theoretical trays. The model has 32 states which are the compositions at each tray and those of the distillate and reboiler. The control variable is the reflux rate. The problem is formulated with a time horizon of ܰ ൌ ʹͲ and a sampling time ߜ ݐൌ ͳ. We impose input and output constraints. 0.954
Simplified and reduced to 4 states Simplified full order
0.952
0.95
0.948
0.946
0.944
0.942
0.94
0.938
0.936
0.934
0
5
10
15
20
25
30
Fig 2: Close loop response for set point ࢞ࡰ ൌ Ǥ ૢ, for two approximated models Figure 2 shows the close set-point change using controllers based on metamodels approximating the system. The approximate models are not differential equations hence it is not appropriate to talk about of the order of these approximations. Nevertheless computing these models require more or less information. For example the second meta-model only requires the initial conditions (i.e measurement) of three states whereas the first meta-model requires information on all states, which is usually more expensive in terms of hardware. 4. Conclusions This paper presented a new approach for the approximation of nonlinear models. The method enables the reformulation of the tracking reference problem into a QP or mp-QP problems by transforming nonlinear dynamic model into a set of affine algebraic expressions. Future research will deal with the application of the method to parametrically uncertain systems, combining parametric model order reduction and robust control. Other research directions will include application of receding horizon meta-modelling to inherently hybrid systems.
5. Acknowledgments The authors are thankful for the financial support from the European Research Council (MOBILE, ERC Advanced Grant, No: 226462), EPSRC (EP /I019640) and the CPSE Industrial Consortium.
Romain S.C. Lambert et al.
742
References [1] Efstratios N. Pistikopoulos, Perspectives in multiparametric programming and explicit model predictive control (2009). AIChE Journal, vol. 55, no. 8, pp. 1918-1925, 2009. [2] Hahn, J., Edgar T.F, Marquardt W., (2002) Controllability and observability covariance matrices for the analysis and order reduction of stable nonlinear systems (2002). Journal of Process Control 13 (2003) 115–127 [3] Sobol’, I.M., 1967. On the distribution of points in a cube and the approximate evaluation of integrals. Computational Mathematics and Mathematical Physics, 7, pp. 86-112 (in Russian). Appendix: model of a 30 stages distillation column with constant volatilities [2]. Condenser
Feed Tray: ݀ݔǡଵ ͳ ൌ ൣݔܨǡிௗ ܮଵ ݔǡଵ െ ܮଶ ݔǡଵ ݀ݐ ்ܣ௬
ͳ ݀ݔǡଵ ൌ ܸሺݕǡଶ െ ݔǡଵ ሻ ݀ݐ ܣௗ Trays in the rectification section ݅ ൌ ʹǤ Ǥͳ ݀ݔǡ ͳ ൌ െ ݔǡ ൯ െ ܸ൫ݕǡ െ ݕǡାଵ ൯൧ ൣ ܮ൫ݔ ݀ݐ ்ܣ௬ ଵ ǡିଵ Feed Tray: ݀ݔǡଵ ͳ ൌ ൣݔܨǡிௗ ܮଵ ݔǡଵ െ ܮଶ ݔǡଵ ݀ݐ ்ܣ௬ െ ܸ൫ݕǡଵ െ ݕǡଵ଼ ൯൧ Trays in the stripping section ݅ ൌ ʹǤ Ǥͳ:
െ ܸ൫ݕǡଵ െ ݕǡଵ଼ ൯൧ Trays in the stripping section ݅ ൌ ʹǤ Ǥͳ: ݀ݔǡ ͳ ൌ െ ݔǡ ൯ െ ܸ൫ݕǡ െ ݕǡାଵ ൯൧ ൣ ܮ൫ݔ ݀ݐ ்ܣ௬ ଶ ǡିଵ Reboiler: ݀ݔǡଷଶ ͳ ൌ െ ሺ ܨെ ܦሻݔǡଷଶ െ ܸݕǡଷଶ ൧ ൣݔ ܮ ݀ݐ ܣோ ଶ ǡଷଵ Further equations:
݀ݔǡ ͳ ൌ െ ݔǡ ൯ െ ܸ൫ݕǡ െ ݕǡାଵ ൯൧ ൣ ܮ൫ݔ ݀ݐ ்ܣ௬ ଶ ǡିଵ
ܸ ൌ ܮଵ ܮܦଶ ൌ ܮଵ ܨ ܴܴ ൌ
ݕ ሺͳ െ ݔ ሻ ܮଵ ߙǡ ൌ ܦ ݔ ሺͳ െ ݕ ሻ
ܣௗ total molar holdup in the condenser
ܸ Vapour flowrate in the column
்ܣ௬ total molar holdup in each tray
ܴܴ reflux ratio
ܣோ total molar holdup in each tray
ݔǡ liquid composition of component ܣon the ݅௧
ܨFeed flowrate
stage
ܦDistillate flowrate
ݔǡிௗ Feed composition of component A
ܮଵ Flowrate of the liquid in the rectification section
ݕǡ vapour composition of component ܣon the ݅௧
ܮଶ Flowrate of the liquid in the stripping section
stage ߙǡ relative volatility (assumed constant)
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Multi-Parametric Model Predictive Control of an Automated Integrated Fuel Cell Testing Unit Chrysovalantou Ziogoua,c, Christos Panosb, Konstantinos I. Kouramasb, Simira Papadopouloud, Michael C. Georgiadis,c, Spyros Voutetakisa, Efstratios N. Pistikopoulosb a
Chemical Process Engineering Research Institute (CPERI), Center for Research and Technology Hellas (CERTH) P.O. Box 60361, 57001 Thessaloniki, Greece e Department of Chemical Engineering, CPSE, Imperial College London, SW7 2AZ London, UK c University of Western Macedonia, Department of Engineering Informatics and Telecommunications, Kozani 50100, Greece d Alexander Technological Educational Institute of Thessaloniki, Department of Automation, P.O. Box 141, 57400, Thessaloniki, Greece
Abstract The aim of this work is to present a framework for the design of a constrained multiparametric Model Predictive Control (mp-MPC) strategy for a fully automated integrated Hydrogen Fuel Cell Testing Unit (HFCTU) at CERTH. The underlying process is described by an experimentally validated dynamic model which provided the basis for the control framework. This model was used to derive reduced order state space models based on which an explicit/multi-parametric MPC controller was designed in order to satisfy the load demand, avoid starvation and maintain the temperature at a nominal point. The derived controller was tested offline on several operating conditions and showed an excellent transient and steady-state performance. Keywords: Multi-parametric/Explicit Model Predictive Control; PEM Fuel Cell, Parametric Programming, Reduced order model
1. Introduction Fuel cells have received extensive attention and are under intensive development over the last few years. The Polymer Electrolyte Membrane type of fuel cell (PEMFC) is currently considered to be particularly suitable for ground vehicle applications, portable devices and small stationary applications (Pukrsuhpan et al., 2004, Arce et al., 2007). However to compete with internal combustion engines, fuel cells must reach similar level of performance and life time. The main technical issue is that ground vehicle propulsion is a challenging problem due to variety of transient behavior within the fuel cell system. The control system has to ensure that critical parameters are in their optimal values in order to optimize the operation of the system and avoid degradation that could damage the stack. This challenging control problem can be implemented mainly by using advanced control techniques, such as model based control (MPC). The PEM fuel cell system is described by a dynamic mathematical model based on the work of Ziogou et al. (2010), which has been validated using experimental data in several operating conditions in order to guarantee the accuracy of the simulation results. This validated mathematical model is used to derive the reduced order models and
design the explicit/multi-parametric MPC controllers. Finally, the controllers are introduced into the actual process and their performance is validated.
2. Mathematical Modeling and Experimental Setup
The 1D mathematical model that was used includes mass balances for the anode and cathode sides and recirculation, semi-empirical equations for the membrane, electrochemical equations, heat balances for the fuel cell, and mass and energy equations for the humidifier, compressor and cooling system. The mathematical modeling of the mass transport phenomena along the channels, the gas diffusion layers and the membrane is based on a five-volume approach (del Real, 2007). Moreover, a simple semi-empirical equation was used for the voltage calculation that accounts for temperature and the various voltage losses. The main structure of the model along with the mass transport flow is depicted in Fig. 1. Due to lack of space the equations of the model are not presented, but they can be found in Ziogou et al. (2010).
2.1. Experimental System
The developed Fuel Cell Testing Unit (FCTU) of LPSDI/CERTH comprises a humidification system, two mass flow controllers for the regulation of the gases and two PID controllers for the anode and cathode pressure regulation. The temperature control subsystem includes a fan-assisted air cooling system and an electrical heat-up system. The automated operation and the data acquisition are conducted through an on-line supervisory control and data acquisition (SCADA) system. The integrated system is equipped with an electronic load, which simulates the power demands or the power fluctuations that occur in real systems that use fuel cells for power generation.
Figure 1 - Structure of the model
Figure 2 - PEM Fuel Cell Unit
3. Model Identification
Prior to the controller design, the non-linear model needs to be simplified so that it can be used for the MPC design. Therefore a discrete reduced-order state-space (SS) model for each control objective is obtained using a model identification technique which adequately reconstructs the dynamic behavior of the system. The input-output data are obtained from simulations of the non-linear mathematical model for various operating conditions, and the parameters of the SS models are determined with the Identification Toolbox of Matlab. The sampling time for the data acquisition is 100 ms for the power, the current and the mass flow rates, and 1 s for the temperatures, since the temperature exhibits slower dynamics. The mathematical representation of the SS model with additive disturbances is as follows:
x_{t+1} = A x_t + B u_t + C v_t ,  y_t = D x_t   (1)
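As an illustrative aside (not the Matlab Identification Toolbox routine actually used in this work), a minimal sketch of how a one-state model of the form (1) could be fitted from sampled input-output data by least squares is given below, assuming scalar signals and a directly measured state (D = 1):

```python
import numpy as np

def fit_first_order_ss(u, v, y):
    """Least-squares fit of a one-state model of the form (1),
    x[t+1] = a*x[t] + b*u[t] + c*v[t],  y[t] = x[t]  (i.e. D = 1),
    from sampled input u, measured disturbance v and output y."""
    # With D = 1 the model reduces to the regression
    # y[t+1] = a*y[t] + b*u[t] + c*v[t]
    Phi = np.column_stack([y[:-1], u[:-1], v[:-1]])
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    a, b, c = theta
    return a, b, c
```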
The first SS model approximates the behavior of the power and has one manipulated variable (I), one controlled (output) variable (P), one known disturbance (the difference between the maximum power and the measured power, ΔP) and two states:
A1 = [0.99912 0.028574; 0.0054676 0.82201], B1 = [0.0059061; -0.036945], C1 = [-0.00018942; 0.0016459], D1 = [12.565 -0.0015507]
The second and the third reduced-order models approximate the behavior of the oxygen and hydrogen excess ratios. Both SS models have one state, one disturbance (I), one manipulated variable, the mass flow rate of air (ṁ_air) and of hydrogen (ṁ_H2) respectively, and one output/controlled variable, the excess of oxygen (λO2) and of hydrogen (λH2). The system matrices are given as follows:
A2 = [-0.014276], B2 = [0.060283], C2 = [-0.0088574], D2 = [233.68]
A3 = [-0.055887], B3 = [0.0086835], C3 = [-0.0055123], D3 = [238.05]
The fourth SS model represents the behavior of the temperature, which is maintained at the desired set point through a heat-up and a cooling system. The system matrices are presented below:
A4 = [1], B4 = [0.0052417 -0.0004687], C4 = [3.3348e-009], D4 = [1]
This model has one state, one output/controlled variable (Tfc), one known disturbance, the ambient temperature (Tamb), and two inputs, the power to the resistance for the heat-up (WR) and the power of the fans (Wcl), both expressed as a percentage of full equipment operation. As a measure of the fitness of the aforementioned models we calculated the root-mean-square difference between model and plant outputs. It was 0.028 W for the power controller, 0.0407 and 0.0371 for the air and hydrogen excess ratios, and 0.0042 K for the temperature controller. The metric was calculated using 2400 samples over a period of 600 s, and the results are a clear indication that the SS models have the required accuracy to describe the behavior under consideration.
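For completeness, the sketch below shows how such a discrete model can be simulated against recorded data, which is how a root-mean-square fit metric like the one above would be evaluated (illustrative Python code, not part of the original work):

```python
import numpy as np

# Matrices of the power model quoted above (model 1).
A1 = np.array([[0.99912, 0.028574], [0.0054676, 0.82201]])
B1 = np.array([[0.0059061], [-0.036945]])
C1 = np.array([[-0.00018942], [0.0016459]])
D1 = np.array([[12.565, -0.0015507]])

def simulate(A, B, C, D, u, v):
    """Simulate x[t+1] = A x + B u + C v, y[t] = D x from zero initial state."""
    x = np.zeros((A.shape[0], 1))
    y = []
    for ut, vt in zip(u, v):
        y.append(float((D @ x)[0, 0]))
        x = A @ x + B * ut + C * vt
    return np.array(y)

def rms_fit(y_meas, y_model):
    """Root-mean-square difference used as the fit metric."""
    return np.sqrt(np.mean((y_meas - y_model) ** 2))
```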
4. Multi-Parametric Model Predictive Control (mp-MPC) Framework
The next step involves the design of multi-parametric Model Predictive Controllers for the PEMFC system. A major drawback which often limits the applicability of the traditional MPC framework is the increased online computational requirement associated with the solution of the constrained optimization problems. In order to overcome this drawback, explicit or multi-parametric model predictive control (mp-MPC) was developed (Pistikopoulos et al., 2007), which avoids the need for repetitive online optimization. In mp-MPC the online optimization problem is solved off-line with multi-parametric programming techniques to obtain the objective function and the control actions as functions of the measured states/outputs (the parameters of the process), together with the regions in the state/output space where these functions are valid, i.e. a complete map of the parameters. The control is then applied by means of simple function evaluations instead of typically demanding online optimization computations. The following MPC formulation is considered for the PEM fuel cell control system:

min_u J = Σ_{i=1}^{Ny} (y_i − y_{sp,i})^T Q (y_i − y_{sp,i}) + Σ_{j=0}^{Nu−1} (u_j − u_{sp,j})^T R (u_j − u_{sp,j})   (2)
s.t.  x(t+1) = A x(t) + B u(t) + C v(t) ,  y(t) = D x(t)
      y_min ≤ y(t) ≤ y_max ,  u_min ≤ u(t) ≤ u_max ,  v_min ≤ v(t) ≤ v_max
where u are the manipulated variables, y are the controlled variables, Nu is the control horizon and Ny the prediction horizon. The objective function minimizes the quadratic norm of the error between the output variables and the reference points. Moreover, the system includes physical constraints which should be satisfied during operation:
0.1 A ≤ I ≤ 12 A ,  500 cc/min ≤ ṁ_air ≤ 3000 cc/min ,  0 ≤ WR ≤ 55.8 W ,  313 K ≤ Tfc ≤ 353 K ,
0.1 W ≤ P ≤ 7 W ,  200 cc/min ≤ ṁ_H2 ≤ 1000 cc/min ,  0 ≤ Wcl ≤ 25.8 W ,  288 K ≤ Tamb ≤ 313 K
The aforementioned optimization problem (2) is a multi-parametric Quadratic Programming (mp-QP) problem and can be solved with standard multi-parametric techniques (Pistikopoulos et al., 2007). In our study the explicit parametric controller was derived with the Parametric Optimization (POP) software. The control horizon in each problem is 2, therefore there are two optimization variables (u_{t+0}, u_{t+1}). The corresponding parameters of each problem are shown in Table 1, together with the respective number of critical regions (CR) of each explicit/multi-parametric MPC controller, while Figure 3 presents the control design for the PEMFC system, including the input/output variables of each controller and the interactions between them.

Table 1. Optimization problem parameters and settings

Objective | Parameters (θ)                | Optimization variables (u)            | Pred. Hor. (Ny) | Weight (Q) | Weight (R) | CR
P         | θ1 = [x1 x2 ΔI ΔP P Psp]      | I(t+0), I(t+1)                        | 10              | 3          | 0.01       | 67
λO2       | θ2 = [x1 I λO2 λO2,sp]        | ṁO2(t+0), ṁO2(t+1)                    | 20              | 1          | 0.1        | 13
λH2       | θ3 = [x1 I λH2 λH2,sp]        | ṁH2(t+0), ṁH2(t+1)                    | 40              | 100        | 0.1        | 13
Tfc       | θ4 = [x1 Tamb Tfc Tfc,sp]     | WR(t+0), WR(t+1), Wcl(t+0), Wcl(t+1)  | 100             | 1000       | 0.001      | 17

Figure 3 - Control structure (power, O2 excess ratio, H2 excess ratio and temperature controllers acting on the PEM fuel cell, with measured disturbances ΔP, I and Tamb)
Figure 4 - Temperature control and cooling/heat-up
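Online, the evaluation of such an explicit controller reduces to locating the critical region that contains the current parameter vector θ and applying the corresponding affine law. The sketch below illustrates this lookup in Python; the region data (polyhedra and affine gains) would be exported by the mp-QP solver, and all names here are hypothetical:

```python
import numpy as np

class CriticalRegion:
    """One critical region of an explicit mp-MPC law: the polyhedron
    {theta : A theta <= b} together with the affine law u = F theta + g."""
    def __init__(self, A, b, F, g):
        self.A, self.b, self.F, self.g = A, b, F, g

    def contains(self, theta, tol=1e-9):
        return np.all(self.A @ theta <= self.b + tol)

def mpmpc_control(regions, theta):
    """Evaluate the explicit control law for the parameter vector theta
    (states, disturbance and set point, as listed in Table 1)."""
    for region in regions:
        if region.contains(theta):
            return region.F @ theta + region.g
    raise ValueError("theta is outside the feasible parameter space")
```

Only a region search and a matrix-vector product are needed per sample, which is what makes the approach attractive at fast sampling rates.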
5. Simulation Results
Figures 4, 5 and 6 depict the simulation results of the mp-MPC implementation for different operating conditions (set points). During the simulations the ambient temperature was assumed constant at 298 K. The performance of the temperature controller is presented in Figure 4, where simulations were performed with three temperature set-point changes (333 K, 338 K, 343 K) while the power controller's set point was kept constant (5 W); the controller rapidly follows the set-point changes in temperature without offset. Due to the small size of the PEMFC, the system needs to be heated during steady-state operation in order to follow the set point (the resistance works at 1-4%).
In Figures 5 and 6 the performance of the power and oxygen excess ratio controllers is presented. During the experiment the hydrogen excess ratio is kept constant by its controller (λH2 = 1.5) and the temperature controller has a fixed set point at 338 K. The mass flow rates (Figure 6) are properly adjusted to fulfill the starvation-avoidance constraint by keeping the excess ratio at a constant level. The power controller showed excellent response to load changes, and the excess ratio controller demonstrated a fast settling time (less than 2 s) after current disturbances.
Figure 5 - Control of power and λO2
Figure 6 - Current and mass flow rates
Overall the mp-MPC controller design is able to track the desired reference points regardless of the fluctuations of the interacting variables. Finally, the system response remained within the feasible area of operation, since the output of the controllers was bounded by the operating constraints and stability was guaranteed.
6. Conclusions
In this work an explicit/multi-parametric MPC controller design has been developed and validated offline on the simulation model. Four controllers have been derived in order to fulfill the power demand while avoiding starvation, minimizing the excess of hydrogen supply and maintaining the fuel cell temperature at the desired set point. The results have shown excellent control performance. Current work focuses on the implementation of the derived controllers in the experimental PEMFC system.
Acknowledgements
Financial support from the DECADE IAPP Project of FP7 (Contract number PIAP-GA-2008-230659) is gratefully acknowledged.
References
Arce, A., Ramirez, D.R., del Real, A.J. & Bordons, C. (2007). Proc. of the 46th IEEE Conf. on Decision and Control, New Orleans, LA, USA.
Pukrushpan, J.T., Stefanopoulou, A.G. & Peng, H. (2004). Control of Fuel Cell Power Systems: Principles, Modelling, Analysis and Feedback Design, Advances in Industrial Control, Springer.
Pistikopoulos, E.N., Georgiadis, M. & Dua, V. (2007). Multi-parametric Model-based Control: Theory and Applications, Weinheim: Wiley-VCH.
del Real, A.J., Arce, A. & Bordons, C. (2007). Development and experimental validation of a PEM fuel cell dynamic model, Journal of Power Sources, 173 (1), 310-324.
Ziogou, C., Voutetakis, S., Papadopoulou, S. & Georgiadis, M.C. (2010). Modeling and Validation of a PEM Fuel Cell System, Computer Aided Chemical Engineering, 28, 721-726.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Use of commercial structured databases as innovative solution for FEED projects
Fabio Ferrari a, Lorenzo Selmi a
a Foster Wheeler Italiana, via Caboto 1, Corsico 20094, Italy
Abstract
Foster Wheeler Italiana (FWI) decided to test and evaluate the impact of an integrated project database on its workflow. With this aim, FWI developed an integrated project database by means of commercial database software, following the usual working procedures of Process Design. Process Flow Diagrams (PFDs) were used to define the database structure, and the connections relevant to process data were implemented using the Process Datasheets as the main references between the design parameters of unit equipment. The development activities carried out internally have shown that, by implementing a good database configuration, it is possible to offer Process Engineers a documentation structure that reflects the Process Flow Diagram topology. The data management makes it possible to differentiate the information depending on the responsibility of each company department, avoiding errors during subsequent data transfers. Moreover, the documents normally shared among company sections (e.g. Process Flow Diagrams for materials of construction) can be kept consistent and up to date, reducing the time lost during drafting and revision phases. With this approach, FWI experienced an increased quality of the issued documentation without affecting the manhours expenditure, leading to more streamlined project management.
Keywords: Structured Database, Document Management, Workflow, Work Process
1. Introduction
In the process industry the growing demand to improve the design cycles of engineering companies is leading to an extensive use of structured database software; however, the databases available on the market are normally built to contain only a reduced quantity of standard information, and the commercial application tools are usually not integrated in the process design environment used for project development. These structured database packages are often used only as archives for the issued documents or, at best, as simple document management systems; this approach often results in database structures that do not contain all the logical connections between the equipment/data and require difficult a-posteriori work to define the relationships among the documents. An initial improvement can be achieved through a better integration of application tools into the design environment (Bessling et al., 1997), in particular if the heterogeneous data generated during the process are adequately managed and stored inside the database (Bayer et al., 2003). The interoperation of different tools has often been achieved only through the standardization of the interfaces on a syntactical level for common technical platforms (Wasserman, 1990), leading to difficulties in the implementation of all the necessary correlations between the different application outputs.
A better solution can be obtained with the standardization and connection of the fine-grained dependencies of the different documents through a semantic, document-oriented integration that gives a meaningful context to data exchange (Marquardt et al., 2004). Process Flow Diagrams can be considered the adequate context to manage the data transfer among different application tools and, specifically, can be the central documents guiding the integration of the other deliverables (Bayer et al., 2001). Following this solution FWI has experienced some advantages in the project document creation phase, through an enhanced data management of the process simulation results inside a structured database (Selmi et al., 2010). The complexity of bigger databases may, however, lead to difficulties in finding a specific document, and critical issues can be generated by revisions, especially if many entities (e.g. Company Departments) are involved in the project workflow. To better understand the complexity of the problem, it is possible to analyze the typical workflow followed by documents (Figure 1.1).
Figure 1.1
In summary, after the characterization of the operating conditions, the process engineer prepares the equipment datasheets and includes the design conditions and the main sizing information. Those documents are then used to define the additional equipment details (i.e. materials and mechanical requirements) and finalize the relevant Material Requisitions. Finally the Material Requisitions are sent to Vendors/Suppliers that will develop their offer compliant with the requisition. During Front End Engineering Design (FEED) projects, several difficulties can be experienced in proceeding with a fast and efficient follow-up on the information initially provided. In particular most of the problems arise when process specifications are passed to the other disciplines and the data have to be transferred into other documents, created by the relevant Engineering Departments. During this phase an adequate manual check on the data is required to guarantee the correctness and coherence of the issued documents, specifically to avoid uncontrolled error propagation. In addition the consistency checks become particularly important during the revision and follow-up activities, especially after Vendors/Suppliers feedback when the modified data have to be carefully monitored, before the subsequent implementation and issue. All those additional checks are time consuming and may become difficult to manage and control if the integration and connection among the documents is not adequately defined. In order to solve those issues, the idea is to modify the standard approach to document management and to start the creation of project databases, using the deliverables generated by process engineers as a basis for the definition of structures and for the identification of document correlations. This innovative approach will be discussed in this paper.
2. FEED Projects database customization
2.1. Database customization
The off-the-shelf database software offered a very limited basket of options and was completely inadequate to cover all the features necessary for a FEED Project. The software was completely open, but lacking in its default structure and specifically not compliant with Engineering Companies' needs. Significant customization has been successfully carried out in order to include all the standard documents necessary to cope with FWI quality requirements. In this context the variables have been created completely disregarding the interdependencies between the different documents, following only Company internal procedures as the general guideline for data definition.
2.2. The importance of the Process Flow Diagrams creation phase
In parallel to document customization, the information workflow has been deeply analyzed in order to evaluate the possible advantages of a structured database. This study highlighted that the default structures embedded in the software were not adequate and needed a huge quantity of time to be made compliant with the project requirements and easily accessible for the users. The reason is that the procedures normally followed in commercial software are mainly focused on the Engineering Department's approach to the problem, and the database is viewed only as an archive for issued deliverables. A better approach, instead, is to start defining the database structure during Process Flow Diagram (PFD) creation. As a matter of fact, the PFD can represent and model all the connections among the different equipment items and can create the relevant dependencies through the use of object connectors, which represent the links among the database elements and can be managed directly during the PFD drafting phase (Bayer et al., 2003). Moreover, this solution respects the sequence of definition commonly followed during FEED projects, developed in the Process Department from the very beginning. Through this operation the process engineers automatically configure the database and implement a bulk document structure that is already consistent with the Process Flow Diagrams and increases the simplicity of access to project deliverables.
2.3. Data management through connectors use
After the database structure creation through the PFD drafting phase, the subsequent step has been to identify where to store the main data. The solution has been to use process streams as the principal objects for operating conditions identification and to take advantage of the database equipment connectors to transfer the data throughout the whole database; with this solution, the definition of heat and material balances directly defines data assignments from the flowsheets, so that it is not necessary to integrate the unit operations defined inside the simulation topology. Effectively, through this approach the database objects inherit the minimum data necessary for the design, and the information is automatically archived in the right place, below the correct connected equipment (Selmi et al., 2010).
2.4. Data and documents sharing
The approach described above allows the data transfer from heat and material balances and creates preliminary datasheets; however, the document finalization foresees the intervention of the designer to define the correct equipment sizing parameters.
This operation requires the integration of different application tools, homogenizing their heterogeneous results to evaluate the performance of the specific equipment in all the
possible operating cases. In addition, many engineering disciplines have to be involved to finalize the activities and complete the equipment definition. The idea has been to select Process Datasheets as the main documents for equipment design and to identify the process variables defined inside those documents as the source of all the subsequent dependencies of the database objects. These data identification criteria constitute bulk documents shared among all the project entities and identify the main variables that shall be kept as the principal source for equipment design. All the other equipment engineering datasheets (e.g. Material Requisitions) constitute dependent database documents linked to the main structure and defined through the relevant Process Datasheet. During Front End Engineering Design projects the Engineering Departments can create their deliverables starting from a shared common database, where data are automatically represented in the different documents and redundant information has already been eliminated through the Process Datasheet variable definition. With this approach the documents can be issued as required by Company Procedures, but with the consistency guarantees offered by the database structure. All the additional information is then included in the database, but located in specific areas dedicated to the department in charge of its definition. In addition, because the base document structure is always the same, the subsequent datasheets are always kept consistent during project development, especially during the revision phase of shared data, so that all the departments are assured they are working on the most up-to-date information (Figure 2.1). The best example of the advantages of this approach is given by Material of Construction (MOC) diagrams, which are typical deliverables in FEED projects. In the standard workflow the process engineer creates the process flowsheets and defines the unit design conditions/contaminants. The material experts are responsible for material definition for equipment and lines and show this information on a different drawing type (MOC). All of those phases normally generate delays and can cause many difficulties during data revision events. The described innovative approach uses the PFD as the main document and, following this solution, guarantees document consistency, because the process scheme is kept intrinsically consistent, whilst the other data are placed only as additional information in the right database position and shared among all the disciplines involved.
Figure 2.1
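As a schematic illustration of this stream-connector mechanism (hypothetical names and fields, not FWI's actual implementation), an equipment object can inherit preliminary datasheet data from the streams it is connected to:

```python
from dataclasses import dataclass, field

@dataclass
class Stream:
    """A process stream carrying the operating data defined by the
    heat and material balance."""
    tag: str
    temperature: float   # degC
    pressure: float      # barg
    mass_flow: float     # kg/h

@dataclass
class Equipment:
    """An equipment item; its connectors are the lists of attached streams."""
    tag: str
    inlets: list = field(default_factory=list)
    outlets: list = field(default_factory=list)

    def preliminary_datasheet(self):
        """Datasheet entries inherited automatically from connected streams."""
        streams = self.inlets + self.outlets
        return {
            "tag": self.tag,
            "design_temperature": max(s.temperature for s in streams),
            "design_pressure": max(s.pressure for s in streams),
        }

# Example: a heat exchanger inherits its design data from the PFD topology.
feed = Stream("S-101", temperature=120.0, pressure=5.0, mass_flow=10000.0)
effluent = Stream("S-102", temperature=80.0, pressure=4.5, mass_flow=10000.0)
exchanger = Equipment("E-101", inlets=[feed], outlets=[effluent])
print(exchanger.preliminary_datasheet())
```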
2.5. Rights management
An additional key point in the definition of the database structure is the management of access rights, in particular to prevent disciplines not responsible for a particular datum from modifying it without the approval of the Department that has accountability for that datum. The effectiveness of the database use has been guaranteed by the rigid definition of the information workflow, organized following the Company Procedures that define the responsibility of each entity involved in the project. Through this path the deliverables
development in FEED projects allows the sharing of many documents and information among the Departments, but correctly defines the boundary of intervention of each single discipline responsible for each specific design variable.
3. Conclusions
The possibility to reduce or, in some cases, completely avoid manual data input has greatly improved the consistency of the issued deliverables during the information transfer between the Departments involved in FEED projects. In addition, this approach reduces the effort required to search and check the documents and their revisions, which are kept coherent through the database structure. The possibility to share documents among the Departments has been improved and their quality has been enhanced without affecting the manhours spent; it has to be highlighted that the information is automatically loaded in the correct positions and the rights definition avoids interference/modifications from disciplines that are not responsible. This innovative approach guarantees the efficiency of the data management structure and aligns with the normal flow of development during all project phases. Furthermore, the database structure automatically archives all the project deliverables in the most efficient position, as demonstrated by the experienced reduction in the time required to find each single document, because the connections among the equipment items are intrinsically built. All those features and tools have shown interesting perspectives for collaborations with external companies, where sharing the same database enhances the information exchange. This opportunity, given by the robustness of the database structure, ensures that data transfers and modifications during the entire FEED can be easily followed and checked. Finally, the possibility to define template structures for repetitive design configurations (Ferrari et al., 2010) can offer major manhours efficiencies in the re-use of Basic Design Packages developed with a structured database, foreseeing the integration with subsequent FEED development projects.
References
B. Bayer, K. Weidenhaupt, M. Jarke & W. Marquardt, 2001, A flowsheet centered architecture for conceptual design, Computer Aided Chemical Engineering, 9, Elsevier, 345-350.
B. Bayer, S. Becker, M. Nagl, 2003, Integration Tools for Supporting Incremental Modifications within Design Processes in Chemical Engineering, Computer Aided Chemical Engineering, 15, Elsevier, 1256-1261.
B. Bessling, B. Lohe, H. Schoenmaker et al., 1997, CAPE in process design - potential and limitations, Computers & Chemical Engineering, 21-Suppl.1, Elsevier, S17-S21.
F. Ferrari, L. Selmi, E. Capossela, 2010, Advantages in the production of Process Design Packages (Hydrotreating Units as study case), Computer Aided Chemical Engineering, 28, Elsevier, 1949-1953.
W. Marquardt, M. Nagl, 2004, Workflow and information centered support of design processes - the IMPROVE perspective, Computers & Chemical Engineering, 29, Elsevier, 65-82.
L. Selmi, E. Capossela, F. Ferrari, 2010, Enhanced Data management to create Project Documents starting from Process Simulations, Computer Aided Chemical Engineering, 28, Elsevier, 1663-1666.
A. Wasserman, 1990, Tool integration in software engineering environments, Lecture Notes in Computer Science, 467, Springer, 137-149.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Controlled Variables from Optimal Operation Data
Johannes Jäschke, Sigurd Skogestad∗
Department of Chemical Engineering, NTNU, Trondheim, Norway
Abstract In this paper we show how optimal operation data and concepts of self-optimizing control can be used for finding controlled variables which give optimal operation for the disturbances included in the data set. The method extracts the operation strategy which is hidden in the optimal data and may help to analyze and improve operation in the common case where it is difficult or very expensive to obtain a good model. Keywords: Controlled variable selection, data based methods, self-optimizing control
1. Introduction
For many processes, obtaining a good mathematical process model is important for successful operation. However, obtaining a good model is often inhibited by several factors, such as a tight budget and limited knowledge or time. Thus, obtaining a good process model and keeping it up to date is one of the major bottlenecks for the application of advanced process control in industry [1]. It is therefore desirable to minimize the modeling effort while still achieving good process performance. In this work we present a method based on logged process data, which is readily available for many processes in industry. This data is used to find self-optimizing controlled variables whose optimal setpoints do not change with varying disturbances. Previously, self-optimizing control structure design has been based on a process model. The contribution of this paper is to show how past process data can be analyzed to determine good controlled variables.
2. Motivation and problem formulation
An example of a system which is hard to model is a marathon runner. However, it is easy to collect data from runners, such as heart rate, stride frequency, temperature, blood oxygen content and breathing frequency. The data from the best runs of the runners, subject to expected disturbances such as hilly terrain and wind, is collected in an optimal data matrix Y. This data is used to determine a linear combination of measurements which is (almost) constant for all the best runs. By running such as to keep this linear combination of variables at its optimal value, an optimal running strategy can be implemented. Similarly, in a process plant, some operators may be able to operate the process more profitably than others. Analyzing the "optimal operation data" of these operators can reveal linear combinations of variables which other operators can use as guidance when operating the plant. Alternatively, these variables can be used for feedback control. We assume that optimal operation corresponds to minimizing a cost J, and that the optimization problem can be approximated in deviation variables around the optimal point
∗ [email protected]
as
min_u J = [u^T d^T] [J_uu J_ud; J_du J_dd] [u; d]   (1)
where u ∈ R^nu and d ∈ R^nd are the inputs and the disturbances, respectively. In order for the minimum to be unique, we require that J_uu is positive definite. For each degree of freedom u we search for a controlled variable c which is a linear combination of measurements, c = Hy. If the variables give acceptable performance when controlled at constant setpoints, they are called self-optimizing, as defined in [2]: Self-optimizing control is when we can achieve an acceptable loss with constant setpoint values for the controlled variables (without the need to reoptimize when disturbances occur). The loss is defined as L = J(u, d) − J(u_opt, d), where u is the input generated by the current operating policy, for example adjusting u such that c = Hy is kept constant.
3. Data method
The new method for finding these measurement combinations is directly inspired by the null space method [3], which we present in the following.
3.1. Null space method
This method is based on the quadratic approximation of the cost function (1). In addition it is assumed that a linear noise-free measurement model is available, y = G^y u + G^y_d d. Here, y ∈ R^ny is the vector of linearly independent measurements and G^y, G^y_d are the gain matrices of the system.
Theorem 1 (Null space method). Given a sufficient number of noise-free linearly independent measurements, ny ≥ nu + nd, select H such that HF = 0, where F = ∂y_opt/∂d is the optimal sensitivity matrix. Then controlling c = Hy to zero gives optimal operation with zero loss.
Proof: Close to d_nom, by definition of F we have y_opt(d) − y_opt(d_nom) = F(d − d_nom). The optimal change in the controlled variables is: c_opt(d) − c_opt(d_nom) = HF(d − d_nom). Since H is selected such that HF = 0, the optimal variation c_opt − c_opt,nom is zero, too. Hence, controlling c = Hy to zero leads to optimal operation.
The optimal sensitivity matrix F is usually obtained numerically, by optimizing a model or by linearizing at the nominal point and evaluating F = G^y_d − G^y J_uu^{-1} J_ud [3].
3.2. Using optimal operation data
In the case where we do not have an explicit model, we will not know the optimal sensitivity F = dy_opt/dd. Now let us assume that we have "optimal" data for y for various disturbances, collected in the data matrix Y. If we have sufficient data then Y will contain the same information as F, because all disturbances have been rejected optimally. In particular, all columns of Y are linear combinations of the columns of F. For example, if we write the optimal sensitivity matrix F = ∂y_opt/∂d = [f_1, f_2, ..., f_nd], then augmenting the matrix by any (combination) of its columns, e.g. Y = [F, α f_1 + β f_2], leaves the left null space unchanged. This proves the following result:
Theorem 2 (Optimal data method - No noise). Given sufficient measurements, ny ≥ nu + nd, and optimal measurement data Y, where for each distinct disturbance d there is
at least one column in Y. Then the optimal measurement combination can be determined by selecting H such that HY = 0.
The H matrix may also give valuable insight into the operation policy. After scaling and centering of the data, the elements in the left singular vectors of Y can be used to analyze the operation strategy. We will demonstrate this in an example from economy below. In practice, the data matrix Y will not be consistent such that a null space with HY = 0 exists, either because of too many disturbances or, more likely, because of measurement noise. One approach to handle this is to do a singular value decomposition Y = UΣV^T and select as H the transpose of the nu columns of U which correspond to the smallest singular values in Σ. This is equivalent to approximating Y by the closest matrix of rank ny − nu. More generally, the minimum loss method (exact local method) of [4] may be used to handle cases with measurement noise, but this requires that we also have some "nonoptimal" data:
Theorem 3 (Optimal data method with noise [4]). Given noisy optimal measurement data Y and given "nonoptimal" data for the effect of the inputs (degrees of freedom) u on the measurements, Δy = G^y Δu, the optimal measurement combination can be determined by finding the H which minimizes ||(HG^y)^{-1} HY||_F.
Note that we want HG^y to be large, that is, we want to use "sensitive" measurements. With small sensitivities and little measurement noise, the contribution from the term HG^y is small, and then Theorem 2 is sufficient.
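A minimal numerical sketch of this SVD-based selection is given below (Python/NumPy); the data matrix is hypothetical and stands in for logged optimal operation data with ny = 3 measurements, nu = 1 input and columns corresponding to different disturbances:

```python
import numpy as np

def data_based_H(Y, nu):
    """Select H as the transpose of the nu left singular vectors of the
    optimal data matrix Y (ny x N) with the smallest singular values."""
    U, s, Vt = np.linalg.svd(Y)
    return U[:, -nu:].T

# Columns are optimal measurement vectors; the third is a linear
# combination of the first two, as Theorem 2 assumes.
f1 = np.array([1.0, 0.5, -2.0])
f2 = np.array([0.0, 1.0, 1.0])
Y = np.column_stack([f1, f2, f1 + f2])
H = data_based_H(Y, nu=1)
print(H, np.linalg.norm(H @ Y))   # H Y is (numerically) zero
```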
4. Case studies
4.1. Optimal operation of a chemical reactor (use of Theorem 2)
We consider a CSTR with a reaction A → B, Fig. 1. The feed contains mainly component A, and the objective is to maximize the profit, which is calculated as the difference between the income from selling the product B and the cost of cooling: P = p_B C_B − p_cool T_i^2. T_i is the cooling temperature, which can be manipulated to optimize performance. The feed concentrations are the main disturbances, and the concentrations and the reactor temperature are measured, so y = [C_A, C_B, T]. The optimal operation data is obtained by applying the NCO tracking procedure described in [5] in combination with finite-difference gradient estimates, where the input is perturbed to obtain a gradient estimate and, based on this estimate, adjusted to iteratively force the gradient to zero. The optimal data is collected into the data matrix Y, and a singular value decomposition Y = UΣV^T gives (σ1, σ2, σ3) = (86.5, 4.8, 0.28). Since there is one input, T_i, we select the column of U corresponding to σ3 = 0.28. This gives a controlled variable c = Hy with H = [−0.77 0.63 0.005]. In Fig. 2 the simulated disturbance scenario is given, and Fig. 3 shows the input usage when applying NCO tracking (to generate the optimal data) and when using a PI controller to control c = Hy = −0.77 C_A + 0.63 C_B + 0.005 T to zero. Due to the continuous feedback control, controlling c = Hy gives much smoother input action than we have in the "optimal" data. Comparing the final profit in Fig. 4 shows that controlling the obtained invariant gives practically the same performance.
Figure 1. CSTR
Figure 2. Disturbances C_A,in, C_B,in
Figure 3. Inputs SOC and NCO tracking
Figure 4. Profit comparison
4.2. Economy example (use of Theorem 2)
We consider economic indicators from 1991 to 2006 for France, Germany, Italy, Norway, the UK and the USA. The data is taken from [6]. The "measurements" y for each country are the interest
rate (y1), unemployment (y2), the industrial production index (IPI, y3), the consumer price index (CPI, y4), tax revenue (% of GDP, y5) and the exchange rate to SDR (special drawing rights, a "lumped" currency derived from the Yen, US Dollar, British Pound and Euro, y6). The GDP growth, Fig. 5, is the criterion for optimality. The measurements of the year prior to the three years with the highest GDP growth are used for Y. This results in H = [−0.67 −0.02 0.22 0.62 0.32 0.10], Fig. 6. The most influential factors are the interest rate (−0.67) and the inflation rate (0.62). This is not unexpected, because the interest rate is used as a handle to control inflation. Of course the economies of countries are too complex to be described accurately by our selected variables, but we have shown that applying our method to economic data can reveal some of the operation strategy behind the data.
Figure 5. Annual GDP growth
Figure 6. Magnitude of elements in H
5. Discussion and conclusion
The proposed "null space data method" picks out the weak directions in the data Y, whereas other "chemometric" regression methods concentrate on the strong directions in the data. An important reason for this is that we assume that the data is optimal, and we look for hidden combinations in this data that characterize the optimum. In regression methods, on the other hand, one looks for relationships between variables X and Y. To show that the methods are different, assume our data contains two data sets, Y = [Y1 X]^T, and we want to find the relationship between Y1 and X. We assume that dim(Y1) = dim(u) = nu. With our method, the problem becomes min_H ||H [Y1 X]^T||_F. Here, H is not unique, so if H is an optimal solution, so is DH, where D
is an invertible matrix [4]. This degree of freedom may be used to set H = [I Hx], and we optimize the problem min_{Hx} ||Y1 + Hx X||_F, which has the least squares solution Hx = −Y1 X†. Thus our method is equivalent to the normal regression methods for problems where the norm ||HY||_F is small, such that the contribution from the term J_uu (HG^y)^{-1} can be neglected, that is, for the noise-free case. However, a significant difference to standard regression methods, for the case when we simply minimize ||HY||_F, is that we do not distinguish between Y1 and X data and try to find a relationship between them, but instead focus on finding invariant variable combinations c = Hy = H_y y1 + H_x x = 0. Our method has the advantage that it only uses data and does not rely on a model. Thus it is applicable to systems where it is very expensive or impossible to obtain an accurate model. Not even the cost function has to be known, as long as the data is optimal. However, it is important that the data is consistent in the sense that it gives the correct optimal sensitivity F = dy_opt/dd and contains little measurement noise. The main drawback is that we rely on optimal data, and performance cannot be improved beyond the learning data. However, one could obtain the optimal data using some expensive method, and then analyze it to find a cheap method which gives similar performance, as is done in the CSTR example above. Other applications could be to find the "secret" of good operators or the "control strategy" of a marathon runner or of some economy.
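The least-squares equivalence above is easy to verify numerically; a short sketch with synthetic noise-free data (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 50))            # "regressor" data
Y1 = np.array([[1.0, -0.5]]) @ X            # Y1 depends linearly on X
Hx = -Y1 @ np.linalg.pinv(X)                # least-squares solution
H = np.hstack([np.eye(1), Hx])              # H = [I  Hx]
print(np.linalg.norm(H @ np.vstack([Y1, X])))   # ~0: invariant found
```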
6. References
[1] D. Dochain, W. Marquardt, S.C. Won, O. Malik, M. Kinnaert and J. Lunze. Monitoring and Control of Process and Power Systems: Adapting to Environmental Challenges, Increasing Competitivity and Changing Customer and Consumer Demands. Status report prepared by the IFAC Coordinating Committee on Process and Power Systems. Proceedings of the 17th IFAC World Congress, Seoul, Korea, July 6-11, 2008.
[2] S. Skogestad. Plantwide control: The search for the self-optimizing control structure. Journal of Process Control, 10:487-507, 2000.
[3] V. Alstad and S. Skogestad. Null space method for selecting optimal measurement combinations as controlled variables. Ind. Eng. Chem. Res., 46:846-853, 2007.
[4] V. Alstad, S. Skogestad and E. Hori. Optimal measurement combinations as controlled variables. Journal of Process Control, 19(1):138-148, 2009.
[5] G. François, B. Srinivasan and D. Bonvin. Use of measurements for enforcing the necessary conditions of optimality in the presence of constraints and uncertainty. J. of Proc. Contr., 15(6), 2005.
[6]
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Optimization of IMC-PID Tuning Parameters for Adaptive Control: Part 1
Chih-Wei Chu a, B. Erik Ydstie a,*, Nikolaos V. Sahinidis a
a Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213, USA; [email protected], *[email protected], [email protected]
Abstract
This paper describes Part 1 of a two-part strategy for robust certainty equivalence adaptive PID control. In Part 1 we develop a strategy for simple PID controller tuning which maximizes the bandwidth subject to gain and phase margin constraints. An implementation of the non-adaptive strategy in a real-time environment, using model estimation based on non-convex optimization, is described. The test shows the potential of the tuning method. The next part, which due to space limitations could not be included here, describes the adaptive implementation.
Keywords: IMC-PID controller, robustness, optimization, adaptive control.
1. Introduction
Surveys on PID control [1] show that the majority of PIDs are left on factory settings. This observation shows that the PID has inherent robustness properties when applied to typical chemical processes. However, one might suspect that significant gains could be achieved if the controllers were optimized, since the accumulated effect of millions of poorly tuned PIDs may be large. Many methods have been proposed for on- and off-line PID tuning. Most of these are not suitable for adaptive control since they do not tune performance subject to robustness constraints. For example, classical methods for PID tuning taught in undergraduate classes on process control (e.g. [2]) do not include any tuning knobs. In this respect the Internal Model Control (IMC) tuning procedure by Rivera et al. [3] is better suited, since it includes the filter parameter τc which tunes closed-loop performance [4,5] to achieve robust stability. In this paper we develop a tuning procedure for IMC which minimizes the filter parameter to maximize the bandwidth subject to pre-specified gain and phase margins [6-9]. An analytical solution is developed for the first-order dead time process. In the next section we show that the approach meshes with certainty equivalence adaptive control.
2. Robustness of Certainty Equivalence Adaptive Control
A compelling paradigm for adaptive control was developed in the 1970s under the banner of Certainty Equivalence Adaptive Control. In this approach the parameters of a transfer function model of the process are estimated in real time by matching the model to process data. The resulting model is used to update the controller as shown in Figure 1A. The figure does not highlight that it is critical to update the controller tuning to achieve robust performance. This property is better illustrated in the equivalent diagram of Figure 1B, where the adaptive system is viewed as two composite systems. The first system
shows the controller in feedback with the model. The second system shows how the model adapts to the plant. Robust performance is achieved when the nominal feedback system on the left does not generate high frequency inputs.
Figure 1. Figure 1A on the left shows the classical representation of the certainty equivalence approach to adaptive control. Figure 1B on the right shows the structure used in stability analysis.
The controller tuning only needs to be robust with respect to unstructured (additive) uncertainty. Closed-loop stability is ensured if |G_cl(s) Δ(s)| < 1, where Δ(s) is the model uncertainty. It follows that the PID controller should be tuned so that it has pre-specified gain and phase margins to compensate for the given unstructured uncertainty. In this sense adaptive control achieves better performance and robustness than robust control theory alone, since we do not need to tune for parametric uncertainty.
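As a simple numerical illustration (not from the paper), the condition can be checked on a frequency grid; the closed-loop transfer function and uncertainty weight below are hypothetical examples:

```python
import numpy as np

def robustly_stable(G_cl, Delta, w=np.logspace(-3, 3, 2000)):
    """Check |G_cl(jw) * Delta(jw)| < 1 over a frequency grid."""
    s = 1j * w
    return bool(np.all(np.abs(G_cl(s) * Delta(s)) < 1.0))

# Example: a first-order closed loop and a high-frequency uncertainty weight.
G_cl = lambda s: 1.0 / (2.0 * s + 1.0)
Delta = lambda s: 0.5 * s / (0.1 * s + 1.0)
print(robustly_stable(G_cl, Delta))   # True for this pair
```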
3. PID Control with Pre-specified Gain and Phase Margins
The IMC design achieves optimal performance robustness by minimizing τc subject to gain margin and phase margin constraints. Thus we want to solve the problem
min τc
s.t.  A_m ≥ A_m* ,  Φ_m ≥ Φ_m* ,  τc > 0
where A_m* is the desired gain margin (typically 1.7) and Φ_m* is the desired phase margin (typically 1/3 π). Denoting the process and controller transfer functions by G_p(s) and G_c(s), we get
A_m = 1 / |G_c(jω_p) G_p(jω_p)|   (1)
arg[G_c(jω_p) G_p(jω_p)] = −π   (2)
Φ_m = arg[G_c(jω_g) G_p(jω_g)] + π   (3)
|G_c(jω_g) G_p(jω_g)| = 1   (4)
where ω_p, ω_g are the phase and gain crossover frequencies. Below we simplify this problem.
4. Tuning algorithm
A first-order-plus-time-delay (FOPTD) plant model is used to represent the process control systems in this paper. The first-order Pade approximation gives
G_p(s) = K e^(−θs) / (τs + 1) ≈ K (1 − ½θs) / [(τs + 1)(1 + ½θs)]   (5)
The IMC-PID formula for FOPTD is given as [9]:
G_c(s) = (1 + ½θs)(τs + 1) / [K (τc + ½θ) s]   (6)
K_c = (1/K) · [2(τ/θ) + 1] / [2(τc/θ) + 1]   (7)
T_i = τ + θ/2   (8)
T_d = τ / [2(τ/θ) + 1]   (9)
The open-loop and closed-loop transfer functions are given by
G_OL(s) = G_c(s) G_p(s) = (1 + ½θs) e^(−θs) / [(τc + ½θ) s]   (10)
G_cl(s) = G_OL(s) / [1 + G_OL(s)] ≈ (1 − ½θs) / (τc s + 1)   (11)
Substituting Eq. (10) into (1) – (4) results in
A_m = (τc + ½θ) ω_p / √[(½ω_pθ)² + 1]   (12)
arctan(½ω_pθ) − ω_pθ = −π/2   (13)
Φ_m = arctan(½ω_gθ) − ω_gθ + π/2   (14)
ω_g = 1 / √(τc² + τcθ)   (15)
Thus the optimization problem becomes
min τc
s.t.  (τc + ½θ) ω_p / √[(½ω_pθ)² + 1] ≥ A_m*
      arctan(½ω_pθ) − ω_pθ = −π/2
      arctan(½ω_gθ) − ω_gθ + π/2 ≥ Φ_m*
      ω_g = 1 / √(τc² + τcθ)
      τc , ω_p , ω_g > 0
This problem has 3 variables (τc, ω_p, ω_g) and 3 parameters (θ, A_m*, Φ_m*). However, the problem needs the time delay information, and an explicit analytical relation between the tuning parameter τc, the gain margin A_m and the phase margin Φ_m can be found from Eqs. (12) – (15). Solving Eq. (13) gives a constant, for convenience denoted as α:
ω_pθ = α = 2.458   (16)
By substituting Eq. (12), (15) and (16) into (14) and using Eq. (12) and (16) we express ߔ and ߬ as functions of ܣ so that గ
ଶ
ଶ
ర ටమ ቀଵା మ ቁିଵ ഀ
ߔ ൌ െ ఏ
ସ
ଶ
ఈమ
߬ ൌ ቆܣ ටͳ
ܽ݊ܽݐܿݎ
ଵ
(17)
ర ටమ ቀଵା మ ቁିଵ ഀ
െ ͳቇ
(18)
The plot in Fig. 2 shows that, for a given process, the gain and phase margins are coupled, so that only one of the two constraints will be active.
Figure 2. A_m vs. Φ_m with respect to τc
Figure 3. Closed-loop step responses.
The bandwidth ω_BW is defined as the frequency at which
AR_cl(jω_BW) = |G_cl(jω_BW)| = 1/√2   (19)
Substituting Eqs. (11) and (18) into (19) gives
ω_BW = 2 / (θ √[A_m²(1 + 4/α²) − 2 A_m √(1 + 4/α²) − 1])   (20)
The relation between ω_BW and A_m provides an estimate of the closed-loop performance. We can now propose a tuning method based on the gain margin specification. According to Eq. (12), A_m is proportional to τc, so for a given A_m* the minimal τc is located where A_m equals its minimal value A_m*. The PID controller parameters, the corresponding phase margin and the bandwidth can then be calculated from Eqs. (7) – (9), (17) and (20). Fig. 3 shows the simulated closed-loop step responses of controllers designed for different gain margin specifications. As A_m* gets larger, the controller becomes more conservative. Requiring the square root in (20) to be real gives the limiting value
A_m* = (√2 + 1) / √(1 + 4/α²) = 1.87   (21)
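A compact sketch of the resulting tuning procedure is given below (Python; SciPy is used only to solve Eq. (13) numerically). It illustrates Eqs. (7)-(9), (13), (17) and (18) under the assumptions stated above and is not the authors' implementation:

```python
import numpy as np
from scipy.optimize import brentq

def imc_pid_from_gain_margin(K, tau, theta, Am=2.0):
    """PID settings for a FOPTD process G(s) = K exp(-theta s)/(tau s + 1)
    from a gain margin specification Am."""
    # Eq. (13): arctan(x/2) - x = -pi/2, with x = omega_p * theta (~2.458)
    alpha = brentq(lambda x: np.arctan(x / 2.0) - x + np.pi / 2.0, 1.0, 4.0)
    beta = np.sqrt(1.0 + 4.0 / alpha**2)
    tau_c = 0.5 * theta * (Am * beta - 1.0)                             # (18)
    Kc = (2.0 * tau / theta + 1.0) / (K * (2.0 * tau_c / theta + 1.0))  # (7)
    Ti = tau + theta / 2.0                                              # (8)
    Td = tau / (2.0 * tau / theta + 1.0)                                # (9)
    r = np.sqrt(Am**2 * beta**2 - 1.0)
    phi_m = np.pi / 2.0 - 2.0 / r + np.arctan(1.0 / r)                  # (17)
    return Kc, Ti, Td, tau_c, phi_m

# E.g. for the heat-exchanger model identified in Eq. (22) below:
print(imc_pid_from_gain_margin(K=-0.35, tau=1.3, theta=3.0, Am=2.0))
```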
5. Real-time experiments
The experimental setup comprises a countercurrent shell-and-tube heat exchanger. Hot water flows through the shell side and cold water flows through the tube side. Temperatures and flow rates are recorded at a sampling time of 0.1 seconds. The FOPTD model
G_p(s) = −0.35 e^(−3s) / (1.3s + 1)   (22)
was identified using global optimization as shown in [10]. Fig. 3 shows the response of the system output for a set-point change followed by a load disturbance caused by a hot water flow rate change. The controller gives a quick set-point response and good disturbance rejection.
Figure 3. Real-time experimental result (outlet temperature and cold/hot water flow rates vs. time). The precision is limited by the 8-bit A-to-D conversion.
6. Conclusions
An optimization problem for IMC-PID controller tuning suitable for adaptive control has been developed. The analytical solution for the optimization of bandwidth subject to gain and phase margin constraints is derived. We show that the gain and phase margins are coupled. The real-time experiment gives a satisfactory set-point response and disturbance rejection. The proposed approach is ideally suited for application to adaptive control, since the tuning criteria (gain margin and phase margin) are based on closed-loop rather than open-loop performance.
References
[1] K.J. Astrom, T. Hagglund, PID Controllers: Theory, Design, and Tuning, 2nd ed., Instrument Society of America, Research Triangle Park, NC, 1995.
[2] J.G. Ziegler, N.B. Nichols, Optimum settings for automatic controllers, Trans. A.S.M.E., 64 (1942) 759-768.
[3] D.E. Rivera, M. Morari, S. Skogestad, Internal model control. 4. PID controller design, Ind. Eng. Chem. Res., 25 (1) (1986) 252-265.
[4] S. Skogestad, Simple analytic rules for model reduction and PID controller tuning, J. Process Control, 13 (2003) 291-309.
[5] I.L. Chien, P.S. Fruehauf, Consider IMC tuning to improve controller performance, Chemical Engineering Progress (1990) 33-41.
[6] W.K. Ho, C.C. Hang, L.S. Cao, Tuning of PID controllers based on gain and phase margin specifications, Automatica, 31 (3) (1995) 497-502.
[7] Q.G. Wang, H.W. Fung, Y. Zhang, PID tuning with exact gain and phase margins, ISA Transactions, 38 (1999) 243-249.
[8] W.K. Ho, T.H. Lee, H.P. Han, Y. Hong, Self-tuning IMC-PID control with interval gain and phase margins assignment, IEEE Transactions on Control Systems Technology, 9 (3) (2001).
[9] D.E. Seborg, T.F. Edgar, D.A. Mellichamp, Process Dynamics and Control, 2nd ed., Wiley, New York, 2003.
[10] G.H. Staus, L.T. Biegler, B.E. Ydstie, Global optimization for identification, Proceedings of the 36th Conference on Decision and Control, (1997) 3010-3015.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
System identification using wavelet analysis
Zdeněk Váňa 1, Samuel Prívara 1, Jiří Cigler 1 and Heinz A. Preisig 2∗
1 Department of Control Engineering, CTU in Prague, Prague, Czech Republic
2 Department of Chemical Engineering, NTNU, Trondheim, Norway
Abstract
System identification (SID) plays a central role in any activity associated with process operations. With control being done on different levels, different models are required for the same plant, each for a different range of dynamics. Besides the fact that most identification methods apply only to linear models, they also do not allow for selecting a frequency range. Wavelet methods have the ability to select the time and frequency windows and are also applicable to nonlinear processes. The paper presents an approach in which the wavelet transform is used for SID, enabling the selection of a particular frequency range. Even though the wavelet transform has been known for a long time and has a number of desirable properties, it is not frequently used in applications.
Keywords: wavelet transform, system identification, singular perturbation
1 Introduction
The wavelet transform as a mathematical tool serves mainly for data analysis in both the time and frequency domains. The interconnection between wavelet and identification theories was partly shown in e.g. Ghanem and Romeo (2001). Wavelets are used mostly for nonlinear SID with a particular structure, where the unknown time-varying coefficients are expressed as a linear combination of basis (wavelet) functions (Tsatsanis and Giannakis (2002); Wei and Billings (2002)). This is improved in Staszewski (1998). Yet another option comes from the character of the wavelets: searching for the system natural frequencies and damping (Ruzzene et al. (1997); Kijewski and Kareem (2003)). Apart from simple wavelet analysis, bi-orthogonal wavelets (Ho and Blunt (2003)), wavelet frames (Sureshbabu and Farrell (2002)) or even wavelet networks (Shi et al. (2005)) can be used for SID. Preisig and Rippin (1993a,b,c) deal with a system of a particular input-output structure, where the parameters are identified via spline wavelets and their derivatives. Carrier and Stephanopoulos (1998) showed that least squares can be extended to the wavelet transform. We will show the use of some wavelet filters with superior selectivity (significant magnitude on a specific frequency range) in the frequency domain and compact support in the time domain, which, in turn, enables an accurate implementation. This provides us with the possibility of analyzing measured data in the frequency domain without loss of information. Selection of a proper filter allows us to identify a system on a desired frequency range, or to identify a number of systems for distinct frequency ranges. This is especially convenient for systems with several dominant modes. We demonstrate yet another facet of wavelet-based identification, thus continuing earlier expositions (Preisig (2010)). The paper is organized as follows. In Section 2 the properties of the discrete wavelet transform (DWT) are introduced. Section 3 introduces linear SID theory and interconnects it with the wavelet transform. A case study is provided in Section 4. The last section is a summary.
2 Discrete wavelet transform - the basic principle
The fundamentals of the DWT can be found in Frazier (1999) and Kolzow (1994). The main idea of the discrete wavelet transform is described by a multiresolution analysis of the Hilbert space (Frazier (1999)):
1 [email protected]; 2 [email protected]
V_j and W_j denote the space of approximations at the j-th level and the space of details, respectively. It holds that ℓ²(Z_N) = V_p ⊕ W_p ⊕ W_{p−1} ⊕ ... ⊕ W_1. The spaces V_j, W_j have dimension N/2^j, which is the reason why N has to be divisible by 2^p, where p is the maximum level of the analysis. Let N = 2M, φ, ψ ∈ ℓ²(Z_N), and let B = {R_{2k}φ}_{k=0}^{M−1} ∪ {R_{2k}ψ}_{k=0}^{M−1} be the wavelet basis at the 1st level, with the vectors φ, ψ as its generators. The vector φ is called the father wavelet (or filter) and the vector ψ is called the mother wavelet (or wavelet). The coefficients of a signal z ∈ ℓ²(Z_N) in the basis B can be expressed as the inner products of z with the basis vectors. First we analyze the vector z ∈ ℓ²(Z_N) by the filters φ_1, ψ_1 and thereby obtain x_1, y_1 ∈ ℓ²(Z_{N/2}). Since ℓ²(Z_{N/2}) is also a Hilbert space, we can analyze x_1 by the filters φ_2, ψ_2. In a similar manner x_2, y_2 are obtained, and this procedure is repeated up to the p-th level. We consider a real signal z sampled with Ts = 1/fs, which has a full, symmetric spectrum. To comply with the Shannon-Kotelnik theorem we count only the single-sided spectrum; then, to retain the full signal energy, we have to multiply it by 2. Expressing the Fourier transform of the wavelet analysis of the vector z, we obtain F{[z]_B} = (¼ ẑ(m/2) φ̂(m/2), ¼ ẑ(m/2) ψ̂(m/2)). This can be applied repeatedly up to the p-th level as well. Since the spaces V_j, W_j have the same dimension, the frequency range is divided into halves, and an analogous operation is performed at the next levels.
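As an illustration, one level of this analysis on ℓ²(Z_N) can be coded directly from the definition (Python/NumPy; zero-padded Haar filters are used as an example, and the circular shifts R_{2k} realize the periodicity of ℓ²(Z_N)):

```python
import numpy as np

def dwt_level(z, phi, psi):
    """One analysis level: inner products of z with the even circular
    shifts R_{2k} of the father (phi) and mother (psi) filters."""
    M = len(z) // 2
    x = np.empty(M)   # approximation coefficients (space V_1)
    y = np.empty(M)   # detail coefficients (space W_1)
    for k in range(M):
        x[k] = z @ np.roll(phi, 2 * k)
        y[k] = z @ np.roll(psi, 2 * k)
    return x, y

# Haar generators on l2(Z_8), zero-padded to the signal length.
N = 8
phi = np.zeros(N); phi[:2] = 1.0 / np.sqrt(2.0)
psi = np.zeros(N); psi[0], psi[1] = 1.0 / np.sqrt(2.0), -1.0 / np.sqrt(2.0)
x1, y1 = dwt_level(np.arange(N, dtype=float), phi, psi)
print(x1, y1)   # x1 can be analyzed again at the next level
```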
3. System identification and wavelet transform
We consider a discrete-time linear time-invariant (LTI) system $y(t) = G(q)u(t) + H(q)e(t) = y(G,H,u)$, with $q$ a shift operator, $e(t)$ zero-mean white noise with variance $\sigma_e^2$, and $u(t), y(t)$ the system input-output signals (Ljung (2002)). Let us choose the linear predictor $\hat{y}(t|t-1,\theta) = -\sum_{k=1}^{n_a} a_k q^{-k} y(t) + \sum_{k=0}^{n_b} b_k q^{-k} u(t) = z^T\theta$, where $z$ is the measured data and $\theta$ are the unknown parameters. This can be written as $Y = Z\Theta$. For the filtered prediction error $\varepsilon_f(t,\theta) = L(q)\varepsilon(t,\theta) = L(q)\left[y(t) - \hat{y}(t|t-1,\theta)\right]$ the following holds (Ljung (2002)): if the predictor is time-invariant and linear in the parameters and $u(t), y(t)$ are scalars (i.e., a single-input single-output (SISO) system is considered), then the result of filtering $\varepsilon$ is the same as filtering the input-output data first and then applying the predictor. Recall that the wavelet coefficients are evaluated as inner products of the time signal with even shifts of the wavelet filters. On $\ell^2(\mathbb{Z}_N)$, the inner product can be written as a vector multiplication. Then the equation $Y = Z\Theta$ can be extended by multiplying with the wavelet matrix $T$ and a user-defined weighting matrix $W$ as $WTY = WTZ\Theta$. It is important to realize that each wavelet coefficient bears information about a time interval of the same length as the wavelet filter; thus it is impossible to construct the matrices $Y, Z$ from filtered data, because this does not keep the required time structure of the model. There are two limitations when implementing the wavelet matrix $T$: the length of the analyzed data (it is convenient but impractical to have data of length $2^p$) and the data periodicity (a periodic extension of a vector is used to analyze the whole length of the data; however, because it e.g. adds high frequencies and disables recursive identification, it is not convenient to periodically extend the data). There are two possible views of the basic principle of wavelet analysis: a) both the approximations and the details of the analyzed data are kept of length $N$ (upsampling operator) and scaled wavelet filters are used; b) the lengths of both approximations and details decrease at each level and the analysis is always performed by the same basic filters $\varphi, \psi$. Because in practice the analyzed data need not be of length $2^p$ for any $p\in\mathbb{N}$, the latter approach is more accurate. Let $L$ be the length of the wavelet filters and $S$ their basic shift. Then the analyzed data length $N_j$ at the $j$th level can be written recursively as $N_{j+1} = 1 + \left[\frac{N_j - L}{S}\right]_\bullet$ as long as the data are long enough for analysis at the next level; $[o]_\bullet$ denotes the integer part of $o$. The number of iterations is then the maximum level $p$ of the wavelet analysis. With the knowledge of $p$ and the individual lengths $N_j$, $j = 1,\dots,p$, the wavelet matrix $T$ can be computed successively.
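A minimal sketch of this length bookkeeping, under the recursion as reconstructed above (the filter length L and shift S below are placeholder values): iterating until the data become too short also yields the maximum analysis level p.

```python
def analysis_levels(N, L, S):
    """Level lengths N_1..N_p and maximum level p for data of length N,
    wavelet filters of length L and basic shift S."""
    lengths = [N]
    while lengths[-1] >= L:                         # data still long enough to analyze
        lengths.append(1 + (lengths[-1] - L) // S)  # N_{j+1} = 1 + [(N_j - L)/S]
    return lengths[1:], len(lengths) - 1

lengths, p = analysis_levels(N=300, L=8, S=2)
```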
At the 1st level, the analysis is done by the matrix $T_1$, which contains the even shifts of the wavelet filters in its rows. Because the wavelet filter of length $L$ can be shorter than the data length $N_1$, the wavelet filters have to be zero-padded to the data length. Higher levels of analysis are characterized by the matrix multiplication

$T^{(j)} = \begin{bmatrix} I & 0 \\ 0 & T_j \end{bmatrix}, \qquad T_j = \begin{bmatrix} T_{j,D} \\ T_{j,A} \end{bmatrix} = \begin{bmatrix} \{R_{2k}\psi\} \\ \{R_{2k}\varphi\} \end{bmatrix}, \qquad k = 0,\dots,N_{j+1}-1,$

where $T_{j,D}$, $T_{j,A}$ are the submatrices of the mother and father wavelets, respectively. The matrix $T_1$ is defined as $T_j$ for $j = 1$. Let us now briefly introduce some specific frequency properties of the wavelet filters. In Section 2 the halving of the frequency range was mentioned. The frequency characteristics of both types of wavelet filters cover the full range of frequencies; however, only the half with the major influence can be considered. Let us call the "main interval" of a filter the frequency range where the filter has the major influence. An overview of the main intervals at distinct levels is given in the following table.

Table 1. Wavelet filters and their main intervals at distinct levels.

| Analysis | Main interval $f_{min}$ | $f_{max}$ | Frequency-domain filter |
| $j$th level details | $\frac{f_s}{2^{j+1}}$ | $\frac{f_s}{2^j}$ | $\hat{\psi}\left(\frac{m}{2^j}\right)\cdot\prod_{i=1}^{j-1}\hat{\varphi}\left(\frac{m}{2^i}\right)$ |
| $j$th level approximations | $0$ | $\frac{f_s}{2^{j+1}}$ | $\prod_{i=1}^{j}\hat{\varphi}\left(\frac{m}{2^i}\right)$ |
The weighting matrix $W$ is a user-defined diagonal matrix. Because of the overlapping of the wavelet filters in the frequency domain, the filters from Table 1 do not have unit gain at any frequency; this can be compensated by the weighting matrix $W_1$ computed from the vector of weights $V_1 = \left(\frac{1}{\max_m \hat{w}_1(m)}, \dots, \frac{1}{\max_m \hat{w}_p(m)}, \frac{1}{\max_m \hat{w}_{p+1}(m)}\right)$, where $\hat{w}_i$, $i = 1,\dots,p+1$, are the filters from Table 1. Moreover, the user-defined weights of the filters should be chosen such that no filter outweighs any other. In consequence, there is a bound on the weights for each particular wavelet filter family (Frazier (1999); Kolzow (1994)).

3.1 Asymptotic properties: convergence and consistency
Let $\Theta^*$ denote the model parameter vector which is the best theoretical solution of the identification problem as defined in Ljung (2002), and let $\hat{\Theta}_N$ denote the solution computed from measured data. Then the following holds for the convergence and consistency of the ARX model: a) Convergence: $\lim_{N\to\infty}\hat{\theta}_N = \theta^* + \lim_{N\to\infty} E\{(Z^TZ)^{-1}Z^Te\}$. This limit holds for wavelet filters as well. b) Consistency: In the open-loop case, the variance of the frequency-function estimate at a certain frequency $\omega$ can be written as $\mathrm{Var}\,G(\omega,\hat{\theta}_N) \approx \frac{n}{N}\frac{\Phi_v(\omega)}{\Phi_u(\omega)}$, with $v(t) = H(q,\hat{\theta}_N)e(t)$ being filtered white noise (see Zhu (2001)). Then, after straightforward modifications,

$\mathrm{Var}\,G(m,\hat{\theta}_N) \approx \frac{n}{N}\frac{\Phi_v(m)}{\Phi_u(m)} \sum_{j=1}^{p+1}\frac{1}{\left|V(j)\,\hat{w}_j(m)\right|^2},$

where $V(j)$ is a normalized weight for the $j$th-level analysis, i.e., the $j$th element of the vector of weights from which the matrix $W$ is constructed. In the case of no additional weighting, this sum equals 1 for all frequencies and the results of SID with and without wavelet filtering are similar (Figure 1).
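As a compact sketch of the resulting estimator (T and W are generic NumPy arrays here; constructing the actual wavelet and weighting matrices follows the recursions above), the weighted, wavelet-filtered normal equations $WTY = WTZ\Theta$ are solved by least squares:

```python
import numpy as np

def wavelet_weighted_arx(Z, Y, T, W):
    """Solve W T Y = W T Z Theta in the least-squares sense.
    Z: ARX regressor matrix, Y: output vector, T: wavelet matrix,
    W: user-defined diagonal weighting matrix."""
    theta, *_ = np.linalg.lstsq(W @ T @ Z, W @ T @ Y, rcond=None)
    return theta
```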
4. Case study
The proposed algorithm was implemented and tested on a system with the transfer function $G = \frac{(s+100)(s+1)}{100(s+10)(s+0.1)}$, discretized with $T_s = 0.1$ s. The frequency characteristic is clearly divisible into slow and fast parts. The results of parameter estimation without any additional filter weights are depicted in Figure 1 (left); the maximum possible analysis level was used. We can see that the ARX and the wavelet-filtered ARX (WAV) models provide similar results. With a suitable selection of filter weights, we can choose the frequencies of interest and identify the system in this frequency range. To obtain credible results it is suitable to compare ARX with wavelet filtering against ARX with prefiltering by a filter $F$, chosen as $F_{low} = \frac{16}{(s+2)^4}$ and $F_{high} = \frac{s^4}{(s+2)^4}$ for low and high frequencies, both discretized with $T_s$. The results are depicted in Figure 1 (middle and right) for the slow and fast subsystems; both results are similar.

Figure 1. Results from the identification procedure (columns: all, low and high frequencies; rows: output responses vs. discrete time, error histograms for the ARX and WAV models, and magnitude (dB) vs. frequency for the original system, the prefiltered ARX model, the WAV model and the ARX prefilter).
Now, the question arises why one should prefilter data with wavelets. Wavelet filters have an advantage over classically designed filters: firstly, they have a simple structure in the frequency domain, and secondly, they complement each other in the frequency domain. This is a considerable advantage in problems where the frequency characteristics of the system are not known in advance: satisfactory results can be obtained by tuning the weights alone.
5. Conclusion
5.1 Possible extensions
1. Multi-input single-output (MISO) system identification: The ARX model structure (matrices $Y, Z$) can simply be expanded for MISO systems. This requires no change in the matrices $T, W$. The only problem is identifiability due to collinearity in the data.
2. Thresholding of the wavelet analysis: This means nullifying the wavelet coefficients lower than some threshold $\varepsilon_t \in \mathbb{R}$. The threshold is the lower limit for the considered portion of the particular frequency range in the original signal. Globally, it can lead to more accurate numerical results; the price is a loss of some input-output information.
3. Keeping the wavelet analysis coefficients at lower levels: To increase the number of equations, the approximations at each level can be doubled: one half is kept and the other is used for the analysis at the next level. This expands the number of equations, i.e., improves the result (at the price of higher computational demands).
4. Recursive identification: There is one inherent difference from the recursive LS solution (Ljung and Ljung (1985); Engel et al. (2004)): the minimum length of measured data has to be greater than the length of the shift of the wavelet filter at the lowest used level. Then the predictor can be extended as $\begin{bmatrix} Z \\ T_{new} Z_{new} \end{bmatrix}\Theta = \begin{bmatrix} Y \\ T_{new} Y_{new} \end{bmatrix}$,
where $T_{new}$ is the matrix of all new possible shifts of the wavelet filters.
5.2 Concluding remarks
The proposed algorithm shows how to use the wavelet transform as a data (pre)filtering tool in the field of model identification. The method enables us to identify slow or fast subsystems of a singularly perturbed system, as well as to perform reduced-order model identification for control. However, the quality of the identification is sensitive to the selection of the wavelet family and the analysis level.
6. Acknowledgements
This work has been supported from the state budget of the Czech Republic, through the Ministry of Industry and Commerce, in the scope of grant No. FR-TI1/517, "Control systems for energy consumption optimization in low-energy and passive houses".
References
Carrier, J., Stephanopoulos, G., 1998. Wavelet-based modulation in control-relevant process identification. AIChE Journal, report from MIT.
Engel, Y., Mannor, S., Meir, R., 2004. The kernel recursive least-squares algorithm. IEEE Transactions on Signal Processing 52 (8), 2275–2285.
Frazier, M., 1999. An Introduction to Wavelets through Linear Algebra. Springer Verlag.
Ghanem, R., Romeo, F., 2001. A wavelet-based approach for model and parameter identification of non-linear systems. International Journal of Non-Linear Mechanics 36 (5), 835–859.
Ho, K., Blunt, S., 2003. Adaptive sparse system identification using wavelets. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing 49 (10), 656–667.
Kijewski, T., Kareem, A., 2003. Wavelet transform for system identification in civil engineering. Report, Dept. of Civil Engineering and Geological Sciences, University of Notre Dame.
Kolzow, D., 1994. Wavelets: A tutorial and a bibliography. Rendiconti dell'Istituto di Matematica dell'Università di Trieste, 49.
Ljung, L., 2002. Prediction error estimation methods. Circuits, Systems, and Signal Processing 21 (1), 11–21.
Ljung, S., Ljung, L., 1985. Error propagation properties of recursive least-squares adaptation algorithms. Automatica 21 (2), 157–167.
Preisig, H., 2010. Parameter estimation using multi-wavelets. Computer Aided Chemical Engineering 28, 367–372.
Preisig, H., Rippin, D., 1993a. Theory and application of the modulating function method–I. Review and theory of the method and theory of the spline-type modulating functions method. Computers & Chemical Engineering 17 (1), 1–16.
Preisig, H., Rippin, D., 1993b. Theory and application of the modulating function method–II. Algebraic representation of Maletinsky's spline-type modulating functions. Computers & Chemical Engineering 17 (1), 17–28.
Preisig, H., Rippin, D., 1993c. Theory and application of the modulating function method–III. Application to an industrial process, a well-stirred tank reactor. Computers & Chemical Engineering 17 (1), 29–39.
Ruzzene, M., Fasana, A., Garibaldi, L., Piombo, B., 1997. Natural frequencies and dampings identification using wavelet transform: application to real data. Mechanical Systems and Signal Processing 11 (2), 207–218.
Shi, H., Cai, Y., Qiu, Z., 2005. Improved system identification approach using wavelet networks. Journal of Shanghai University (English Edition) 9 (2), 159–163.
Staszewski, W., 1998. Identification of non-linear systems using multi-scale ridges and skeletons of the wavelet transform. Journal of Sound and Vibration 214 (4), 639–658.
Sureshbabu, N., Farrell, J., 2002. Wavelet-based system identification for nonlinear control. IEEE Transactions on Automatic Control 44 (2), 412–417.
Tsatsanis, M., Giannakis, G., 2002. Time-varying system identification and model validation using wavelets. IEEE Transactions on Signal Processing 41 (12), 3512–3523.
Wei, H., Billings, S., 2002. Identification of time-varying systems using multiresolution wavelet models. International Journal of Systems Science 33 (15), 1217–1228.
Zhu, Y., 2001. Multivariable System Identification for Process Control. Elsevier.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Robust Reallocation and Upgrade of Sensor Networks for Fault Diagnosis
Suryanarayana Kolluri and Mani Bhushan
Department of Chemical Engineering, Indian Institute of Technology Bombay, Mumbai-400076, India
Abstract
We propose a reallocation and upgrade strategy for improving an existing sensor network design from a fault diagnostic perspective. Following the work of Bhushan et al. (2008) for base case robust design, we perform reallocation and upgrade of an existing network such that, to the extent possible, the resulting sensor network is robust to variations in the uncertain fault occurrence and sensor failure probabilities. Robustness to modeling errors is also incorporated by considering distributed networks. The resulting formulations are applied to the Tennessee Eastman (TE) problem.
Keywords: Reallocation, Upgrade, Robust Design, Fault Diagnosis, Optimization.
1. Introduction
Several approaches have been presented in the literature for designing sensor networks (choosing the variables to be measured along with the number of sensors on each variable) that can ensure reliable fault diagnosis. Bhushan et al. (2008) considered the unreliability of fault detection as a criterion for choosing optimal networks. They required an underlying cause-effect model to predict the effect of different faults on process variables, and information about fault occurrence and sensor failure probabilities, to compute the unreliability of detection of any fault. The resulting problems typically had multiple optimal solutions. Bhushan et al. (2008) exploited this multiple-solutions feature to further select networks (amongst these multiple solutions) which were robust to uncertainties in the fault occurrence and sensor failure probability data as well as to uncertainties in the underlying cause-effect information. Their approach, like most others in the literature, focuses on base case design scenarios where the problem is to design a sensor network from scratch. Since the operating point in a process plant changes over time and most processes already have (non-optimal) sensor networks, it is necessary to have comprehensive reallocation and upgrade strategies for improving an existing sensor network design. The aim of this work is to propose such strategies for obtaining robust sensor networks which are optimal from a fault diagnosis perspective.
2. Related Previous Work
The proposed formulations are based on the concept of unreliability of detection of a fault, which is defined as the probability of the fault occurring and remaining undetected due to simultaneous failure of all sensors affected by that fault. For the ith fault, it is given as (Bhushan et al., 2008):

$U_i = f_i \prod_{j=1}^{n} (s_j)^{B_{ij} x_j}$  (1)
The overall system unreliability U is defined to be the maximum across all faults
(Bhushan et al., 2008). However, as seen in equation (1), the unreliability depends on the fault occurrence probabilities ($f_i$) and sensor failure probabilities ($s_j$) as well as on the fault-variable bipartite matrix B (obtained from a cause-effect process model). In case some of the probability data is only approximately known, a sensor network design which is optimal with respect to the nominal values may not give the optimal system unreliability when some of the approximately known values increase. To alleviate this problem, Bhushan et al. (2008) utilized the fact that there are typically several sensor networks which give the optimal system unreliability when nominal values are used. They then considered a second objective function (in a lexicographic sense) which corresponded to minimizing the unreliabilities of detection of those faults whose unreliability values depend on approximately known data. This way, even if some of the approximately known values were to change, the overall system unreliability is not affected (or is less affected). Robustness to uncertainties in the process model was considered by incorporating network distribution (the number of variables measured) as another objective, with the idea that the more variables are measured, the lower the chance of missing a fault due to inaccurate fault-effect modeling. Bhushan et al. (2008) presented their approach for the base case design formulation, while Bhushan et al. (2003) considered upgrade and reallocation but did not consider robustness issues. Bagajewicz and Sanchez (2000) have also presented upgrade and reallocation formulations, but considered objectives related to optimal variable estimation.
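A small sketch of how Eq. (1) and the overall system unreliability are evaluated for a candidate network (the array shapes are assumptions; B is the fault-variable bipartite matrix, f and s the fault occurrence and sensor failure probabilities, and x the sensor counts per variable):

```python
import numpy as np

def unreliability(B, f, s, x):
    """log10 U_i = log10 f_i + sum_j B_ij x_j log10 s_j  (Eq. 1);
    the overall system unreliability is U = max_i log10 U_i."""
    logU = np.log10(f) + (B * x) @ np.log10(s)
    return logU.max(), logU
```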
3. Robust Reallocation and Upgrade of Sensor Networks
In the current work, we propose a reallocation and upgrade strategy to modify an existing sensor network so as to minimize the overall system unreliability while incorporating robustness to uncertain probability data and underlying cause-effect models. Accordingly, we consider three situations based on: (i) uncertainties only in some fault occurrence probabilities, (ii) uncertainties only in some sensor failure probabilities, and (iii) either of scenarios (i) or (ii) along with uncertainties in the underlying cause-effect models. In the formulations below, only the salient features of the robustness aspects are explained; further details can be obtained from Bhushan et al. (2008), where the base case design formulations are presented. All the resulting formulations are mixed integer linear programs.

3.1 Robustness to available probability data
It is assumed that for some faults the occurrence probabilities are exactly known, while for certain other faults only approximate values are available. The proposed reallocation and upgrade formulation is:

Formulation I: Robustness to Inaccurate Fault Occurrence Probability Data

$\min_{(x_j)} \left[\alpha_1 U - \alpha_2\phi_f - x_s\right]$  (2)

Subject to:

$\sum_{j=1}^{n} c_j q_j + \sum_{t\in M_t}\sum_{r\in M_r} h_{t,r}\, u_{t,r} + x_s = C^*$  (3)

$U \geq \log(U_i), \quad i \in I/I_f$  (4)

$U \geq \log(U_i) + \phi_{fi}, \quad i \in I_f$  (5)

$\phi_f \leq \phi_f^*$  (6)

$\phi_f \leq \phi_{fi} + \phi_f^* y_i, \quad P y_i \geq \phi_{fi} - \phi_{fi}^*, \quad P(y_i - 1) \leq \phi_{fi} - \phi_{fi}^*; \quad i \in I_f$  (7)

$x_j = x_j^* + q_j - \sum_{r\in M_r} u_{j,r}, \quad j \in M_t,\ j \notin M_r$  (8)

$x_j = x_j^* + q_j + \sum_{t\in M_t} u_{t,j}, \quad j \in M_r,\ j \notin M_t$  (9)

$x_j = x_j^* + q_j + \sum_{t\in M_t} u_{t,j} - \sum_{r\in M_r} u_{j,r}, \quad j \in M_r \cap M_t$  (10)

$x_j = x_j^* + q_j, \quad j \notin M_r \cup M_t$  (11)

$\sum_{r\in M_r} u_{j,r} \leq x_j^*, \quad j \in M_t$  (12)

$x_j, q_j, u_{t,r} \in \mathbb{Z}_+;\ U, U_i \in \mathbb{R};\ (x_s, \phi_f, \phi_{fi}) \in \mathbb{R}_+;\ y_i \in \{0,1\}$  (13)
In Formulation I, the primary objective U is the system unreliability based on nominal values. The secondary objective $\phi_f$ indicates the robustness of the primary objective to uncertain fault occurrence probabilities; it is defined as the minimum of the robustness values in the unreliabilities of the individual uncertain faults (constraints 7). The third objective $x_s$ is the cost saved while performing upgrade and reallocation, and is required since networks of different cost may yield the same U and $\phi_f$ values. Further details can be obtained from Bhushan et al. (2008). Constraints 8-11 are the reallocation and upgrade constraints, which keep track of the number of sensors ($x_j$) on the jth variable. The number of sensors is a combination of the existing sensors $x_j^*$ (if any), plus new sensors $q_j$ (upgrade, if any), plus (minus) the additional sensors gained (lost) due to reallocation. Constraint 12 ensures that the number of sensors reallocated from a variable to other variables cannot exceed the number of existing sensors on that variable. The cost constraint (3) takes into consideration the costs of upgrade and reallocation.

3.2 Robustness to Inaccurate Sensor Failure Probability Data
We now assume that some sensor failure probabilities are approximately known, while all fault occurrence and the remaining sensor failure probabilities are exactly known. The formulation is similar to the earlier scenario, with the only difference being in the calculation of the maximum meaningful robustness required for each fault.

Formulation II: Robustness to Inaccurate Sensor Failure Probability Data

$\min_{(x_j)} \left[\lambda_1 U - \lambda_2\phi_s - x_s\right]$  (14)
Subject to:

$U \geq \log(U_i), \quad i \in I/I_s$  (15)

$U \geq \log(U_i) + \phi_{si}; \quad \phi_{si}^* = -\sum_{j\in J_s} B_{ij}\,(\log s_j)\, x_j; \quad i \in I_s$  (16)

along with constraints (3) and (6)-(13) of Formulation I with f replaced by s. In Formulation II, constraint 16 is written for the faults affecting variables measured by inaccurate sensors, and $\phi_{si}^*$ is the maximum meaningful value of the slack for the individual fault unreliability-of-detection values, which depends on the chosen sensors.

3.3 Robustness to Modeling Errors
It is now assumed that, apart from some uncertain fault occurrence or sensor failure probabilities, there are uncertainties in the fault-variable bipartite matrix B. This is due to errors present in the underlying models used to predict the effect of faults on different variables. In order to incorporate robustness to these errors, network distribution is considered as an additional objective.

Formulation III: Network Distribution and Uncertain Probability Data

$\min_{(x_j)} \left[\beta_1 U - \beta_2\phi - \beta_3 N - x_s\right]$  (17)

Subject to: the constraints of Formulation I or II, depending on the case, and

$N = \sum_{k=1}^{n} n_k; \quad n_j \leq x_j,\ n_j \in \{0,1\},\ j = 1, 2, \dots, n$  (18)

Constraints 18, coupled with the maximization of N in the objective function, ensure that N is the number of different variables measured in the process (irrespective of hardware redundancy).
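A minimal sketch of the bookkeeping part of these MILPs, constraints (3) and (8)-(12), using the PuLP modeller; the sizes, sets and costs below are placeholder assumptions, and the lexicographic objectives and robustness constraints (4)-(7), (15)-(18) would be layered on top in the same way:

```python
import pulp

n, C_star = 50, 3000
Mt, Mr = {4, 5}, {13, 22}                  # hypothetical reallocation sets
c = {j: 100 for j in range(n)}             # new-sensor costs (placeholders)
h = {(t, r): 60 for t in Mt for r in Mr}   # reallocation costs (placeholders)
x_star = {j: 1 if j < 30 else 0 for j in range(n)}  # existing sensors

prob = pulp.LpProblem("realloc_upgrade", pulp.LpMinimize)
q = pulp.LpVariable.dicts("q", range(n), lowBound=0, cat="Integer")
u = pulp.LpVariable.dicts("u", list(h), lowBound=0, cat="Integer")
xs = pulp.LpVariable("x_saved", lowBound=0)

# Constraints (8)-(11): sensors on variable j after upgrade and reallocation,
# written as affine expressions to be used in the unreliability constraints.
x = {j: x_star[j] + q[j]
        + pulp.lpSum(u[t, j] for t in Mt if (t, j) in h)
        - pulp.lpSum(u[j, r] for r in Mr if (j, r) in h)
     for j in range(n)}

for j in Mt:  # constraint (12): cannot move away more sensors than exist
    prob += pulp.lpSum(u[j, r] for r in Mr if (j, r) in h) <= x_star[j]

# Constraint (3): upgrade cost + reallocation cost + savings = budget C*
prob += (pulp.lpSum(c[j] * q[j] for j in range(n))
         + pulp.lpSum(h[k] * u[k] for k in h) + xs == C_star)
```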
4. Case Study: Tennessee Eastman Process
The proposed formulations are applied to the TE process (Downs and Vogel, 1993). In this process, 50 measurable variables and 33 faults are considered (Bhushan et al., 2008). The data corresponding to fault occurrence and sensor failure probabilities, sensor costs and the matrix B are taken from Bhushan et al. (2008). The existing sensors are assumed to be located on variables [1-10, 22-29, 35, 36, 45, 47, 49]. The cost for reallocation of sensors assumed in the present work is: 10 units for ([1,2], [1,3], [2,1], [2,3], [3,1], [3,2]), 60 units for [4,13] and 160 units for ([5,22], [6,23], [7,24], [8,25], [9,26], [10,27], [22,5], [23,6], [24,7], [25,8], [26,9], [27,10]). In this notation, a pair [i,j] means that transferring a sensor from variable i to variable j is allowed at the specified cost. For the existing sensors the objective function values are U = -2, $\phi_f$ = 2, $\phi_s$ = 0, N = 23, with cost used = 13650. The formulations are solved using CPLEX, with the following results.
Cases I and III: Uncertainties in the occurrence probabilities of faults 1 and 9 are considered, with (case III) and without (case I) N in the objective function. The results are presented in Table 1, where for the various cases and C* values the objective functions (column 3) and the decision variables, namely sensor reallocations (column 4, pairs indicating the from- and to-variables) and upgrades (new sensors in column 5), are reported. The term i(j) in column 5 means that j new sensors are selected on variable i (hardware redundancy). For C* = 3000, the same objectives are obtained for both cases I and III. However, compared to the existing sensor network, it is found that by placing additional sensors and reallocating some sensors at a small cost (2940 units), the system unreliability can be significantly improved (-6 compared to -2 earlier). For C* = 5000, U can be decreased even further, as expected. Furthermore, case III has a higher N than case I, thereby providing more robustness to modeling errors.
Cases II and IV: Uncertainties in the failure probabilities of sensors 3 and 4 are considered, with (case IV) and without (case II) N in the objective function. The results are presented in Table 2 and are similar to those in Table 1. Once again, a significant improvement in the existing values of U and $\phi_s$ can be achieved by incurring a minor additional cost.
5. Conclusions In this work, optimization formulations have been proposed for upgrading and reallocating an existing sensor network to ensure reliable fault detection and diagnosis in the presence of uncertainties in some of the underlying probability data as well as fault-variable models. The formulations have been applied to the TE process and result in significant improvement of the existing sensor network.
Table 1. Results for Uncertainty in Faults

| Case | C* | [U, φ_f, N, x_s] | Reallocated | Upgraded Sensors |
| I | 3000 | [-6, 2, 25, 60] | [1,3], [22,5], [25,8], [26,9] | 4, 13(2), 42(2), 43(2), 45, 46, 47, 48, 49, 50 |
| III | 3000 | Same as Case I, C*=3000 | | |
| I | 5000 | [-8, 2, 24, 1000] | [1,3], [2,3], [22,5], [25,8], [26,9] | 4(2), 13(2), 42(2), 43(2), 45(2), 46(2), 47(2), 48(2), 49(2), 50(2) |
| III | 5000 | [-8, 2, 27, 320] | [22,5], [25,8], [26,9] | 3(2), 4(2), 13(2), 42(2), 43(2), 44, 45(2), 46(2), 47(2), 48(2), 49(2), 50(2) |
Table 2. Results for Uncertainty in Sensors

| Case | C* | [U, φ_s, N, x_s] | Reallocated | Upgraded Sensors |
| II | 3000 | Same as Case I, C*=3000 (Table 1) | | |
| IV | 3000 | Same as Case I, C*=3000 (Table 1) | | |
| II | 5000 | [-8, 9, 24, 200] | [1,3], [2,3], [22,5], [25,8], [26,9] | 3(2), 4(4), 13(2), 42(2), 43(2), 45(2), 46(2), 47(2), 48(2), 49(2), 50(2) |
| IV | 5000 | [-8, 9, 26, 20] | [22,5], [25,8], [26,9] | 3(4), 4(4), 13(2), 42(2), 43(2), 45(2), 46(2), 47(2), 48(2), 49(2), 50(2) |
Notation
c_j, C* : cost of putting a new sensor on variable j, and total available cost, respectively
h_{t,r} : cost of reallocating a sensor from variable t to r
I : set of fault indices considered in the formulations
I_f, I_s : sets of fault indices which are affected by inaccurate faults and sensors, respectively
J_s : set of indices of inaccurate sensors
M_t : set of variables whose sensors may be reallocated (to other variables)
M_r : set of variables to which sensors can be reallocated (from other variables)
n : number of measurable variables
n_j : binary variable which is 1 if variable j is measured and 0 otherwise
u_{t,r} : number of sensors moved from variable t to r
x_j : number of sensors measuring variable j after upgrade and reallocation
α, β, λ : constants used for lexicographic optimization
φ, φ* : overall robustness and its maximum meaningful value
φ_{fi}, φ_{si} : robustness for inaccurate fault i, for Formulations I and II
References
M. Bagajewicz and M. Sanchez, 2000, Reallocation and Upgrade of Instrumentation in Process Plants, Computers and Chemical Engineering, 24(8), 1945-1959.
M. Bhushan, S. Narasimhan and R. Rengaswamy, 2003, Sensor Network Reallocation and Upgrade for Efficient Fault Diagnosis, Proceedings of FOCAPO 2003, Florida, USA, 443-446.
M. Bhushan, S. Narasimhan and R. Rengaswamy, 2008, Robust Sensor Network Design for Fault Diagnosis, Computers and Chemical Engineering, 32(4-5), 1067-1084.
J.J. Downs and E.F. Vogel, 1993, A Plant Wide Industrial Process Control Problem, Computers and Chemical Engineering, 17(3), 245-255.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Explicit/Multi-Parametric Model Predictive Control of a Solid Oxide Fuel Cell
Kostas Kouramas,a Petar S. Varbanov,b Michael C. Georgiadis,c Jiří J. Klemeš,b Efstratios N. Pistikopoulosa
a Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College London, London SW7 2AZ, UK
b Centre for Process Integration and Intensification CPI2, Research Institute for Chemical and Process Engineering, Faculty of Information Technology, University of Pannonia, Egyetem u. 10, 8200 Veszprém, Hungary
c Department of Engineering Informatics and Telecommunications, University of Western Macedonia, Kozani 50100, Greece
Abstract In this work we present a general framework for the design and validation of explicit/multi-parametric MPC controllers for Solid Oxide Fuel Cells (SOFC). The framework features four key steps comprising the development of a dynamic mathematical model of the process at hand, the development of a reduced order model of the process and the design of an explicit/multi-parametric MPC for online control. The framework is illustrated on a SOFC system. Keywords: Solid Oxide Fuel Cell, Explicit Model Predictive Control, Multi-Parametric Programming
1. Introduction Fuel Cell power systems have a high potential for serving many power generation applications, both stationary and mobile, due to their outstanding energy (electric) efficiency, fuel versatility and minimal environmental impact characteristics [2,7]. In particular, Solid Oxide Fuel Cell (SOFC) systems have emerged as one of the most commercially widespread technologies, together with Proton Exchange Membrane Fuel Cells (PEMFC), offering high energy efficiency and robust performance, including also hybrid CHP arrangements increasing further the overall fuel utilisation [8]. The SOFC operation is based on an exothermic electrochemical reaction taking place at elevated temperatures [2,3]. The complex physical, chemical and electrical operation of SOFCs gives rise to important modelling and control challenges [2,3]. The efficient and stable operation of SOFCs depends on the efficient control/regulation of the generated voltage/power in the presence of varying operating conditions and disturbances [2]. These disturbances are associated with the fluctuations of the electric load (current), which mainly correspond to current demand changes/fluctuations or failures in the network [3]. Recent research on the control of fuel cells (mainly PEM FC) [1,3,4,7] showed that modern advanced model-based control methods such as Linear Quadratic Regulation [7], Model Predictive Control (MPC) [1,4] and Explicit/Multi-Parametric MPC (mp-MPC) [1,3,4] are suitable for the voltage and temperature regulation of FC in the presence of disturbances and constraints. Based on our previous work on multiparametric programming and control of FC [3,5], we present a unified framework for the off-line design and validation of explicit/multi-parametric MPC controllers for FC,
which we then apply to an SOFC system. A dynamic mathematical model of the SOFC system is presented that is used for performing dynamic simulation studies, and a reduced-order model of the SOFC is developed that is suitable for the design of model-based controllers. An mp-MPC controller is then designed and validated off-line by directly implementing it on the dynamic model of the SOFC. The proposed framework is presented in detail in the following section.
2. Framework for explicit/multi-parametric MPC of fuel cells
The proposed framework for the design and validation of explicit/multi-parametric MPC controllers [5,6] is illustrated in Figure 1. This framework consists of four key steps, described as follows: (i) development of a dynamic mathematical model of the fuel cell that is used for detailed simulation and (design and operational) optimization studies, (ii) development of a reduced-order/approximating model suitable for control design, (iii) design of the explicit/multi-parametric MPC controller using off-line multi-parametric programming and control methods, and (iv) off-line validation/testing of the controller. All the steps of this framework are performed off-line before any real implementation on the system takes place. Hence the controller can be fully tested and validated off-line (step (iv)), thereby reducing the cost and time of testing as well as the risk of failure at the online implementation [5]. The four steps of this framework are discussed in detail in the following sections for the explicit/multi-parametric control design of a SOFC system.
2.1. SOFC mathematical model
The SOFC system under consideration is shown in Figure 2. The SOFC operates via the exothermic oxidation (see Figure 2) of the fuel (assumed to be hydrogen), with oxygen being the oxidant. The cell consists of a solid electrolyte (Y2O3·ZrO2), an anode (where the fuel is supplied and the oxidation takes place) and a cathode (where the oxygen is supplied and reduction to O2- takes place). The SOFC stack consists of 384 unit cells (Figure 2) of a rectangular configuration connected in series; each unit cell is either connected to another unit cell or to the wiring of the load. The dynamic model of the SOFC was developed in [3] and its most important equations are shown in Table 1. This model is implemented in Matlab/Simulink (The Mathworks Inc., 2007) and was used to perform a set of open-loop dynamic simulation studies. In these simulations the voltage and temperature of the SOFC are obtained for a set of varying operating conditions in which electric current step disturbances ΔI = 20, 40, 60 A are applied to the system at time 50 s. The results of these simulations are given in Figures 5 and 6.
2.2. Reduced order model for the SOFC
Current model-based control methods, such as mp-MPC, cannot use detailed dynamic models such as the SOFC model in Table 1; they usually rely on linear input-output or state-space models [5,6]. Therefore, in the second step of the framework (Figure 1) a reduced-order state-space (SS) model of the SOFC process is obtained by performing system identification based on the input/output data of the open-loop simulations. Note that the SOFC voltage V and temperature T are the system outputs, $y = [V\ T]^T$, and the H2 and O2 mass flowrates are the system inputs, $u = [F_{H_2}\ F_{O_2}]^T$. The mathematical model of the derived SS model is given by Eq. (1) below.
Figure 1. Framework for mp-MPC.
Figure 2. SOFC system and unit cell configuration: fuel and oxidant channels with the anode reaction H2 + O2- → H2O + 2e- and the cathode reaction O2 + 4e- → 2O2-.
Table 1. SOFC mathematical model.

| Valve molar constants | $Q_{H_2} = K_{H_2} P_{H_2}$, $Q_{H_2O} = K_{H_2O} P_{H_2O}$, $Q_{O_2} = K_{O_2} P_{O_2}$ |
| Constitutive equations | $P_{H_2} V_{an} = n_{H_2} RT$, $P_{H_2O} V_{an} = n_{H_2O} RT$ |
| Dynamic molar balances | $\frac{dP_{H_2}}{dt} = \frac{RT}{V_{an}}\left(Q_{H_2}^{in} - Q_{H_2}^{out} - 2K_r I\right)$, $\frac{dP_{H_2O}}{dt} = \frac{RT}{V_{an}}\left(Q_{H_2O}^{in} - Q_{H_2O}^{out} + Q_{H_2O}^{rxn}\right)$ |
| Partial pressure | $P_{H_2}(s) = \frac{1/K_{H_2}}{1 + \tau_{H_2} s}\left(Q_{H_2}^{in} - 2K_r I\right)$, $\tau_{H_2} = \frac{V_{an}}{K_{H_2} RT}$ |
| Dimensionless heat balance | $\frac{\partial\Theta}{\partial\tau} = \frac{\lambda_{eff,x}}{\lambda_s}\frac{\partial^2\Theta}{\partial\xi^2} + \frac{\lambda_{eff,y}}{\lambda_s}\frac{\partial^2\Theta}{\partial\eta^2} + \frac{\lambda_{eff,z}}{\lambda_s}\frac{\partial^2\Theta}{\partial\zeta^2} + \frac{1+K_e}{K_e}\frac{\sigma_j}{(\rho C_P)_s h}$ |
| Dimensionless heat generation | $\sigma_j = 0.72\, S_0^{1.1}$, $F_0 = \frac{\lambda_s\Delta t}{\rho_s C_P h_{eff}^2}$, $S_0 = \frac{\sigma_j (1+K_e)/K_e}{\lambda_s\Delta T / h_{eff}}$ |
| Open circuit electric potential | $V_{OCP} = N_O\left(E_O + \frac{RT}{2F}\ln\frac{P_{H_2} P_{O_2}^{0.1}}{P_{H_2O}}\right)$ |
| Concentration, activation and ohmic losses | $V_{con} = \frac{RT}{n_a F}\ln\left(1 - \frac{i}{i_L}\right)$, $V_{act} = \frac{RT}{n_a F}\log\left(\frac{i}{i_0}\right)$, $V_{Ohmic} = \alpha\, i$ |
| Closed circuit SOFC voltage | $V_{DC} = V_{OCP} - V_{con} - V_{act} - V_{Ohmic}$ |

The reduced-order state-space model is

$x_{t+1} = A x_t + B u_t + d_t, \qquad y_t = C x_t$  (1)

with

$A = \begin{bmatrix} 0.99839 & -0.00506 & -0.00054 & -3.6269\cdot 10^{-5} \\ -0.00512 & 0.98427 & -0.00208 & 0.00026 \\ -0.00097 & 0.00298 & 0.99374 & 0.00553 \\ 0.00162 & 0.00063 & 0.00396 & 0.99569 \end{bmatrix}, \quad B = \begin{bmatrix} -0.00106 & 0.00107 \\ 0.00331 & 0.00331 \\ 0.00072 & 0.00072 \\ 8.6062\cdot 10^{-5} & 8.6062\cdot 10^{-5} \end{bmatrix}, \quad C = \begin{bmatrix} 4953 & 1593.5 & 118.82 & 1.66 \\ 18322 & 5891.4 & 47.346 & 0.66 \end{bmatrix},$

where $d$ is the model mismatch. Note that the input-output data, and hence the SS model, were derived for a sampling time of 0.01 s. The Matlab System Identification Toolbox was used to perform the system identification calculations. The modelling error between the dynamic mathematical model and the reduced model of the SOFC is shown in Figure 3.
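For illustration, a minimal sketch simulating the identified model (1) with the matrices as reconstructed above (the initial state and input sequence would be supplied by the user):

```python
import numpy as np

A = np.array([[ 0.99839, -0.00506, -0.00054, -3.6269e-5],
              [-0.00512,  0.98427, -0.00208,  0.00026],
              [-0.00097,  0.00298,  0.99374,  0.00553],
              [ 0.00162,  0.00063,  0.00396,  0.99569]])
B = np.array([[-0.00106,  0.00107],
              [ 0.00331,  0.00331],
              [ 0.00072,  0.00072],
              [ 8.6062e-5, 8.6062e-5]])
C = np.array([[ 4953.0, 1593.5, 118.82, 1.66],
              [18322.0, 5891.4, 47.346, 0.66]])

def simulate(x0, U):
    """Iterate x_{t+1} = A x_t + B u_t, y_t = C x_t (Ts = 0.01 s);
    U is a sequence of inputs u_t = [F_H2, F_O2]."""
    x, Y = np.asarray(x0, dtype=float), []
    for u in U:
        Y.append(C @ x)
        x = A @ x + B @ np.asarray(u)
    return np.array(Y)  # columns: voltage V, temperature T
```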
2.3. Explicit/multi-parametric MPC design
An explicit/multi-parametric MPC controller is designed in the third step of the proposed framework. First, the following MPC control formulation is considered:

$V_N^*(x) = \min_{u} \sum_{i=0}^{N}\left(r_{t+i} - y_{t+i}\right)^T Q \left(r_{t+i} - y_{t+i}\right) + \sum_{i=0}^{M} u_{t+i}^T R\, u_{t+i}$

s.t. $x_{t+1} = A x_t + B u_t + d_t$, $y_t = C x_t$, $t = 0, 1, \dots, N-1$  (2)

$\begin{bmatrix} 0 \\ 0 \end{bmatrix}\ \text{mol/s} \leq u_t \leq \begin{bmatrix} 6 \\ 6 \end{bmatrix}\ \text{mol/s}, \qquad \begin{bmatrix} 200\ \text{Volt} \\ 1000\ \text{K} \end{bmatrix} \leq y_t \leq \begin{bmatrix} 350\ \text{Volt} \\ 1300\ \text{K} \end{bmatrix}, \qquad t = 0, 1, \dots, N$
where y is the output vector, u is the input vector, d is the vector of the model mismatch, N is the output horizon and M is the control horizon. The objective function in the above optimization minimizes the difference between the actual output values and the set point r, as well as the control increments, at all times. The input-output constraints correspond to the physical limitations of the inlet hydrogen and oxygen flowrates and the operational limitations on the voltage and temperature (for example, voltages below 200 V are not useful for everyday operation). We assume that N = 5 and M = 2, $Q = 100 I_2$ and $R = 0.1 I_2$. The MPC optimization problem (2) is a multi-parametric Quadratic Program (mp-QP) with four optimization variables $u_t, u_{t+1}$ and eight parameters $x = [x_t, V_t, T_t, V_{sp}, T_{sp}]^T$. The Parametric Optimization (POP) (Paros, 2007) software was used to solve the mp-QP problem (2) and derive an explicit controller consisting of 154 critical regions and corresponding control laws, which are shown in Figure 4. The mathematical description of the controller is given by
$u_t = K_i x + c_i \quad \text{if} \quad x \in CR_i = \{x \mid A_i x \leq b_i\}, \quad i = 1, \dots, 51$  (3)
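Online, evaluating the explicit law (3) reduces to a search over the critical regions; a minimal sketch, with the region data standing in for the tables produced by the mp-QP solver:

```python
import numpy as np

def explicit_mpc(x, regions):
    """Evaluate u = K_i x + c_i over the critical region CR_i containing x.
    regions: iterable of (A_i, b_i, K_i, c_i) with CR_i = {x | A_i x <= b_i}."""
    for A_i, b_i, K_i, c_i in regions:
        if np.all(A_i @ x <= b_i + 1e-9):  # small tolerance on region facets
            return K_i @ x + c_i
    raise ValueError("x lies outside the explored parameter space")
```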
2.4. Controller validation
The controller is implemented directly on the dynamic model (Table 1) of the SOFC and a set of closed-loop simulations is performed for varying load conditions. A number of step changes of magnitude ΔI = 20, 40, 60 A are considered for the current load. The results of these simulations are shown in Figures 5 and 6, together with the results from the open-loop dynamic simulations (Section 2.1). Comparing the open-loop and closed-loop simulations, we notice that although the open-loop voltage is significantly reduced for an increasing current load, in the closed-loop simulations the controller manages to regulate and maintain the voltage at its desired value despite the varying disturbance. In addition, the temperature increase noticed in the open-loop simulations is smaller in the closed-loop simulations. This is an important feature of the controller, since no cooling system is considered in this study; it is clear that the inclusion of a cooling/heat-exchanging system would have further improved the temperature response to the disturbances.
2.5. Concluding remarks
In this work we presented a framework for the design and validation of explicit/multi-parametric controllers for SOFC. The resulting controller manages to regulate the SOFC voltage at its desired value despite the disturbances, while at the same time satisfying the system constraints.
2.6. Acknowledgements The financial support of the following projects is acknowledged: EPSRC (EP/E047017/1, EP/G059071/1), EU (DECADE IAPP project, PIAP-GA-2008-230659) and European Research Council (MOBILE ERC Advanced Grant No: 226462).
Figure 3. Model mismatch between SOFC dynamic and approximating models.
Figure 4. Critical regions of the explicit controller.
Figure 5. Open-loop and Closed-loop simulation of the SOFC voltage V.
Figure 6. Open-loop and Closed-loop simulations of the SOFC temperature T.
References
1. A. Arce, D.R. Ramirez, A.J. del Real, C. Bordons, 2007, Constrained Explicit Predictive Control Strategies for PEM Fuel Cell Systems, Proc. of the 46th IEEE Conf. Dec. Con., New Orleans, USA, 6088-6093.
2. M. Bavarian, M. Soroush, I.G. Kevrekidis and J.B. Benziger, 2010, Mathematical Modelling, Steady-State and Dynamic Behavior and Control of Fuel Cells: A Review, Industrial & Engineering Chemistry Research, 49, 17, 7922-7950.
3. D.I. Gerogiorgis, K.I. Kouramas, N. Bozinis and E.N. Pistikopoulos, 2006, AIChE Annual Meeting, San Francisco, California, USA, Session 455.
4. C. Panos, K. Kouramas, M.C. Georgiadis and E.N. Pistikopoulos, 2010, Modelling and Explicit MPC of PEM Fuel Cell Systems, Computer Aided Chemical Engineering, 28, 517-522.
5. E.N. Pistikopoulos, 2009, Perspectives in Multiparametric Programming and Explicit Model Predictive Control, AIChE Journal, 55, 8.
6. E.N. Pistikopoulos, M. Georgiadis, V. Dua (eds.), 2007, Multi-parametric Model-Based Control: Theory and Applications, Wiley-VCH, Weinheim, Germany.
7. J.T. Pukrushpan, A.G. Stefanopoulou and H. Peng, 2004, Control of Fuel Cell Power Systems, Series in Advances in Industrial Control, Springer, London, UK.
8. P. Varbanov, J. Klemeš, 2008, Analysis and Integration of Fuel Cell Combined Cycles for Development of Low-Carbon Energy Technologies, Energy, 33(10), 1508-1517.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A Reformulation Scheme for Parameter Estimation of Hybrid Systems
Ines Mynttinen and Pu Li
Simulation and Optimal Processes Group, Institute of Automation and Systems Engineering, Technische Universität Ilmenau, 98693 Ilmenau, Germany
Abstract We present a new reformulation scheme for parameter estimation of hybrid systems. A key step in this method is the introduction of a continuous switching variable which approximates the strict complementarity condition by means of a smoothened step function (SSF). The reformulation is implemented using discretization with collocation on finite elements, and the resulting optimization problem is solved by a NLP solver. The effectiveness of the proposed reformulation scheme is demonstrated for the parameter estimation of a three-tank model by comparison with the results of a penalization approach and a heuristic particle swarm optimization. Keywords: parameter estimation, hybrid systems, reformulation methods
1. Introduction
Parameter estimation of dynamic models is an important issue in many fields of industrial research [1], since simulation and optimization are usually based on differential algebraic equations (DAE) involving a multitude of parameters. To calibrate these dynamic nonlinear models, the parameters have to be estimated based on measured data [1, 2]. Until now, continuous dynamic systems have been considered in most parameter estimation studies. However, in many fields of application such as chemical processes, power plants and transport vehicles, continuous and discrete state dynamics are coupled strongly. Such systems with mixed continuous and discrete dynamics are called hybrid systems. In simulation studies on hybrid systems, discrete transitions are handled through embedded logical statements [3]. For optimization tasks one can apply a heuristic (gradient-free) search algorithm, such as particle swarm optimization (PSO), which has recently been used for parameter estimation and optimization of a hybrid system [4, 5]. Another approach is to solve a high-dimensional optimization problem subject to the DAE system as constraints via a gradient-based method, which is much more efficient than a heuristic search method. However, NLP-based parameter estimation for hybrid systems is an extremely challenging task because, due to instantaneous switches of the system dynamics, the objective functions, constraints and gradients can be non-smooth, divergent or discontinuous. As a consequence, constraint qualifications will be violated [6]. To overcome this difficulty, mixed-integer approaches [7, 8] and reformulation strategies have been proposed. Since the former can be computationally expensive for complex systems [8], in this study we focus on reformulation methods. Reformulation methods introduce additional variables to remove the non-smoothness from the problem while retaining the desired system features. Reformulation strategies can be classified as relaxation [6] or penalization of incomplete switching (PICS) [6, 9]. In this study, we propose a new relaxation method making use of a smooth step function. Applied to the parameter estimation of a three-tank model, the performance, accuracy and robustness
of this reformulation method are studied and compared with those of a PICS approach and the PSO method.
2. Reformulation strategies for parameter estimation of hybrid systems
Parameter estimation aims at extracting the best values of the parameters determining the dynamics of the system under consideration, based on a series of measurements $x_{j\ell}^{(m)}$ of several state variables $x_j$, $j = 1,\dots,M$, at different time points $t_\ell$, $\ell = 1,\dots,N$. Due to measurement error, the estimated parameters are subject to some uncertainty. Assuming that the measurement error is uncorrelated and normally distributed with variance $\sigma_j^2$, the model parameters can be estimated by minimizing the weighted least-squares function

$J(p) = \sum_{j=1}^{M}\sum_{\ell=1}^{N} \frac{\left(x_j(p, t_\ell) - x_{j\ell}^{(m)}\right)^2}{\sigma_j^2}$  (1)
subject to the DAE system as equality constraints and variable boundaries as inequality constraints. In this study we consider parameter estimation problems for hybrid systems, i.e., a mode transition takes place when the state variables meet certain switching conditions. This leads to a hybrid nonlinear dynamic optimization problem. For a convenient description, we consider a general binary hybrid system with autonomous transitions controlled by the transition condition s(x,p), and the optimization problem is formulated as

$\min_p J(x,p)$
s.t. (mode 1): $\dot{x} = f^{(1)}(x,p)$, $s(x,p) \geq 0$  (2)
(mode 2): $\dot{x} = f^{(2)}(x,p)$, $s(x,p) < 0$

with states x = x(t) and parameters p. Solving this problem directly, the instantaneous transition between the modes will lead to the violation of the constraint qualifications mentioned above and possibly to a failure of the NLP solver. Thus, we reformulate the equality constraints by introducing a time-dependent switching variable $0 \leq \sigma(t) \leq 1$. The latter connects both operating modes, resulting in the mixed but continuous dynamics

$\dot{x} = \sigma f^{(1)}(x,p) + (1-\sigma) f^{(2)}(x,p).$  (3)

To describe the transition behavior, the switching variable needs to be forced to meaningful values, i.e. $\sigma = 1$ for mode 1 and $\sigma = 0$ for mode 2. For this purpose, we propose to use a smooth step function (SSF) to represent this variable. To be specific, we employ the Fermi-Dirac function

$\sigma(s) = \frac{1}{1 + \exp(-\tau s)}$  (4)

where a non-negative relaxation parameter $\tau$ is introduced to regulate the steepness of the smooth step. In this way the hybrid optimization problem is relaxed to a continuous optimization problem which can be solved by available dynamic optimization approaches. As in other relaxation methods, the complementarity is not strictly fulfilled. Instead, a sequence of relaxed problems with increasing $\tau$ can be solved to approach the solution of the original problem as $\tau_n \to \infty$. For comparison, we also consider the penalization of incomplete switching (PICS) approach, where the complementarity is ensured by an inner optimization combined with penalization of the constraint violation [9]. The inner optimization problem is defined as

$\min_{0 \leq \sigma \leq 1} \left(-s\,\sigma\right)$  (5)
which will be converted into the corresponding Karush-Kuhn-Tucker conditions $-s - \lambda_0 + \lambda_1 = 0$ with complementarity constraints $\lambda_0\sigma = 0$, $\lambda_1(1-\sigma) = 0$ and non-negative Lagrange multipliers $\lambda_0, \lambda_1 \geq 0$. The violation of the complementarity constraints is penalized in the outer optimization with the objective function

$\min_p J(x,p) + \rho \int_{t_0}^{t_f} \left(\lambda_0\sigma + \lambda_1(1-\sigma)\right) dt$  (6)

where the penalty is weighted by a time-independent parameter $\rho$. If $\rho$ is greater than a certain value, the solution of this optimization problem will be exact, since all stationary points of the original problem are local minimizers of the penalized version [6]. After a time discretization (e.g. collocation on finite elements), both SSF and PICS lead to a smooth NLP problem which can be solved by any of the available NLP solvers. Solving problem Eq. (2) by PSO, the objective function Eq. (1) is calculated repeatedly for the so-called particles of a swarm, where each complete parameter set p represents a particle position. This position is adapted using the particle velocity, which is based on information from previous simulation cycles. For suitable tuning parameters, PSO provides a proper balance of parameter-space exploration at the early stage of the search and a good precision of the final results [5].
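As a minimal sketch of the two ingredients used below, assuming a user-supplied simulator x_model and mode functions f1, f2, s (all hypothetical names): the weighted least-squares objective (1) and the SSF-relaxed dynamics (3)-(4):

```python
import numpy as np

def J(p, t_meas, x_meas, sigma, x_model):
    """Weighted least-squares objective, Eq. (1); x_meas[j, l] is the
    measurement of state j at t_l and sigma[j] its standard deviation."""
    r = (x_model(p, t_meas) - x_meas) / sigma[:, None]
    return float(np.sum(r ** 2))

def sigma_ssf(s, tau):
    """Smoothed step (Fermi-Dirac) switching variable, Eq. (4)."""
    return 1.0 / (1.0 + np.exp(-tau * s))

def relaxed_rhs(x, p, tau, f1, f2, s):
    """Relaxed hybrid dynamics, Eq. (3): sigma*f1 + (1 - sigma)*f2."""
    sig = sigma_ssf(s(x, p), tau)
    return sig * f1(x, p) + (1.0 - sig) * f2(x, p)
```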
3. Parameter estimation for a three-tank system
In order to examine the performance of our reformulation method, we consider a tank system similar to that used in [8, 10], since it is simple enough to understand its behavior intuitively but exhibits non-trivial hybrid properties. The system consists of three tanks in a row connected to each other (Fig. 1). There are inflows $Q_{zi}$, $i = 1, 3$, to the left and the right tank.

$Q_{ij} = A_{ij}\,\mathrm{sign}(s_{ij})\sqrt{2g\,|s_{ij}|}, \qquad s_{ij} = h_i - h_j, \qquad (i,j) \in \{(1,2), (2,3)\}$

| | exact | SSF | PICS | PSO |
| $A_{12}$ [$10^{-5}$ m$^2$] | 6.0 | 6.51 | 6.06 | 6.28 |
| $A_{23}$ [$10^{-5}$ m$^2$] | 2.0 | 2.29 | 2.98 | 2.30 |

Fig. 1. Three-tank system, Torricelli's law and optimal parameter values.
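Within this example, the non-smooth sign function in Torricelli's law is exactly what the SSF relaxes: blending the two flow directions with σ is equivalent to the smooth sign 2σ(s) - 1. A sketch (τ is an illustrative value):

```python
import numpy as np

def flow(h_i, h_j, A_ij, tau=50.0, g=9.81):
    """Torricelli flow Q_ij = A_ij sign(s) sqrt(2 g |s|), s = h_i - h_j,
    with sign(s) replaced by the smooth 2*sigma(s) - 1."""
    s = h_i - h_j
    smooth_sign = 2.0 / (1.0 + np.exp(-tau * s)) - 1.0
    return A_ij * smooth_sign * np.sqrt(2.0 * g * abs(s))
```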
The dynamics of the tank levels $h_i$, $i = 1, 2, 3$, is given by the mass balances of the tanks. The outflows $Q_{Li}$, $Q_3$ and the flows between the tanks are modeled by Torricelli's law (see Fig. 1). The sign function switches the direction of the flow between two tanks abruptly from +1 to -1, or vice versa, when the condition $s_{ij} = h_i - h_j = 0$ is crossed. Note that the gradient of the flow diverges to infinity at this point. Our aim is to estimate, via minimization of the objective function (Eq. (1)), the flow parameters $A_{ij}$ based on (simulated) measurement data $h_\ell^{(m)}$, $\ell = 1,\dots,10$, of the tank levels taken equidistantly within the time horizon $(t_0, t_f) = (0, 20)$ s. The data are generated via simulation of the original model with added Gaussian noise. The equality constraints representing the system dynamics are reformulated using the proposed relaxation method. Then the dynamic optimization problem is discretized by the collocation method, and finally the discretized NLP problem is solved using IPOPT [11]. In order to evaluate the performance of the proposed method, we compare it with that of PICS and PSO. First, we study the accuracy, i.e. the capability of the algorithms to
reproduce the correct switching behavior and the optimal parameter values. The optimal state trajectories found by SSF and PICS are shown in Fig. 2a). They agree quite well with each other. In particular, the crossing points of the levels $h_1$ and $h_2$ and of the levels $h_2$ and $h_3$ nearly coincide. This reflects the fact that the correct switching behavior is obtained in both cases, as shown in Fig. 2b). It can be seen that using SSF the switch is smooth and rather slow, which apparently has almost no impact on the trajectories. In contrast, using PICS the switch takes place almost instantaneously. Obviously, PSO will also provide the correct switching behavior if a simulation tool able to treat hybrid systems is used.
Fig. 2. Trajectories of (a) the states at the solution for SSF (solid) and PICS (dashed) as well as the measurements of h1 (diamonds) and h2 (triangles), and (b) the corresponding switching variables σ12 (blue) and σ23 (green).
The table in Fig. 1 compares the optimal parameter values estimated by the three methods. With PICS we obtain the best value for the parameter $A_{12}$, but the deviation of $A_{23}$ from the exact value is considerable. SSF and PSO result in moderate deviations for both parameters. In summary, all three approaches provide reasonably accurate results. The robustness of the algorithms is evaluated with respect to their sensitivity to a change of the reformulation parameter, as well as the capability of the three methods to handle error in measurement. The dependence of the objective function value obtained by SSF and PICS on the respective reformulation parameter is shown in Fig. 3a) and b). For SSF, $J(\tau)$ is monotonous and thus the parameter $\tau$ can simply be increased as long as the problem stays sufficiently smooth for the NLP solver to find a solution. In contrast, the non-monotonous behavior of $J(\rho)$ with the PICS approach demonstrates that finding the proper balance between the least-squares term and the penalty is not a trivial task. We also studied the influence of a finite error in measurement on the estimated parameter values. Parameter estimation was carried out for 50 series of $h_2$ for different values of $\sigma_M$, so that the mean parameter values $\bar{A}_{12}$ as well as their standard deviation $\sigma_p$ can be evaluated. As expected, $\bar{A}_{12}$ is constant over a wide range of random error, whereas $\sigma_p$ increases linearly with increasing $\sigma_M$ (not shown). It turns out that PICS can handle only small random errors, whereas the results of SSF and PSO are reasonable for $\sigma_M \leq 0.02$ m. The CPU time needed by PICS is higher than that needed by SSF, since the number of iterations with SSF is generally lower, in particular in the case of a fine temporal grid. In the example of the three-tank system, the computation time required by PSO is not higher than that required by the reformulation strategies. With a small number of particles ($n_\nu = 25$) and iterations ($n_i = 15$), a small value of the objective function and proper parameter estimates can be achieved; see Fig. 1 and 3c). It should be noted that the good performance of PSO is due to the low dimensionality of the parameter space. Reformulation strategies are expected to outperform PSO considerably for problems involving a large number of parameters or time-dependent control variables.
Fig. 3. Dependence of the objective function on the reformulation parameters, i.e. (a) the smoothing parameter τ and (b) the penalty ρ, in the case of SSF and PICS, respectively. (c) Objective function as a function of the number of iterations for PSO.
4. Conclusions
Reformulation strategies for nonlinear optimization problems with complementarity constraints, such as smoothing of the step function (SSF) and penalization of incomplete switching (PICS), can be used to solve hybrid dynamic parameter estimation problems. They provide a computationally attractive alternative to heuristic optimization methods like particle swarm optimization (PSO) in conjunction with a simulation tool for the underlying DAEs. The SSF method is more robust against variation of the reformulation parameter than PICS. SSF and PSO proved to be quite robust against error in measurement. In the next step, we plan to apply these methods to large-scale systems and test their viability for industrial purposes.
5. Acknowledgement We would like to thank SIEMENS AG for financial support. We are grateful to Erich Runge for valuable help regarding the interpretation of results and methods.
References
[1] K. Schittkowski (2002), Numerical Data Fitting, Kluwer Academic Press.
[2] C. Michalik, B. Chachuat, W. Marquardt (2009), Incremental global parameter estimation in dynamical systems, Ind. Eng. Chem. Res., vol. 48, pp. 5489–5497.
[3] R. Goebel, R.G. Sanfelice, A.R. Teel (2009), Hybrid dynamical systems, IEEE Control Syst. Mag., pp. 28–93.
[4] V.S. Pappala, I. Erlich (2008), A new approach for solving the unit commitment by adaptive particle swarm optimization, in IEEE PES General Meeting, pp. 1–6.
[5] M. Schwaab, E.C. Biscaia, J.L. Monteiro, J.C. Pinto (2008), Nonlinear parameter estimation through particle swarm optimization, Chem. Eng. Sci., vol. 63, pp. 1542–1552.
[6] B.T. Baumrucker, J.G. Renfro, L.T. Biegler (2008), MPEC problem formulations and solution strategies with chemical engineering applications, Comp. Chem. Eng., vol. 32, pp. 2903–2913.
[7] P.I. Barton, J.R. Banga, S. Galan (2000), Optimization of hybrid discrete/continuous dynamic systems, Comp. Chem. Eng., vol. 24, pp. 2171–2182.
[8] J. Till, S. Engell, S. Panek, O. Stursberg (2004), Applied hybrid system optimization: An empirical investigation of complexity, Control Eng. Pract., vol. 12, pp. 1291–1303.
[9] S. Sager (2009), Reformulations and algorithms for the optimization of switching decisions in nonlinear optimal control, J. Process Contr., vol. 19, pp. 1238–1247.
[10] B.T. Baumrucker, L.T. Biegler (2009), MPEC strategies for optimization of a class of hybrid dynamic systems, J. Process Contr., vol. 19, pp. 1248–1256.
[11] A. Wächter, Ipopt, https://projects.coin-or.org/Ipopt, Jul., 2010.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Dynamic optimization of bioreactors using probabilistic tendency models and Bayesian active learning
Ernesto Martínez,a Mariano Cristaldi,b Ricardo Grau,b Joao Lopesc
a INGAR (Conicet-UTN), Avellaneda 3657, Santa Fe, S3002 GJC, Argentina
b INTEC (Conicet-UNL), Güemes 3450, Santa Fe, 3000, Argentina
c Porto University, Chemistry Dept., R. Aníbal Cunha 164, Porto 4099-030, Portugal
Abstract
First-principles models of fermentation processes typically have built-in errors in the form of structural mismatch and parametric uncertainty. A model-based optimization approach for run-to-run improvement of fed-batch bioreactors under uncertainty, integrating probabilistic tendency models with Bayesian inference, is proposed. Probabilistic models grounded on first principles are used in the design of dynamic experiments to bias data gathering towards the subspace of most promising operating conditions. Results obtained in the fed-batch fermentation of penicillin G are presented.
Keywords: Bayesian inference, Bioprocesses, Model-based experimental design, Modeling for optimization, Uncertainty.
1. Motivation Most optimization techniques are model-based, and since accurate dynamic models are rarely available, guaranteeing the performance of an operating policy under uncertainty is crucial for a successful scale-up (Terwiesch et al., 1994; Schenker and Agarwal, 1995; Bonvin, 1998; Kadam et al., 2007). The best use of an imperfect first-principles model through proper handling of its inherent uncertainty is a challenging problem for fast productivity improvement of innovative fed-batch fermentations using a handful of production runs (Walsh, 2007). The main problem in bioreactor modeling for optimization is that biological activity occurs in alternative metabolic pathways with switches which are triggered in response to changes in environmental conditions (Visser et al., 2000; Riascos and Pinto, 2004; Martínez et al., 2009). Due to the complexity of metabolic regulation and sparse measurements, first-principles models of bioreactor dynamics can only capture the qualitative tendency of sampled state variables such as biomass and protein concentrations. Migration from the bench scale to production runs is thus made with high levels of uncertainty about the maximum level of productivity that can actually be achieved. As a result, a sub-optimal policy is typically used to compensate for the inherent uncertainty in a model-optimized policy (Bonvin, 1998).
2. Methodology
2.1. Probabilistic tendency model
In order for a tendency model to reflect the observed bioreactor dynamics as accurately as possible, it must also faithfully represent its own fidelity statistically. A probabilistic model quantifies this uncertainty by integrating first-principles knowledge with data so as to capture all plausible dynamics in a distribution over model predictions for state transitions between samples. To this end, let us assume that the bioreactor dynamics is modeled using a number of state variables x(t) that can be measured, and that the vector y(t) represents the measured values of the outputs at a given sampling time t. Also, it is assumed that the tendency model can be described by a dynamic stochastic model constituted by
f(ẋ, x, u(t), w, θ, t) = 0,  y = g(x(t))    (1)
with the set of initial conditions x(0) = x0; u(t) and w are, respectively, the time-dependent and time-invariant control variables (manipulated inputs), θ is the vector of i.i.d. model parameters with given a priori distributions p(θi), i = 1,…,k, and t is time. Run-to-run optimization of a bioreactor aims at increasingly improving a performance index (e.g., productivity) J at the end of each run by purposefully setting the parameters in the operating policy M1 and the sampling strategy M2, defined as follows:

M1 = [y0, β, w, tf];  M2 = [t1, …, tn]    (2)
where y0 is the set of initial conditions of the measured variables, and tf is the duration of an experiment. The idea is to exploit current knowledge about the prior distribution p(θ) to define a model-optimized policy M1 and then explore over an evaluation run by sampling data to revise parameter distributions so that an improved policy is found. Control vector parameterization is used to discretize the control input profiles u(t; β). To make predictions at an arbitrary sampling time, we take the uncertainty about tendency model parameters into account by averaging over predicted state transitions with respect to their probability distributions. Thus, a predictive state distribution p(x_ti) is obtained for each sampling time ti, which sheds light not only on the expected value of the state x, but also on the uncertainty of this estimation. Within the Bayesian framework, probability distributions over model parameters θi, i = 1,…,k, capture the a priori parametric uncertainty in a tendency model. Using samples, these distributions can be conveniently modified on a run-to-run basis so as to reflect modeling bias introduced by new sampled data. In this work parameter distributions are represented by histograms obtained by bootstrapping. The bootstrap method is a simulation method for statistical inference using re-sampling with replacement. The method has been successfully applied in quantifying confidence intervals of uncertain kinetic parameters in metabolic networks (Joshi et al., 2006).
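A minimal sketch of this bootstrap in Python is given below; fit_parameters is a hypothetical placeholder for the least-squares estimation of the tendency model parameters, and the residual-resampling variant shown is one common choice, not necessarily the exact procedure used by the authors.

import numpy as np

def bootstrap_histograms(t, y_meas, y_model, fit_parameters, n_boot=500, seed=0):
    # Build histograms of theta by re-sampling residuals with replacement.
    rng = np.random.default_rng(seed)
    residuals = y_meas - y_model              # model-data mismatch at the samples
    thetas = []
    for _ in range(n_boot):
        resampled = rng.choice(residuals, size=residuals.size, replace=True)
        thetas.append(fit_parameters(t, y_model + resampled))
    return np.asarray(thetas)                 # one parameter vector per replicate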
2.2. Model-based policy iteration
A high-level description of the model-based policy iteration framework is given in Fig. 1. It is important to highlight that the activity called policy evaluation corresponds to the actual running of a designed experimental run, whereas other activities such as policy optimization, experimental design and sensitivity analysis are entirely based on model simulations. The operating policy M1 is first initialized by resorting to expert judgement and a priori knowledge from the lab scale to avoid undesirable physiological states. Samples are taken along this experiment so as to make a rough estimation of probability distributions or histograms for parameters in the tendency model. Equipped with a probabilistic model which explicitly addresses its own uncertainty, the policy iteration loop can be entered. First, the “most probable” model parameterization is used to find a model-optimized operating policy. Using this policy an optimally informative experiment is designed to define an optimal sampling strategy M2 along the next evaluation run. The policy is then evaluated experimentally and new data is gathered. To use incoming data more efficiently, a sensitivity analysis is made to pinpoint the subset of parameters that explains most of the variance of the performance index J. Finally, using new data the probabilistic tendency model is updated by bootstrapping the distributions of sensitive parameters, and a new policy improvement round begins.
2.3. Experimental design
Optimal sampling times must be calculated so as to bring new information that selectively reduces the parametric uncertainty which significantly affects the optimality of M1 regarding the performance index J. For effective sampling, the best criterion is D-optimality, which maximizes the determinant of the Gram matrix M (Martínez et al., 2009):
M2* = arg max_{M2} det(M),  M = QᵀQ,
subject to: t_i^L ≤ t_i ≤ t_i^U, i = 1,…,n    (3)
where each entry of the matrix Q, S_ij, measures the sensitivity of the performance index J at the i-th sampling time with respect to the j-th parameter of the operating policy M1.

1: Policy evaluation — Exploratory run.
2: Model initialization — Define priors p(θ) for parameter distributions.
3: Loop
4: Model-based optimization — Policy improvement.
5: Experimental design — Optimal sampling strategy.
6: Policy evaluation — Collect observations.
7: Sensitivity analysis
8: Probabilistic model update — Introduce modeling bias. Bootstrapping.
9: End loop
Fig. 1. High-level description of model-based policy iteration.
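For illustration, the D-optimality criterion of Eq. (3) can be screened over a finite set of candidate sampling times as sketched below; sensitivity_matrix is a hypothetical helper returning Q for a given set of times, and the exhaustive enumeration stands in for the constrained optimization over continuous t_i.

import numpy as np
from itertools import combinations

def d_optimal_times(candidates, n, sensitivity_matrix):
    # Choose n sampling times from the candidates maximizing det(Q^T Q).
    best_times, best_det = None, -np.inf
    for times in combinations(candidates, n):
        Q = sensitivity_matrix(times)    # rows: sampling times; cols: policy parameters
        det_M = np.linalg.det(Q.T @ Q)   # Gram matrix M = Q^T Q
        if det_M > best_det:
            best_times, best_det = times, det_M
    return best_times, best_det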
3. Example
3.1. Fed-batch fermentation of penicillin G
Penicillin production is an established benchmark in fermentation processes for testing new approaches in modeling, optimization and control of novel bioprocesses. Run-to-run optimization aims at maximizing the final amount of penicillin obtained. First-principles equations and parametric uncertainty for an unstructured tendency model of a
fed-batch bioreactor are detailed in Menezes et al. (1994). An alternative tendency model for the penicillin bioreactor has been provided by Riascos and Pinto (2004).
3.2. Results
Table 1 depicts run-to-run policy optimization results obtained when sampled data is generated using the model proposed by Riascos and Pinto (2004) as the in silico bioreactor with realistic added measurement noise, whereas for policy iteration the probabilistic model is based on the structure of the tendency model proposed by Menezes et al. (1994). The operating policy includes important parameters such as the concentration of substrate in the feed and the initial volume, along with a feed rate profile modeled using inverse polynomials (see Martínez et al., 2009) with parameters A, B and C. As can be seen, model-based policy iteration achieves a significant improvement in penicillin production in just three evaluation runs. In Table 2 the selective uncertainty reduction is highlighted, whereas in Fig. 2 the evolution of the feed rate profiles is shown.
Table 1. Model-based policy iteration under structural errors and parametric uncertainty
Policy parameter               | Initial | 1st run | 2nd run | 3rd run | Optimum
A [L h-2]                      | 0.6882  | 0.8707  | 0.9697  | 1.3494  | 1.2755
B [h-1]                        | 0.1431  | 0.1     | 0.1022  | 0.1015  | 0.1018
C [h-2]                        | 0.0002  | 2e-4    | 3e-4    | 9e-4    | 0.0012
tfeed [h]                      | 0       | 24      | 23.6    | 24      | 22.37
tfinal [h]                     | 240     | 300     | 300     | 294.8   | 300
Substrate feed conc. [g L-1]   | 240     | 500     | 500     | 500     | 500
First discharge [h]            | 24      | 24      | 24.17   | 24.02   | 24
Discharge volume [L]           | 60      | 80      | 80      | 80      | 79.57
Discharge frequency [h]        | 24      | 24      | 24      | 24.62   | 35.28
Initial volume [L]             | 600     | 500     | 500     | 500     | 500
Penicillin obtained, J (Kg)    | 16.12   | 35.47   | 40.49   | 57.51   | 63.24
Performance mismatch, std(J)   | –       | 3.1     | 2.1     | 2.5     | 1.2755
Table 2. Run-to-run uncertainty reduction using sensitivity analysis
Param. | Prior interval | Exploratory run    | 1st run       | 2nd run       | 3rd run
μmáx   | 0.12-0.17      | 0.1444-0.17        | –             | –             | –
Ks     | 0.006-0.4      | (6.0 – 6.89)e-2    | –             | –             | (6.0 – 6.89)e-2
Kx     | (5 – 10)e-3    | (5 – 5.0004)e-3    | –             | –             | –
Kd     | (0.01 – 8)e-3  | 1e-6 – 6.65e-3     | –             | –             | –
klis   | 0.40-0.58      | 0.40 – 0.58        | –             | –             | –
Yxs    | 0.4 – 1        | 0.4 – 0.5178       | 0.40 – 0.4857 | 0.40 – 0.4757 | –
Yps    | (3 – 15)e-3    | (7.2 – 9.3)e-3     | –             | –             | (7.2 – 8.3)e-3
Smáx   | (1 – 20)e-5    | (1 – 1.09)e-4      | –             | –             | –
Kp     | 0.014-0.029    | (2.48 – 2.9)e-2    | –             | –             | (2.48 – 2.9)e-2
ζmáx   | (1 – 20)e-5    | (1 – 1.14)e-5      | –             | –             | –
Kh     | (2 – 10)e-3    | (2.0 – 2.04)e-3    | –             | –             | (2.0 – 2.04)e-3
Fig. 2. Run-to-run optimization of the substrate feed rate.
4. Final remarks A novel run-to-run optimization strategy for fast productivity improvement under uncertainty in fed-batch fermentation units integrating probabilistic tendency models with sensitivity analysis has been proposed. Bayesian inference and probabilistic models are very important for bioprocess scale-up since production data are very sparse.
References
D. Bonvin, 1998, Optimal operation of batch reactors, J. Proc. Control, 355-368.
M. Joshi, A. Seidel-Morgenstern, A. Kremling, 2006, Exploiting the bootstrap method for quantifying parameter confidence intervals in dynamical systems, Metab. Eng., 8, 447-455.
J. Kadam, M. Schlegel, B. Srinivasan, D. Bonvin, W. Marquardt, 2007, Dynamic optimization in the presence of uncertainty: from off-line nominal solution to measurement-based implementation, J. Proc. Control, 389-398.
E. Martínez, M. Cristaldi, R. Grau, 2009, Design of Dynamic Experiments in Modeling for Optimization of Batch Processes, Ind. Eng. Chem. Res., 48, 7, 3453-3465.
J. Menezes, S. Alves, J. Lemos, 1994, Mathematical Modelling of Industrial Pilot-Plant Penicillin-G Fed-Batch Fermentations, J. Chem. Tech. Biotechnol., 123-138.
C. Riascos, J. Pinto, 2004, Optimal control of bioreactors: a simultaneous approach for complex systems, Chemical Engineering J., 99, 23-34.
B. Schenker, M. Agarwal, 1995, Prediction of infrequently measurable quantities in poorly modeled processes, J. Proc. Control, 329-339.
P. Terwiesch, M. Agarwal, D.W.T. Rippin, 1994, Batch unit optimization with imperfect modeling: a survey, J. Proc. Control, 238-258.
D. Visser, R. van der Heijden, K. Mauch, M. Reuss, S. Heijnen, 2000, Tendency modeling: A new approach to obtain simplified kinetic models of metabolism applied to Saccharomyces cerevisiae, Metabolic Engineering, 2(3), 252-275.
G. Walsh, 2007, Pharmaceutical biotechnology: concepts and applications, John Wiley & Sons Ltd, Chichester, England.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Plantwide Control Design of a Postcombustion CO2 Capture Process
Marc-Oliver Schach,a Rüdiger Schneider,b Henning Schramm,b Jens-Uwe Repkea
a Institute of Thermal, Environmental and Natural Products Process Engineering, TU Bergakademie Freiberg, Leipziger Straße 28, 09596 Freiberg, Germany
b Siemens AG Energy Sector, Fossil Power Generation, Industriepark Höchst, 65926 Frankfurt am Main, Germany
Abstract
Coal-fired power plants are operated flexibly over a large operating range, and processes for postcombustion CO2 capture have to follow the power plant load and separate the carbon dioxide at every operating point with minimal energy demand. In this work control structures for these processes were designed using self-optimizing control. The derived control structures allow the separation of 90% of the CO2 over an operating range of 40 – 100% load with acceptable energy requirements.
Keywords: CO2 capture, postcombustion, control structures, self-optimizing control
1. Introduction
Carbon capture and storage from coal-fired power plants has become an important field of research in the last decade. Different concepts were analyzed and evaluated. For retrofitting already existing power plants, postcombustion processes should be applied. Chemical absorption processes have shown a very good performance in this field. Pilot plants of these processes have already been constructed on some power plant sites to gain more insight into the operation. In the fields of process configuration, solvent design and main equipment, extensive research work has been conducted. The next step in the development of the process is the design of a control and operation strategy. Contributions in this field are scarce in the literature. Lawal et al. (2010) published results of a dynamic simulation of an absorption process for CO2 capture, but they concentrated more on the analysis of different cases, e.g. a change of the flue gas mass flux and a change of the mass flow of the heating steam. The same control structure was used for all the discussed cases. Ziaii et al. (2009) showed the dynamic performance of a desorber when the heating steam changes. The motivation was economic operation at times when the price of electricity is high; then it could be advantageous to use the heating steam of the power plant for the production of electricity instead of using it for solvent regeneration. Kvamsdal et al. (2009) carried out simulation studies with a dynamic absorber model. Effects of load changes on the absorber performance were analyzed. Since only the absorber was modeled, the control structure used did not take the whole process into account. Panahi et al. (2010) designed a control structure for the complete capture plant using the “self-optimizing control” concept of Skogestad (2000). The control structure was designed for one operating point with respect to disturbances. In this work control structures for an operating range of 40 to 100% load of the power plant were designed. Due to the increased usage of renewable energy sources, coal-fired power plants are operated more dynamically. Therefore a control structure for a carbon capture plant has to be designed for a large operating range.
2. Process Simulation
As shown in Schach et al. (2010), the standard configuration of the chemical absorption process with absorber intercooler has a very good performance in terms of cost of CO2 avoided in comparison with other configurations. The flowsheet of the process is shown in Figure 1. The flue gas enters the absorber after passing a blower and a water cooler. The CO2 reacts with the solvent, a 30 wt-% monoethanolamine (MEA) solution, and 90% of the CO2 is separated. The solvent is intercooled in the absorber column, leading to a higher loading of the solvent, which results in a more efficient regeneration. The loaded solvent is pumped through a cross heat exchanger to the top of the stripper. In this column the solvent is regenerated by providing heat in the form of heating steam from the power plant. The vapours are condensed in a partial condenser at 40°C. As gaseous product CO2 is obtained, which is liquefied in a multistage compressor by pressurizing the CO2 up to 110 bar. During the compression water condenses and the resulting liquefied CO2 has a purity of >99.5 mol-%. The lean solvent is routed back to the absorber. For this process configuration a control structure was designed in this work.
The process was simulated using Aspen Plus 2006.5 with the amine package MEAREA, which provided the reaction model considering both kinetically controlled and equilibrium reactions. For the absorber and stripper columns the RadFrac model with rate-based (RateSep) calculation was used. The control structure was designed for an operating range of the power plant of 40 to 100% load. Four different operating points were considered: 40, 60, 80 and 100% load. The mass flow and composition of the flue gas for the 100% case are shown in Table 1. At lower loads the mass flow decreases and the composition changes. Since at part load the coal is burnt with excess air, the concentrations of CO2 and water decrease whereas those of oxygen and nitrogen increase.

Table 1: Flue gas data (load 100%)
Mass flow   | 750 kg/s
Temperature | 49°C
xCO2        | 13.5
xN2         | 71.5
xO2         | 3.5
xH2O        | 11.5

Figure 1: Flowsheet of the analyzed process
3. Control Structure Design
The self-optimizing control concept of Skogestad (2000) was applied for the control structure design. The objective of this concept is to find the control structure which realizes an acceptable loss with constant setpoint values for the controlled variables even if disturbances occur. For the analyzed process this means that the control structure has to maintain an economic separation of 90% of the CO2 over the whole operating range considered. In the following, the steps and results of the procedure are described.
3.1. Objective, Constraints and Disturbances
The objective is to minimize the equivalent energy demand of the capture process while maintaining a separation degree of 90% CO2. The energy demand is composed of the required energy for the blower, pumps and compressor and the equivalent work for the solvent regeneration.
The process is subject to 8 constraints: the CO2 separation degree is 90%; the flue gas is cooled down to 40°C before entering the absorber; the pressure in the absorber is 1 bar; the temperature in the condenser is 40°C; the concentration of MEA has to be maintained with MEA and water makeup streams; the temperature of the pumparound in the feed cooler is constant; and the CO2 is compressed up to 110 bar. The capture process at 100% flue gas load was regarded as the nominal process and the other loads of the power plant were considered as disturbances.
3.2. Degrees of Freedom
The degrees of freedom were determined using the concept of restraining numbers from Konda et al. (2006), resulting in 18 degrees of freedom. All 8 constraints and the 5 levels have to be controlled. In order to have the solvent mass flow as a free manipulated variable, the level in the absorber is controlled with the temperature of the lean solvent. With this temperature the water leaving the column at the top can be manipulated and in this way the level can be controlled. Since the feed is predetermined by the power plant load, the number of degrees of freedom is reduced by one. Taking this and the above mentioned constraints into account, 4 degrees of freedom remain: the pressure in the stripper column, the temperature and mass flow of the solvent at the intercooling stage, and the mass flow of the solvent.
3.3. Optimization
Parameters left for optimization are: the pressure in the stripper (1.1 – 1.8 bar), the temperature of the intercooling stage (30 – 50°C) and the mass flow of the solvent. Since the whole holdup was always intercooled, this was not a parameter for optimization. The temperature approach in the cross heat exchanger for the integration of the heat of the lean solvent has a strong influence on the performance of the process. It is determined by the exchanger area. For the optimization of the process at 100% load this was also an optimization parameter, as well as the stage of the intercooler in the absorber. The molecular-inspired parallel tempering algorithm of Ochoa et al. (2009) was used for the optimization. For the optimization of the process at 100% load the objective function was the cost of CO2 avoided, which was calculated according to the cost model of Schach et al. (2010). Since for this load the size of the main apparatuses, except for the columns, was not determined, the cost of CO2 avoided is a more meaningful value to evaluate the overall performance. Since no parameter was at its bound after the optimization, there are still 4 degrees of freedom minus 1 for the mass flow in the intercooler. The pressure in the stripper has to be controlled for safety reasons. Therefore two degrees of freedom are left: the solvent mass flow and the mass flow of cooling water in the intercooling stage.
3.4. Identification of candidate controlled variables
Two degrees of freedom are left for which controlled variables have to be identified. The energy for solvent regeneration, i.e. the mass flow of heating steam, is decisive for the overall performance. Therefore it is crucial to find for this manipulated variable a controlled variable which leads to an economic separation. To get the mass flow of heating steam as an additional degree of freedom, the constraint that 90% of the CO2 emissions have to be separated was deleted. Now controlled variables had to be selected for three manipulated variables.
The minimum value of the objective function, Jmin(d) = J(uopt(d), d), is reached when for every disturbance the manipulated variables move the controlled variables to their new optimal values. In chemical plants one finds mostly feedback control, where the controlled variables have constant setpoints and the objective function is J(u,d). The difference between those two definitions is the loss

L = J(u, d) − Jmin(d) = ½ zᵀz    (1)

with z = Juu^(1/2) (u − uopt) = Juu^(1/2) G^(−1) (c − copt), where Juu is the Hessian of the objective function and G is the steady-state gain matrix. The objective is to find those controlled variables which minimize the loss. If each controlled variable ci is scaled such that ||ec|| = ||c − copt||2 ≤ 1, the worst-case loss can be expressed according to Halvorsen et al. (2003) as

Lmax = 1 / (2 [σmin(S G Juu^(−1/2))]²)    (2)

where S is the matrix of scalings for the controlled variables ci, S = diag{1/span(ci)}, with span(ci) = |Δci,opt(d)| + ni, where Δci,opt(d) is the variation of ci due to variation in the disturbances and ni is the implementation error of ci. Those sets of controlled variables with the highest minimum singular value minimize the maximal loss and should be chosen for the control structure. Out of 29 different controlled variables those were selected which minimize the maximal loss. With three manipulated variables there are 3654 possible sets. To avoid screening all possible sets, the branch and bound algorithm of Cao et al. (2008) was used. Since this screening method evaluates the sets only in terms of minimizing the objective function, a further analysis had to be employed to assess the applicability of the proposed sets. This was done using the relative gain array (RGA) and the performance relative gain array (PRGA). Table 2 shows a selection of the best sets of controlled variables which maintain an economic separation of CO2 over the whole operating range and which are not coupled.
Table 2: Selection of the best sets of controlled variables
Set | C1 (MRichSolvent)        | C2 (MHeatingSteam)         | C3 (MCoolingWater)
I   | MCO2,Feed / MRichSolvent | MHeatingSteam / MCO2,Feed  | T18,Absorber
II  | MCO2,Feed / MRichSolvent | xCO2,Sweetgas              | T18,Absorber
III | MCO2,Feed / MRichSolvent | loading lean solvent       | T18,Absorber
IV  | MCO2,Feed / MRichSolvent | MHeatingSteam / MCO2,Feed  | MCoolingWater / MFlueGas
Table 3 shows for the four sets the loss (Loss = J − Jopt) in terms of equivalent power in MWel and the separation degree of CO2 for the three part-load cases with constant setpoints. With set II a separation degree of 90% cannot be reached; because of this, the energy requirements are lower than for the optimized cases. With the other sets 90% of the CO2 can be separated with an acceptable loss of energy. This was achieved although the separation degree was not defined as a controlled variable. Since the loading of the lean solvent is difficult to measure online and set II does not separate 90% of the CO2, set I and set IV can be recommended as control structures. The process with the control structure according to set I is shown in Figure 2.
Table 3: Separation degree of CO2 and loss in MWel at part load with constant set points
      |        Loss in MWel        | Separation degree of CO2 [%]
Set   |  80%   |  60%   |  40%    |  80%   |  60%   |  40%
I     |  0.96  | -0.52  |  0.84   | 90.65  | 89.74  | 90.49
II    | -2.47  | -5.08  | -5.2    | 89.01  | 87.09  | 85.16
III   | -0.48  |  1.97  | -3.76   | 90.03  | 90.83  | 88.11
IV    |  1.12  | -0.5   |  0.89   | 90.51  | 89.82  | 90.72
Figure 2: Flowsheet with control structure according to set I
4. Conclusion Control structures for a postcombustion CO2 capture process using self-optimizing control based only on steady state analysis were designed. Three of the four presented structures allow the separation of 90% of CO2 with acceptable losses in terms of equivalent energy demand although the separation degree was not a controlled variable.
References
Cao, Y., Kariwala, V., 2008, Bidirectional branch and bound for controlled variable selection part I. Principles and minimum singular value criterion, Computers and Chemical Engineering, 32, 2306 – 2319.
Halvorsen, I., Skogestad, S., Morud, J., Alstad, V., 2003, Optimal selection of controlled variables, Ind. Eng. Chem. Res., 42, 3273 – 3284.
Konda, M., Rangaiah, G., Krishnaswamy, P., 2006, A simple and effective procedure for control degrees of freedom, Chemical Engineering Science, 61, 1184 – 1194.
Kvamsdal, H., Jakobsen, J., Hoff, K., 2009, Dynamic modeling and simulation of a CO2 absorber column for post-combustion CO2 capture, Chemical Engineering and Processing, 48, 135 – 144.
Lawal, A., Wang, M., Stephenson, P., Koumpouras, G., Yeung, H., 2010, Dynamic modelling and analysis of post-combustion CO2 chemical absorption process for coal-fired power plants, Fuel, 89, 2791 – 2801.
Ochoa, S., Repke, J.-U., Wozny, G., 2009, A new parallel tempering algorithm for global optimization: applications to bioprocess optimization, Computer Aided Chemical Engineering, 26, 513 – 518.
Panahi, M., Karimi, M., Skogestad, S., Hillestad, M., Svendsen, H., 2010, Self-optimizing and control structure design for a CO2 capturing plant, Proceedings of the 2nd Annual Gas Processing Symposium.
Schach, M.-O., Schneider, R., Schramm, H., Repke, J.-U., 2010, Exergoeconomic analysis of post-combustion CO2 capture processes, Computer Aided Chemical Engineering, 28, 997 – 1002.
Schach, M.-O., Schneider, R., Schramm, H., Repke, J.-U., 2010, Techno-economic analysis of postcombustion processes for the capture of carbon dioxide from power plant flue gas, Ind. Eng. Chem. Res., 49, 2363 – 2370.
Skogestad, S., 2000, Plantwide control: the search for the self-optimizing control structure, Journal of Process Control, 10, 487 – 507.
Ziaii, S., Rochelle, G., Edgar, T., 2009, Dynamic modeling to minimize energy use for CO2 capture in power plants by aqueous monoethanolamine, Ind. Eng. Chem. Res., 48, 6105 – 6111.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21
E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors)
© 2011 Elsevier B.V. All rights reserved.

[Contribution on pp. 793–797: the embedded font of this paper was corrupted during extraction and only fragments are legible. Recoverable information: the author list includes Naveen …, Kartik … and Shankar Narasimhan (names only partially recoverable); the abstract states that soft sensors are increasingly being used to estimate difficult-to-measure variables using a mathematical model, that Partial Least Squares (PLS) and Principal Components Regression are two popular methods for developing the linear models used in soft sensors, and that the paper proposes a mathematically rigorous technique combining principal-components models with concepts drawn from data reconciliation. Keywords: soft sensors, principal components, data reconciliation. The title, body, equations, tables and reference list could not be reconstructed.]
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Approximate Multi-Parametric Programming based B&B Algorithm for MINLPs Taoufiq Gueddar and Vivek Dua* Centre for Process Systems Engineering, Department of Chemical Engineering, University College London, London WC1E 7JE, United Kingdom. * E-mail: [email protected]
Abstract In this work an improved B&B algorithm for MINLPs is proposed. The basic idea of the proposed algorithm is to treat binary variables as parameters and obtain the solution of the resulting multi-parametric NLP (mp-NLP) as a function of the binary variables, relaxed as continuous variables, at the root node of the search tree. It is recognized that solving the mp-NLP at the root node can be more computationally expensive than exhaustively enumerating all terminal nodes of the tree. Therefore, only a local approximate parametric solution, and not a complete map of the parametric solution, is obtained and it is then used to guide the search in the tree. Keywords: Parametric Programming, Branch and Bound, MINLP.
1. Introduction and Problem Definition 1.1. Mixed-Integer Nonlinear Programming Consider the following Mixed-Integer Nonlinear Program (MINLP), problem P1:
z1 = min_{x,y} f(x, y)
subject to: h(x, y) = 0, g(x, y) ≤ 0, x ∈ ℜ^nx, y ∈ {0,1}^ny

The deterministic algorithms for MINLPs can be broadly classified as those based upon decomposition principles and Branch & Bound (B&B) techniques (Bonami et al., 2008; Floudas, 1995). The two most commonly used decomposition algorithms are based upon Generalized Benders Decomposition (GBD) (Geoffrion, 1972) and Outer Approximation (OA) (Duran and Grossmann, 1986). In the GBD and the OA algorithms a sequence of iterating primal and master sub-problems is constructed that converges in a finite number of iterations. The sequence of primal sub-problems represents non-increasing upper bounds and the sequence of master sub-problems represents non-decreasing lower bounds. The primal sub-problem is formulated by fixing the binary variables, resulting in a nonlinear program (NLP). The main difference between the GBD and the OA algorithm is in the formulation of the master sub-problem. In the GBD algorithm the master sub-problem is based upon duality theory whereas in the OA algorithm the master sub-problem is obtained by linearizing the constraints and the objective function. In both the GBD and the OA algorithms the master sub-problem is formulated as a mixed-integer linear program (MILP). It has been shown that the lower bound generated by the OA algorithm is greater than or equal to that generated by the GBD algorithm. The OA algorithm therefore takes fewer iterations
than the GBD to converge and has been successfully applied to several process and product design problems. Decomposition algorithms based upon simplicial approximation (Goyal and Ierapetritou, 2004) and cutting plane methods requiring the repetitive solution of MILPs have also been presented (Westerlund and Pettersson, 1995). B&B algorithms are based upon a systematic tree search methodology (Borchers and Mitchell, 1994; Gupta and Ravindran, 1985). At the root node of the tree all the binary variables are relaxed as continuous variables, at the terminal nodes all the binary variables are fixed, and at the intermediate nodes some of the binary variables are fixed and the remaining ones are relaxed. The problem at each node of the tree corresponds to an NLP; the solution at the root node represents a lower bound and that at a terminal node represents an upper bound on the solution. The efficiency of B&B algorithms depends upon enumerating as few nodes as possible. The decision of whether to enumerate or fathom a node depends upon the solution obtained at its predecessor node and the best upper bound that is available. In this work an improved B&B algorithm for MINLPs is presented and its performance is analyzed. The algorithm is based upon the fundamentals of parametric programming, which are discussed next.
1.2. Multi-parametric Programming
Consider the following multi-parametric Nonlinear Programming (mp-NLP) problem (Pistikopoulos et al., 2007a,b; Pistikopoulos, 2009), problem P2:
z2(θ) = min_x f(x, θ)
subject to: h(x, θ) = 0, g(x, θ) ≤ 0, x ∈ ℜ^nx, θ ∈ ℜ^nθ

Parametric programming provides x* as a set of explicit functions of θ without exhaustively enumerating the entire space of θ; the regions where these explicit functions are valid are known as Critical Regions (CRs). For the solution of the mp-NLPs, the nonlinear terms are outer-approximated and a multi-parametric Linear Program (mp-LP) is formulated and solved. The points in the space of θ where the difference between the solution of the NLP and the mp-LP is maximum are identified, and at those points mp-LPs are formulated and solved. This procedure is repeated until this difference is within a certain tolerance. In the next section a new B&B algorithm for solving P1 based upon parametric programming is presented; in section 3 an illustrative example is given and concluding remarks are presented in section 4.
2. Multi-parametric Programming based B&B Algorithm for MINLPs In this work the MINLP (problem P1) is reformulated as an mp-NLP (problem P2) by relaxing the binary variables, y, as continuous variables bounded between 0 and 1 and treating y as parameters, problem P3:
z3(y) = min_x f(x, y)
subject to: h(x, y) = 0, g(x, y) ≤ 0, x ∈ ℜ^nx, y ∈ [0,1]^ny
The solution of problem P3 provides the objective function, z3, and the continuous variables, x, as functions of y, given by z3(y) and x(y) respectively. The optimal solution can then be obtained by fixing all the possible combinations of y, evaluating z3(y) through simple function evaluations at those fixed values and then selecting the best solution.
2.1. Motivating Example
Consider the following MILP, formulated as an mp-LP:

z(y1, y2) = max_x 8.1 x1 + 10.8 x2
subject to:
0.80 x1 + 0.44 x2 ≤ 24000 + 6000 y1
0.05 x1 + 0.10 x2 ≤ 2000 + 500 y2
0.10 x1 + 0.36 x2 ≤ 6000
0 ≤ y1, y2 ≤ 1

The solution of this problem is given in Table 1. Evaluating z(y1,y2) by fixing y1 and y2 at the binary values gives z(0,0) = 2.87E5, z(0,1) = 3.05E5, z(1,0) = 3.15E5, z(1,1) = 3.51E5. The optimal solution of the MILP is therefore given by z = 3.51E5, y1 = 1, y2 = 1, x1 = 3.34E4 and x2 = 7.38E3.

Table 1. mp-LP Solution of the MILP
i | CRi | Optimal solution given by z(y1,y2) and x(y1,y2)
1 | -827.59 y1 + 2103.45 y2 ≤ 896.55; 0 ≤ y1 ≤ 1; 0 ≤ y2 | z(y1,y2) = 27931.04 y1 + 43758.63 y2 + 286758.6; x1(y1,y2) = 10344.83 y1 - 3793.10 y2 + 26206.897; x2(y1,y2) = -5172.41 y1 + 6896.55 y2 + 6896.552
2 | -827.59 y1 + 2103.45 y2 ≥ 896.55; 0 ≤ y1 ≤ 1; y2 ≤ 1 | z(y1,y2) = 45147.55 y1 + 305409.86; x1(y1,y2) = 8852.459 y1 + 24590.164; x2(y1,y2) = -2459.016 y1 + 9836.065
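The function evaluations described above are trivial once the parametric solution is known. The following Python sketch reproduces them for this example; the affine expressions are taken from Table 1, while the surrounding code is illustrative plumbing.

from itertools import product

def z_parametric(y1, y2):
    # Piecewise-affine optimizer value z(y1, y2) over the two critical regions.
    if -827.59 * y1 + 2103.45 * y2 <= 896.55:   # CR1
        return 27931.04 * y1 + 43758.63 * y2 + 286758.6
    else:                                        # CR2
        return 45147.55 * y1 + 305409.86

# Enumerate the binary combinations and keep the best (maximization problem).
best = max(product([0, 1], repeat=2), key=lambda y: z_parametric(*y))
print(best, z_parametric(*best))                 # -> (1, 1) 350557.41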
This approach in general can be more computationally expensive than exhaustively solving the MILP or MINLP for all the possible fixed values of y. In the next section an algorithm based upon approximate parametric programming, which estimates the solution at the terminal nodes of the B&B tree from the solution at the root node and intermediate nodes, is presented.
2.2. Approximate Parametric Programming
In this work a complete parametric solution profile of problem P3 is not obtained; instead an approximate parametric solution of problem P3 is obtained. First and second order approximations of the optimal parametric solution are obtained (Fiacco, 1983; Jackson and McCormick, 1988). These approximations are obtained by solving the NLP in P3 for a certain value of y, at the root node of the B&B tree, and are given by explicit functions of y. Evaluating these approximate solutions at the terminal nodes of the B&B tree provides an estimate of the solution of the original MINLP for fixed binary values of y. These estimated solutions are then ranked, based upon whether the solution is feasible as well as upon the values of the estimates. This ranking is then used to guide the search in the B&B tree – to decide which nodes to fathom and which ones
to branch on. Second order approximation is computationally more expensive to obtain than first order approximation, but provides a better estimate of the solution and therefore requires fewer nodes to be explored. There is clearly a trade-off between the computational effort required to obtain the approximations and the number of NLPs solved at the nodes. An example where this approximate parametric programming approach has been applied is presented next to demonstrate the main ideas of the approach.
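As an illustration of how such estimates can guide the search, the sketch below evaluates a second-order (Taylor) estimate of the objective at every terminal node; z0, g and H denote the objective value, gradient and Hessian with respect to y obtained from the parametric sensitivity analysis at the root node. The additional feasibility-based ranking used by the algorithm is omitted here.

import numpy as np
from itertools import product

def rank_terminal_nodes(z0, g, H, y0):
    # Rank binary assignments by the second-order estimate of the objective.
    estimates = []
    for y in product([0, 1], repeat=len(y0)):
        dy = np.asarray(y, dtype=float) - y0
        z_est = z0 + g @ dy + 0.5 * dy @ H @ dy   # Taylor expansion around y0
        estimates.append((z_est, y))
    return sorted(estimates)   # lowest estimate first for a minimization problem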
3. Illustrative Example
Consider the following MINLP example problem (Duran and Grossmann, 1986):

z = min 5y1 + 6y3 + 8y2 + 10y4 + 6y5 - 10x1 - 15x2 - 15x5 + 15x3 + 5x4 - 20x6 + exp(x1) + exp(x2/1.2) - 60 ln(x3 + x4 + 1) + 140
subject to:
-ln(x3 + x4 + 1) ≤ 0
-x1 - x2 - 2x5 + x3 + 2x6 ≤ 0
-x1 - x2 - 0.75x5 + x3 + 2x6 ≤ 0
x5 - x6 ≤ 0
2x5 - x3 - 2x6 ≤ 0
-0.5x3 + x4 ≤ 0
0.2x3 - x4 ≤ 0
exp(x1) - 10y1 ≤ 1
exp(x2/1.2) - 10y2 ≤ 1
1.25x5 - 10y3 ≤ 0
x3 + x4 - 10y4 ≤ 0
-2x5 + 2x6 - 10y5 ≤ 0
y4 + y5 ≤ 1
y1 + y2 = 1
y ∈ {0, 1}^5, a ≤ x ≤ b, x = {x_j, j = 1,…,6}, a^T = (0,0,0,0,0,0), b^T = (2,2,−,−,2,3)
A B&B tree depicting the path taken in the tree is shown in Figure 1. The dark coloured nodes are the nodes that were explored and the light coloured nodes were not explored. The nodes are numbered for reference; the root node is numbered 1 and the terminal nodes are numbered from 32 to 63. The approximate parametric solution of the NLP at node 1 is used to estimate the solution at nodes 32 to 63. Nodes 32–45 give infeasible solutions; a feasible solution is estimated at node 46. The binary variables are fixed corresponding to node 46 and the corresponding NLP is solved to provide the current upper bound. The rest of the proposed algorithm steps are highlighted in Figure 1. The computational requirements for the proposed algorithm are comparable to or better than those of the traditional B&B algorithm. The finite convergence and guaranteed optimality properties, for convex functions, of the traditional B&B algorithm are retained in the proposed algorithm. A more detailed computational comparison will be presented in future publications; the main objective of this work was to present and demonstrate the applicability of the basic concepts of the proposed algorithm.
4. Concluding Remarks
An approximate parametric programming solution at the root node of the B&B tree is obtained and used to estimate the solution at the terminal nodes. These estimates are then used to guide the search in the B&B tree, resulting in fewer nodes being evaluated and a reduction in the computational effort. Preliminary computational results are encouraging; future work will involve testing the proposed algorithm on larger scale problems and comparing it with other algorithms reported in the literature.
Figure 1. Approximate Parametric Programming based B&B Tree for MINLP
References
P. Bonami, L.T. Biegler, A.R. Conn, G. Cornuejols, I.E. Grossmann, C.D. Laird, J. Lee, A. Lodi, F. Margot, N. Sawaya and A. Wächter, 2008, An algorithmic framework for convex mixed integer nonlinear programs, Discrete Optimization, 5, 186-204.
B. Borchers and J.E. Mitchell, 1994, An improved branch and bound algorithm for mixed-integer nonlinear programs, Computers and Operations Research, 21, 359-367.
M.A. Duran and I.E. Grossmann, 1986, An outer-approximation algorithm for a class of mixed-integer nonlinear programs, Mathematical Programming, 36, 307-339.
A.V. Fiacco, 1983, Introduction to Sensitivity and Stability Analysis in Nonlinear Programming, Academic Press, New York.
C.A. Floudas, 1995, Nonlinear and Mixed-Integer Optimization: Fundamentals and Applications, Oxford University Press.
V. Goyal and M.G. Ierapetritou, 2004, Computational studies using a novel simplicial-approximation based algorithm for MINLP optimization, Computers and Chemical Engineering, 28, 1771-1780.
O.K. Gupta and A. Ravindran, 1985, Branch and bound experiments in convex nonlinear integer programming, Management Science, 31, 1533-1546.
R.H.F. Jackson and G.P. McCormick, 1988, Second-Order Sensitivity Analysis in Factorable Programming: Theory and Applications, Mathematical Programming, 41, 1-27.
E.N. Pistikopoulos, 2009, Perspectives in multi-parametric programming and explicit model predictive control, AIChE Journal, 55, 1918-1925.
E.N. Pistikopoulos, M.C. Georgiadis and V. Dua, 2007a, Multi-Parametric Programming, Vol 1, Wiley-VCH.
E.N. Pistikopoulos, M.C. Georgiadis and V. Dua, 2007b, Multi-Parametric Model-Based Control, Vol 2, Wiley-VCH.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Experimental Comparison of Type-1 and Type-2 Fuzzy Logic Controllers for the Control of Level and Temperature in a Vessel B. Cosenza, M. Galluzzo Dipartimento di Ingegneria Industriale, Università degli Studi di Palermo, Viale delle Scienze, Ed. 6, 90128 Palermo, Italy
Abstract
The objective of this experimental study is to compare the performance of type-1 and type-2 fuzzy logic controllers on a real system where the control of liquid level and temperature is considered. By the use of genetic algorithms it is possible to optimize the fuzzy sets of each fuzzy controller, ensuring high control performance. The experimental results show that better control in terms of robustness can be achieved by type-2 fuzzy logic controllers.
Keywords: type-1 fuzzy logic controller, type-2 fuzzy logic controller, genetic algorithms.
1. Introduction
It is well known that it is not possible to obtain good performance with traditional controllers when the processes to be controlled are characterized by high nonlinearity or uncertainties. Therefore many nonlinear controllers such as fuzzy logic controllers have been reported to be used successfully, precisely for their robustness and for their ability to handle system nonlinearities. Fuzzy logic controllers are usually built up using type-1 fuzzy sets and are referred to as type-1 FLCs (Mamdani, 1974). Recently it has been shown that a new kind of fuzzy controller, the “type-2” FLC (Karnik and Mendel, 1998), can better handle the nonlinearities and uncertainties present in a system, making use of particular fuzzy sets defined as “type-2 fuzzy sets”. In many control fields, such as bioprocess control (Galluzzo and Cosenza, 2010), control of autonomous mobile robots (Martinez et al., 2009), anesthesia control (Castillo et al., 2005), control of quarter-vehicle active suspensions (Cao et al., 2008) and level control (Wu and Tan, 2006), the superiority of type-2 FLCs over their type-1 counterparts has been successfully shown and confirmed. In spite of their superiority, there are not many applications of type-2 fuzzy logic controllers to real systems. The main aim of this paper is therefore to test their applicability on a laboratory experimental system and to compare their performance with that of type-1 fuzzy logic controllers. Type-2 FLCs give better results than their type-1 counterparts above all in environments with significant uncertainty. The main characteristic of type-2 fuzzy sets is precisely their ability to handle uncertainties more efficiently than type-1 fuzzy sets. This is possible because a larger number of parameters and more degrees of freedom are available with type-2 fuzzy sets.
In this work the design of type-2 FLCs is carried out by optimizing the controller fuzzy sets with a technique that uses genetic algorithms. The type-1 FLCs used for comparison are also optimized with the same technique. Type-1 and type-2 FLCs are tested in a real system for the control of temperature and liquid level in a vessel. In the experimental system uncertainties are either present or have been introduced as measurement noise.
2. Experimental rig
The experimental system consists of a simple cylindrical pressurized vessel in which water is heated by an electrical coil. The flow rate of the water leaving the system is proportional to the square root of the height of the water in the tank. This term constitutes the main source of nonlinearity in the system. Additional nonlinearities and uncertainties are present as a variable transport delay and noise in the sensor outputs.
3. Interval type-2 fuzzy logic
An interval type-2 fuzzy set ÃI is defined as follows:

ÃI = ∫x∈X ∫u∈Jx⊆[0,1] 1/(x, u)    (1)
Therefore the secondary grade of an interval set belongs to the interval [0, 1]. The main characteristic of type-2 fuzzy sets is their ability to take into account the uncertainty of a system. This is possible through a bounded region (Fig. 1a) in the membership functions that is called the Footprint of Uncertainty (FOU). The FOU can be described in terms of upper (UMF) and lower (LMF) membership functions. In the real study case the measurement noise is the main source of uncertainty, and by use of the FOU it is possible to capture this uncertainty, thereby minimizing its negative effects on the control system.
Fig. 1. a) Three-dimensional Interval Type-2 Triangular Fuzzy Set. b) FOU in terms of Upper and Lower Membership Functions.
Like type-1 fuzzy logic systems, type-2 fuzzy logic systems contain four components: a rule-base, a fuzzifier, an inference engine and an output processor. The last component, the output processor, is the main difference between type-1 and type-2 FLSs. It maps a type-2 fuzzy set into a type-1 fuzzy set and then transforms (as a normal type-1 defuzzifier) the fuzzy output into a crisp output (Fig. 1b).
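A crisp input therefore activates an interval of primary memberships bounded by the LMF and UMF of Fig. 1b. A minimal sketch with illustrative triangular parameters (not those of the actual controllers):

def tri(x, a, b, c):
    # Ordinary (type-1) triangular membership function with support [a, c], peak b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def it2_membership(x):
    upper = tri(x, 0.0, 0.5, 1.0)         # UMF: the wider triangle
    lower = 0.8 * tri(x, 0.1, 0.5, 0.9)   # LMF: a narrower, scaled triangle
    return lower, upper                    # interval of primary memberships

print(it2_membership(0.3))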
4. Controller optimization by genetic algorithms
The parameters of the type-1 and type-2 fuzzy controllers for the control of level and temperature were optimized with genetic algorithms. Genetic algorithms are an optimization technique that can discover more than one solution to a problem (Holland, 1975). The approach adopted for selecting the parameters of a type-2 fuzzy logic system is the totally independent approach: all the parameters of the type-2 fuzzy logic system are tuned without using a type-1 fuzzy logic system as a reference. The method avoids local minima and assures great design flexibility. The parameter optimization of fuzzy controllers with genetic algorithms is based on the simulation of the system controlled by the fuzzy controllers being designed. Because of the approximations that characterize the system model, the performance of the designed controllers may deteriorate when they are used on the real system. Therefore each fuzzy controller is exposed to model uncertainties during the controller design phase (Wu and Tan, 2006), in order to preserve in the real application the good results obtained in simulation. To create the “uncertainty” effect, four different plant models were used in simulation and different plant conditions were considered for each model. The same genetic algorithms are used to evolve the sets of type-1 and type-2 fuzzy controller parameters. They use the sum of the integrals of the time-weighted absolute error (ITAE) obtained from the four plant models to evaluate the fitness of each candidate solution.
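Conceptually, the fitness evaluation reduces to summing ITAE values over the four plant models, as in the sketch below; simulate(params, model) is a hypothetical closed-loop simulator returning the time and control-error trajectories.

import numpy as np

def itae(t, e):
    # Integral of the time-weighted absolute error.
    return np.trapz(t * np.abs(e), t)

def fitness(params, plant_models, simulate):
    # GA fitness: sum of ITAE over all plant models used to emulate uncertainty.
    return sum(itae(*simulate(params, m)) for m in plant_models)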
5. Experimental results
Some experimental results are shown in the following figures. The response of the level to a change in the set-point value from 0 to 4 cm at instant t = 10 sec and from 4 to 6 cm at about t = 100 sec is reported in Fig. 2. Under these conditions there is only a very slight difference between the performance of the type-1 and type-2 FLCs. The difference becomes more evident when noise is introduced in the level measurement (Fig. 3a, b).
Fig. 2. Type-1 FLC vs. Type-2 FLC. a) Response to a level set-point change (from 0 to 4 at about t = 10 sec and from 4 to 6 at about t = 100 sec).
Fig. 3. a) Response to a level set-point change (from 0 to 4 at about t = 10 sec and from 4 to 6 at about t = 250 sec) with a small amplitude of noise. b) Response to a level set-point change (from 0 to 4 at about t = 10 sec and from 4 to 6 at about t = 250 sec) with a larger amplitude of noise.
In Fig. 3a the behavior of the system controlled by the type-2 FLC is characterized by oscillations with a smaller amplitude than that of the type-1 FLC; moreover, an offset is present only in the response of the system controlled by the type-1 FLC. The result shown in Fig. 3b was obtained by artificially increasing the noise amplitude in the level measurement. It confirms the previous result, with the type-1 FLC showing the worst performance and an increasing offset for both set-point changes. A change in the temperature set-point value from 29 to 30 °C at t = 0 sec was considered, as shown in Fig. 4. Also for the temperature control, the performance of the type-2 FLC is better than that of its type-1 counterpart, with the difference that in this case the addition of an artificial noise was not necessary because the temperature measurement is already characterized by a large noise.
Fig. 4. Response to a temperature set-point change (from 29 to 30 °C at t = 0 sec).
The temperature oscillates in both cases around the set-point value but the type-2 FLC is able to decrease the oscillation amplitude more than the type-1 FLC.
6. Conclusions
Type-2 FLCs are able to control the temperature and level in the system more efficiently than type-1 FLCs, especially when uncertainties due, for instance, to measurement noise are present. Both the type-1 and type-2 fuzzy controllers used in the study were optimized by a genetic algorithm method based on the totally independent approach. As the uncertainty degree of the control system increases, the difference between the performance of type-1 and type-2 FLCs becomes more evident. Type-2 FLCs represent an effective solution for control problems originated by parameter uncertainty and measurement noise.
Acknowledgments Authors are grateful to Dr. D. Wu and Dr. W. W. Tan for making available their genetic algorithm code.
References
Cao, J., Liu, H., Li, P., Brown, D., 2008, Adaptive Fuzzy Logic Controller for Vehicle Active Suspension with Interval Type-2 Fuzzy Membership Functions, in Proc. FUZZ-IEEE 2008, Hong Kong, 1361-1373.
Castillo, O., Huesca, G., Valdez, F., 2005, Evolutionary Computing for Optimizing Type-2 Fuzzy Systems, Intelligent Control of Non-Linear Dynamic Plants, in Proc. North American Fuzzy Information Processing Society (NAFIPS 2005), Ann Arbor, MI, 6, 247-251.
Galluzzo, M., Cosenza, B., 2010, Adaptive Type-2 Fuzzy Logic Control of a Bioreactor, Chemical Engineering Science, 65 (14), 4208-4221.
Holland, J.H., 1975, Adaptation in Natural and Artificial Systems, University of Michigan Press, MI.
Karnik, N.N., Mendel, J.M., 1998, Introduction to type-2 fuzzy logic systems, in Proc. 7th Intl. Conf. on Fuzzy Systems FUZZ-IEEE 1998, Anchorage, AK, 915-920.
Mamdani, E.H., 1974, Application of fuzzy algorithms for control of simple dynamic plants, Proc. of IEEE, 121 (12), 1585-1588.
Martinez, R., Castillo, O., Aguilar, L.T., 2009, Optimization of interval type-2 fuzzy logic controllers for a perturbed autonomous wheeled mobile robot using genetic algorithms, Journal of Information Sciences, 179 (13), 2158-2174.
Wu, D., Tan, W.W., 2006, Genetic learning and performance evaluation of interval type-2 fuzzy logic controllers, Engineering Applications of Artificial Intelligence, 19, 829-841.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Simulation-based Dynamic Optimization under Uncertainty of an Industrial Biological Process
Guillermo A. Durand,a Aníbal M. Blanco,a Fernando D. Mele,b J. Alberto Bandonia
a
Planta Piloto de Ingeniería Química, PLAPIQUI (UNS – CONICET), Camino La Carrindanga Km7, (8000) Bahia Blanca, Argentina b Depto. Ingeniería de Procesos, Universidad Nacional de Tucumán, Av. Independencia 1800, (4002) San Miguel de Tucumán, Argentina
Abstract
Parametric uncertainty can have a great impact on the outcome of dynamic optimization of industrial biological processes, leading to sub-optimal or infeasible solutions. However, if the parameters' stochastic data are considered in dynamic optimization models and the handling of uncertainty is done with deterministic approaches, intractable problems frequently arise. In this work, a simulation-based approach is used to dynamically optimize a brewery mashing process under uncertainty. This technique combines dynamic simulation with deterministic and stochastic optimization in a two-level framework. In the inner level a deterministic optimization algorithm ignores all the random elements in the problem and obtains a deterministic optimal solution; the dynamic simulation then implements this solution, handling the uncertainties while respecting the guidelines given by the optimization algorithm. This level is nested within an outer level, where a stochastic optimization technique uses the information from the simulation to search the decision space systematically, trying to improve the performance of the problem. The selected case study for this work is the beer mashing process, which consists of the enzymatic degradation of the polysaccharides present in the malt. This is a fundamental step within the brewing process, since the composition of the mashing wort determines the quality of the final product. The main reactions that take place in the mashing are the degradation of starch, β-glucans and arabinoxylans into small-chain fermentable and non-fermentable carbohydrates. The manipulation of the temperature profile of the batch reactor is the main mechanism to control the extent of the ongoing reactions. Since high temperatures favor the production of fermentable matter but also increase the concentration of undesirable species in the wort, the choice of an adequate temperature profile is not an obvious decision.
Keywords: Dynamic Optimization, Optimization under Parametric Uncertainties, Beer Mashing
1. Introduction The concepts involved in decision-making under uncertainty are closely linked to those of optimization under uncertainty. The literature on optimization under uncertainty very often divides the problems into categories such as "wait and see" and "here and now". In the "wait and see" approach, one waits until an observation is made on the random elements and then solves the deterministic problem. Conversely, a "here and now" approach involves optimization over some probabilistic measure of the system performance, usually the expected value. It should be noted that many realistic problems have both "here and now" and "wait and see" problems embedded in them. The way to overcome this complicated situation is to make the right partition of decisions between these two categories and use a coupled approach (Diwekar, 2002). In this regard, many advances have been observed in the supporting theory, including algorithmic developments and computational capabilities for solving this class of problems, most of which fall into one of two approaches: multistage stochastic programming or stochastic optimal control. Efficient numerical solution proposals can be achieved by combining several techniques that belong to each approach. The resulting strategy needs to be adapted to the specific problem, defining some approximations or heuristic-based methods. The works by Cheng et al. (2004) and Jung et al. (2004) are relevant examples in this regard. Among these approximate approaches, an attractive alternative for addressing large-scale problems is Simulation-based Optimization (SbO), whose main characteristics are laid out in Section 3. Dynamic optimization of the temperature profile over the batch cycle has been shown to improve the overall performance of the beer mashing process (Durand et al., 2009). However, for biological systems used in industry it is very difficult to obtain the right values of the kinetic parameters, because the operating conditions favorable for their estimation are often not those used in production. These uncertainties in the kinetic parameters can lead to sub-optimal results or even infeasible situations when the conclusions derived from deterministic dynamic optimization are applied. Therefore, the objective of this work is to handle the parametric uncertainty present in the dynamic optimization of the beer mashing process using an SbO approach. The results will later be compared against other optimization-under-uncertainty techniques. This work is the first step of a larger study on the handling of parametric uncertainties in the simulation-based dynamic optimization of industrial biological processes.
2. Motivating Problem Mashing is a key step in the beer production process. During mashing, the enzymatic degradation of the polysaccharides present in the malt takes place. Fermentable carbohydrates are produced from the degradation of the polysaccharide starch; such carbohydrates are converted into alcohol in the fermentation step of beer manufacturing. Non-starch polysaccharides, β-glucans and arabinoxylans, also degrade during mashing into smaller-chain carbohydrates. A schematic representation of the main reactions that take place in the mashing process is shown in Fig. 1. The most important reaction in the mashing process is the enzymatic hydrolysis of the gelatinized starch, since it determines the amount of fermentable carbohydrates produced and therefore the alcoholic content of the final product. A natural objective of the mashing operation is thus to maximize the production of such fermentable matter. However, non-fermentable carbohydrates such as limit dextrins are also produced in the starch hydrolysis, and their concentration in the mashing product affects the organoleptic properties of the beer. Consequently, it is also necessary to keep the concentration of such intermediates at adequate levels to ensure product quality. Moreover, high concentrations of non-starch polysaccharides such as β-glucans and arabinoxylans in the mashing product are known to cause processing problems in breweries such as low extract yields, poor filterability, and haze and gel formation. For
this reason, the minimization of such compounds in the mashing product is required to prevent negative downstream impact. Different enzymes catalyze all the involved reactions. Since the activity of the different enzymes is highly dependent on temperature, the manipulation of this variable is the main control mechanism to reconcile the above-described multiple, often conflicting, objectives of the mashing process. As the temperature rises, the reaction rates increase steeply but the enzymes are denatured faster (Hardwick, 1995; Hough, 1990). Typically, mashing is performed in a batch reactor and the imposed mashing temperature profile is a succession of increasing temperature rests designed to cover the activity temperature range of each enzyme.

Figure 1. Schematic representation of mashing enzymatic reactions (gelatinized starch is degraded by α- and β-amylase into dextrins, limit dextrins, maltose, maltotriose and glucose; β-glucans are solubilized and degraded by β-glucanase into β-oligosaccharides; arabinoxylans are solubilized and degraded by endo-xylanase into oligo β-xylosides; each enzyme undergoes dissolution and denaturation).

Durand et al. (2009) addressed the complete set of reactions of the mashing process with a dynamic optimization approach. They identified optimal temperature programs to enhance the overall mashing operation, thereby demonstrating the applicability of the approach. The dynamic optimization model used the reactions' kinetic expressions to obtain the metabolite concentrations over the batch cycle. However, for biological systems used in industry it is very difficult to obtain the correct values of the kinetic parameters, because the operating conditions favorable for their estimation are often not those used in production. These uncertainties in the parameters can lead the application of the temperature programs found by the dynamic optimization to sub-optimal results or even infeasible situations.
3. Proposed solution methodology SbO is an attractive combined strategy for the situation in which the analyst wishes to find which of many possible sets of input parameters lead to the optimal performance of a system. Thus, a simulation model can be understood as a function whose explicit form is unknown and which converts input parameters into performance measures (Law and Kelton, 2000). SbO provides an alternative for problems in which analytical methods are inefficient. Moreover, SbO is an active area of research in the
field of stochastic optimization (Gosavi, 2003). In the Process Systems Engineering (PSE) literature, SbO approaches have received some attention but remain comparatively unexplored. The SbO scheme proposed in this work combines dynamic deterministic optimization and stochastic optimization in two nested loops. In the internal loop, a dynamic optimization algorithm solves the optimization problem for a sample of the uncertain parameters' values. This is carried out several times with a Monte-Carlo tool, and in each dynamic simulation a new sample of the values of the uncertain parameters is taken. The result of the internal loop is a sampling of the objective function's distribution for a given realization of the decision variables of the external loop. In the external loop, the objective function's distribution is represented by a statistical performance indicator (e.g. the mean of the distribution), and a stochastic optimization algorithm is used to search the external loop's decision-variable space, trying to improve the solution performance of the problem.
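To make the two-level structure concrete, the following Python sketch mimics the nested loops under stated assumptions: the inner deterministic dynamic optimization (solved with gPROMS in this work) is replaced by a placeholder function, the outer GA by a plain random search over the decision space, and the nominal kinetic parameters by unit values. Only the loop structure, not the process model, is meant to be representative.

```python
import numpy as np

rng = np.random.default_rng(0)
N_MC = 10                    # inner-loop samples per candidate (as in Section 4)
THETA_NOM = np.ones(15)      # placeholder nominal values of the 15 kinetic parameters

def inner_dynamic_optimization(candidate, theta):
    # Placeholder for the deterministic dynamic optimization of the mashing
    # model (temperature profile, solved with gPROMS in the paper); returns
    # the objective value for one realization theta of the parameters.
    return float(np.abs(candidate).sum() * theta.mean())   # hypothetical surrogate

def inner_loop(candidate):
    # Monte-Carlo sampling of the objective distribution for a fixed
    # realization of the outer-loop decision variables; the mean is used
    # as the statistical performance indicator.
    objs = [inner_dynamic_optimization(
                candidate,
                rng.normal(THETA_NOM, 0.05 * np.abs(THETA_NOM)))  # 5% std. dev.
            for _ in range(N_MC)]
    return float(np.mean(objs))

def outer_loop(sample_candidate, n_evals=176):
    # Stand-in for the outer GA: random search over the (discretized)
    # space of initial enzyme activities, keeping the best candidate found.
    best, best_obj = None, np.inf
    for _ in range(n_evals):
        cand = sample_candidate()
        obj = inner_loop(cand)
        if obj < best_obj:
            best, best_obj = cand, obj
    return best, best_obj

best, best_obj = outer_loop(lambda: rng.uniform(0.0, 1.0, size=4))
```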
4. Implementation and results For this approach to work, the decision variables had to be divided between the optimization of the internal loop and that of the external one. For the beer mashing problem, the dynamic optimization handled the temperature profile, and the stochastic optimization the initial activities of the enzymes (α- and β-amylases, β-glucanase and endo-xylanase). The internal-loop optimization used the kinetic model from Durand et al. (2009) and was solved with the gPROMS 3.2 software package. The stochastic optimization of the outer loop used the Genetic Algorithm tool of MATLAB 7.5 (R2007b) to perform the search. The success of metaheuristics in carrying out this task is perhaps due to the fact that they are designed to seek global optimality and their properties are apparently robust in practice, even though they do not yet have a solid theoretical basis (Gosavi, 2003). The complete solution framework was run on an AMD Athlon 64 X2 Dual Core system with 1 GB of RAM. The objective chosen for this case study is the same as in Durand et al. (2009): to minimize the sum of the concentrations of undesired metabolites (non-hydrolyzed starch, β-glucans, arabinoxylans and dextrins in the liquid phase) at the final time of the batch cycle (considered to be 115 minutes). Uncertainties were considered to be present in the parameters of the kinetic expressions of all reactions, with their values following a normal distribution with mean equal to a nominal value and a standard deviation of 5%. Thus, the number of uncertain parameters considered for this problem was 15. Each access to the inner loop meant 10 cycles of dynamic optimization; this number was found to be enough to obtain a representative value of the objective function. The search space for the outer loop was discretized, resulting in 85800 possible combinations of the decision variables. Table 1 shows the results and computational statistics for the solution of the motivating problem.

Table 1. SbO results for the motivating problem

Population                            16
Generations                           10
Obj. function (best individual)       1.5818 g/L
Elapsed computing time                12 h 5 min 33.6 s
Number of possible solutions          85800
Number of solutions found             11x16 = 176
Number of unique solutions            88
Best individual found in generation   8

The GA was run for 10 generations (plus the initial one), each of 16 individuals, so the inner loop was accessed 11x16 = 176 times. Of those 176 solutions found, 88 were unique, meaning that several of them are repetitions of the same outer-loop decision-variable values. In the final generation the best solution was present in 2 individuals, each with the following combination of initial enzyme activities: α-amylase: 3.98 x 10^5 U/l; β-amylase: 1.10 x 10^6 U/l; β-glucanase: 236 U/l; endo-xylanase: 114,000 U/l. The mean objective function value for this solution was 1.5818 g/L for the sum of undesirable metabolites.
5. Conclusions A simulation-based optimization (SbO) strategy has been implemented to solve the dynamic optimization problem of an industrial biological process when parametric uncertainties are considered. The strategy has been able to handle the optimization problem of the beer mashing process, where temperature and initial activities of enzymes have to be carefully chosen in order to reduce the final concentrations of undesirable metabolites. Uncertainties in this problem are in the values of the kinetic parameters, which are difficult to estimate for industrial operating conditions.
Acknowledgement This work was partially supported by the "Consejo Nacional de Investigaciones Científicas y Técnicas" and the "Universidad Nacional del Sur" of Argentina.
References
Cheng, L., Subrahmanian, E., Westerberg, A.W. (2004). Multiobjective decisions on capacity planning and inventory control. Industrial and Engineering Chemistry Research, 43, 2192-2208.
Diwekar, U. (2002). Optimization under Uncertainty: An Overview. SIAG/OPT Views-and-News, 13 (1), 1-8.
Durand, G.A., Corazza, M.L., Blanco, A.M., Corazza, F.C. (2009). Dynamic optimization of the mashing process. Food Control, 20, 1127-1140.
Gosavi, A. (2003). Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning. Springer, 1st edition. ISBN-10: 1402074549, ISBN-13: 978-1402074547.
gPROMS Model Developer Guide R3.2.0 (2009). Process Systems Enterprise Limited (UK).
Hardwick, W.A. (1995). Handbook of brewing. New York: Marcel Dekker Inc.
Hough, J.S. (1990). Biotecnología de la cerveza y de la malta. Ed. Acribia, Zaragoza, Spain.
Jung, J.Y., Blau, G., Pekny, J.F., Reklaitis, G.V., Eversdyk, D. (2004). A simulation based optimization approach to supply chain management under demand uncertainty. Computers & Chemical Engineering, 28, 2087-2106.
Law, A.M., Kelton, W.D. (2000). Simulation Modeling & Analysis, 3rd Ed. McGraw-Hill, New York.
MATLAB 7.5 (R2007b), User's Manual. The MathWorks, Inc., Natick, MA.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Parallel Solution of Large-Scale Dynamic Optimization Problems
Carl D. Laird,a,1 Angelica V. Wong,a,1 Johan Åkessonc,2
a Artie McFerrin Department of Chemical Engineering, Texas A&M University, TX, USA
c Department of Automatic Control, Lund University, Sweden
Abstract This paper presents a decomposition strategy applicable to DAE-constrained optimization problems. A common solution method for such problems is to apply a direct transcription method and to solve the resulting nonlinear program using an interior-point algorithm, where the time to solve the linearized KKT system at each iteration dominates the total solution time. In the proposed method, the structure of the KKT system resulting from a direct collocation scheme for approximating the DAE constraint is exploited in order to distribute the required linear algebra operations over multiple processors. A prototype implementation applied to benchmark models shows promising results. Keywords: dynamic optimization, parallel computing, collocation
1. Introduction
Optimization of dynamic systems has proven to be an effective method for improving operation and profits in the chemical process industry. The size and complexity of optimization problems continue to grow, while the advances in computing clock rates that we once took for granted have slowed dramatically. Computer chip design companies have instead focused on the development of parallel computing architectures, and there is a need for the development of advanced parallel algorithms for dynamic optimization that can utilize these architectures. Furthermore, the successful use of advanced solution approaches within industrial settings requires that these algorithms are interfaced with effective problem formulation tools. Modern object-oriented modeling languages like Modelica and Optimica allow for the rapid creation of complex dynamic optimization problems and lessen the burden of model development, optimization problem formulation, and solver interfacing. In this paper we make use of the Modelica-based open source software JModelica.org [1] to transform high-level descriptions of dynamic optimization problems into algebraic nonlinear programming problems through a direct collocation approach. Applying a nonlinear interior-point method to solve this problem, the dominant computational expense is the solution of the KKT system solved at each iteration to produce the step in the primal and dual variables. The block-banded structure is decomposed by forming a Schur-complement with respect to the state continuity equations. The computational expense varies with the number of state variables and the number of processors used in the decomposition, as seen in the parallel scaling results. As expected, the approach is most favorable for problems with fewer state variables than algebraic variables.
1 Corresponding Author: [email protected]. The authors gratefully acknowledge partial financial support from the National Science Foundation (CAREER Grant CBET# 0955205).
2 The author gratefully acknowledges financial support from the Swedish Science Foundation through the grant Lund Center for Control of Complex Engineering Systems (LCCC).
2. Model Transcription
We consider dynamic optimization problems based on differential algebraic equation (DAE) models of the form

$$\min_{u} \int_{t_0}^{t_f} L(x, y, u)\,dt \qquad (1)$$

$$\text{subject to } F(\dot{x}, x, y, u) = 0, \quad x(t_0) = x_0 \qquad (2)$$

where $\dot{x} \in \mathbb{R}^{n_x}$ are the state derivatives, $x \in \mathbb{R}^{n_x}$ are the states, $y \in \mathbb{R}^{n_y}$ are the algebraic variables and $u \in \mathbb{R}^{n_u}$ are the control inputs. It is assumed that the DAE is of index 1. The optimization problem is discretized using a simultaneous collocation method based on finite elements, with Radau collocation points. See, e.g., [2] for a recent monograph. Lagrange polynomials are used to approximate the state, algebraic and control input profiles. Using this strategy, the discretized optimal control problem can be written in the form

$$\min_{z} f(z) \qquad (3a)$$
$$\text{s.t. } c(z) = 0 \qquad (3b)$$

where $z^T = [z_1^T, \ldots, z_{n_e}^T]$ are the discretized state, algebraic and input variables, and

$$c(z) = \begin{bmatrix} R(z_1) \\ \bar{G} z_1 + G z_2 \\ \vdots \\ \bar{G} z_{n_e-1} + G z_{n_e} \\ R(z_{n_e}) \end{bmatrix}, \qquad G = \begin{bmatrix} I & 0 & \ldots & 0 \end{bmatrix}, \quad \bar{G} = \begin{bmatrix} 0 & \ldots & 0 & -I & 0 \end{bmatrix} \qquad (4)$$

Here, $\bar{G} z_{i-1} + G z_i = 0$ are the coupling constraints linking individual finite elements in time and $n_e$ is the number of finite elements. $R(z_i)$ are the DAE residual equations and collocation equations associated with each finite element $i$. It is important to note that only the state variables are temporally coupled between elements (not the algebraic variables). Therefore, the dimension of these constraints (the number of rows in $G$ and $\bar{G}$) depends on the number of state variables only. It is this property that will be exploited to decompose the problem and develop an efficient parallel solution approach.
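The selector structure of $G$ and $\bar{G}$ can be illustrated with a small NumPy sketch. The ordering of each element vector $z_i$ (initial state first, end-of-element state in a fixed slice at the end) is an assumption made purely for the illustration, not the variable layout used by JModelica.org.

```python
import numpy as np

n_x, n_z = 3, 12                 # assumed: 3 states, 12 variables per element
end = slice(n_z - n_x, n_z)      # assumed location of the end-of-element state

G = np.zeros((n_x, n_z));    G[:, :n_x] = np.eye(n_x)       # G    = [I 0 ... 0]
Gbar = np.zeros((n_x, n_z)); Gbar[:, end] = -np.eye(n_x)    # Gbar = [0 ... 0 -I]

z_prev = np.arange(n_z, dtype=float)     # some element-(i-1) variables
z_next = np.zeros(n_z)
z_next[:n_x] = z_prev[end]               # continuity: start of i equals end of i-1
assert np.allclose(Gbar @ z_prev + G @ z_next, 0.0)   # coupling constraint holds
```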
3. Parallel Solution of the Dynamic Optimization Problem
Solution of this large-scale nonlinear programming problem is possible with a number of potential algorithms. The dominant cost of an SQP-based or interior-point algorithm is the solution of the linear KKT system at each iteration to find the full step in the primal and dual variables. The structure of the objective and constraints in the optimal control problem induces a block structure within the linear KKT system. This linear system can be decomposed by selecting break-points in time between elements and performing a Schur-complement decomposition with respect to the coupling constraints. For the interior-point algorithm IPOPT [3], the linear KKT system (also called the augmented system) solved at each iteration of the optimization algorithm can be written in the following block-bordered structure. For simplicity of notation, the structure is
written with a break-point at every finite element:

$$\begin{bmatrix} K_1 & & & & A_1^T \\ & K_2 & & & A_2^T \\ & & \ddots & & \vdots \\ & & & K_{n_e} & A_{n_e}^T \\ A_1 & A_2 & \cdots & A_{n_e} & -\delta_c I \end{bmatrix} \begin{bmatrix} \Delta v_1 \\ \Delta v_2 \\ \vdots \\ \Delta v_{n_e} \\ \Delta v_s \end{bmatrix} = \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_{n_e} \\ r_s \end{bmatrix} \qquad (5)$$

where

$$K_1 = \begin{bmatrix} H_1 + \delta_H I & \bar{G}^T & \nabla_{z_1} R(z_1) \\ \bar{G} & & \\ \nabla_{z_1} R(z_1)^T & & \end{bmatrix}, \quad K_i = \begin{bmatrix} H_i + \delta_H I & \nabla_{z_i} R(z_i) \\ \nabla_{z_i} R(z_i)^T & \end{bmatrix}, \quad i = 2, \ldots, n_e \qquad (6)$$

Here, $H_i$ is the Hessian of the Lagrangian for $z_i$, and $\delta_H$, $\delta_c$ may be zero or positive depending on the need of the algorithm to handle non-convexity and/or singularity in the Jacobian. The $\Delta v_i$ vectors include the primal and dual variables for element $i$, and $\Delta v_s$ contains the dual variables for the coupling constraints. In this permutation, the coupling constraints (i.e. the Jacobian matrices $G$ and $\bar{G}$), contained in the matrices $A_i$, and their corresponding dual variables have been permuted to the borders of the KKT system. The step in these dual variables can be decoupled from the remaining variables by eliminating the $A_i$ matrices, resulting in the following Schur-complement decomposition:

$$\Big( -\delta_c I - \sum_i A_i K_i^{-1} A_i^T \Big) \Delta v_s = r_s - \sum_i A_i K_i^{-1} r_i \qquad (7)$$

This decomposition allows solution of the KKT system using the following algorithm.

Algorithm: Schur-Complement Solve of KKT System
1: for each i in 1, ..., ne
   1.1: factor K_i (using MA27 from the Harwell Subroutine Library)
2: let S = [-delta_c I]
3: let r_sc = r_s
4: for each i in 1, ..., ne
   4.1: for each column j in A_i^T
        4.1.1: solve the system K_i q_i^<j> = -[A_i^T]^<j>
        4.1.2: let S^<j> = S^<j> + A_i q_i^<j>
   4.2: solve the system K_i p_i = r_i
   4.3: let r_sc = r_sc - A_i p_i
5: solve S Delta_v_s = r_sc for Delta_v_s
6: for each i in 1, ..., ne
   6.1: solve K_i Delta_v_i = r_i - A_i^T Delta_v_s for Delta_v_i
There are several levels of parallelism that can be exploited in this algorithm. If there is one processor available for each element, then Steps 1, 4, and 6 can all be parallelized. Furthermore, if more processors are available, individual column backsolves in Step 4.1 can be parallelized. Also, only a small number of columns in the matrices Ai contain non-zeros, a property that is exploited in Step 4.1.
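A serial dense sketch of the algorithm in NumPy follows, purely as an illustration of the linear algebra. The paper factors each $K_i$ once with MA27 and distributes Steps 1, 4 and 6 over processors, neither of which is attempted here.

```python
import numpy as np

def schur_kkt_solve(K_blocks, A_blocks, r_blocks, r_s, delta_c=0.0):
    n_s = r_s.size
    S = -delta_c * np.eye(n_s)                   # Step 2
    r_sc = r_s.copy()                            # Step 3
    Xs, ps = [], []
    for K, A, r in zip(K_blocks, A_blocks, r_blocks):   # Step 4
        X = np.linalg.solve(K, A.T)              # K_i^{-1} A_i^T, one backsolve per column
        p = np.linalg.solve(K, r)                # K_i^{-1} r_i
        S -= A @ X                               # accumulate -sum_i A_i K_i^{-1} A_i^T
        r_sc -= A @ p
        Xs.append(X); ps.append(p)
    dv_s = np.linalg.solve(S, r_sc)              # Step 5: dual step of Eq. (7)
    dvs = [p - X @ dv_s for X, p in zip(Xs, ps)] # Step 6: per-block primal/dual steps
    return dvs, dv_s

# tiny self-check on random, well-conditioned data (illustrative only)
rng = np.random.default_rng(1)
K = [np.eye(4) * 4.0 + 0.1 * rng.standard_normal((4, 4)) for _ in range(3)]
K = [0.5 * (k + k.T) for k in K]                 # symmetric positive definite blocks
A = [rng.standard_normal((2, 4)) for _ in range(3)]
r = [rng.standard_normal(4) for _ in range(3)]
dvs, dv_s = schur_kkt_solve(K, A, r, rng.standard_normal(2))
```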
The basic algorithm outlined above is described with one block for each individual finite element. However, the actual implementation is able to decompose the problem with multiple finite elements per block. For example, a problem with 128 finite elements can be separated into two blocks of 64 finite elements each, 4 blocks of 32 finite elements each, etc. At a minimum, there should be one processor available for each block. The computational time of this algorithm is dominated by either Step 4 (forming the Schur-complement) or Step 5 (solving the Schur-complement). The cost of forming the Schur-complement scales linearly with the number of required backsolves (but is easily parallelized), whereas the cost of solving the Schur-complement using a typical dense linear solver is cubic in the size of the Schur-complement. In previous work we have shown excellent parallel scalability using this strategy for problems with complicating variables [4, 5], where the size of the Schur-complement is determined by the number of coupling variables only. In the approach described here, the size of the Schur-complement increases with the number of states and the number of processors used. As the size of the Schur-complement grows, the increased cost of solving the large Schur-complement system (Step 5) will erode the parallel speedup.
4. Performance Results
The performance of this parallel decomposition approach is a function of the number of state variables (i.e. the dimension of the coupling constraints) and the number of processors used in the decomposition. As we increase the number of processors, we have the potential for greater parallelization; however, the size of the Schur-complement (and hence the cost of Step 5) also increases. Therefore, we test the parallel speedup of this approach on a straightforward scalable problem where we can easily vary the number of state and algebraic variables, as well as the number of processors used in the solution. The DAE models used in the benchmarks are composed of compartment models in series, where some compartments are approximated using a steady-state assumption. Four separate case studies are explored as we increase the number of processors, as indicated in Table 1.

Table 1: Case Study Characteristics

Case Study   # State Vars.   # Algebraic Vars.
1            3               97
2            5               95
3            10              90
4            25              75

Timing results represent ideal parallel timing, as the problem was actually solved in serial and the time for Steps 1, 4, and 6 was divided by the number of blocks (available processors). The top plot in Figure 1 shows the idealized speedup using this approach as a function of the number of processors/blocks for each particular case study. Here, we see significant potential for speedup when the number of state variables is outnumbered by the number of algebraic variables. The bottom plot in Figure 1 shows the ratio of the time to solve the Schur-complement over the time to form the Schur-complement in parallel. The deterioration of the overall speedup corresponds to the point where the size of the Schur-complement increases such that the solution time is dominated by the time to solve the Schur-complement. All timing results were obtained on a 3.2 GHz Intel Xeon processor.
5. Conclusions and Future Work
This paper presents a decomposition approach that is applicable for parallel solution of the linear systems resulting from an interior-point solution of dynamic optimization problems formulated using the simultaneous approach. The dominant costs in this algorithm are Steps 4 (forming the Schur-complement) and 5 (solving the Schur-complement). For problems with few states and many algebraics, this solution approach has the potential for significant speedup; however, as the size of the Schur-complement increases (more states or processors), parallel speedup is eroded. Nevertheless, this approach can be improved significantly. For blocks A_k of arbitrary structure, the Schur-complement may indeed be dense. However, for the dynamic optimization problem studied here, the structure of the A_k blocks is not arbitrary, and the resulting Schur-complement is both block structured and may be sparse. For the case studies using 32, 64, and 128 processors, the sparsity is below 10%, 5%, and 3% respectively. The use of an efficient sparse linear solver or even a parallel dense linear solver will dramatically decrease the time to solve the large Schur-complement and allow for significantly improved speedup.

Figure 1: Case Study Timing Results. The top figure shows speedup results for different numbers of processors and state variables, while the bottom figure shows the ratio of the time to solve the Schur-complement over the time to form the Schur-complement.
References
[1] Johan Åkesson, Karl-Erik Årzén, Magnus Gäfvert, Tove Bergdahl, and Hubertus Tummescheit. Modeling and optimization with Optimica and JModelica.org: languages and tools for solving large-scale dynamic optimization problems. Computers and Chemical Engineering, 34(11):1737-1749, November 2010.
[2] Lorenz T. Biegler. Nonlinear programming: concepts, algorithms, and applications to chemical processes. SIAM, 2010.
[3] Andreas Wächter and Lorenz T. Biegler. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming, 106(1):25-58, 2006.
[4] Zavala, V.M., Laird, C.D., Biegler, L.T. Interior-point decomposition approaches for parallel solution of large-scale nonlinear parameter estimation problems. Chemical Engineering Science, 2008, 63, 4834-4845.
[5] Zhu, Y., Legg, S., Laird, C.D. Optimal design of cryogenic air separation columns under uncertainty. Computers & Chemical Engineering, 34(9):1377-1384, 7 September 2010 (selected papers from FOCAPD 2009, Breckenridge, Colorado, USA).
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Optimization of simulated moving bed chromatography with fractionation and feedback incorporating an enrichment step
Suzhou Li,a Yoshiaki Kawajiri,b Jörg Raisch,a,c Andreas Seidel-Morgensterna,d
a Max-Planck-Institut für Dynamik komplexer technischer Systeme, Sandtorstraße 1, D-39106 Magdeburg, Germany, [email protected]
b School of Chemical & Biomolecular Engineering, Georgia Institute of Technology, 311 Ferst Drive, Atlanta, GA 30332, USA, [email protected]
c Fachgebiet Regelungssysteme, Technische Universität Berlin, Einsteinufer 17, D-10587 Berlin, Germany, [email protected]
d Institut für Verfahrenstechnik, Otto-von-Guericke Universität, Universitätsplatz 2, D-39106 Magdeburg, Germany, [email protected]
Abstract An enrichment step is proposed for simulated moving bed chromatography with fractionation and feedback (FF-SMB), a new variant of SMB, to concentrate the recyclates before they are fed back into the unit. The effectiveness of this operation is evaluated by systematic optimization studies. Case studies reveal that enriching the recyclates is advantageous for FF-SMB and provides further improvement in separation performance over the non-enriched case. Keywords: simulated moving bed, fractionation, feedback, enrichment, optimization.
1. Introduction
Simulated moving bed (SMB) chromatography is a continuous and efficient separation technology, extensively applied in the sugar, petrochemical, and fine chemical industries. Extending SMB to the pharmaceutical industry has also gained increasing interest. Recently, a novel process modification, referred to as fractionation and feedback SMB (FF-SMB), has been suggested [1]. A schematic representation of an SMB unit realizing fractionation and feedback is shown in Fig. 1. During the operation of FF-SMB, the outlet streams are fractionated into product and recycle fractions, thus splitting each cycle (here defined as the time between two successive port switches) into a production period of length $\tau_{Production}^k$ and a recycle period of length $\tau_{Recycle}^k$, $k = E, R$ (see Fig. 2a and b). Over $\tau_{Production}^k$, the outlet stream fulfills a given purity requirement and is withdrawn as product. Within the recycle period, the stream is directed to the buffer vessel and collected as an "off-spec" product, which is then fed back into the unit alternatingly with the fresh feed. The alternating use of different feed sources results in the distinct feeding regime shown in Fig. 2c, where a specific feeding sequence of using the raffinate buffer vessel first, followed by the original feed, and finally the extract buffer vessel (abbreviated as "RFE"), is chosen for illustration purposes. Such a feeding regime divides one complete cycle into three sub-periods, i.e., the raffinate feedback period $\tau_{Feedback}^R$, the feeding period $\tau_{Feeding}$ and the extract feedback period $\tau_{Feedback}^E$.

Fig. 1. Schematic diagram of FF-SMB (four zones I-IV with desorbent, extract, feed and raffinate ports; fractionation valves direct the extract and raffinate streams either to product withdrawal or to their respective buffer vessels, whose content is fed back at the feed port).

Fig. 2. Illustration of outlet fractionation and feeding regime of FF-SMB. (a) Extract outlet fractionation; (b) Raffinate outlet fractionation; (c) Feeding sequence "RFE" and feeding scheme. Solid lines: concentration profiles for A; dotted lines: concentration profiles for B. The concentration of each component is normalized with respect to that in the feed tank.
A systematic model-based optimization approach has been employed to evaluate the potential of FF-SMB [2, 3]. In our previous studies, it was found that the "off-spec" fractions (i.e., the recyclates) collected in the buffer vessels are partially separated and of favorable composition. However, compared to the original feed, they are significantly diluted. In this work, a continuous enrichment step is applied to the recyclates, allowing them to achieve higher feedback concentrations. The resulting more concentrated streams are then fed back into the inlet in a certain feeding sequence. A similar idea was adopted in Enriched Extract SMB (EE-SMB) [5], where a portion of the extract stream is concentrated and re-injected at the same point of the SMB unit. Using solvent evaporation as an illustrative example, the effectiveness of this operation will be evaluated with the help of the optimization method developed previously for FF-SMB. The effect of the evaporation rate on the performance achievable by the evaporative FF-SMB is also studied.
2. Mathematical modeling of FF-SMB with enrichment step
The equilibrium dispersive model was used to describe the chromatographic columns:

$$\frac{\partial C_i}{\partial t} + \frac{1-\varepsilon}{\varepsilon}\frac{\partial q_i}{\partial t} + u \frac{\partial C_i}{\partial z} - D_{ap,i}\frac{\partial^2 C_i}{\partial z^2} = 0, \quad i = A, B \qquad (1)$$

with the initial and boundary conditions

$$C_i(t,z)\big|_{t=0} = 0, \quad D_{ap,i}\frac{\partial C_i}{\partial z}\Big|_{z=0} - u\big(C_i\big|_{z=0} - C_i^{in}\big) = 0, \quad D_{ap,i}\frac{\partial C_i}{\partial z}\Big|_{z=L} = 0 \qquad (2)$$

The apparent axial dispersion coefficient $D_{ap,i}$ was assumed to be the same for both components and calculated using

$$D_{ap,i} = \frac{uL}{2N} \qquad (3)$$

The nonlinear competitive Langmuir isotherms were used to characterize the adsorption behavior of the two components:

$$q_i(C_A, C_B) = \frac{H_i C_i}{1 + K_A C_A + K_B C_B}, \quad i = A, B \qquad (4)$$

To enrich the "off-spec" fractions, many strategies, such as solvent evaporation and membrane filtration, can be applied. Throughout this paper, solvent removal by evaporation was used for evaluation purposes. A model with perfectly mixed conditions was assumed to predict the behavior of each buffer vessel. $Q_{Evp}^k$ was introduced to represent the flow rate of solvent evaporated from buffer vessel $k$ (see Fig. 3), for which the following mass balance equations can be derived:

$$\frac{d\big(V_{Buffer}^k C_i^{Buffer,k}\big)}{dt} = Q_{in}^k(t)\, C_i^k(t) - Q_{out}^k(t)\, C_i^{Buffer,k}(t), \qquad \frac{dV_{Buffer}^k}{dt} = Q_{in}^k(t) - Q_{out}^k(t) - Q_{Evp}^k(t) \qquad (5)$$

with the initial conditions $V_{Buffer}^k\big|_{t=0} = V_{Buffer,0}^k$, $C_i^{Buffer,k}\big|_{t=0} = C_{i,0}^{Buffer,k}$, $i = A, B$, $k = E, R$. Within each cycle, the liquid volume recycled from the corresponding outlet to the buffer vessel is $V_{Recycle}^k = (1 - \tau_{Production}^k)\, t_S\, Q^k$, the volume evaporated is $V_{Evp}^k = Q_{Evp}^k\, t_S$, and the volume fed back to the unit is $V_{Feedback}^k = \tau_{Feedback}^k\, t_S\, Q_F$. For the other modeling details, which are the same as those of FF-SMB, the reader is referred to [3].

Fig. 3. Illustration of the enrichment step by evaporating solvent from each buffer vessel (recycle $Q_{in}^k$, $C_i^k$ from outlet $k$ enters the vessel of volume $V_{Buffer}^k$ and concentration $C_i^{Buffer,k}$; the evaporate $Q_{Evp}^k$ is removed and $Q_{out}^k$ is fed back into the unit).
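As an illustration of Eq. (5), the sketch below integrates the balances for one buffer vessel with constant (assumed) flow rates; rearranging the component balance with the total balance shows how evaporation of pure solvent concentrates the solutes.

```python
import numpy as np
from scipy.integrate import solve_ivp

Q_in, Q_out, Q_evp = 1.0, 0.8, 0.06     # mL/min, assumed constant for the sketch
c_in = np.array([0.3, 0.05])            # g/L, assumed recyclate concentrations of A, B

def buffer_rhs(t, s):
    V, cA, cB = s
    dV = Q_in - Q_out - Q_evp                            # total balance of Eq. (5)
    # d(V c_i)/dt = Q_in c_in,i - Q_out c_i, rearranged using the total balance:
    dcA = (Q_in * (c_in[0] - cA) + Q_evp * cA) / V
    dcB = (Q_in * (c_in[1] - cB) + Q_evp * cB) / V
    return [dV, dcA, dcB]

# initial volume 4 mL and solute-free content, as in the data of Table 1
sol = solve_ivp(buffer_rhs, (0.0, 10.0), [4.0, 0.0, 0.0], max_step=0.1)
```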
3. Optimization problem formulation
As shown in Section 1, the fractionation and feedback regime offers additional degrees of freedom, i.e., $\tau_{Production}^k$ and $\tau_{Feedback}^k$, $k = E, R$, and the feeding sequence. Here we restrict our attention to the case where $V_{Recycle}^k = V_{Feedback}^k + V_{Evp}^k$, thus ensuring that each buffer vessel neither "runs dry" nor overflows. The two feedback periods can then be excluded from the independent variables. The feeding sequence, as done in [3], is also fixed at "RFE" in this paper (see Fig. 2c). The feed throughput maximization problem can be formulated as follows:

$$\max_{m_I,\, m_{II},\, m_{III},\, m_{IV},\, Q_F,\, \tau_{Production}^E,\, \tau_{Production}^R} \; Q_F^* = Q_F \big(1 - \tau_{Feedback}^E - \tau_{Feedback}^R\big) \qquad (6)$$

subject to the product purity specifications

$$Pur_E = \frac{\int_{1-\tau_{Production}^E}^{1} C_B^E(\tau)\, d\tau}{\int_{1-\tau_{Production}^E}^{1} \big(C_A^E(\tau) + C_B^E(\tau)\big)\, d\tau} \ge Pur_{E,min}, \qquad Pur_R = \frac{\int_{0}^{\tau_{Production}^R} C_A^R(\tau)\, d\tau}{\int_{0}^{\tau_{Production}^R} \big(C_A^R(\tau) + C_B^R(\tau)\big)\, d\tau} \ge Pur_{R,min} \qquad (7)$$

the maximum flow-rate limitation

$$Q_I \le Q_{max} \qquad (8)$$

the feasibility constraints on the feedback periods

$$0 \le \tau_{Feedback}^E \le 1, \qquad 0 \le \tau_{Feedback}^R \le 1, \qquad 0 \le \tau_{Feedback}^E + \tau_{Feedback}^R \le 1 \qquad (9)$$

and the feasibility requirements on the m-values

$$m_I - m_{II} > 0, \qquad m_I - m_{IV} > 0, \qquad m_{III} - m_{II} > 0, \qquad m_{III} - m_{IV} > 0 \qquad (10)$$

Among the optimization variables, the four dimensionless m-values are defined as the net flow-rate ratios [4]: $m_j = \dfrac{Q_j t_S - V_{Col}\,\varepsilon}{(1-\varepsilon)\, V_{Col}}$, $j = I, II, III, IV$. The objective function $Q_F^*$ denotes the average rate of processing fresh feed in the FF-SMB unit. The sequential approach [2, 3] was used to solve the nonlinear programming (NLP) problem. The partial differential equations in Section 2 were discretized in space by orthogonal collocation on finite elements, and the resulting system of differential algebraic equations (DAEs) was integrated using the DASPK solver. A sequential quadratic programming optimizer, E04UCF from the NAG Fortran Library, was chosen for the optimization. When the deviation between the concentration profiles at the end of two successive cycles fulfills a pre-specified tolerance, the cyclic steady state was considered to be attained. The gradients of the purity constraints were numerically approximated by a finite difference scheme, while the others were determined analytically.
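Once the cyclic steady-state outlet concentration profiles are available on a grid of the dimensionless time $\tau$, the purity integrals in Eq. (7) reduce to simple quadratures. A minimal sketch using the trapezoidal rule (the profile arrays are assumed to come from the surrounding simulation):

```python
import numpy as np

def _trapz(y, x):
    # trapezoidal rule, written out to avoid relying on a specific NumPy version
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def raffinate_purity(tau, cA, cB, tau_prod):
    # Pur_R of Eq. (7): purity of A over the raffinate production period [0, tau_prod]
    m = tau <= tau_prod
    return _trapz(cA[m], tau[m]) / _trapz(cA[m] + cB[m], tau[m])

def extract_purity(tau, cA, cB, tau_prod):
    # Pur_E of Eq. (7): purity of B over the extract production period [1 - tau_prod, 1]
    m = tau >= 1.0 - tau_prod
    return _trapz(cB[m], tau[m]) / _trapz(cA[m] + cB[m], tau[m])
```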
4. Results and discussion A binary separation characterized by nonlinear Langmuir isotherms and performed in a 4-zone laboratory-scale SMB unit was used as a model process. The operating conditions and model parameters are summarized in Table 1.
Table 1. Summarized data of the example process

Column parameters and operating conditions:
D [cm]: 1; L [cm]: 10; ε [-]: 0.667; N [-]: 40; feed concentrations [g/L]: 1; Q_max [mL/min]: 10; V_{Buffer,0}^k (k = E, R) [mL]: 4; C_{i,0}^{Buffer,k} (i = A, B; k = E, R) [g/L]: 0.

Adsorption isotherm coefficients in Eq. (4):
H_A [-]: 5.078; K_A [L/g]: 0.089; H_B [-]: 5.718; K_B [L/g]: 0.105.

The maximum feed throughput delivered by the conventional SMB (i.e., Q_F) and by FF-SMB (i.e., Q_F*) with and without evaporation, for different purity requirements, is shown in Fig. 4. Note that in this case study the evaporation rate for both buffer vessels was fixed at 0.06 mL/min; such a relatively low value was selected because of the small dimensions of the unit under consideration. For the 5-column SMB, the configuration 1/1/2/1, representing two columns in section III and one each in the other sections, was considered. It is observed that the 4-column SMB is feasible only for the purity of 90%. The evaporation allows the 4-column FF-SMB to achieve a feed throughput comparable to that of the 5-column SMB, and to clearly outperform the non-evaporative scenario. The improvement over the normal FF-SMB becomes more significant as higher purity is required. For the purity of 95%, the capacity for processing fresh feed can be enhanced by 150%, while this improvement is only 14% for the purity of 90%.

Fig. 4. Comparison of optimum feed throughput for different operating regimes (maximum Q_F or Q_F* versus purity, for the 4-column SMB, 4-column FF-SMB, 4-column evaporative FF-SMB and 5-column SMB (1/1/2/1)).

The optimal performance of the 4-column FF-SMB with different solvent evaporation rates is shown in Fig. 5, where the minimum purity requirement was specified as 95% for both products. As can be seen in this figure, the improvement of the feed throughput increases linearly with the evaporation rate. At a rate of 0.09 mL/min or
higher, the optimal value exceeds that of the 5-column SMB. It should be pointed out that typically the two components in the fresh feed reach their solubility limits. Care should be taken to avoid excessive solvent evaporation. We have checked that for both case studies each component after evaporation is more concentrated, but remains below its feed concentration.

Fig. 5. Optimal feed throughput of the evaporative FF-SMB with different solvent removal rates (maximum Q_F* versus evaporation rate, compared to the 5-column SMB (1/1/2/1)).

5. Conclusions
Considering solvent evaporation as an example, the effectiveness of an enrichment step has been evaluated by systematic optimization studies. The optimal performance of FF-SMB including an enrichment step is compared to that of SMB and FF-SMB. Quantitative results show that significant improvement over the non-enriched case and the conventional concept can be achieved. The attainable benefit becomes more pronounced as the evaporation rate increases. It can be expected that the idea is particularly attractive for cases where the process is not operated close to the solubility limits of the components. Experimental validation of this operation on FF-SMB is our future work.
Nomenclature
C: liquid phase concentration [g/L]; D: column diameter [cm]; H: Henry coefficient [-]; K: adsorption parameter [L/g]; L: column length [cm]; N: number of theoretical plates per column [-]; Pur: product purity [%]; Q: liquid phase flow-rate [mL/min]; q: solid phase concentration [g/L]; t: time [min]; t_S: cycle time [min]; u: interstitial liquid velocity [cm/s]; V: liquid volume [mL]; V_Col: column volume [mL]; z: axial coordinate [cm]; ε: column porosity [-]; τ: dimensionless time defined as τ = t/t_S [-]. Subscripts and superscripts: I, II, III, IV: SMB zone index; Buffer: buffer vessel; E: extract; Evp: evaporation; F: feed; Feed: original feed tank; Feedback: feedback from buffer vessel; Feeding: feed from original feed; i: component index, i = A, B; in: inlet of buffer vessel; k: SMB outlet or buffer vessel index, k = E, R; out: outlet of buffer vessel; Production: outlet stream collected as product; R: raffinate; Recycle: outlet stream recycled to buffer vessel.
References
[1] L.C. Keßler, A. Seidel-Morgenstern, 2008, Improving performance of simulated moving bed chromatography by fractionation and feed-back of outlet streams, J. Chromatogr. A, 1207, 1-2, 55-71.
[2] S. Li, Y. Kawajiri, J. Raisch, A. Seidel-Morgenstern, 2010, Optimization of simulated moving bed chromatography with fractionation and feedback: Part I. Fractionation of one outlet, J. Chromatogr. A, 1217, 33, 5337-5348.
[3] S. Li, Y. Kawajiri, J. Raisch, A. Seidel-Morgenstern, 2010, Optimization of simulated moving bed chromatography with fractionation and feedback: Part II. Fractionation of both outlets, J. Chromatogr. A, 1217, 33, 5349-5357.
[4] M. Mazzotti, G. Storti, M. Morbidelli, 1997, Optimal operation of simulated moving bed units for nonlinear chromatographic separations, J. Chromatogr. A, 769, 1, 3-24.
[5] M. Bailly, R.-M. Nicoud, A. Philippe, O. Ludemann-Hombourger, 2004, Method and device for chromatography comprising a concentration step, World Patent WO2004039468.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Tuning a Distillation Column Simulator Kurt E. Häggblom and Ramkrishna K. Ghosh Process Control Laboratory, Dept. of Chemical Engineering, Åbo Akademi University, Biskopsgatan 8, FIN-20500 Åbo, Finland
Abstract A dynamic simulation model for binary distillation is presented. The model is based on mass, component and energy balances as well as pertinent expressions for the thermodynamic properties of the binary mixture. The model includes a total condenser, producing a reflux with a varying degree of subcooling, and a thermosyphon reboiler, which introduces several operating parameters. Parameters related to the operation of the condenser and the reboiler as well as tray efficiencies and heat losses from the column are determined by fitting to experimental data. Keywords: distillation modelling, parameter estimation, data reconciliation, dynamic simulator.
1. Introduction Energy saving has become an extremely important issue in the chemical process industries. Distillation columns, in particular, consume huge amounts of energy. One way of minimizing the energy consumption is improved control. To achieve this, efficient modelling and system identification techniques suited for industrial use and tailored for control design applications are needed. A dynamic simulator is a useful tool for the development of such techniques. The simulator described in this paper is to be used as a research tool to complement a pilot-scale distillation column. Therefore, the behaviour of the simulator is adapted to that of the real column by tuning a number of uncertain operating parameters of the simulator. Due to space limitations, only parameters that can be determined from steady-state data are considered in this paper. Furthermore, the implementation of the simulator in MathWorks' Simulink software is not described.
2. Distillation Model
A schematic of a two-product distillation column is shown in Fig. 1. A dynamic model for binary distillation in such a column is presented. The main features of the model are:
• Variations in vapour holdup and its internal energy are neglected. It is also assumed that the internal pressure is constant and that all liquid holdups are perfectly mixed.
• The enthalpy of vaporization is assumed to depend affinely on the fraction of the light (i.e., the more volatile) component in the mixture. From this it follows that the energy balance for a tray can be reduced to an algebraic expression and the internal flow rates are not necessarily constant in the chosen flow rate units. Heat losses from the column are allowed, which also affect the internal flow rates. These energy balance effects are quite often neglected in simple models.
• Tray compositions are modelled by Murphree's tray efficiency and a relative volatility, which may depend nonlinearly on the composition.
• The condenser is a total condenser, which produces a reflux with a varying degree of subcooling. This effect is usually omitted in published distillation models.
• The reboiler is a vertical thermosyphon reboiler, which circulates liquid in addition to the produced vapour. This introduces several operating parameters, which, as far as the authors know, have not been included in other distillation models.
The type of tray efficiency is not crucial; it can easily be changed. The choice of condenser and reboiler types is motivated by the fact that the model describes a distillation column equipped in this way. Further details of the model, especially those concerning the energy balance and the condenser, are given in Häggblom (1991).

Figure 1. A distillation column (feed F, z_F; reflux L; boilup V; distillate D, x_D; bottoms B, x_B).

2.1. Distillation Tray
A schematic of a tray is shown in Fig. 2. The basic notation is indicated in the figure.

Figure 2. A distillation tray (entering flows L_{i-1}, x_{i-1}; F_i, z_i; V_{i+1}, y_{i+1}; leaving flows L_i, x_i; V_i, y_i).

The total material balance for tray i is

$$\frac{dM_i}{dt} = V_{i+1} - V_i + L_{i-1} - L_i + F_i \qquad (1)$$

At constant pressure, $L_i$ depends on other variables as indicated by (Betlem et al., 1998)

$$L_i = f(M_i, x_i, V_i) \qquad (2)$$

In practice, we use the Francis weir formula (see Foss, 1983), which is independent of $V_i$. The material balance for the light component is

$$\frac{dm_i}{dt} = V_{i+1} y_{i+1} - V_i y_i + L_{i-1} x_{i-1} - L_i x_i + F_i z_i, \qquad x_i = m_i / M_i \qquad (3)$$
Here, the fraction of light component in the outgoing vapour flow is given by

$$y_i = E_i y_i^* + (1 - E_i)\, y_{i+1}, \qquad y_i^* = \alpha_i x_i / [1 + x_i(\alpha_i - 1)] \qquad (4)$$

where $E_i$ is Murphree's tray efficiency and $\alpha_i$ is the relative volatility, which may depend nonlinearly on $x_i$. As shown in Häggblom (1991), the energy balance can be reduced to the algebraic form

$$V_i h_{V_i} = V_{i+1} h_{V_{i+1}} + L_{i-1} q_{L_{i-1}} h_{L_{i-1}} + F_i q_{F_i} h_{F_i} - Q_i \qquad (5)$$

Here, $Q_i$ is the heat loss from the tray through conduction, $h_S$ is the heat of vaporization of a stream $S$, and $q_S$ denotes the thermal condition of the stream such that $q_S h_S$ is the difference between the specific or molar enthalpy of $S$ and the corresponding enthalpy of a saturated liquid with the same composition as $S$. This means that $q < 0$ for a subcooled liquid, $q = 0$ for a saturated liquid, and $q = 1$ for a saturated vapour.

2.2. Condenser
The balance equations for the condenser, including the reflux drum, are

$$\frac{dM_0}{dt} = V_1 - L - D \qquad (6)$$

$$\frac{dm_0}{dt} = V_1 y_1 - (L + D)\, x_D, \qquad x_D = m_0 / M_0 \qquad (7)$$

$$\frac{dU_0}{dt} = V_1 h_{V_1} - (L + D)\frac{U_0}{M_0} - Q_0, \qquad q_L h_L = \frac{U_0}{M_0} \qquad (8)$$
Here, $M_0$ and $m_0$ are the total and light-component holdups, respectively, in the reflux drum. In the equations for tray 1, $L_0$ and $x_0$ are equivalent to $L$ and $x_D$, respectively. The heat removal in the condenser is modelled as (Häggblom, 1991)

$$Q_0 = k_c h_{V_1} V_1 + C \qquad (9)$$

where $k_c$ and $C$ are parameters to be determined.

2.3. Reboiler and Column Bottom
The reboiler is a vertical thermosyphon reboiler. A schematic of the reboiler and the bottom part of the column is shown in Fig. 3. According to Kister (1992), the shown arrangement, where the circulated liquid directly enters the bottom product sump, makes the reboiler work as an ideal stage. However, there is also a possibility that a fraction of the liquid is returned to the compartment connected to the reboiler. This fraction is denoted $r$. We also define

$$R = L_{N+1} / L_r \qquad (10)$$

Figure 3. Column bottom and a thermosyphon reboiler (tray liquid $L_N$, $x_N$; circulated liquid $L_{N+1}$, $x_{N+1}$ split into $r L_{N+1}$ and $(1-r) L_{N+1}$; vapour $V_{N+1}$, $y_{N+1}$; compartment outflow $L_b$, $x_b$; reboiler intake $L_r$, $x_r$; steam $V$; bottoms $B$, $x_B$).

Assuming immediate mixing of the flows entering the reboiler compartment, and constant density of the liquid in the compartment, we obtain

$$L_b = L_N - (1 - rR)\, L_r, \qquad x_b = \frac{L_N x_N + rR L_r x_{N+1}}{L_N + rR L_r}, \qquad M_r \frac{dx_r}{dt} = L_r (x_b - x_r) \qquad (11)$$

For the bottom product sump, the total and component material balances give

$$\frac{dM_B}{dt} = L_b + (1 - r)\, R L_r - B, \qquad \frac{dm_B}{dt} = L_b x_b + (1 - r)\, R L_r x_{N+1} - B x_B, \qquad x_B = \frac{m_B}{M_B} \qquad (12)$$

Assuming first-order dynamics with the time constant $\tau_r$ for the relationship between the steam flow rate $V$ and the heat input $Q_r$ from the condensing steam, we obtain the following set of equations for the thermosyphon reboiler:

$$\tau_r \frac{dQ_r}{dt} + Q_r = h_V V, \qquad V_{N+1} = \frac{Q_r}{h_{V_{N+1}}}, \qquad L_r = \frac{V_{N+1}}{1 - R}, \qquad x_r = R x_{N+1} + (1 - R)\, y_{N+1} \qquad (13)$$

$$y_{N+1} = E_r y_{N+1}^* + (1 - E_r)\, x_r, \qquad y_{N+1}^* = \alpha_{N+1} x_{N+1} / [1 + x_{N+1}(\alpha_{N+1} - 1)] \qquad (14)$$
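As a minimal illustration of Eqs (4)-(5), the functions below evaluate the tray vapour composition and solve the algebraic energy balance for the vapour flow leaving a tray. The function names and argument names are ours; the stream data are assumed to come from the surrounding simulation.

```python
def tray_vapor_composition(x_i, y_below, alpha_i, E_i):
    # Eq. (4): Murphree efficiency applied to the equilibrium composition y*
    y_star = alpha_i * x_i / (1.0 + x_i * (alpha_i - 1.0))
    return E_i * y_star + (1.0 - E_i) * y_below

def tray_vapor_flow(V_above, hV_above, L_in, qL_in, hL_in,
                    F_i, qF_i, hF_i, Q_i, hV_i):
    # Eq. (5) solved for V_i: subcooled reflux (qL < 0) condenses vapour,
    # vapour in the feed (qF > 0) adds to it, and the heat loss Q_i reduces it
    return (V_above * hV_above + L_in * qL_in * hL_in
            + F_i * qF_i * hF_i - Q_i) / hV_i
```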
3. Parameter Tuning Even if correlations for the required thermodynamic properties of the binary mixture are available, there are several unknown parameters. The majority of these parameters can be determined from steady-state data. In this paper, we only consider such parameters.
Because it is desired that the simulator behaves like an existing distillation column, data from a number of experiments with the column are used for the parameter fitting. A subset of these experiments is described in Häggblom and Böling (1998). A drawback for the parameter fitting is that reliable steady-state data are known for only a few significantly different steady states. However, a large number of pseudo steady states are available, where all relevant variables except the compositions are essentially steady. Hence, pseudo steady states will also be used for parameters that depend only weakly on composition.
3.1. Condenser Parameters
Equations (6) to (9) at steady state can be combined to give the expression

$$1 - q_L = k_c + C / \big[(L + D)\, h_L\big] \qquad (15)$$

where $h_{V_1}$ has been replaced by $h_L$ because $y_1 = x_D$ at steady state. When, in addition to $L$, $D$ and $x_D$, the temperature $T_0$ of the reflux as well as expressions for the heat capacity $c_p(x_D, T)$, the boiling point $T(x_D)$, and the heat of vaporization $h_L(x_D)$ are known, the parameter $q_L$ can be calculated from

$$q_L = -\frac{1}{h_L(x_D)} \int_{T_0}^{T(x_D)} c_p(x_D, T)\, dT \qquad (16)$$
It is then straightforward to determine $k_c$ and $C$ by linear least-squares regression.
3.2. Heat Input and Loss
If the operating conditions of the reboiler are uncertain, the condensation enthalpy of the steam is not known exactly. In addition, the heat loss from the distillation column is uncertain. Both parameters can be determined by fitting to data. Combining Eq. (5) for every tray with the first two equations in (13) gives at steady state

$$(L + D)\, h_L - L q_L h_L - F q_F h_F = h_V V - \sum_{i=1}^{N} Q_i \qquad (17)$$

where $L + D$ and $h_L$ have been substituted for $V_1$ and $h_{V_1}$, respectively. The parameters $h_V$ and $\sum Q_i$ can now be determined by linear least-squares regression.
3.3. Reboiler Parameters and Tray Efficiencies
For given inputs $L$, $V$, $F$, $z_F$ and $q_F$ (or $T_F$), it is possible to determine the steady-state solution $x_D$ and $x_B$ from Eqs (1) to (14). The solution can be found by using $x_D$ as an iteration variable. In practice, the solution is found by solving the set of equations for $x_D$ such that the overall material balance closes at the top of the column. It is assumed that the tray efficiencies are equal for all trays above the feed tray (denoted $E_t$) and equal also for the rest of the trays (denoted $E_b$). The reboiler parameters $R$, $r$ and $E_r$ as well as the tray efficiencies $E_b$ and $E_t$ can then be determined by minimizing the sum of squares of the differences between the experimental product compositions and those found by solving the above set of equations. Here, only reliable steady-state data are used.
3.4. Overall optimisation with positive flow rate constraints
It turned out that the optimisation of the reboiler parameters gave a negative $L_b$ at certain steady states. The optimisation was therefore repeated with $L_b > 0$ as a constraint. All parameters were, in fact, included in the optimisation. The summed squares of the differences between the experimental and calculated reflux temperatures were also included in the objective function.
4. Result
The following set of (rounded) parameter values gave a value of the objective function within 0.02% of the true optimum: $R = 0.60$, $r = 0.20$, $E_r = 1$, $E_b = 0.66$, $E_t = 0.595$, $h_V = 2306$ kJ/kg, $\sum Q_i = 23260$ kJ/h, $k_c = 0.9955$, $C = 9570$ kJ/h. This means that 60% of the reboiler intake is circulated as liquid and 20% of this is returned to the bottom compartment connected to the reboiler. It is also interesting to note that the optimisation resulted in 100% reboiler efficiency, which means that the reboiler works as an ideal stage. The Murphree tray efficiencies were found to be $E_b = 0.66$ and $E_t = 0.595$. For the heat input and heat loss parameters, values close to the expected ones were obtained: 2306 kJ/kg for the condensation enthalpy of steam and a total heat loss of 10% of the heat input. Fig. 4 shows the simulated product compositions vs. the experimental ones for three types of steady states: high, medium, and low internal flows. Fig. 5 illustrates how well the energy balances in Eqs (15) and (17) match. Here, pseudo steady states are also included. Note that an explicit fit to the energy balances was not part of the optimisation in 3.4.

Fig. 4. Composition fits (distillate and bottoms compositions [wt %], simulation vs. experiment, for high, medium and low L&V).

Fig. 5. Energy balance matches (top: condenser parameters, 1 - qL vs. 1/[(L+D)hL]; bottom: heat input and loss, (L+D)hL - L qL hL - F qF hF vs. V; pseudo and true steady states with fitted lines).

Acknowledgment
Financial support from the Academy of Finland, grant number 122286, is gratefully acknowledged.

References
B.H.L. Betlem, J.E. Rijnsdorp and R.F. Azink, 1998, Influence of tray hydraulics on tray column dynamics, Chem. Eng. Sci., Vol. 53, No. 23, pp. 3991-4003.
B. Foss, 1983, Composition control of binary distillation columns using multivariable optimal control, Modeling, Identification and Control, Vol. 4, No. 4, pp. 195-216.
K.E. Häggblom, 1991, Modeling of flow dynamics for control of distillation columns, Proc. American Control Conf., pp. 785-790.
K.E. Häggblom and J.M. Böling, 1998, Multimodel identification for control of an ill-conditioned distillation column, J. Process Control, Vol. 8, No. 3, pp. 209-218.
H.Z. Kister, 1992, Distillation Design, McGraw-Hill.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A Comparative Study of MPC-Based Control Configurations of an Industrial Bioreactor to Produce Ethanol
Aarón Romo-Hernández,a Salvador Hernández,a Arturo Sánchez,b Héctor Hernández-Escotoa,*
a Universidad de Guanajuato, Departamento de Ingeniería Química, Noria Alta s/n, Col. Noria Alta, Guanajuato, Guanajuato, 36050, México
b Centro de Investigación y Estudios Avanzados, Unidad de Ingeniería Avanzada, Av. Científica 1145, Col. El Bajío, Zapopan, Jalisco, 45015, México
Abstract For a class of continuous stirred tank bioreactors for producing ethanol at large scale, this work evaluates suitable control configurations in a model predictive control framework. The study considers a non-square plant, a distinction between control and output variables, and the possible sets of available outputs. Since the outputs are obtained in discrete form, the effect of the sampling interval was also evaluated. Operationally feasible configurations are discussed. It is highlighted that a conventional configuration of one input and one control variable, driven by one output, is the best for maintaining all three states around their nominal values. Keywords: bioreactor control, control configurations, model predictive control, industrial bioethanol production.
1. Introduction Nowadays, there is a trend on research and technology development on renewable and sustainable energy sources, where bioethanol is a promising option. Current bioethanol industrial plants are based on corn or sugar cane, where the sugar content is fermented to ethanol in a sequence of continuous stirred tank reactors; the operational objective of this process consists on obtaining high ethanol/sugar ratios with reduced concentrations of yeast due to operating restrictions at downstream process equipments (i.e., Meleiro and Filho, 2000). This operational scenario is visualized for prospective second or third generation industrial plants for bioethanol production. As any process, implementing automatic control of the process variables is an option for guaranteeing operational objectives; however, this is hindered due to the fact that corresponding measurements are laboratory analysis of samples taken out at periodic timings. The sampling time and measured outputs depend on the laboratory infrastructure; i.e., on one hand, liquid chromatography is the resorted technique for
A Comparative Study of MPC-Based Control Configurations of an Industrial Bioreactor to Produce Ethanol
829
measuring most of the interest variables; on the other hand, a refractometer is an instrument, less sophisticated and cheaper than a chromatographer, but it only measures sugar. On the particular line of developing control systems for bioreactors to produce ethanol, in the open literature it is reported that bioreactors have been tested and studied using conventional controllers up to advanced ones (i.e., Nagy, 2007); in this work, attention is given to MPC techniques because they inherently manage discrete signals, and their optimal control framework allows the analysis of non-square plants. Then, this work explores the performance of a MPC control system in face of possible control configurations, distinguishing control variables from available output variables, and the effect of the sampling time.
2. The Bioreactor and its MPC Based Control Configurations A continuous stirred tank bioreactor (CSTBR) is considered, where sugarcane syrup is fermented by Saccharomises cerevisae to produce ethanol (Figure 1). As one of a reactors sequence, its input (and output) stream contains sugar, yeast and ethanol. Focusing on production yield variables, the behavior of this class of industrial CSTBRs is described by the following Monod kinetics based model (Meleiro and Filho, 2000), C = rg (C, S, P) + D (Ce C) ,
C(t0 ) = C0 ,
(1a)
S = YS/C rg (C, S, P) + D (Se S) ,
S(t0 ) = S0 ,
(1b)
P = YP/C rg (C, S, P) + D (Pe P) ,
P(t0 ) = P0 ,
(1c)
y(ti ) = [mc C(ti ), mS S(ti ), mP P(ti )]' , rg (C, S, P) = (C, S, P) C ,
mC, mS, mP = 0 or 1,
ti+1 = ti + , n
(1d) m
S P C (C, S, P) = MAX 1 * 1 * . S+k P C
(1f)
The CSTBR state is formed by the concentrations of yeast (C), sugar (S) and ethanol (P); the potential disturbances are the concentrations of yeast (Ce), sugar (Se) and ethanol (Pe) in the input stream; and the dilution rate (D) is the only control input. The output set (y) depends on the infrastructure for sample analysis (define by parameters mC, mS and mP), and this issue is considered as a freedom degree for operational schemes. The current measurement is assumed available at the instant ti; i.e., this can be provided by a state estimator, driven by measurements that reflect the reactor state at the previous instant ti-1. The sampling time () is considered periodic. The model takes into account process inhibition by high concentrations of ethanol and yeast (Eq. 1f). For the model parameter (MAX, k, YS/C, YP/C, P*, C*) values identified by Andrietta et al. (1994), on the basis of experimental data of an industrial system, this process does not present steady state multiplicity, and the involved control problems are either of disturbance rejecting or set-point changing. In this sense, the control problem considered is one of disturbance rejecting to maintain the reactor state around a nominal one ( C , S , P ). Considering the above mentioned system features, a Model Predictive Control approach is resorted to.
A. Romo-Hernández et al.
830
On the step of establishing an appropriate control configuration, measured variables (y) are distinguished from control variables (z): . z(t) = [cc C(t), cS S(t), cP P(t)]' , cC, cS, cP = 0 or 1.
(2)
In a model based optimal control framework, the problem of maintaining more than one control variable around a nominal one, through a number of input lower than the number of control variables, can be formulated. In this sense, combining the sole control input (D) with seven possible sets of control variables (z) and seven outputs (y), results in 49 possible control configurations. For example, denoting a control configuration as (D; z’; y’), for the CSTBR control system (Figure 1), the controller can be driven by a three-output set to maintain either only one control variable in its nominal value (i.e., (D; S; yC, yS, yP)), or more than one control variable around their nominal values (i.e., (D; C, S, P; yC, yS, yP)); on another example, the controller can be driven by only one output (i.e., (D; C, S, P; yS)). Then, the control problem becomes: for a MPC based control system for an industrial CSTBR to produce ethanol (i) determine appropriate control configurations, and (ii) evaluate the effect of the sampling p g time.
Figure 11. MPC C Controll S System ffor a CSTBR to Produce Ethanol P d Eh
3. The MPC Control System and its Performance Evaluation MPC control systems (Figure 1), of different y, z and , are constructed in the MATLAB® MPC toolbox (Bemporad et al., 2006); the controller is based on the linearized version of the CSTBR model (Eq. 1), and the nonlinear model (Eq. 1) forms the actual plant. Disturbances are assumed unmeasured, and constraints are only imposed to D. The tuning parameters are: (i) the number of control intervals (Hc), (ii) the number of prediction intervals (Hp), and (iii) the weights of control variables (w = (wC, wS, wP)), that dictate the accuracy with which each control variable must track its set-point. In order to evaluate the control system performance, the following function was used: nF t C S P J E = { (C C (ti )) 2 + ( S S (ti )) 2 + ( P P(ti )) 2 } , nF = F , i =0 S S S
(3)
A Comparative Study of MPC-Based Control Configurations of an Industrial Bioreactor to Produce Ethanol
831
where nF is the number of intervals in the process evaluation time (tF). The study approach firstly follows the construction of control systems with the different control variables sets, assuming a full output set, and a quasi-continuous measurement ( = 0.05 h); next, their performances are evaluated for different tuning parameter values (HP, HC, w). Finally, selected control configurations are evaluated for different sampling times ( = 0.5, 1 and 2 h), and for different output sets.
4. Performance Evaluation The following nominal operational conditions are considered: D = 0.35 h-1, (Ce, Se, Pe) = (25 g/L, 125 g/L, 1 g/L), whose nominal steady state is ( C , S , P ) = (28.4816 g/L, 19.4962 g/L, 47.9492 g/L). The assumed disturbances are (Ce, Se, Pe) = (1 g/L, 2 g/L, 1 g/L). On the tuning the MPC controllers, for all the control systems, it was found that HC = 3 and HP = 5 provide the better performances; greater horizons do not reflect a significant decrease on JE (Eq. 3). On the evaluation of control configurations with full output set, continuously measured, those oriented on tracking substrate (wS >> wC, wS >> wP) are the best in the sense that their JE values are the lowest. Indeed, the only-sugar controlling ((wC, wS, wP) = (0, 1, 0)) is the best. Yeast tracking results in the greater JE values, and only-yeast controlling ((wC, wS, wP) = (1, 0, 0)) owns the greatest, and is not feasible due to the required D surpasses its upper limit (0.2 < D < 0.5 1/h). In Figure (2a), the behavior of the best control configuration (D; S; yC, yS, yP) is shown for different sampling times ( = 0.05, 1, 2 h). It can be observed that the greater the greater an offset in S, then the greater the corresponding JE value. In Figure (2b), behavior of control configurations for different output sets (D; S; y) is shown; also it is noticed an offset increase in S when the output set contains fewer number of measured states. It can be noticed that although all the control system have offset on S, except those of continuous measurements, this is not considerable on practical framework, and maintain all the reactor state around a nominal one.
5. Conclusions For the class of CSTBR to produce ethanol at large scale, control configurations were evaluated taking into account the availability of measurements, and the size of sampling intervals. Controlling sugar concentration is the best choice for control configurations, and an output set of two or one measurement could be enough to operate the control system with a sampling time of a practical size. Controlling yeast is not feasible for this process.
6. Acknowledgements Financial supports from Consejo Nacional de Ciencia y Tecnología – México (Red de Fuentes de Energía program), Secretaria de Educación Pública – México (PROMEP program), and Universidad de Guanajuato, are acknowledged.
832
A. Romo-Hernández et al.
(a) (b) Figure 2. Performance of control systems
References S. R. Andrietta, 1994, Modelagen, simulacao e controle de fermentacao alcoolica continua em escala industrial, Ph. D. thesis. Brazil: FEA/UNICAMP. A. Bemporad, M. Morari, N. L. Ricker, 2006, Model Predictive Control Toolbox for Use with MATLAB®, Version 2, The Mathworks, Inc. L. A. C. Meleiro, R. M. Filho, 2000, A self-tuning adaptive control applied to an industrial largescale ethanol production, Computers and Chemical Engineering, 24, 2-7, 925-930. Z. K. Nagy, 2007, Model based control of a yeast fermentation bioreactor using optimally designed artificial neural networks, Chemical Engineering Journal, 127, 1-3, 95-109.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Control of an azeotropic distillation process to acetonitrile production Andrea Ruiz Ruiz, Nelson Borda Beltrán, Alexander Leguizamón R., Javier R. Guevara L., Ivan D. Gil C. Grupo de Ingeniería de Sistemas de Proceso- Departamento de Ingeniería Química y Ambiental, Universidad Nacional de Colombia – Sede Bogotá, Carrera 30 45-03, Bogotá, Colombia, [email protected]
Abstract The goal of this work is to evaluate and to simulate an alternative way to produce acetonitrile by means of extractive distillation. This chemical substance is used in pharmaceutics, solvents, antibiotics, vitamin and insulin production, among others. The high demand of this product during several years ago has created the necessity of increasing 5% the industrial production. Therefore, it is necessary to establish the best operating conditions to obtain a high purity product in a cost effective way. Azeotropic distillation has been widely studied in the separation of acetonitrile-water mixtures obtaining purities higher than 99.9% of acetonitrile. Some of the solvents used in this kind of separation include hexyl-amine and butyl acetate. Also, pressure swing distillation has been reported, with the advantage of eliminating the use of a solvent reduces the energy consumption and improves the quality of acetonitrile produced. Extractive distillation is a partial vaporization process in the presence of a non-volatile separating agent with a high boiling point, which generally called solvent or entrainer, and which is added to the azeotropic mixture to alter the relative volatility of the key component with no additional formation of azeotropes. The principle driving extractive distillation is based on the introduction of a selective solvent that interacts differently with each of the components of the original mixture and which generally shows a strong affinity with one of the key components. The purpose of this work is to design and control an extractive distillation process to produce acetonitrile using ethylene-glycol as entrainer. The steady state design involves the selection of the appropriate thermodynamic model and the study of the effect of the main design variables. The plantwide control strategy considers the control of only one temperature on each column in order to be used for wider industrial applications and to provide good product quality control.
Keywords: acetonitrile, azeotropic distillation, process control, ethylene-glycol.
1. Introducción Acetonitrile is a derivative of acetic acid, usually found in aqueous solution, as a secondary product in obtaining acrylonitrile from propylene ammoxidation, or waste streams in extraction processes, chromatography, etc., where also other compounds are found like methanol, benzene, allyl alcohol among others. The concentration level of
834
A. Ruiz et al.
organic impurities in these streams is usually less than 25%, acetonitrile concentration is at least 85% and is acrylonitrile and hydrogen cyanide free [1]. The most important issue for this separation is the homogeneous azeotrope of minimum boiling point at 1 atm (76.5 ° C), which forms acetonitrile with water. Some of the processes used to obtain high purity acetonitrile include solvent extraction combined with a conventional distillation to separate very dilute solutions [2]. The most widely used separation strategy includes a heterogeneous batch azeotropic distillation, using entrainers such as hexylamine or butyl acetate. Also, pressure swing distillatin has been used taking advantage on the dependence of the azeotrope composition with pressure. This process leads to lower costs compared with other types of distillation, since there is no addition or recirculation of a solvent or other substance [3]. The goal of this work is to use extractive distillation with ethylene glycol as entrainer to purify a stream with azeotropic composition of acetonitrile and water. The main operating conditions were established performing a steady state simulation in Aspen Plus®. Then an Aspen Dynamics® simulation was used to establish a control strategy that allows obtain acetonitrile at different operating conditions.
2. .Simulation 2.1. Thermodynamics Acetonitrile-water mixture at atmospheric pressure has a minimum-boiling homogeneous azeotrope at 76.5ºC. The NRTL physical property model is used to describe the nonideality of the liquid phase and the vapor is assumed to be ideal. All NRTL model binary parameters are taken from Aspen Plus database. 2.2. Steady State 2.2.1. Extractive column To determine the extractive column layout parameters the effect of number of stages, molar reflux ratio, solvent and binary mixture feed stage, temperature input solvent and the solvent to feed molar ratio were analyzed. In all the cases the acetonitrile mole fraction on distillate stream as well as condenser and reboiler heat duties were examined because its effect on the operating costs and the efficiency of the separation. Figures 1 and 2 show the effect of reflux ratio and solvent temperature on the acetonitrile mole fraction on the distillate stream, and condenser and reboiler duties. It can be observed that high values of solvent temperature deteriorate the acetonitrile composition, probably because a fraction of the water contained in the solvent inlet stage could be vaporizing as a consequence of the energy of the solvent stream. On the other way, reflux ratio has positive impact when is fixed in low values (R=1) and the acetonitrile concentration diminishes as reflux ratio increases. The main operating conditions established from the different sensitivity analyses are summarized in Table 1. 2.2.2. Recovery column As a second step of the process, an entrainer recovery column was simulated in order to recirculate ethylene-glycol used in the extractive distillation column. The bottoms
Control of an Azeotropic distillation process to acetonitrile production
835
stream from the extractive distillation column is sent to a distillation column which operates under vacuum pressure to prevent the entrainer degradation because of high temperatures. The main parameters for the simulation of the recovery column are reported in Table 1.
Figure 1. Influence of the solvent feed temperature on the composition of distillate at different reflux ratios.
Figure 2. Influence of the solvent feed temperature on the composition of distillate and heat duties. R=1.
Table 1. Extractive column and recovery column design parameters PARAMETERS Number of stages Molar Reflux ratio Solvent to feed molar ratio Feed Stage Bottoms flowrate (kmol/h) Solvent feed stage Inlet solvent temperature °C Condenser pressure (atm) Pressure drop on each stage (atm)
EXTRACTIVE 24 1 1.2 18 154.18 4 124 1 0.01
RECOVERY 12 0.5 NA 7 120 NA NA 0.1 0.001
3. DYNAMIC STATE The main units of the proposed process are the distillation columns, then they are those that require special attention for the steady-state condition. Variables on which it is recommended establishing control in a distillation column are levels of accumulators and condenser and reboiler pressures, temperature and composition. To choose the best control structure, it must be taken into account: volatilities, purity products, reflux ratio, energy cost, column size, feed composition, etc. There are a some secondary variables, which stabilize the process, avoid deviations, such as pressure on
836
A. Ruiz et al.
key units and flows. The flow diagram and the controllers parameters are shown in Figure 3 and Table 2, respectively. 3.1. System disturbance In order to investigate the sensitivity of the different control loops proposed several disturbances were implemented changing input variables 20% above and below the steady state. Taking into account the composition of the azeotropic mixture, it is clear that the value of the acetonitrile concentration in this stream may decrease, and keeping in mind that the steady-state condition establishes that the molar concentration of acetonitrile is 67,45% the following changes were considered: 65% and then 50% of the acetonitrile mole fraction.
Figure 3. Control loops for the entire process. Initially, a basic regulatory control scheme is determined through the various control loops as follows: (1) Reflux drum levels for both columns are controlled by manipulating the distillate valves located in the streams D1A and D2AA. (2) The fresh feed to the extractive column is flow control in order to guarantee the constant flowrate. (3) The top pressures of both columns are controlled by manipulating the corresponding condenser duties. (4) The base level for extractive column is controlled by manipulating the bottoms flow rate.
Control of an Azeotropic distillation process to acetonitrile production
837
(5) The base level for recovery column is controlled by manipulating the bottoms flow rate. (6) The entrainer feed temperature is controlled at 65°C by manipulating the split fraction. (7) Reflux ratios are held constant in each column at their nominal values during disturbances. (8) The reboiler duties of both columns are used to control the temperature in a particular stage of each column. Table 2. Controllers parameters: FS y FM: solvent and feed flow; PCT1 y PCT2: column condenser 1 y 2 pressure; T25, T5, TA: Temperature stage 25; 5; streem FEED; %A: % Valve opening; Qreb y Qc: Reboiler and condenser heat duty; Inv: Inverse; Direc.: Direct. ȗi:Integral time; ȗo: dead time. FLOWRATE PRESSURE (kmol/h) (bar) TYPE FC-1 Kc (%/%) 0.5 ȉi (min) 0.2 Action inv SP 120 PV FS OP %A tș (min) -
ACUMULATORS LEVEL (m)
TEMPERATURE (°C)
FC-2 PC-1 PC-2 LC-1 LC-2 LC-3 LC-4 TC-1 TC-2 0.5 20 20 2 2 2 2 2 8.6 0.2 12 12 2 2 2 2 8.5 11.5 inv direc direc direc direc direc direc inv. inv. 100 1.01 0.1 1.03 1.64 0.06 1.45 120 104.58 FM PCT1 PCT2 top and bottom level TASolv T15 %A Qc Qc %A Qc Qc %A %A Solv Qreb -
TC-3 2,05 18.0 inv. 82.53 T6 Qreb -
Some of the dynamic results presented in Figures 4 and 5 for the solvent flow controller and overhead composition of the extractive distillation column.
Figure 4. FC1 Behavior.
838
A. Ruiz et al.
Figure 5. Variation of mass fraction of acetonitrile (Wac), for compositional changes in distillate stream in the first column.
4. Conclusions At this point, the proposed process seems to be a good alternative to obtain high purity acetonitrile, is recommend study with a little more detail behavior in a dynamic state of the process, as well as the development of another control loop for temperature of the mixture azeotropic feed to the column. The most appropriate control loops for analysis to a process, as described above, must describe specific disturbance variables operation such as flow or concentration. As well as recommendations to continue the work, is the detailed study of the characteristics of this kind of control, this in order to analyze factors control such as proportional gains or integral times, values that the work done previously defined as a constant operation. During all the disturbance process, is necessary to take into account the set point, been careful especially on set the correct value to optimize the analysis. The parameters for these control loops must be taken into consideration, especially the design specifications of the equipment involve in the operation.
References [1] SANJAY P G, Purification and recovery of acetonitrile from waste solvent acetonitrile, Patent No. US 6,483,890B1, Jan 18,2005 [2] REPKE J, KLEIN A, BOGLE D, WOZNY G, Pressure swing Batch distillation for homogeneus azeotropic separation, Chemical Engineering Research and Design, 2007. [3] JENS-UWE R, KLEIN A, Homogeneous azeotropic pressure swing distillation: continuous and batch process, El servier B.V, 2005 [4] ACOSTA J, ARCE A, RODIL E, SOTO A, A thermodynamic study on binary and ternary mixtures of acetonitrile, wáter and butyl acetate, Department of Chemical Engineering, University Of Santiago de Compostela, España, June 2002.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Optimal Temperature Tracking of a Solid State Fermentation Reactor C. González-Figueredoa, O. R. Ayalab, S. Aguilarb , O. Arocheb, A. Loukianovb, A. Sánchezb . a
Universidad Autónoma de Guadalajara, Departamento de Química, Av. Patria 1201, Col. Lomas del Valle, Zapopan 45129, México. b Centro de Investigación y Estudios Avanzados, Unidad de Ingeniería Avanzada, Av. Científica, Col. El Bajío, Zapopan 45010, México.
Abstract The Agaricus bisporus “Phase II” composting is a complex Solid State Fermentation (SSF) process of great importance in mushroom production. Because production costs are highly sensitive to this SSF, it is desirable to establish suitable control strategies in order to obtain better mushroom yields in shorter operating times. This work proposes a dynamic model composed of 11 ODE’s that describes the behavior of relevant system states. Kinetic and operation parameters were adjusted using parametric sensitivity studies. Experimental temperature trajectories of a pilot plant reactor were used to validate the model. Optimal temperature trajectories were calculated using the steepest descent method with a fixed time formulation, aiming at maximizing the thermophilic fungi and actinobacteria production. The paper concludes presenting the results obtained from a simple input-output control scheme implementation. Keywords: solid state fermentation, optimal temperature control, A. bisporus.
1. Introduction Solid State Fermentations (SSF) are complex processes with growing importance in food, pharma and agro industries for the preparation of added value products [1-3]. The A. bisporus “Phase II” composting is a good example of such processes, in which a suitable substrate is produced for the cultivation of this mushroom. Production yields are highly dependent on this SSF. Therefore it is desirable to establish control strategies capable of obtaining substrates with reproducible “quality” in shorter operating times. “Phase II” is carried out in large fed-batch reactors controlling temperature profiles by manipulating fresh-air inflow and/or recirculation ratio guaranteeing aerobic conditions. These temperature profiles are established, in most cases, empirically with the objective of promoting microbial populations with specific substrate degradation tasks that may favor the growth of the A. bisporus. This is the case of thermophilic fungi, mostly Scytalidiun thermophilae, which degrade cellulose, and some actinobacteria acting over lignocellulosic material. Competitors or inhibitors that may diminish the mushroom production must be eliminated. These broad microorganism families along with their respective roles on the mushroom culture have been identified on industrial composts, as well as on experimental set-ups [4]. Besides the specific microorganisms growth, other complex phenomena take place in a “Phase II” reactor: metabolic heat generation, conductive and radiation heat transfer, heat and mass transfer through evaporation [5], and oxygen, ammonia and carbon dioxide transfer from the reactor bed to the cooling air flow [3].
C. González-Figueredo et al.
840
Temperature profiles in “Phase II” commonly reach high enough values (pasteurization stage) to kill all mesophilic microorganisms, leaving only thermophilic fungi and actinobacteria as principal components of the microflora. However, these profiles may not guarantee optimal yields of the A bisporus. In order to determine optimal temperature trajectories and to design suitable control schemes, a dynamic model consisting of 11 ODE’s is proposed, based on previous studies [6, 7]. A sensitivity analysis was performed to determine the appropriate kinetic and operation parameter values that show consistency with the values reported on literature. Optimal temperature trajectories were calculated using the steepest descent method with a fixed-time formulation, while the cooling air inlet was regulated with a simple input-output control scheme. Finally, the calculated optimal temperature trajectory is compared against the experimental, industrial and controlled trajectories.
2. Experimental set-up and dynamic model In previous studies, a 9 ODE’s dynamic model, based on mass and heat balances, capable of describing the principal states of the SSF fermentation reactor was presented [7]. In this approach, fresh air was continuously and evenly fed to the reactor, considering quasi-stationary oxygen concentration. Because this work seeks to obtain optimal temperature trajectories by manipulating the cooling air inlet, oxygen concentrations cannot be considered constant. Carbon dioxide concentrations on the output gases flow are also important measurements that may offer information about the microorganisms’ growth, therefore, making CO2 another important state variable to consider in the model. Then, the 11-ODE’s model shown on Eqn. 1 is a modified version that includes oxygen and carbon dioxide concentrations as variable states. The corresponding state variables are described in Table 1, while vector functions f(x) and g(x) are presented in Table 2. The air inflow is employed as the control variable u. ݔሶ ൌ ݂ ሺݔሻ ݃ሺݔሻ ή ݑ
(1) Table 1. State Variables
State Variable
ݔଵ
ݔଵ (m) ݔଶ (kg m-3 ) ݔଷ (kg m-3 ) ݔସ (kg m-3 ) ݔହ (kg m-3 ) ( ݔm3 m-3 ) ( ݔm3 m-3 ) ( ଼ݔkg m-3 ) ݔଽ (°C) (kg H2 0 kg-DA-1 ) ݔଵଵ (kg m-3)
Description Compost height Actynobacteria concentration Thermophilics concentration Inhibitors concentration Substrate concentration Volumetric solid mass content Volumetric moisture content Oxygen concentration Temperature Air humidity Carbon dioxide concentration
Initial Value 0.45 0.1 0.1 0.1 100.39 0.1175 0.3958 0.3 28.7 0.016 3.8 · 10-4
The model contains a total of 45 parameters classified as operation and kinetic. Initially, a sensitivity analysis was carried out to determine which kinetic parameters have a larger contribution on the growth and death rates of the microflora. Then, they were adjusted to reproduce the optimal growth temperature range and maximum death temperatures, as described in a previous work [7]. Figure 1 shows the growth and death rates for the three microbial families and their respective optimal temperature thresholds. Operation parameters, such as the reactor dimensions, compost and cooling
Optimal Temperature Tracking of a Solid Substrate Fermentation Reactor
841
air properties, and others such as heat transfer coefficients are based on our experimental rig or previous experimental data. The parameter values are reported on Table 3. Since no reliable methods are available in industry for direct online biomass concentration measuring or substrate consumption rates, reactor temperature profiles were used for model validation. First, open loop simulations were computed for a constant value of the cooling air inflow of u=0.7m3/hr, obtained from the scaling of the operation conditions of industrial reactors. The open loop temperature profile approximates adequately the experimental profile as shown in Fig. 2b, having deviations no bigger than 2 ºC during the first 150 hours (duration of the industrial process). Table 2. Functions: f.- control independent, g.- control dependent i 1
fi (x) Ȟሺݔሻݔ
gi (x) Ͳ
Ri െ
2
Ȟሺݔሻݔ ܴ
Ͳ
ሾߤ െ ߣ ሿݔ
3
Ȟሺݔሻݔ ܴ
Ͳ
ሾߤ െ ߣ ሿݔ
Auxiliar ܩ ሺߙ ݔ ߚݔሻݔଵ ߥ ଼ݔ ݔହ ߤ ሺݔሻ ൌ ߤ௫ǡ ሺݔሻ ܭௌ ݔହ ܭை ଼ݔ īሺݔሻ ൌ െ
ିభǡ
ܽଵǡ ݁ ௫వାଶଷ
ߤ௫ǡ ሺݔሻ ൌ
ͳെ 4
Ȟሺݔሻݔ ܴ
Ͳ
ሾߤ െ ߣ ሿݔ
5
Ȟሺݔሻݔ ܴ
Ͳ
ሾߤ െ ߣ ሿݔ
6
ͳ Ȟሺݔሻݔ ܴ ߙ
Ͳ
െ
7
ͳ Ȟሺݔሻݔ ܴ ߚ
כሻ ߳ȍሺݔሻሺݔଵ െ ݔଵ
8
ܴ
ȍሺݔሻൣ଼ݔǡ െ ଼ݔ൧
9
ȣሺݔሻൣܴ െ ߠ൫ݔ െ ݔǡ ൯൧
ȳሺݔሻሾȌሺݔሻ െ Ȍ כሺݔሻሿ
10
Ͳ
ȳሺݔሻ൫ݔǡ ݔ כെ ʹݔ ൯
ହ
ୀଶ
ସ
ߜߤ ሺݔሻݔ ୀଶ ସ
߮ߤ ሺݔሻݔ ୀଶ ସ
ߞ ߤ ሺݔሻݔ
ିǡ
ߣሺݔሻ ൌ ܽௗǡ ݁ ௫వାଶଷ ͳ Ĭሺݔሻ ൌ ݔܣ ݔܤ ͳ ȍሺݔሻ ൌ ݔܦଵ ĭሺݔሻ ൌ ߫Ĭሺݔሻȍሺݔሻ Ȳሺݔሻ ൌ ሺͳ െ ܵݔଵ ሻݔଽ ݔܭଵ כሻݔ כ Ȳ כሺݔሻ ൌ ሺͳ െ ܵݔଵ ଽ ݔܭଵ
ୀଶ
െ
భǡ ାమǡ ܽଵǡ ି ݁ ௫వାଶଷ ܽଶǡ
כ ݔଵ ൌ
ܲܥ௪ ܲ െ ܲ௪
ଵହǤଶ଼ ଼Ǥଵହି ௫వାଶଷହ
11
ܴ
ȍሺݔሻൣݔଵଵǡ െ ݔଵଵ൧
ସ
ቀȯሺݔሻ ߷ߤ ሺݔሻቁ ݔ
ܲ௪ ൌ ͳͲ
ȯሺݔሻ ൌ
ݔହ݉ ைమ ܭைమ ݔହ
ୀଶ
3. Optimal temperature profile tracking Optimal temperature trajectories were calculated using the steepest descent method with a fixed-time formulation [8], maximizing thermophilic and actinobacteria concentrations whilst minimizing the control effort. The objective function is given by Eqn. 2, using the cooling air input flow as the control variable u. After an iterative tuning procedure, an optimal temperature profile was found for optimization weights values of H2=0.48, H3 =0.29 and R=0.11, and is compared in Fig. 2b with other temperature profiles. The thermophilic and actynomicete concentrations are bigger than
842
C. González-Figueredo et al.
their open loop counterparts, as shown in Fig. 2a, having a concentration ratio of 1.5 of thermophilics with respect of acyinomicetes, as reported in [9]. ௧ୀ௧
Jൌ ௧ ୀ ሺെܪଶ ݔଶ௫ െ ܪଷ ݔଷଶ ܴݑଶ ሻ݀ݐ
(2)
Figure 1. Microbial growth and death rates.
The tracking of the optimal temperature trajectory was carried out with an input-ouput control scheme (Eqns. 3-4), given that the plant has a relative degree of 1. For gain k1=10.5 and bias b=0.7, this control scheme is capable of following the optimal temperature trajectory T* [10], as shown also on Figure 2. ݑൌ ሺെ݂ଽ ሺݔሻ െ ݇ଵ ݁ ܶ כሻΤ݃ଽሺݔሻ ܾ
(3)
݁ ൌ ݔଽ െ ܶ כ
(4)
4. Conclusions The SSF model presented offers good approximations of the experimental temperature profiles observed on a pilot-plant reactor, and can also be used to determine optimal temperature profiles that promote the growth of thermophilic fungi and actinobacteria essential for the A. bisporus cultivation process. A simple input-output control scheme is sufficient to bring the system temperatures to the optimal temperature profiles. Table 3. Kinetic and Operation Parameters Actino. -1
a1 (h ) -1
a2 (h ) b1 (K) b2 (K) ad (h-1 ) bd (K) Ks (kg m-3 ) Ko (kg m-3 ) ȗ (KJ kg-1 )
3.851E+9 3.551E-35 7923 2.555E+4 1.032E+18 1.57E+4 86 0.042 -1.044E4
Therm. 5.53+4 1.32E-36 4041 2.641E+4 9.719E+19 1.68E+4 57 0.045 2.591E4
Inhib. 3.66E+6 1.445E-36 5057 2.565E+4 3.648E+18 1.50E+4 75 0.063 8.944E3
Operation parameters -3 )
2120
-3 )
1000
Į (Kg m ȕ (Kg m
-3)
Ȗ(Kg m į ߳ ߮ Ș ș(KJ m-3 h-1K-1) V (kg m-1 h-1 )
1.0556 0.9580 1.055E-3 -6.1911 0.18 6.10 2.54E 14
A ( Kj kg-comp m-3 °c-1 kg-sol-1 ) B (Kj kg-comp m-3 °c-1 kg-sol-1 )
1419
C D ( m2 ) G ( m h-2) K ( KJ kg-1) O (kg m-3) P (mmHg) S (KJ Kg-1 °c-1 )
0.6207 0.48 1.27E+08 2454.3 0.3 628.33 1.93
3190
Optimal Temperature Tracking of a Solid Substrate Fermentation Reactor
843
Figure 2. SSF microbial concentration and temperature profiles.
References [1] [2] [3] [4] [5] [6]
[7] [8] [9] [10]
G. Viccini D.A. Mitchell, and N. Krieger, “A Model for Converting Solid State Fermentation Growth Profiles Between Absolute and Relative Measurement Bases,” Food Technology and Biotehchnology, vol. 41, 2003, pp.191-201. D.A. Mitchell and O.F. Von Meien, “Mathematical modeling as a tool to investigate the design and operation of the Zymotis packed-bed bioreactor for solid-state fermentation,” Biotechnol Bioeng. Vol. 68, 2000, pp. 127-135. H. Seki, “A New deterministic model for forced-aeration compo sting processes with batch operation,” Transactions of the ASAE,vol. 45, 2002, pp. 1239-1250. J. Kaiser, “Modelling composting as a microbial ecosystem: a simulation approach,” Ecological modelling, vol. 91, 1996, p.25-37. R. Barrena, C. Canovas, and A. Sánchez, “Prediction of temperature and thermal inertia Effect in the maturation stage and stockpiling of a large composting mass,” Waste Management, vol. 26, 2006, pp. 953-959. A. Sánchez, C. Gonzáles-Figueredo, J. Gurubel, L.M De La Torre, and M. Labeaga “optimal Temperature Trajectories in Solid State Fermentation Reactors for Edible Mushroom Growing,” 20th European symposium on computer Aided process Engineering – ESCAPE20, S. Pierucci and G. BUZZI Ferraris, eds., Naples, Italy: ELSEVIER B. V.,2010. C. Gonzales-Figueredo, L.M. De La Torre, and A. Sánchez, “Dynamic Modelling and Experimental Validation of a Solid State Fermentation Reactor,” Computer Applications In Biotechnology - CAB 2010, Leuven, Belgium: IFAC-ELSEVIER, 2010. D.E. Kirk, Optimal Control Theory, Dover, 2004. W.M. Wiegant, “Growth Characteristics of the Thermophilic Fungus Scytalidium Thermophilum in relation to production of mushroom compost.” Applied and Environmental Microbiology, vol. 58, 1992, pp. 1301-1307. O. Aroche, “Optimal Temperature Control of a SSF Reactor (in Spanish)”. M.Sc, Thesis. Unidad de Ingenieria Avanzada, Centro de Investigación y Estudios Avanzados, 2010.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Receding Nonlinear Kalman (RNK) Filter for Nonlinear Constrained State Estimation Raghunathan Rengaswamy,a Shankar Narasimhan,b Vidyashankar Kuppuraj,a a
Department of Chemical Engineering, Texas Tech University, 6th street and canton, Lubbock , TX 79409, USA b Department of Chemical Engineering, Indian Institute of Technology Madras, Chennai 600036, India
Abstract State estimation is an important problem in process operations. For linear dynamical systems, Kalman Filter (KF) results in optimal estimates. Chemical engineering problems are characterized by nonlinear models and constraints on the states. Nonlinearities in these models are handled effectively by the Extended Kalman Filter (EKF), whereas constraints pose more serious problems. Several constrained estimation problems where the EKF approach fails have been reported in the literature. To address this issue, receding horizon approaches such as the Moving Horizon Estimation (MHE) have been proposed. The MHE approach has been shown to provide the most reliable estimates in several example problems; albeit at a high computational price. Unlike the KF, the MHE formulation does not use an explicit predictor-corrector approach. In this paper, we study the following questions in nonlinear constrained state estimation: (i) can the EKF be extended to include a receding horizon in a simple intuitive fashion? (ii) are there any performance gains over an EKF due to a receding horizon? and, (iii) are there any computational gains over the standard MHE through such an extension? A Receding Nonlinear Kalman (RNK) Filter formulation is proposed to answer these questions. The RNK formulation follows a predictor-corrector approach and uses linearization of the state space model for covariance calculation much like the EKF approach. We demonstrate through examples that inclusion of a receding horizon improves performance over the standard EKF approach. We also discuss the computational properties of RNK in comparison with MHE. Keywords: State estimation; Constrained nonlinear estimation; Nonlinear optimization;
1. Introduction State estimation is an important problem in process operations. The Kalman Filter (KF) provides optimal estimates for linear systems. Most realistic chemical engineering systems are nonlinear. The Extended Kalman Filter (EKF) handles nonlinear process and measurement models by resorting to linearization for the propagation of error covariance matrix and Kalman gain computation. In addition to nonlinearity, constraints on state variables such as concentration pose serious problems. Ad-hoc modifications to the EKF to handle these constraints can lead to poor estimation results [1]. The nonlinearity and constraints on states can be effectively handled by posing the state estimation problem in an optimization framework. The Recursive Nonlinear Dynamic Data Reconciliation (RNDDR) [5] solves an optimization problem to yield estimates that obey constraints. The KF, EKF and RNDDR are recursive estimators. For nonlinear constrained problems, receding horizon approaches such as Moving Horizon Estimator
Receding Nonlinear Kalman (RNK) Filter for Nonlinear Constrained State Estimation
845
(MHE) [1,3] generate robust estimates, albeit at a high computational cost. Since MHE solves an optimization problem, nonlinear process and measurement models and constraints are directly handled. The performance of MHE also depends considerably on the approach that is used to approximate the arrival cost [1]. The arrival cost summarizes the uncertainty in the state estimate at the first time instant of the time window under consideration. The decision variables of the optimization problem are the state estimate at the first instant, and the process and measurement noise variables. The use of process noise terms in the decision variables requires the inclusion of the state propagation model in the constraints. The nonlinear state propagation model inside an optimizer loop results in a tremendous computational burden since in most cases the integration of the process model has to be performed numerically. Unlike MHE, our proposed Receding Nonlinear Kalman (RNK) Filter [2] follows a predictor-corrector formulation and the decision variables of the optimization formulation in the correction step are the state variables in the horizon. In the prediction step, an open loop prediction of the state variables is performed and the covariance matrix of errors for the entire horizon is calculated through linearization of the nonlinear process model. In the correction step, an optimization problem is solved to update the predicted state estimates using measurements. As a result, the proposed approach does not require the nonlinear state propagation model within the optimization step. This results in a computationally efficient estimation approach. The proposed RNK estimator is described in section 2 and the efficacy of the proposed estimator is demonstrated through a challenging problem in Section 3. Conclusions are provided in section 4.
2. Receding Nonlinear Kalman Filter (RNK Filter) 2.1.
Preliminaries The nonlinear state space and measurement models where wk and vk 1 are the process and measurement noises respectively are ( k 1) ' t
x k 1
xk
³
f ( x ) dt w k
F ( xk ) wk
k 't
y k 1
g ( x k 1 ) v k 1
x L B d x k d xU B k
(1)
w k | N (0, Q k ) v k 1 | N (0, R k 1 )
2.2. Receding Nonlinear Kalman (RNK) Filter The proposed RNK Filter follows the predictor-corrector framework. The filtered state estimate at k+mth where m is the horizon size is obtained by predicting the state estimates for instants from k+1 to k+mth instant. An optimization problem is solved in the correction step to correct for the predicted instants. Propagation step: Let xk |k , Pk |k be the filtered state and error covariance matrix at kth instant respectively. Starting from kth instant, an open loop prediction of state estimates from k+1 to k+m are obtained by the state space model given in Equation 1. The predicted state estimates are denoted as xˆk 1|k , xˆk 2|k , , xˆk m|k . The augmented state vector at k+mth instant is
Raghunathan Rengaswamy et al.
846
X k m augmented as Xˆ k m|k represented as
> xk 1 , xk 2 ,, xk m @ .
The predicted state estimates are
ª¬ xˆk 1|k , xˆk 2|k , , xˆk m|k º¼ . The block error covariance aug matrix of the augmented state vector is Pk m|k at k+mth instant is given by Ak
§ wf ( x ) · expm ¨ | xk ' t ¸ © wx ¹
Pkaug m |k ( i , j ) Pkaug m |k ( i , i ) Pkaug m |k ( i , j )
E ª¬ ( x i xˆ i | k )( x j xˆ j | k ) T º¼ T Ai 1 Pkaug m | k ( i 1, i 1) Ai 1 Q
(2)
T Pkaug m | k ( i , j 1) A j 1
Correction step: The predicted state estimates are corrected using measurements from k+1 to k+m instants. Let the augmented vectors be T
]
T ª x xˆ T x xˆ º 1 1| | k k k k m k m k ¬« ¼»
Q
ª yk 1 g ( xk 1 ) T yk m g ( xk m ) T º ¬ ¼
T
The augmented covariance matrix for measurement noise is
R aug
ª R pu p «0 « pu p « « «¬ 0 pu p
0 p u p 0 pu p º R pu p 0 pu p »» » » 0 pu p R pu p »¼
The optimization problem for state correction is 1
1
T aug min ] T Pkaug Q m|k ] Q R
xk i ,i
1:m
s.t. xLB d xk i d xUB , i 1: m If [ ,X are Gaussian, the optimal estimates obtained are MLE. The optimal solution results in filtered estimate for k+mth instant and smoothed estimates (same as RTS smoother equations) for the other instants. Covariance correction step: The covariance correction is similar to EKF and involves linearization of the measurement model g(x). The linearized measurement matrix at k+jth instant is denoted as Ck j|k m and the augmented linearized measurement matrix is denoted by The covariance correction is given by
K aug Pkaug m|k m
T
T
aug aug Pkaug C aug Pkaug R aug m|k C m|k C
I K
aug
C aug Pkaug m|k
C aug .
1
For unconstrained linear state and measurement model, the RNK solution is the same as the KF solution for any choice of window size.
Receding Nonlinear Kalman (RNK) Filter for Nonlinear Constrained State Estimation
847
Linear measurement model: When the measurement model is linear, it can be shown that the RNK filter correction step is a Quadratic Programming (QP) problem.
3. Example We demonstrate the performance of the proposed estimator on a nonlinear polystyrene reactor example that is representative of systems that are commonly encountered in chemical engineering [4]. In the original problem there are eight states
X
ª¬Ci , Cs , Cm , O0 , O1 , O2 , T , T j º¼ and two measurements >T , O2 / O1 @ . The state
space model is severely nonlinear and one of the measurements is also nonlinear. The detailed model equations and parameter values can be found in [4]. The nominal values of the state variables are different by orders of magnitude. This leads to scaling issues and makes this problem even more challenging. While we performed several simulation studies, in this section, we present one representative simulation example that demonstrates the robustness of the proposed estimator. In this example, the system is 5 perturbed from steady-state through a pulse disturbance of magnitude 5 u 10 from 300 to 450 seconds in the input variables Qi
and Qc . The states were scaled using the
difference between the upper and lower bounds. The output variables were scaled using their nominal values. The other estimator parameters are chosen as
't 2min, P0 1e4 I (8,8), Q 1e4 I (8,8), R 1e6 I (2, 2) where I(m;m) represents an identity matrix of dimension m u m . In the RNK filter the disturbance variables are
augmented to the state vector. This leads to a ten state variables problem. The aim of the estimator is to reconstruct the ten states using two measurements. The estimates for all the states and disturbances are shown in this Figure 1. It can be seen that all the states are estimated quite well. When the window size was increased from one to three it is seen that the estimation results improve. The simulations were also run for a window size of 10 (computational time of 43.2s/sampling instant), but the performance gains over window size three were marginal. The MHE with integration within the optimizer would be computationally much worse for these types of nonlinear problems. Further, the standard recommendation of window sizes twice the order of the system would require that a problem with window size 20 be solved. This will become even more computationally intractable.
4. Conclusions A new receding horizon RNK filter for constrained nonlinear estimation problems is proposed. The predictor-corrector formulation decouples the numerical integration from optimization loop thereby reducing computational burden. For linear measurement models (with the state propagation model being nonlinear) the proposed estimator results in a QP problem, whereas MHE will be a NLP problem. The efficacy of the proposed approach was demonstrated using a challenging nonlinear chemical engineering example. References: 1. Haseltine, E.L., and Rawlings, J.B. (2005), Critical Evaluation of Extended Kalman Filtering and Moving Horizon Estimation, Industrial and Engineering Chemistry Research, 44, 2451-2460.
848
Raghunathan Rengaswamy et al.
2. Rengaswamy, R., Narasimhan, S., Kuppuraj, V., (2010), Receding-Horizon Nonlinear Kalman Filter for State Estimation, under review, 2011 3. Robertson, D.G., Lee, J.H., and Rawlings, J.B. (1996), A Moving horizon based approach for least squares estimation, AIChE Journal, 42, 2209-2224. 4. Tatiraju, S., Soroush, M., and Ogunnaike, B.A. (1999), Multirate Nonlinear State Estimation with Application to Polymerization Reactor, AIChE Journal, 45, 769-780. 5. Vachhani, P., Rengaswamy, R., Gangawal, V., and Narasimhan, S. (2005), Recursive estimation in constrained nonlinear dynamic systems, AIChE Journal, 51, 946-959.
Figure 1: Results for polystryrene reactor
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Free Radicals Copolymerization Optimization, System: Acrylonitrile-Vinyl Acetate in CSTR S.V. Vallecillo-Gómez, J.C. Tapia-Picazo, A. Bonilla-Petriciolet, G.G. DeAlba-Pérez-de-Gracia. Department of Chemical and Biochemical Engineering, Instituto Tecnológico de Aguascalientes. Adolfo López Mateos 1801, Bona Gens, Aguascalientes, Ags., 20256, México
Abstract In this work, we study the free radicals solution copolymerization, system acrylonitrile (AN) – vinyl acetate (VA), with the objective to find the parameters of characteristic correlations that are used to determine the reaction autoacceleration phenomenon or gel effect. Two gel effect correlations were evaluated for each monomer. Parameters were found using the stochastic global optimization Simulated Annealing method and experimental data in batch reactor. Gel effect correlations with greater adjustment were used to carry out a sensitivity analysis through continuous stirred tank reactor model. A competence between monomers conversion, average molecular weight of the polymer and the polidispersity was observed, which confirms the possibility of carry out a multiobjetive optimization using genetic algorithms as a robust tool. Keywords: copolymerization, Simulated Annealing, gel effect.
1. Introduction Polymer industry is of great importance due to the synthetic polymers applications as daily use materials, construction, special applications and is continuously in the search of scientific and industrial progresses [1]. Polymerization processes are quite complex in nature, the operating variables of a polymerization reactor system influence molecular parameters of the product and, often, conflicting ways [2]. Thus the study of the phenomena that occurs in polymerization reactors is necessary where the mathematical modeling and simulation are useful for this purpose. The fibers are one of the products based on synthetic polymers, whose technology has grown in four generic kinds: nylon, polyester, polyhidrocarbon and polyacrylonitrile [3]. In particular, the polyacrylonitrile is one of the best carbon fiber (CF) precursor, attractive by its low cost, with applications in high technology areas such as aerospace and defense industry [3,4]. However, acrylonitrile homopolymers produce low quality CF and the necessity to incorporate an appropriate acid comonomer during the polymerization arises [3,4]. For copolymer preparation, one method is the free radicals addition polymerization, where continuous stirred tank reactor (CSTR) and tubular reactor are commonly used as industrial equipment [3]. On the other hand, mathematical modeling of copolymerization systems is more complex in comparison with a homo-polymerization, but proper techniques to calculate polymer properties are available in the literature. However, it is common that its application to particular systems presents difficulty by the ignorance of physical and correlation parameters.
850
S. V. Vallecillo-Gomez et al.
In this work, we study the free radicals solution copolymerization, system acrylonitrile (AN) – vinyl acetate (VA), with the objective to find the parameters of characteristic correlations that are used to determine the reaction autoacceleration phenomenon or gel effect, it occurs to short concentrations of solvent. This task was realized employing a stochastic global optimization method and experimental data of a batch laboratory reactor for a batch model based on Keramopoulos & Kiparissides [5]. The gel effect parameters found were used to carry out a sensitivity analysis in CSTR reactor through a model based on Hamer’s work [6]. The obtained results confirm the existence of a competence among optimization objectives, showing the possibility of carrying out a multiobjective optimization.
2. Methodology 2.1. System The studied chemical process considered as reactive species (monomers) AN (M1) and VA (M1), dimethylformamide is the solvent (S) chosen because free radicals copolymerization is carried out in that phase. Initiator is ammonium persulfate (I). Kinetic mechanism for AN-VA copolymerization system includes initiation, propagation, termination by combination and transfer to solvent reactions [3,6-7]. 2.2. Gel effect In a polymerization the phenomenon known as autoacceleration or gel effect arises as a consequence of the viscosity increament in the reaction medium caused by formation of polymer molecules and it is common to short concentrations of solvent. The result is a decrement in the rate constant termination (kt) and a conversion increase [1]. Thus, it is important to consider gel effect on modeling of these systems, since there are no reported values of gel effect parameters for AN monomer. Gel effect is represented as gt for (where E is activation energy, T is temperature and R is gas constant)[6]: ktii = ktiiº exp(-Etii/RT)gtii ;
(1)
i = 1,2
In this study three gel effect correlations established in literature were tested (Table 1), these were chosen because of its capability to represent satisfactorily the experimental data and its simplicity [6]. Ross & Laurence (R&L) and Friis & Hamielec (F&H) models [8-9] were used and tested to determine gel effect correlation parameters for AN monomer. The previous evaluation for VA monomer was not necessary because already are available F&H and Teymour parameters [6,10-11], but the one of greater experimental data adjustment in copolymerization system was determined. Experimental data for AN and AN-VA polymerization systems were obtained. The laboratory batch reactor system for free radicals addition polymerization consisted of a balloon three-neck flask submerged in a temperature-controlled bath, one neck is for the feeding and other for the propeller stirrer. Table 1. Correlations for gel effect calculation
Ross & Laurence [8]
Friis & Hamielec [9]
Teymour [10]
gtii = exp(c1Vf + c2Tf + c3);if Vf > Vfg gtii = exp(c4Vf + c5); if Vf Vfg Vf = Vfpĭp + Vfm1ĭm1 + Vfm2ĭm2 + + Vfsĭs Vf = A1 – A2 (Tf – 273.2)
gtii = exp [-(b1 + b2Tf)xT] gtii = exp(a1xT + a2xT2 + xT = xi (1 - fs); i = 1,2 + a3xT3) fs = S / (M1f + M2f + S)
Free Radicals Copolymerization Optimization, System: Acrylonitrile-Vinyl Acetate in 851 CSTR After stopping reaction, the reaction medium was filtered and the polymer obtained was dried at 80 °C to determine conversion. A nonstationary state model based on Keramopoulos & Kiparissides [5] for batch reactor was used and its solution was made by a 4th order Runge-Kutta method. The objective function used to determine the parameters of gel effect correlations is as follows: § experimental weight of polymer (t ) − calculated weight of polymer (t ) · ¸¸ O.F. = ¦ ¨¨ experimental weight of polymer (t ) © ¹
2
This function was minimized employing the stochastic global optimization Simulated Annealing method [12], using the experimental data. 2.3. Sensitivity Analysis Gel effect parameters found with greater experimental data adjustment were used on a mathematical model based on Hamer [6] developed for copolymerization system in a CSTR type reactor. The mass and energy transient state balances are given by Eqs. (2)-(5) (where V is reactor volume; t, time; q, volumetric flow rate; kk, reaction rate constant k = pij, tij, d; i,j = 1,2; P, total concentration of polymer radicals ending in M1; Q, total concentration of polymer radicals ending in M2; Cp, heat capacity; -ǻHpij, enthalpy of polymerization i,j = 1,2; h, heat transfer coefficient; Ac, heat transfer area; ȡ, solution density; Tc, coolant temperature). The solution technique for the behavior reactor was taken with reference to the same work, using the Quasi Steady State Approximation (QSSA) and the Long Chain Hypothesis (LCH) [13-14] to simplify the model and obtain an analytical solution with expressions like Eq. (6) and Eq. (7). Also the moments technique, that represents the molecular weight distribution as a normal probability function [14], was used to calculate polymer properties as average molecular weights and polydispersity. VdI/dt = (If - I)q – kd IV
(2)
VdM1/dt = (M1f - M1)q – M1(kp11P + kp21Q)V
(3)
VdM2/dt = (M2f - M2)q – M2(kp12P + kp22Q)V
(4)
VdT/dt = (Tf – T)q + (V/ȡCp)[( -ǻHp11kp11PM1) + (-ǻHp21kp21QM1) + (-ǻHp12kp12PM2) + + (-ǻHp22kp22QM2)] – (hAc/ȡCp)(T - Tc)
(5)
P = (kp21M1)Q/( kp12M2)
(6)
Q = {(2fi kd I) /[ kt11(kp21M1/ kp12M2)2 + 2kt12(kp21M1/ kp12M2) + kt22 ]}1/2
(7)
A sensitivity analysis for CSTR reactor was performed using the mentioned AN-VA copolymerization model. The input variables evaluated are temperature, monomers and initiator feed (T, M1f, M2f, If respectively), variable ranges are given in Table 2. Table 2. Ranges of input variables for sensitivity analysis Variable
Range
T, ºC M1f, mol M2f, mol If, mol
40 - 60 0.5 – 4.0 0.04 – 1.0 0.0027 – 0.054
852
S. V. Vallecillo-Gomez et al.
On the other hand, the output variables are polydispersity index (PDI), monomers molar conversion (x1, x2), weight average molecular weight (Mw), number average molecular weight (Mn) and comonomer fraction in the polymer (F2). The considered residence time was 12 hours according to an industrial case study, and using a calculation molar basis of M1f + M2f + Sf = 10.
3. Results and discussion The gel effect model that showed higher accuracy for AN monomer is Ross & Laurence correlation, as shown in Fig. 1. The average percentage errors for this model are 8.98 and 6.85 % (50 and 60 °C, respectively). The evaluation of the previous model on Friis & Hamielec and Teymour equations with regard to VA, shows that both correlate similarly at beginning of reaction but latter the first is better (Fig. 2). 0.8
R&L F&H
16
R&L-F&H R&L-Teymour
0.7
Total molar conversion
Polymer production, grams
18
14 12 10 8 6 4 2
0.6 0.5 0.4 0.3 0.2 0.1 0
0 0
10
20
30
40
50
60
70
80
Reaction time, minutes
Fig. 1 Gel effect models evaluation for AN homopolymerization. Experimental conditions: S/Mf = 1.5, If / Mf = 0.02, Vt = 50 mL, 50 °C 60 °C.
90
0
1.5
3
4.5
6
7.5
Reaction time, hours
Fig. 2 Correlation of gel effect models for AN-VA copolymerization. Experimental conditions: If / Mf = 0.02, T = 50 ºC, VAf = 10%, Sf /MTf = 2 Sf /MTf = 3.
Behavior tendencies were obtained with the CSTR model using the best gel effect correlations for free-radical AN-VA copolymerization, R&L for the AN monomer and F&H for the VA monomer (numerical values of the parameters are shown in Table 3). It was observed that temperature improves monomer conversion and the comonomer fraction, but an increment in operation temperature increases the polydispersity and reduces the average molecular weight, as shown for illustrative purposes in Fig. 3. With respect to the acrylonitrile feed, an increase in this variable causes a bigger PDI, which is undesirable because it leads to defects in polymer uniformity, while increasing the variable M2f produces only a negative effect on monomer conversion. Finally, the initiator feed behaves in the same way as the variable T, but using high temperature to increase conversion is better, because the reaction solution can be processed more easily owing to its lower viscosity at high temperatures. The proportionality effects of the analyzed variables are summarized in Table 4 (unfavorable behaviors were highlighted in bold in the original table). From the results in Table 4 we can conclude that, for typical AN-VA copolymerization objectives, it is necessary to maintain the reaction temperature, initiator feed and acrylonitrile feed concentration at the maximum possible level to increase conversion, seeking a balance between high values of Mw and low values of PDI.
Table 3. Gel effect correlation parameters

AN (R&L): c1 = 0.1186, c2 = 26.744, c3 = 0.1046, c4 = 0.0003, c5 = 9.69, A1 = 0.2738, A2 = 0.0021
VA (F&H): b1 = 32.2, b2 = -0.08
Table 4. Relationship between variables for AN-VA copolymerization, CSTR-type reactor (α: direct proportionality; 1/α: inverse proportionality)

Input variable    x1, x2    Mw, Mn    PDI    F2
T                 α         1/α       α      α
M1f               α         α         α      α
M2f               1/α       α         1/α    α
If                α         1/α       α      α
Fig. 3 Temperature behavior in AN-VA copolymerization system in CSTR.
4. Conclusions
A batch and CSTR reactor mathematical model that suitably represents the experimental data has been obtained by means of widely accepted models and techniques for free-radical solution acrylonitrile-vinyl acetate copolymerization, placing special emphasis on the gel effect phenomenon. The CSTR sensitivity analysis performed made it possible to verify the existence of competition between the input variables with respect to conversion and product quality properties. This highlights the importance and feasibility of carrying out a multi-objective optimization using genetic algorithms as a robust tool for this type of system [15], work which is already under way together with the validation of the CSTR model.
References
[1] R.J. Young and P.A. Lovell, Introduction to Polymers, Chapman & Hall, Cambridge, Great Britain, 1991.
[2] V. Bhaskar, S.K. Gupta and A.K. Ray, Comput. Chem. Eng., 25 (2001) 391.
[3] J.C. Masson (ed.), Acrylic Fiber Technology and Applications, Marcel Dekker, New York, 1995.
[4] R. Devasia, R. Nair and K.N. Ninan, Eur. Polym. J., 38 (2002) 2003.
[5] A. Keramopoulos and C. Kiparissides, Macromol., 35 (2002) 4155. [6] J.W. Hamer, T.A. Akramov and W.H. Ray, Chem. Eng. Sci., 36 (1981) 1897. [7] W.H. Ray, T.L. Douglas and E.W. Godsalve, Macromol., 4 (1971) 166. [8] R.T. Ross and R.L. Laurence, AIChE Symp. Ser., 72 (1976) 74. [9] N. Friis and A.E. Hamielec, ACS Symp. Ser., 24 (1976) 82. [10] F. Teymour, PhD Thesis, University of Wisconsin, Madison, 1989. [11] J.C. Pinto and W.H. Ray, Chem. Eng. Sci., 50 (1995) 715. [12] A. Corana, M. Marchesi, C. Martini and S. Ridella, ACM Trans. Math. Softw., 13 (1987) 262. [13] W.H. Ray, Can. J. Chem. Eng., 47 (1969) 503. [14] W.H. Ray, Macromol., C8 (1972) 1. [15] C.M. Silva, E.C. Biscaia Jr., Comput. Chem. Eng., 27 (2003) 1329.
21st European Symposium on Computer Aided Process Engineering – ESCAPE21
E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors)
© 2011 Elsevier B.V. All rights reserved.
Convex optimization for shape manipulation of multidimensional crystal particles
Naim Bajcinca a,b, Ricardo Perl a, Kai Sundmacher a,c
a Max-Planck Institute for Dynamics of Complex Technical Systems, Sandtorstr. 1, 39106 Magdeburg, Germany
b Technische Universität Berlin, Einsteinufer 17, 10587 Berlin, Germany
c Otto-von-Guericke Universität Magdeburg, Universitätsplatz 2, 39106 Magdeburg, Germany
Abstract A convex formulation of the optimal control problem for crystal shape manipulation, leading to globally optimal solutions and efficient algorithms, is proposed in this paper. These results build on the preceding work [1] of the authors, where several useful properties of the minimum-time trajectories in the state-space were derived. In particular, it has been shown that minimum-time switching trajectories consist of a number of subsequent growth and dissolution sections with different constant supersaturation levels. Such a shaping strategy exploits the unequal growth and dissolution rates in order to achieve crystal morphologies which do not result directly from a pure growth scenario. The main outcomes of the paper are strategies for constructing minimum-time trajectories with an arbitrary number and sequence of switching phases. Optimal trajectories involving only one growth and one dissolution phase are shown to be unique. In all other cases infinitely many optimal trajectories exist, all sharing the same, globally unique, minimum process time. The resulting shaping strategy is shown to be dual to the ones proposed in [1].
Keywords: Crystal shape manipulation, convex optimization, optimal control, batch processes, multidimensional crystal particles
1. Introduction & Motivation
Crystal shape manipulation is a critical engineering venture in numerous industries. It has been shown that many properties of dispersed-phase products are strongly linked to their shape, see [7]. For instance, the dissolution rate or catalytic activity may depend on the ratio of the face areas of a crystal particle. From the engineering point of view, manipulation of the crystal morphology is therefore essential, [2]. In contrast to traditional techniques that utilize chemical additives for blocking or promoting certain crystal faces at the cost of undesired chemical impurities (see [6]), here shape manipulation by means of temperature control only is considered. The ideas presented in this article are strongly based on the results of the work in [1], where diverse optimal control strategies for crystal shape manipulation have been proposed. Here we suggest a convenient reformulation of the minimum-time problem as a convex optimization problem, this being the result of a natural modification of the constraints in the optimization problem of [1]. Indeed, while two switching manifolds in the state-space were introduced there for switching between subsequent trajectory sections, here a switching region is instead arbitrarily specified to include all switching points in its interior. The content of the paper is of a rather theoretical nature; however, the problem formulation and the proposed strategies allow for diverse practical demands. The discussion focuses on shape manipulation of a single crystal particle only, but it has already been shown that the suggested ideas can, in principle, be easily extended to populations of crystal particles, too.
2. Problem formulation
2.1. Particle shape dynamics
The crystallization process involves a natural negative feedback structure (see Figure 1): any temperature change ΔT distorting the process equilibrium invokes a supersaturation σ of the opposite sign, which forces the crystal particles to grow (ΔT < 0, σ > 0) or dissolve (ΔT > 0, σ < 0) until the equilibrium conditions are restored (σ = 0). The three blocks of the simplified model setting (e.g., nucleation has been neglected) in Figure 1 are described by the following standard set of equations:

growth: dℓi/dt = kgi σ^gi =: Gi,  σ ≥ 0, kgi > 0, gi > 0, i = 1, 2, ..., n   (1a)
dissolution: dℓi/dt = kdi (−σ)^di =: Gi,  σ ≤ 0, kdi < 0, di > 0, i = 1, 2, ..., n   (1b)
supersaturation: σ := C/Csat − 1,  Csat = a0 + a1 T + a2 T²   (1c)
mass balance: C = C0 − ρC/(Vt M) · ΔVC,  VC = VC(ℓ1, ℓ2, ..., ℓn)   (1d)

Figure 1: Crystallization feedback loop (temperature T → super-/undersaturation σ → crystal growth/dissolution ℓ = [ℓ1, ..., ℓn]^T → mass balance C)

where ℓ = [ℓ1, ℓ2, ..., ℓn]^T collects the n lengths of the multidimensional crystal particle, C stands for the concentration, Csat for the saturation concentration, ΔVC = VC − VC,0 for the crystal volume change, VC,0 for the initial crystal volume, ρC for the crystal mass density, M is the solvent molar mass, and Vt the solvent volume. For notational simplicity we collect equations (1a)-(1b) in the vector form

dℓ/dt =: G(σ) := [G1(σ), G2(σ), ..., Gn(σ)]^T.   (1e)
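As a concrete illustration of this feedback structure, the following Python fragment propagates Eqs. (1a)-(1d) for a two-face particle with a simple explicit Euler step. Every constant, including the shape-to-volume map VC(ℓ), is an illustrative placeholder rather than data from the paper.

```python
import numpy as np

# Placeholder kinetics and solubility data (illustrative only)
kg, g = np.array([1.0e-6, 0.6e-6]), np.array([1.0, 1.2])    # growth, Eq. (1a)
kd, d = np.array([-1.4e-6, -0.5e-6]), np.array([1.0, 1.1])  # dissolution, Eq. (1b)
a0, a1, a2 = 0.05, 2.0e-3, 0.0                              # Csat(T), Eq. (1c)
rho_c, M_s, Vt, C0 = 1.2e3, 1.8e-2, 1.0, 0.135              # mass balance, Eq. (1d)

l = np.array([1.0e-3, 1.0e-3])          # initial face lengths l1, l2
Vc0 = l[0]**2 * l[1]                    # example shape-volume map V_C(l)

def step(l, T, dt):
    Csat = a0 + a1 * T + a2 * T**2                          # Eq. (1c)
    C = C0 - rho_c / (Vt * M_s) * (l[0]**2 * l[1] - Vc0)    # Eq. (1d)
    sigma = C / Csat - 1.0
    G = kg * sigma**g if sigma >= 0 else kd * (-sigma)**d   # Eqs. (1a)-(1b)
    return l + dt * G, sigma

for k in range(600):                    # slow cooling ramp keeps sigma > 0 -> growth
    l, sigma = step(l, 40.0 - 0.01 * k, 1.0)
print("final lengths:", l, " final supersaturation: %.4f" % sigma)
```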
2.2. Optimal control problem
Particle shape manipulation can be seen as a trajectory planning problem in the state-space R^n_+ defined by ℓi > 0 for i = 1, ..., n. Since for a given particle morphology not just any ℓ > 0 [component-wise!] may make sense, the state-space is, in fact, a subset of R^n_+. Throughout the paper we assume kgi ≠ |kdi|, i = 1, ..., n, that is, the growth and dissolution vectors are assumed to be unequal for symmetric supersaturation levels. This is mandatory in order to produce a lateral motion in the state-space as a result of shape manipulation scenarios with a number of subsequent growth and dissolution sections. In this article we primarily focus on switching trajectories of this kind only, in order to achieve crystal morphologies which do not result directly from a pure growth scenario. Indeed, the growth rate vector G = dℓ/dt, as defined in (1e), is constrained to lie within an "interval" parameterized by the state-dependent supersaturation level σ, which is confined between a minimal and a maximal value, see Figure 2. The limiting values of supersaturation may be determined by the temperature constraints Tmin ≤ T ≤ Tmax, as carried out in [1], or may be directly given, i.e. σmin ≤ σ ≤ σmax, as utilized in this article. Notice the corresponding qualitative differences in the reachability region: supersaturation constraints produce a conical region, whereas for temperature constraints the latter warps (indicated dashed for g2 > g1). Obviously, morphologies outside such reachability regions are attainable only by means of subsequent growth and dissolution sections, this being the focus of the present work.

Figure 2: Reachability analysis (state-space (ℓ1, ℓ2); constraints σmin ≤ σ ≤ σmax and Tmin ≤ T ≤ Tmax; initial point q0, targets qf1, qf2; concentration levels C = C1 and C2 > C1)
Two results are borrowed from [1]. First, using the minimum principle it has been shown that switched minimum-time trajectories are composed of a number of straight segments, corresponding to constant supersaturation sections, as indicated in Figure 3. Secondly, given a trajectory of the crystal particle in the state-space, we use the dynamic inversion algorithms from [1] to compute the temperature profile T = T(t). This turns out to be primarily useful as it converts the dynamic optimization problem into a static nonlinear program. With this said, we are ready to formulate the optimization problem as follows:

Figure 3: Optimal switching path (switching points q0, q1, ..., qη, target qf, switching region Ω, concentration levels C0, Cj)
minimize_{ℓ(t) ∈ Ω, t ∈ [0, tf]}   tprocess = tf − t0
subject to   σmin ≤ |σ| ≤ σmax,   ℓ(t0) = q0,   ℓ(tf) = qf   (2)
In addition to adopting supersaturation constraints, the striking difference of (2) with respect to the optimization problems of [1] is the loosening of the two bouncing-manifold constraints in the state-space by the switching region Ω, which ought to host the entire particle trajectory, see Figure 3.
3. Convex optimization solutions
3.1. Convex optimization program
Without loss of generality, consider a two-dimensional particle (i.e. n = 2) and a trajectory with η switchings. Let q0 and qf denote the given initial and target particle morphologies in the state-space (ℓ1, ℓ2), see Figure 3. The points qj = [qj1, qj2]^T, j = 1, 2, ..., η, in the figure denote the switching points, where jumps in the supersaturation (i.e. temperature) profile are exerted. By introducing the decision parameter q := [q1, ..., qη]^T := [q11, q12, ..., qη1, qη2]^T and setting qη+1 := qf (see Figure 3), it can be shown that the net process time length τ reads

τ(q) = Σ_{j=1}^{η+1} Kj · (qj1 − qj−1,1)^{γj2/(γj2−γj1)} · (qj2 − qj−1,2)^{−γj1/(γj2−γj1)},   (3)
where Kj = M^{−γj1}/(2kγj1) + M^{−γj2}/(2kγj2) and M = (kγj1/kγj2)^{1/(γj2−γj1)}, with kγj1 = kg1, kγj2 = kg2, γj1 = g1, γj2 = g2 for a growth phase, and kγj1 = kd1, kγj2 = kd2, γj1 = d1, γj2 = d2 for a dissolution phase. Clearly, the exponential parameters γj1 and γj2, as well as Kj in (3), are all positive constants. Notice that q takes values in Θ ⊆ R_+^{2η}, which, referring to Figure 3, for a scenario beginning with a growth and ending with a dissolution phase, is defined by

Θ := {q ∈ R_+^{2η} : q0 ≤ q1, q2 ≤ q1, q2 ≤ q3, ..., qη ≤ qη−1, qη ≤ qf}.   (4)
It is an easy exercise to see that this domain is convex and closed. To examine the convexity of the objective function τ = τ(q) in (3), the Hessian hess(τ) := ∂²τ/∂q² in Θ is to be investigated. To this end, it is instructive to consider the summing terms in (3) separately. Two sorts of terms can be discriminated therein: the initial and the final terms, involving two decision parameters [q0 and qf are fixed!], and the terms in between, involving four decision parameters. It turns out [technical details are omitted due to the limited space!] that all but one (2η − 1) of the eigenvalues of each term's Hessian are nil (= 0), and one is
strictly positive, equal to the trace of the Hessian term at hand. Moreover, it can be shown that the net Hessian hess(τ) itself possesses at least one zero eigenvalue, i.e. it is positive semi-definite, too, indicating that the function τ = τ(q) is convex, but not strictly convex! As we shall shortly see, this is helpful in constructing the optimal trajectories. Referring to the optimization problem (2), the supersaturation level at each switching section is confined to lie within an interval σmin,j ≤ σj ≤ σmax,j, where for growth phases σmin,j = σmin, σmax,j = σmax, and for dissolution phases σmin,j = −σmax, σmax,j = −σmin. Hence, considering equation (1e) (but also Figure 2), it is obvious that each point qj can be associated with a cone, as depicted in Figure 3, defined by

Cj := { x ∈ R²_+ : x = qj + λ1 G(j)_min + λ2 G(j)_max, λ1 ≥ 0, λ2 ≥ 0 },  j = 0, 1, 2, ..., η,

where G(j)_min := G(j)(σmin,j) and G(j)_max := G(j)(σmax,j), with G(j) referring to section j, see (1e). Obviously, qj+1 ∈ Cj, and the supersaturation constraints of each phase σmin,j ≤ σj ≤ σmax,j are mapped to linear constraints in R_+^{2η}:

Γ(j)(q) := [G(j)_min | −G(j)_max]^T · (qj − qj−1) ≤ 0,  j = 1, 2, ..., η + 1.   (5)

The discussion thus far leads us to the conclusion that the optimization problem (2) can be posed as a convex optimization problem in terms of the decision parameter q ∈ Θ ∩ Ω^η as follows:

minimize_{q ∈ Θ ∩ Ω^η}  τ(q)  subject to  Γ(j)(q) ≤ 0,  j = 1, 2, ..., η + 1.   (6)
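Since the technical details are omitted in the paper, the following small numerical check illustrates the claimed Hessian structure: each summand of (3) has the form f(u, v) = K u^a v^(1−a) with a > 1 (the two exponents in (3) sum to one), and on u, v > 0 its 2×2 Hessian has one zero and one positive eigenvalue, which is the mechanism behind the convex-but-not-strictly-convex objective. The constants K and a below are arbitrary.

```python
import numpy as np

K, a = 1.7, 1.6  # arbitrary positive constant and exponent > 1

def hessian(u, v):
    """Exact Hessian of f(u, v) = K * u**a * v**(1 - a) for u, v > 0."""
    f_uu = K * a * (a - 1.0) * u**(a - 2.0) * v**(1.0 - a)
    f_vv = K * a * (a - 1.0) * u**a * v**(-a - 1.0)
    f_uv = -K * a * (a - 1.0) * u**(a - 1.0) * v**(-a)
    return np.array([[f_uu, f_uv], [f_uv, f_vv]])

rng = np.random.default_rng(0)
for u, v in rng.uniform(0.1, 5.0, size=(5, 2)):
    eig = np.linalg.eigvalsh(hessian(u, v))
    print("eigenvalues at (%.2f, %.2f):" % (u, v), np.round(eig, 8))
# one eigenvalue is (numerically) zero, the other strictly positive
```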
3.2. Optimal solutions
The optimization problem for η = 1 [a trajectory with one switching only!] and Ω = R²_+ turns out to be particularly important, since for a fixed crystallization sequence (e.g., a growth phase followed by dissolution) its solutions q = q* and τ* = τ(q*) turn out to be unique. The rigorous proof of this fact [which we again must omit!] uses the dual Lagrange function of the optimization problem (6), and is basically a consequence of the fact that in this case hess(τ(q)) is strictly positive definite and, hence, τ = τ(q) is strictly convex. The corresponding two optimal supersaturation levels σ*+ (growth phase) and σ*− (dissolution phase) are indeed important computational outcomes, as it turns out that with the sequence interchanged (i.e., growth now follows dissolution), q* changes, but τ*, σ*+ and σ*− do not. Now consider the case with η > 1 and, again, for simplicity, disregard Ω^η by setting Ω^η = R_+^{2η}. As expected, the problem is not strictly convex; therefore, optimal solutions q = q* are no longer unique. Less obvious are the outcomes that τ* = τ(q*) is identical to that of the case η = 1 and, moreover, that in all growth and dissolution phases the corresponding optimal supersaturation levels match exactly those σ*+ and σ*− of η = 1. This holds for any η > 1, independently of the switching sequence. Hence, the unique optimal solution of the minimum-time trajectory with one switching only can indeed be used for the construction of all optimal solutions with any number or sequence of switchings. Any continuous piecewise linear trajectory connecting q0 and qf and consisting of a finite number of segments with the two slopes defined by σ*+ and σ*− is a minimum-time trajectory.
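The η = 1 case admits a compact numerical illustration. The sketch below optimizes directly over the two supersaturation levels (σ+, σ−) of a growth-then-dissolution scenario: for fixed levels, the two straight constant-σ segments give the phase times from a 2×2 linear system, and feasibility reduces to non-negative times. This is not the convex program (6) itself (which optimizes over the switching point q), but, as stated above, for η = 1 both views share the unique optimum. All kinetic data are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical two-dimensional kinetics (placeholders; not the paper's data)
kg, g = np.array([1.0, 0.6]), np.array([1.0, 1.2])    # growth,      Eq. (1a)
kd, d = np.array([-1.4, -0.5]), np.array([1.0, 1.1])  # dissolution, Eq. (1b)
smin, smax = 0.02, 0.10                               # supersaturation bounds
q0, qf = np.array([1.0, 1.0]), np.array([1.5, 1.25])  # initial/target morphology

def G(s):
    """Rate vector G(sigma), Eqs. (1a)-(1b) and (1e)."""
    return kg * s**g if s >= 0 else kd * (-s)**d

def phase_times(x):
    """Times (t1, t2) of the growth and dissolution phases for levels (s+, s-),
    from q0 + t1*G(s+) + t2*G(s-) = qf (two straight constant-sigma segments)."""
    A = np.column_stack([G(x[0]), G(x[1])])   # assumed non-singular here
    return np.linalg.solve(A, qf - q0)

res = minimize(lambda x: phase_times(x).sum(), x0=[0.06, -0.06],
               bounds=[(smin, smax), (-smax, -smin)],
               constraints={'type': 'ineq', 'fun': phase_times},  # t1, t2 >= 0
               method='SLSQP')
print("sigma*+ = %.4f, sigma*- = %.4f, minimum time = %.3f"
      % (res.x[0], res.x[1], res.fun))
```

Exchanging the phase order (dissolution first) changes the switching point but, as stated above, not the optimal time or the optimal supersaturation levels.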
Figure 4: Path profile (ℓ2, mm versus ℓ1, mm)

Figure 5: Temperature profile (temperature, °C versus time, min; legend: CvxOpt, 5 phases, 6 phases)
Figures 4 and 5 provide numerical results comparing the convex optimization algorithm presented in this paper (dashed lines) with the algorithms proposed in the preceding work [1] (black-colored). The task consists in constructing minimum-time trajectories which lie in the area bounded by the two manifolds depicted by thin dashed lines in Figure 4. Figure 5 demonstrates clearly that the global minimum time provided by convex optimization is notably smaller. Admittedly, it comprises more phases, and, additionally, it does not necessarily bounce at the switching manifolds [indeed, hard to see in the figure], which has been a requisite for the trajectories constructed by the algorithms in [1]. Note that seven has been the minimum number of phases of the convex optimal solution which could be hosted by the area specified by the switching manifolds in the figure. Compared to the counterpart shape manipulation algorithm of [1], the resulting strategy is somewhat dual: while in [1] the switching conditions are defined by two concentration values [corresponding to the two bouncing manifolds!] and the optimization problem consists in optimizing the supersaturation levels within its limits, here the supersaturation itself can take only two values, and the computational task consists in switching at the appropriate concentration level.
4. Conclusions
A strategy for crystal shape manipulation of multidimensional crystal particles based on a convex optimization formulation is proposed in this article. Optimal scenarios with subsequent growth and dissolution phases have been investigated. The resulting strategy is a bimodal constant-supersaturation control policy, which is popular in practical crystallization engineering. The underlying two supersaturation levels are uniquely defined by the problem data. The algorithms are developed for a two-dimensional single crystal particle; however, the extension to a population of crystal particles is basically a solved problem, too, a result which is to be published. On the other hand, shape manipulation for particles of a higher dimension can always be recast as a two-dimensional problem. This is due to the fact that the supersaturation is uniquely determined by the projection of the higher-order trajectory onto a two-dimensional plane, see [1].
References
[1] N. Bajcinca, V. de Oliveria, C. Borchert, J. Raisch, and K. Sundmacher (2010): Optimal control solutions for crystal shape manipulation, In Proc. ESCAPE20, Ischia, Italy.
[2] D. Patience and J. Rawlings (2001): Particle-shape monitoring and control in crystallization processes, AIChE J., 47(9):2125-2130.
[3] R. C. Snyder, S. Studener, and M. F. Doherty (2007): Manipulation of crystal shape by cycles of growth and dissolution, AIChE J., 53(6):1510-1517.
[4] A.D. Randolph and M.A. Larson (1998): Theory of particulate processes, Academic Press.
[5] S. Boyd and L. Vandenberghe (2004): Convex optimization, Cambridge University Press.
[6] L. Weissbuch, et al. (1995): Acta Cryst. B, 51, 115-148.
[7] H.G. Yang, et al. (2008): Nature, 453, 638-641.
[8] D. L. Ma, D. K. Tafti, R. D. Braatz (2002): High-resolution simulation of multidimensional crystal growth, Ind. Eng. Chem. Res., 41, 6217-6223.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A Worst-Case Observer for Impurities in Enantioseparation by Preferential Crystallization
Steffen Hofmann a, Matthias Eicke c, Martin Peter Elsner c, Andreas Seidel-Morgenstern b,c, Jörg Raisch a,c
a Technische Universität Berlin, Einsteinufer 17, 10587 Berlin, Germany
b Otto-von-Guericke-Universität Magdeburg, 39106 Magdeburg, Germany
c Max-Planck-Institut für Dynamik Komplexer Technischer Systeme, Sandtorstr. 1, 39106 Magdeburg, Germany
Abstract In this contribution, we show how a model is used to obtain worst-case estimates for product impurities in preferential crystallization of enantiomers. The estimates can be obtained for either a simple batch process in a single vessel, or for a coupled mode of operation of two vessels. We demonstrate, supported by a case study, how the estimates enable the use of a combined control strategy consisting of continuous feedback control and process stopping when critical impurity constraints are hit. Keywords: Crystallization, Enantiomers, Pure products, Worst-case estimation
1. Introduction
Enantiomers are chiral substances which share many physical and chemical properties but can differ, e.g., in their metabolic effects. Often, they are produced as mixtures of two types, but only the effects of one of them, the preferred enantiomer, are desired, and it is necessary to separate it from the undesired counter enantiomer. Preferential crystallization is an attractive way to do this when the substance is of the conglomerate-forming variety. The process is described in [1], where two modes of operation are suggested. Normally, the control strategy has to ensure that the product has a critical, minimum purity pu, which is the fraction of the preferred enantiomer in the product. Equivalently, the impurity, imp := 1 − pu, must not exceed a critical constraint. Often, this constraint is very low, e.g. 1%, and it is difficult to measure such small impurities online. Therefore, in the following, we suggest a model-based observer which makes use of the available measurements of temperature and liquid mass fractions. It is a worst-case observer in the sense that, even for a class of model uncertainties, the real impurities are guaranteed not to exceed the estimates.
2. Model
The solid phases, i.e. the crystalline material, are commonly described by number density functions, the crystal size distributions (CSDs). The following standard population balance equation, with initial and boundary conditions, describes the evolution of the CSDs fN,k(z,t), with k out of E1A, E2A and, in coupled mode, E1B, E2B, where E1A represents "enantiomer E1 in tank A", etc., and z is a characteristic crystal length:

∂fN,k(z,t)/∂t = −∂[Gk(z,t) fN,k(z,t)]/∂z ;  fN,k(0,t) = Bk(t)/Gk(0,t) ;  fN,k(z,0) = fN,k,seed(z).   (1)
Here, fN,k,seed(z) denotes the CSD of the seed crystals, which are, of course, only added for the preferred enantiomer, i.e. E1 in tank A and, in coupled mode, E2 in tank B. For the counter enantiomer, this CSD is identically zero. We model the growth rate as the product of one size-independent term depending on several process variables (signified by the explicit time(t)-dependency) and one term which depends only on the crystal size:

Gk(z,t) = G0,k(t) γ(z) ;  γ(z) = (1 + α z)^β.   (2)
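Before turning to the detailed kinetics, note that the population balance (1), with the boundary condition fN,k(0,t) = Bk/Gk, can be integrated by a first-order upwind method of lines, as sketched below. The growth and nucleation rates used here are crude illustrative stand-ins, not the identified kinetics of Eqs. (3)-(5).

```python
import numpy as np
from scipy.integrate import solve_ivp

nz, zmax = 200, 1.0e-3                       # size grid (placeholder range, m)
z = np.linspace(0.0, zmax, nz)
dz = z[1] - z[0]

G_of_t = lambda t: 1.0e-8                    # growth rate (illustrative, m/s)
B_of_t = lambda t: 1.0e6                     # nucleation rate (illustrative, #/(m^3 s))

def pbe_rhs(t, f):
    """Upwind discretization of df/dt = -d(G f)/dz with f(0,t) = B/G, Eq. (1)."""
    G, B = G_of_t(t), B_of_t(t)
    flux = G * f
    dfdt = np.empty_like(f)
    dfdt[1:] = -(flux[1:] - flux[:-1]) / dz
    dfdt[0] = -(flux[0] - B) / dz            # ghost flux at z = 0 equals G*(B/G) = B
    return dfdt

f_seed = np.exp(-0.5 * ((z - 3.0e-4) / 5.0e-5) ** 2)   # seed CSD f_N,k,seed(z)
sol = solve_ivp(pbe_rhs, (0.0, 600.0), f_seed, method='BDF')
mu3 = np.trapz(z**3 * sol.y[:, -1], z)       # third moment of the final CSD
print("mu3 =", mu3)
```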
The expressions for the growth and nucleation kinetics that follow are from a recent model [2] of an L-/D-threonine system we do experiments with. The model is semi-empirical: some terms are based on physics, whereas other terms are purely empirical. For example, the term (1 + A1 μ2r) in (4) was added as a first attempt to model heterogeneous primary nucleation. The equations apply to both enantiomers in tank A and, in coupled mode, also in tank B. Thus, k stands for E1A, E2A, E1B or E2B, and Tv stands for the temperature TA in tank A or TB in tank B, as appropriate. Parameters having k as an index may differ between E1 and E2. Where necessary, the index r denotes the respective other enantiomer in the same tank. The size-independent parts of the growth rates, G0,k, depend on the supersaturations Sk, but also directly on the temperature Tv in the respective tank:
G0,k = kg0,k exp(−EAg,k/(Rg Tv)) (Sk − 1)^g ;  Sk = wl,k/wsat,k ,
wl,k = ml,k/(ml,k + ml,r + mW) ;  wsat,k = a0 + a1 (Tv − 273.15 K) ,   (3)
where wl,k and wsat,k are the liquid mass fractions and the equilibrium mass fractions, respectively. The total nucleation rate, Bk = Bprim,k + Bsec,k, consists of the primary nucleation rate

Bprim,k = kb,prim Tv exp(−KT,visc/Tv) exp[−Kw,visc (wl,k + wl,r)] Sk^{7/3} Ceq,k exp{−kprim,log [ln(ρ/Ceq,k)]³ / [ln(Sk)]²} (1 + A1 μ2r) ;  μ2r(t) = ∫₀^∞ z² fN,r(z,t) dz ,   (4)

with Ceq,k = 1000 wsat,k [K1 + K2 (Tv − 273.15) + K3 (wl,k + wl,r)],
where, if k is E1A, μ2r would be the 2nd moment of E2A, etc., and of the secondary nucleation rate

Bsec,k = kb,sec0,k exp(−EAb,sec,k/(Rg Tv)) (Sk − 1)^{bsec} μ3k^{n} ;  μ3k(t) = ∫₀^∞ z³ fN,k(z,t) dz ,   (5)
where μ3k is the third moment of fN,k. Additional differential equations, depending on the mode of operation, describe the evolution of the liquid phase, i.e. ml,k or wl,k. We do not show them here, since the results in the next section build on the model (1) to (5), which does not include them. In coupled mode, these equations link the models of the two tanks. Most of the parameters in (1) to (5) were identified by fitting simulated trajectories to experimental results.
3. Model uncertainties
Of course, the structure of a complex process like crystallization cannot be known in every detail. In fact, the presented model, with its empirical terms, involves relatively large uncertainties. Here, however, we do not try to compute worst-case estimates under structural uncertainties. Instead, we concentrate on parametric uncertainties, and choose parameters whose accuracy is of critical importance for the ability of the model to represent reality. Interval-like uncertainty bounds on these parameters are motivated by the variability of parameter identification results from several batch experiments, like the ones in [2]. We choose the growth and nucleation rate constants kg0,k, kb,prim, A1, kb,sec0,k, whereby, in our experience, parameters related to nucleation are generally harder to estimate than the growth rate parameters. Moreover, we consider the solubility relation as critical, and choose a0 as a major source of uncertainty. Note that we do not include all parameters in the case study (section 6). Uncertainties in other parameters, as well as structural uncertainties, can, to a certain extent, be respected by wider intervals for the parameters selected here. Also, we do not consider initial condition uncertainties here. It turns out that, in the present model, and when using the technique presented in the next section, they are not of predominant importance for the accuracy of the estimates we are interested in. We are able to measure the direct inputs to the crystallization process, i.e. the crystallization temperatures Tv(t), and consider the liquid mass fractions wl,k(t) as measurable as well, see [4] for details. We assume the measurement uncertainties to be also interval-like, i.e., at each sampling instant, the true values are supposed to lie within intervals around the measured values. The widths of the intervals are derived from the measurement noise recorded in several experiments; we use four times the standard deviation (±2σ) of the recorded noise. Although modelling measurement uncertainties in a worst-case manner may seem very conservative, note that the experiments also showed permanent offsets of the same magnitude as the standard deviations.
4. Worst-case nucleation observer
The task for worst-case estimation is the following: given the model equations, whose structure is supposed to be in accordance with the real system, a set of parameters, interval uncertainty bounds for some of the parameters, measurements of the signals TA(t), wl,E1A(t), wl,E2A(t) and, in coupled mode, TB(t), wl,E1B(t), wl,E2B(t), where t = 0…tcur, and interval uncertainty bounds for the measurements, determine upper bounds for the quantities μ3E2A(tcur) and, in coupled mode, μ3E1B(tcur), i.e., for the third moments of the counter enantiomers in tank A (and B), respectively. Of course, we wish the upper bounds to be as low as possible (least conservative). Because of the complex process and the various positive and negative feedbacks, computation of the worst-case values may not be trivial. However, as we make use of the liquid mass fraction measurements and the special structure of the process, the task simplifies considerably. In fact, when Tv(t) and wl,k(t) are interpreted as input signals, crystallization in a tank is fully described by (1) to (5), written once for enantiomer E1 and once for E2. In coupled mode, the systems of the two tanks are then independent and can be simulated independently of each other. Furthermore, the quantities depending on the CSDs (i.e., second and third moments) have only positive feedback on the evolution of each other (also across E1, E2), because the feedback that growing crystals have via mass consumption from the liquid phase is eliminated in a system description where the wl,k(t) are input signals. For that reason, the simulation results are also less critically dependent on correct initial conditions of the seed crystals (CSDs).
We found that, for the presented model, values for the upper bounds we are interested in can easily be obtained by simulating, separately for each tank, the system (1) to (5) from the initial time t0 = 0 until the current time tcur, with all uncertain parameters set to either the lower or upper boundary of their uncertainty intervals, as appropriate, and with the input signals Tv(t), wl,k(t) taken as sequences of values from the appropriate (always lower or always upper) boundaries of the measurement uncertainty intervals (i.e., the intervals around the measurements in which the true values of the respective measured variables are supposed to lie). To see this, we make use of two properties of the model. Firstly, suppose that, at a given time t, all (positive) growth and nucleation rates in a tank, G0,k and Bk, were to be chosen from certain intervals, independently of each other. Then, due to the monotonic growth of both the number and size of crystals, (1), (2), depending on G0,k and Bk, and the purely positive feedback that μ2r(t), μ3k(t) have via (4), (5) (where A1 > 0), it is evident that, in order to achieve the maximum possible value for any moment of any of the two enantiomers in the tank at time tcur, it would be necessary, at each time t = 0…tcur, to choose the maximum allowed values for both growth and both nucleation rates in the respective tank. Secondly, for any time and state of the model, there exists exactly one combination of values of all the uncertain parameters and measurements which makes all Bk and G0,k, with k out of E1A, E2A (and E1B, E2B), take their maximum values at the same time, and the respective parameter values each correspond to one (specific) boundary of the uncertainty intervals. To verify this, one can differentiate the equations for G0,k and Bk, (3) to (5), with respect to the uncertain parameters and inputs (kg0,k, kb,prim, kb,sec0,k, a0, A1, wl,k(t), Tv(t)) and check whether the signs of the derivatives stay the same for all practical system states, parameter and input values. Due to limited space, we will not show the equations here. Instead, note that the monotonicity of the growth and nucleation rate functions with respect to the uncertain variables was checked during simulations for the case study (section 6). Also, the fact is quite obvious for some parameters, e.g. kg0,k, kb,prim. Finally, having worst-case estimates for the third moments of the counter enantiomers, the maximum possible impurities in each tank, e.g. impE1A := μ3E2A/(μ3E1A + μ3E2A), can be determined, whereby we again make direct use of the measurements of the liquid mass fractions, or rather their values at the boundary of the respective uncertainty boxes.
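The principle can be sketched compactly. The fragment below integrates the moment equations of a strongly simplified model (size-independent growth and a single lumped nucleation term, with placeholder constants rather than the identified threonine kinetics), pushing every uncertain parameter and measured input to its rate-maximizing interval edge, exactly as argued above.

```python
import numpy as np
from scipy.integrate import solve_ivp

kg_hi, g_exp = 3.0e-7, 1.0    # upper bound of growth constant (illustrative)
kb_hi, b_exp = 5.0e2, 2.0     # upper bound of lumped nucleation constant (illustrative)
dw, dT = 0.002, 0.1           # half-widths of the measurement uncertainty intervals

def wsat(T_celsius):          # placeholder solubility, monotonically increasing
    return 0.080 + 0.0015 * T_celsius

def moments_rhs(t, mu, w_meas, T_meas):
    # worst case: highest liquid fraction, lowest temperature -> maximal S
    S = (w_meas(t) + dw) / wsat(T_meas(t) - dT)
    G = kg_hi * max(S - 1.0, 0.0) ** g_exp
    B = kb_hi * max(S - 1.0, 0.0) ** b_exp
    # d(mu_0)/dt = B and d(mu_j)/dt = j*G*mu_{j-1} for j = 1..3
    return [B, G * mu[0], 2.0 * G * mu[1], 3.0 * G * mu[2]]

w_meas = lambda t: 0.135                 # stand-ins for the recorded measurement
T_meas = lambda t: 32.0 - 2.0e-4 * t     # trajectories of a cooling run
sol = solve_ivp(moments_rhs, (0.0, 3600.0), [0.0] * 4, args=(w_meas, T_meas))
mu3_upper = sol.y[3, -1]                 # upper bound on mu_3 of the counter enantiomer
print("worst-case mu3 =", mu3_upper)
```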
5. Application for process-stopping
One obvious way to use the worst-case impurity estimate(s) for impE1A and, in coupled mode, impE2B, is to interrupt the process when they exceed a critical impurity bound, i.e. when impE1A ≥ impcrit or impE2B ≥ impcrit. This strategy, combined with several schemes for feedforward and feedback control, has been demonstrated to be effective for a simple crystallization process [3]. Here, we investigate the use of constant supersaturation control for the time before the respective impurity constraints are hit in coupled operation. Temperature setpoints for the thermostats are calculated by solving (3) for Tv when, for the preferred enantiomers, a supersaturation setpoint SE1A = SE2B = Ssp is given. Under symmetric conditions, and when the liquid exchange is fast enough, the supersaturations of the counter enantiomers will also stay approximately constant.
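This setpoint computation amounts to a scalar root-finding problem, sketched below with a placeholder solubility function (the identified coefficients a0, a1 of the paper are not reproduced here).

```python
from scipy.optimize import brentq

def wsat(T_celsius):
    # placeholder linear solubility, standing in for wsat,k of Eq. (3)
    return 0.080 + 0.0015 * T_celsius

def temperature_setpoint(w_l, S_sp, T_range=(10.0, 45.0)):
    """Solve S = w_l / wsat(T) = S_sp for the thermostat setpoint T."""
    return brentq(lambda T: w_l / wsat(T) - S_sp, *T_range)

print("T_sp = %.2f C" % temperature_setpoint(w_l=0.115, S_sp=1.15))
```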
6. Case study The simulated system is the crystallization of L(E1)-/D(E2)-threonine in coupled crystallizers. Table 1 lists the nominal values of the uncertain parameters, taken from [2], and their uncertainty intervals, as well as the measurement uncertainty intervals. Figure 1 shows trajectories of the worst-case impurity estimates, along with trajectories of the
real impurities, for Ssp = 1.15. The ad-hoc process stopping time, affecting both crystallizers, is marked. In Table 2, results for several supersaturation levels at the instant of stopping (i.e. at time tah, denoted by (ah)) are compared, showing also the actual amounts of the products (μ3E1A(ah), μ3E2B(ah)) and the productivities, PrE1A(ah) := μ3E1A(ah)/tah, PrE2B(ah) := μ3E2B(ah)/tah. Worst-case estimates of the impurities are marked by a hat (^). One can notice that the real impurities at stopping time are greatly overestimated. The choice of Ssp has no significant influence on the productivities at stopping time.

Table 1: Parameters and uncertainty bounds
Figure 1: Real trajectories of impurities and their worst-case estimates
Table 2: Comparison of results
7. Conclusion
We have presented a method to obtain upper bounds for product impurities in the separation of enantiomers by preferential crystallization. In the results of a case study, these bounds turned out to be quite conservative, greatly overestimating the impurities in the nominal system and thus affecting productivity at stopping time. Note that a special class of crystallization processes enables the relatively simple computations adopted here. The computation may get more complex in other cases, for example when the direct effect of temperature on the growth rate (Arrhenius) outweighs the effect via supersaturation, i.e. when the growth rate decreases with decreasing temperature. The use of optimal control theory could be investigated for finding worst-case values in such more complex settings. Future work will also include enhanced identification of the model structure and parameters, especially the part of the model pertaining to nucleation, and more accurate uncertainty descriptions. We gratefully acknowledge financial support by the German Research Foundation for this project (DFG SE 586/15-1, DFG RA 516/7-1).
References [1] M.P. Elsner, G. Ziomek, A. Seidel-Morgenstern (2009). AIChE J., Vol 55, No 3., pp. 640-649 [2] M. Eicke, I. Angelov, A. Seidel-Morgenstern, M. P. Elsner (2010). PBM 2010, Berlin [3] N. Bajcinca, S. Hofmann, J. Raisch, K. Sundmacher (2010). ESCAPE 20, Ischia, Naples [4] S. Hofmann, M. Eicke, M. P. Elsner, J. Raisch (2010). WCPT 2010, Nuremberg
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A Simulated Annealing Approach for the Bi-Objective Design and Scheduling of Multipurpose Batch Plants
Nelson Chibeles-Martins a,c, Tânia Pinto-Varela a,b, Ana Paula Barbosa-Póvoa b, A. Q. Novais a
a Unidade de Modelação e Optimização de Sistemas Energéticos (DMS-INETI), LNEG, Lisboa, Portugal
b Centro de Estudos de Gestão, IST, UTL, Av. Rovisco Pais, 1049-101 Lisboa, Portugal
c Centro de Matemática e Aplicações, CMA, FCT-UNL, Qta da Torre, 2829-516 Caparica, Portugal
Abstract Like most real-world problems, the design of multipurpose batch industrial facilities involves multiple objectives, which must be reconciled with a view to maximizing profit. The use of this or an equivalent single criterion is the conventional way to evaluate the economic performance of an industrial plant. However, rather than employing one single criterion, plant revenues and costs can be handled separately, thus allowing the decision-maker to gain a better perception of the investment options. This latter approach, which is particularly relevant for a batch facility, is followed in this work, leading to a multi-objective optimization and in turn to the definition of the efficient frontier, defined as the locus of the optimal solutions so found. The nature and dimension of these problems usually lead to large mixed integer linear programming (MILP) formulations that come associated with a high computational burden. In order to overcome this difficulty, a meta-heuristic approach based on the Simulated Annealing (SA) methodology is developed, and a sensitivity analysis is performed on its main parameters. The proposed approach is compared with the exact approach proposed by Pinto et al. (2008a).
Keywords: MILP, Simulated Annealing, RTN, Scheduling, Multi-objective.
1. Introduction
Nowadays industrial facilities must be prepared to produce a wide variety of products with different processing recipes, so as to satisfy a market demand characterized by variable product specifications and volumes. A high degree of production flexibility must therefore be guaranteed, and the utilization of the available resources, such as equipment, raw materials, intermediates and utilities, must be conducted in an optimized way. To achieve this goal, plant design optimization must be handled simultaneously with operations scheduling. Usually the economic goal is to maximize global
§ Author to whom correspondence should be addressed, e-mail: [email protected]
operational revenues, but the follow-up and explicit minimization of the costs associated with plant installation can also be justified, since they may reveal important structural and operational aspects which may influence decision making. The ensuing optimization problems are therefore multi-objective and very complex, and the computational burden associated with handling such models tends to increase accordingly. Research in this area has mainly focused on the use of mathematical programming models (MILP and MINLP). These models, when applied to real problems, often become intractable. For this reason new approaches should be explored to overcome this drawback (Barbosa-Povoa 2007), which may consist of oriented heuristics, evolutionary algorithms, meta-heuristics, as well as hybrid methods. In this paper the work presented by Chibeles-Martins et al. (2010) is extended in order to apply the Simulated Annealing meta-heuristic approach to the bi-objective design and scheduling of multipurpose batch plants. The algorithm maximizes the operational revenue while minimizing installation costs, and the results are compared to a previous work where the use of mathematical programming methods was explored (Pinto et al. 2008b). Two examples are solved, in which the performance of the heuristic method versus the exact algorithm is analyzed. In all cases, the plant topology, scheduling, process equipment design and storage profiles are determined.
2. Design Problem
The optimal plant design and scheduling can be analyzed along the Pareto frontier, where cost minimization and revenue maximization are taken into account, by solving the following problem. Given: the process description, through an RTN representation; the maximal amount of each type of resource available, its characteristics and unit cost; the time horizon of planning; the demand over the time horizon (production range); task and resource operating cost data, as well as equipment and connection suitability. Determine: the amount of each resource used; the process scheduling; the optimal plant topology together with the associated design for all equipment and connectivity required.
3. Modelling framework
The meta-heuristic approach developed in this paper uses the Simulated Annealing algorithm as proposed by Kirkpatrick et al. (1983) and Cerny (1985), but adds several adaptations to improve the algorithm's efficiency and effectiveness. These adaptations explore the characteristics of the problem under study and allow the algorithm to perform a wider exploration of the efficient frontier instead of converging to a single local optimal solution. The main goal of this approach is to obtain an approximation of the efficient frontier that is as exhaustive as possible. It is desirable that all efficient plant topologies be represented in the Pareto frontier and that, for each topology, a reasonable number of efficient solutions be included. The decision maker will then be able to select a more adequate compromise solution with full awareness of the efficient alternatives for the plant, especially in situations where a tight budget is limiting and it is fundamental to know the characteristics of the efficient solutions with a cost close to the budget. MILP approaches can become very inefficient when the intention is the detailed exploration of the efficient region, due to intractability issues, and the current approach tries to overcome this. Simulated Annealing can be classified as a Local Search Meta-Heuristic. The proposed algorithm is initialized with a random solution which is improved iteratively. In each iteration, the algorithm chooses a solution from the neighborhood of the current one.
Differently from the classical SA algorithm, the one proposed here has a multi-start procedure which allows the exploration of different regions of the efficient frontier. In order to prevent an early stop of the algorithm at a local optimum, a mechanism based on the Metropolis Algorithm was incorporated, and some parameters were tuned to guarantee efficiency and effectiveness. In particular, a sensitivity analysis is performed on the following parameters: initial temperature; number of restarts; cooling schedule; stop criterion. Other features are tailor-made, such as: the objective function; the initial solution generation; the neighborhood function. In what follows, si represents the current solution, s'i the randomly generated neighbor solution, f1(s) and f2(s), respectively, the revenue and the cost of solution s, Pac the probability of accepting a neighbor solution, and T1i and T2i the temperatures associated with the objective functions f1(s) and f2(s), respectively, at iteration i. The algorithm considers the symmetric values of the cost function, so both functions have the same optimization direction. During the algorithm run the non-dominated solutions are stored in the Pareto array. These solutions are sorted from the highest to the lowest revenue value. Because the problem is bi-objective, all solutions in the Pareto array are thereby automatically sorted according to cost as well. In each iteration the algorithm verifies whether the solution s'i is non-dominated, comparing it with the solutions stored in the Pareto array. If necessary, the solution s'i is added to the array, which is corrected and re-sorted using an Insertion Sort algorithm (Cormen et al. (2009)). If the solution is non-dominated then the probability of acceptance is Pac = 1; otherwise Pac = Π_k Δk, where Δk = 1 if fk(s'i) is greater than fk(si), and Δk = exp((fk(s'i) − fk(si))/Tki) otherwise. The Metropolis Algorithm decides whether the neighbor solution is accepted as the current solution using the probability of acceptance Pac. The algorithm developed for this work uses a geometric cooling schedule (Ti+1 = αTi) for both temperatures. When T1i and T2i are both smaller than a value close to zero the algorithm restarts: T1i and T2i are reset to their initial values and a new initial solution is randomly generated. Solutions are treated as matrices [n × H], where n represents the number of equipment units and H the time horizon. For each cell, the algorithm controls whether a task starts at that moment and how much it is going to process. A neighbor solution s' is then generated from the current solution by randomly selecting a small increase or decrease in the batch size. However, some special cases can occur: if a task ends up processing at a batch level below a pre-fixed operating value, that task is removed from the solution; in the neighbor generation phase, if the selected equipment unit is not in use, the algorithm tries to assign it a task that will process a small random quantity of product, but sufficiently above the pre-fixed operating level. The proposed algorithm is explored and compared with an exact approach, which uses the RTN adapted to the design of multipurpose facilities (Pinto et al. (2008a)) and the ε-constraint method to obtain the Pareto frontier.
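The acceptance rule and the maintenance of the Pareto array can be sketched as follows; the solution encoding, objective evaluation and neighborhood function are placeholders for the problem-specific components described above, and both objectives are taken as maximized (revenue and symmetric cost).

```python
import bisect
import math
import random

def dominates(a, b):
    """True if objective pair a dominates b (both objectives maximized)."""
    return a != b and a[0] >= b[0] and a[1] >= b[1]

def acceptance_probability(f_new, f_cur, T):
    """Pac = prod_k Delta_k, with Delta_k = 1 on improvement in objective k
    and Delta_k = exp((f_k(s') - f_k(s)) / T_k) otherwise."""
    p = 1.0
    for k in (0, 1):
        if f_new[k] <= f_cur[k]:
            p *= math.exp((f_new[k] - f_cur[k]) / T[k])
    return p

def update_pareto(archive, f_new):
    """Keep a sorted array of non-dominated solutions (ascending revenue here;
    read in reverse for the highest-to-lowest order used in the text)."""
    if any(dominates(a, f_new) for a in archive):
        return False
    archive[:] = [a for a in archive if not dominates(f_new, a)]
    bisect.insort(archive, f_new)
    return True

def sa_step(s_cur, evaluate, neighbor, T, archive, alpha=0.95):
    """One annealing iteration with geometric cooling T <- alpha * T."""
    s_new = neighbor(s_cur)
    f_new, f_cur = evaluate(s_new), evaluate(s_cur)
    accepted = update_pareto(archive, f_new)          # Pac = 1 if non-dominated
    if accepted or random.random() < acceptance_probability(f_new, f_cur, T):
        s_cur = s_new
    T[0] *= alpha
    T[1] *= alpha
    return s_cur
```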
4. Examples
All the examples presented were solved on an Intel Q9550 (2.83 GHz, 3.5 GB RAM) machine. The MILP approach used GAMS 23.3/CPLEX 11.
4.1. Example 1
An industrial process operating in a non-periodic mode over a time horizon of 12 hours produces three final products, S4, S5 and S6, with capacities between [0; 80] tonnes for S4 and S5 and [0; 60] for S6, using two raw materials, S1 and S2. For the production tasks there are three reactors, R1, R2 and R3. For all equipment units, capacities
868
N. Chibeles-Martins et al
are defined in a continuous range between an upper and a lower bound. Each vessel is suitable to store only one material. In Figure 1 the Pareto frontier is shown for the two approaches. The solutions from SA were divided into three different regions, each region presenting a different plant topology. Different solutions can still occur for one given topology, as a result of different process scheduling and equipment designs. For each solution the gap between the SA cost objective function and the exact value of the cost is less than 5%.
Figure 1 – Pareto Frontier obtained through MILP and SA – Example 1
4.2. Example 2
As in the previous example, the Pareto frontier is obtained taking into account the trade-off between the two objectives (revenue maximization and cost minimization). The design and scheduling of a multipurpose batch plant is obtained, with the production of [0; 170] tonnes of product S5, [0; 166] of S9 and S10, [0; 270] of S6 and [0; 143] of S11. Three raw materials, S1, S2 and S7, are used over the horizon of 24 h. Materials S5 and S6 are both intermediate and final products. There are six main reactors available (R1 to R6) and nine dedicated vessels. As before, Figure 2 shows the exact Pareto frontier and the SA approximation, where now four different regions are identified, each one with a different topology. Again, the cost gap obtained between the two approaches is less than 5%. Table 1 shows the number of solutions generated by both methodologies and the total CPU time used. In the first example the SA approach presents an improved performance of about 66.7% (3.6 s vs 10.9 s), while in the second it is 94% (234.8 s vs 3666.8 s).
5. Conclusions
In this paper the design of multipurpose batch plants is explored, considering different aspects: plant topology, equipment design, scheduling, storage policies and profiles. A bi-objective Simulated Annealing algorithm is proposed and the results are compared to the ε-constraint MILP approach, with the SA found to achieve a good approximation to the exact efficient Pareto frontier in a shorter time. The high characterization level necessary for the MILP formulations results in high computational burdens. Consequently, the detailed exploration of the Pareto frontier using exact approaches can become very inefficient. The proposed methodology, although an approximation, allows a perspective of all possible combinations of revenue versus cost, and of their corresponding plant design and scheduling, for each feasible plant topology, with substantially less computational time.
By comparison with the MILP approaches, the scope offered by meta-heuristics in dealing with high-dimensional and complex problems, where MILP approaches are inefficient due to problem intractability, is clearly demonstrated.
Figure 2 – Pareto Frontier obtained through MILP and SA – Example 2

Table 1 – Computational Results

             MILP                          SA
             N. of solutions    CPU (s)    N. of solutions    CPU (s)
Example 1    7                  10.9       509                3.6
Example 2    10                 3666.8     250                234.8
Acknowledgment: The authors gratefully acknowledge the financial support from the Portuguese Science Foundation (FCT) under project PTDC/SEN-ENR/102869/2008.
6. References
Barbosa-Povoa, A. P. (2007), "A critical review on the design and retrofit of batch plants", Computers & Chemical Engineering, 31(7): 833-855.
Cerny, V. (1985), "A thermodynamical approach to the travelling salesman problem: an efficient simulation algorithm", Journal of Optimization Theory and Applications, 45: 41-51.
Chibeles-Martins, N., T. Pinto-Varela, A. P. Barbosa-Póvoa and A. Q. Novais (2010), "A Meta-Heuristics Approach for the Design and Scheduling of Multipurpose Batch Plants", 20th European Symposium on Computer Aided Process Engineering – ESCAPE20, Elsevier.
Cormen, T. H., Leiserson, C. E., Rivest, R. L. and Stein, C. (2009), "Insertion Sort", Introduction to Algorithms, 3rd Ed., MIT Press.
Kirkpatrick, S., C. D. Gelatt Jr. and M. P. Vecchi (1983), "Optimization by Simulated Annealing", Science, 220 (4598): 671-680.
Pinto, T., A. Barbosa-Povoa and A. Q. Novais (2008a), "Design of multipurpose batch plants: A comparative analysis between the STN, m-STN, and RTN representations and formulations", Industrial & Engineering Chemistry Research, 47(16): 6025-6044.
Pinto, T. R., A. Barbosa-Povoa and A. Q. Novais (2008b), "Multi-objective Design of Multipurpose Batch Facilities Using Economic Assessments", 18th European Symposium on Computer Aided Process Engineering (ESCAPE-18), Elsevier.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Robust Logistics Network Modeling and Design against Uncertainties
Yoshiaki Shimizu a, Hideaki Fushimi a, Takeshi Wada b
a Toyohashi University of Technology, Toyohashi, Aichi 441-8580, Japan
b Osaka Prefectural College of Technology, Neyagawa, Osaka 572-8572, Japan
Abstract In the design of logistics networks, it is inevitable to consider various forward factors that will often change from their nominal values. From this point of view, this study formulates a global logistics network design problem under uncertainties and presents a solution method in terms of the stochastic programming formulation known as the recourse model. To reduce the effort to solve the resulting problem, a scenario-based approach is adopted. Furthermore, we present a method to support decision-making on robust design under a noisy environment by virtue of knowledge from multi-objective optimization. Numerical experiments revealed the validity of the proposed method through comparison with reference results.
Keywords: Logistics, Stochastic programming, Recourse model, Hybrid tabu search, Robust design.
1. Introduction
Due to recent global production and an unstable business climate, companies are required to respond more adaptively and quickly to customer demands and market changes. With a keen awareness of the effective unification of supply chains and business alliances, it is of special importance to design and manage logistics in association with various forward factors such as customer demand, raw material prices, currency exchange rates, and so on. Since it is extremely difficult to forecast these parameters correctly at the design phase, a countermeasure against the uncertainties is to consider them at the operation phase. From this point of view, in this study we present a solution method for the uncertain global logistics network design problem in terms of the stochastic programming formulation known as the recourse model. There, we adopt a scenario-based approach instead of considering infinite realizations of the uncertain parameters, to reduce the computational load without losing the essence of the problem. Furthermore, we present a method to support decision-making on robust design under a noisy environment by virtue of knowledge from multi-objective optimization. The aim of this study is to extend our generic idea (Shimizu & Wada, 2004; Shimizu, Matsuda & Wada, 2006) to cope with real-world applications that are inseparable from unstable and unpredictable circumstances.
2. Problem Formulation
Let us consider a hierarchical logistics network whose members are composed of plants, distribution centers (DCs), wholesalers, retailers and customers. The objective function of the problem is the total cost, composed of the fixed charge for opening DCs, purchase costs from wholesalers, production costs at plants, handling costs at DCs, and transportation costs. On the other hand, constraints are imposed on the demand satisfaction of every customer, the upper bound of the holding capacity at each DC, the input-output balance at each DC, and the available amount of product at each plant. In addition, binary decision variables are introduced for the selection of open/closed DCs, while non-negative real variables represent the transport amounts. In global and strategic planning of logistics, it is indispensable to take into account the following uncertain parameters involved in the foregoing deterministic model: the unit purchase price from wholesaler j, Bj (j ∈ J); the unit production cost at plant i, Ci (i ∈ I); the demand of customer m, Dm (m ∈ M); the transportation cost between members per unit amount, Erij (r, i, j); the fixed-charge cost for opening DC k, Fk (k ∈ K); the unit handling cost at DC k, Hk (k ∈ K); the unit selling price to retailer l, Rl (l ∈ L); and the unit selling price to customer m, Sm (m ∈ M). Presently, we note that the decisions on the DC locations correspond to hard decision variables, which are impossible to change once decided even if the situation changes, while the transport amounts correspond to soft variables that can be changed according to the changed situation. Moreover, we can separate the terms of the objective function into two parts, i.e., the fixed-charge cost, which is a function of the hard variables, and the set of operating costs, which is a function of the soft variables. The former is viewed as a goal for the decision made now, while the latter accounts for the consequences in the future. Then we can naturally adopt the stochastic programming model known as the recourse model. It is described generally as follows (Kall and Wallace, 1994):

(p.1)
Min_y  f1(y) + E_ξ[f2(x(ξ), ξ)]
subject to  gi(y) ≤ 0, (i = 1, ..., m)
            hj(y, x(ξ)) ≤ 0, (j = 1, ..., k)
            ξ ∈ Ξ
In this paradigm, y is a vector of decisions that we must take now, and x(ξ) is a vector of decisions chosen for each possible realization of the uncertain parameters ξ ∈ Ξ in the future. Moreover, E_ξ[·] denotes the expectation of the value in brackets. This recourse model is commonly extended into a two-stage model that addresses y in the upper level and x in the lower level. According to Santoso et al. (2005), a formulation for the logistics problem is given as follows:

(p.2)  Min_y  f(y) = F^T y + E_ξ[Q(y, ξ)]  subject to  y ∈ Y, ξ ∈ Ξ
where y denotes a binary variable vector that decides the DC status (open/closed). The first and second terms of the objective function are the fixed charge and the expected value of the operating cost, respectively. Note that the decision variables of the upper-level problem are only y.
The operating cost itself becomes the objective function at the lower level. In practice, Q(y, ξ) denotes the optimal value of the following problem:

$$Q(y, \xi) = \min_{x, z} \; q^T x + h^T z$$
$$\text{subject to} \quad Ax + Bz \ge D \;\; (1), \quad Gx \le Uy \;\; (2), \quad Tx = 0 \;\; (3), \quad Wx \le P \;\; (4), \quad x, z \ge 0$$

where x and z are flow amounts between the members in the deterministic model. The second term of the objective function, h^T z, corresponds to an adjustment cost incurred to meet demand satisfaction. This can be interpreted as the outsourcing cost for unsatisfied demand from the wholesaler and the sales cost for the retailer, in the cases of shortage and surplus production, respectively. On the other hand, the constraints stand for the customer demand requirement (Eq. (1)), the capacity constraint at the DCs (Eq. (2)), the material balance of the flow amounts (Eq. (3)) and the upper bound of production at the plants (Eq. (4)). In this two-stage stochastic model, the first stage decides y, i.e., the configuration, now, while the second stage decides x and z, i.e., the processing and transport amounts of product from plants to customers, in an optimal fashion based upon the configuration and the uncertain parameters realized in the meantime. The objective is to minimize the current investment cost F^T y plus the expected future operating cost E[Q(y, ξ)].

Fig. 1 Flow chart of the proposed algorithm (set plausible scenarios; an evolutionary search selects a DC location in the upper level; a graph algorithm solves the product delivery problem per scenario; the expected value is then computed and convergence checked, in contrast to the conventional two-step method)
3. Scenario-based Solution Method
Since the basic idea of HybTS (Shimizu & Wada, 2004) relies on a similar two-level method, as depicted in the left-hand half of Fig. 1, we can apply it straightforwardly to the present stochastic programming problem. However, it is practically difficult to evaluate the expected value associated with the uncertain parameters. To overcome this problem, sample average approximation has been reported (Santoso et al., 2005), in which the final decision is made by evaluating the optimality gap based on the average and variance over appropriately chosen samples. Mathematically, this approach presents a meaningful approximation that makes it possible to approach the optimum. However, the computational effort expands rapidly with the number of uncertain parameters, leaving serious problems for real-world applications. We can resolve this problem practically by noticing that independent deviations of every uncertain parameter seldom occur. Generally speaking, the parameters of logistics problems have mutual correlations, and their realizations can be categorized into a finite number of patterns given by the rate of variation from the nominal values. For conceptual design, therefore, it is sufficient as well as efficient to adopt the scenario-based approach mentioned below. For this purpose, we need to set up a typical set of realizations of the uncertain parameters and their occurrence probabilities. The computation of the expected value is then given by $E[Q(y, \xi)] = \sum_{n=1}^{N} p_n Q(y, \xi^n)$, where ξ^n is the parameter realization of the n-th scenario and p_n the occurrence probability of that scenario.
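For a fixed configuration y, the scenario expectation above is a finite sum of independent LP values. The sketch below evaluates it by solving one lower-level LP per scenario with SciPy; the data layout (cost vector, demand and capacity matrices inside each scenario record) is an illustrative assumption, and the adjustment variables z of the full model are omitted for brevity.

```python
# Sketch: scenario-based evaluation of E[Q(y, xi)] (illustrative data layout).
# Each scenario record holds a cost vector and the matrices of a toy lower-level
# LP: min q^T x  s.t.  A x >= D (demand),  G x <= U*y (open-DC capacity).
import numpy as np
from scipy.optimize import linprog

def recourse_cost(y, xi):
    """Optimal operating cost Q(y, xi) for one scenario realization xi."""
    A_ub = np.vstack([-xi["A"], xi["G"]])           # Ax >= D  ->  -Ax <= -D
    b_ub = np.concatenate([-xi["D"], xi["U"] * y])  # capacity vanishes if DC closed
    res = linprog(xi["q"], A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    return res.fun if res.success else np.inf      # infeasibility -> penalty

def expected_recourse(y, scenarios, probs):
    """E[Q(y, xi)] = sum_n p_n Q(y, xi^n) over the finite scenario set."""
    return sum(p * recourse_cost(y, xi) for p, xi in zip(probs, scenarios))
```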
Presently, ξ^n is given by ξ^n = (C_i^n, B_j^n, H_k^n, R_l^n, D_m^n, S_m^n, E_{rij}^n). After all, we have the algorithm outlined below (see also Fig. 1).
Step 1: Decide the initial location of the DCs following the algorithm developed specially for the present problem; then go to Step 3.
Step 2: Update the DC locations by the sophisticated tabu search developed previously.
Step 3: Solve the distribution problem for every scenario, one after another, at the lower level under the prescribed DC locations.
Step 4: From the above results, calculate the expected value $\sum_{n=1}^{N} p_n Q(y, \xi^n)$.
Step 5: If the stopping condition is satisfied, stop; otherwise, evaluate the tentative solution and go back to Step 2.
In Step 3, since the lower-level linear programming problem has a block diagonal structure per scenario, we can solve the problem independently for each block. Moreover, we can transform each problem into a minimum cost flow problem just by equalizing the total inlet and outlet flows, using $\max\{\sum_i P_i^{max}, \sum_m D_m\}$ instead of $\sum_m D_m$. Hence we can solve the resulting problem effectively by a similar approach (Shimizu and Wada, 2004).
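The transformation mentioned in Step 3, padding the total inlet flow to max{Σ_i P_i^max, Σ_m D_m} so that supply and demand balance, is the standard device for casting each per-scenario LP as a minimum cost flow problem. A sketch using networkx follows; the dummy sink that absorbs unused plant capacity at zero cost, and all node and edge names, are illustrative choices.

```python
# Sketch: one scenario's distribution problem as a balanced min-cost flow.
import networkx as nx

def balanced_min_cost_flow(plants, customers, arcs):
    """plants: {name: capacity}; customers: {name: demand};
    arcs: {(plant, customer): unit cost}.  Use integers for nx.min_cost_flow."""
    G = nx.DiGraph()
    for i, cap in plants.items():
        G.add_node(i, demand=-cap)             # negative demand = supply
    for m, d in customers.items():
        G.add_node(m, demand=d)
    surplus = sum(plants.values()) - sum(customers.values())
    if surplus > 0:                            # pad: dummy sink absorbs the excess
        G.add_node("dummy", demand=surplus)    # (supply < demand would be infeasible)
        for i in plants:
            G.add_edge(i, "dummy", weight=0)
    for (i, m), c in arcs.items():
        G.add_edge(i, m, weight=c)
    flow = nx.min_cost_flow(G)
    return flow, nx.cost_of_flow(G, flow)
```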
4. Robust Design
In the above formulation, the occurrence probabilities affect the result considerably. It is desirable, therefore, to derive a robust solution that is less sensitive to the realization of these values. To cope with this problem, we propose the plain multi-objective approach mentioned below. First, calculate the probability-weighted average of the expected operating cost over the scenarios as
$$Q_{Av}(y) = \sum_{n=1}^{N} p_n Q(y, \xi^n).$$
Then we obtain the variance as
$$v(y) = \sum_{n=1}^{N} p_n \big(Q(y, \xi^n) - Q_{Av}(y)\big)^2.$$
Eventually, we modify the objective function as the weighted sum of the original term and the new variance term,
$$\tilde{f}(y) = \alpha \big(F^T y + Q_{Av}(y)\big) + (1 - \alpha)\, v(y),$$
where α denotes the weighting coefficient. Using this modified objective function, we can derive robust solutions that depend on the value of the weight.
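In code, the modified objective is a few lines of NumPy: given the per-scenario recourse costs Q(y, ξ^n) and the probabilities p_n, the probability-weighted mean and variance combine into the α-weighted sum. This is a minimal sketch with illustrative names.

```python
# Sketch: f~(y) = alpha*(F^T y + Q_Av(y)) + (1 - alpha)*v(y)
import numpy as np

def robust_objective(fixed_charge, scenario_costs, probs, alpha):
    """scenario_costs[n] = Q(y, xi^n); probs[n] = p_n; alpha in [0, 1]."""
    Q = np.asarray(scenario_costs, dtype=float)
    p = np.asarray(probs, dtype=float)
    q_av = p @ Q                       # Q_Av(y) = sum_n p_n Q(y, xi^n)
    v = p @ (Q - q_av) ** 2            # v(y)    = sum_n p_n (Q - Q_Av)^2
    return alpha * (fixed_charge + q_av) + (1.0 - alpha) * v

# Sweeping alpha from 1 to 0 traces the expected-cost/variance trade-off
# that supports the decision maker's final choice of a robust design.
```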
5. Numerical Experiment and Discussions
To verify the effectiveness of the proposed approach, we provided a few benchmark problems whose sizes are shown in Table 1. The system parameters are generated randomly within upper and lower bounds. We set each scenario by prescribing the direction (plus or minus) and the amount of deviation from the nominal parameter values. We compared our results with a reference solution obtained deterministically using parameters calculated as the weighted average over the occurrence probabilities, i.e., $\xi^R = \sum_n p_n \xi^n$. As illustrated in Fig. 2 for N = 5, from
the achievement rates shown for (p.2), we can see the advantage of the recourse model over the reference approach. Furthermore, the benefit is revealed by comparing the results when each decision is applied to the case where the respective scenario (s1 to s5) occurs with certainty, i.e., with probability 1. There, the merit is calculated as the achievement rate relative to the optimal value under each scenario. Moreover, because it uses the predetermined probabilities, the reference solution may not be feasible for every scenario. In contrast, the proposed solution is always feasible, since it must be feasible over all the scenarios (refer to Fig. 1).
Table 1 Problem size of the benchmark problems
Data ID      |I|   |J|   |K|   |L|   |M|    N
D-S           3     1    30     1    50    3
D-M (a/b)     5     3    50     3   100    5
D-L          10     5    70     5   200    8

Fig. 2 Comparison of achievement rates per scenario for (p.2): rate of profit to the optimal value [-] against the realized scenario, s1 (30%), s2 (10%), s3 (20%), s4 (25%), s5 (15%), for the reference design and the proposed design (the number in parentheses denotes the occurrence probability of each scenario)

Figure 3 shows the trade-off relation between the expected profit and the robustness, i.e., the variance. As naturally supposed, this trade-off curve lies between the maximum- and minimum-profit curves, and the robust strategy is accomplished at the expense of expected profit. Observing the profile of such a curve is very helpful for the decision maker in settling on a final decision.
Fig. 3 Profiles of profits for the degree of robustness

6. Conclusions
Noticing the similarity in problem structure between the recourse model of stochastic programming and our hierarchical solution method, we have developed a new two-stage stochastic model for global logistics optimization under uncertainty. To reduce the solution effort, we adopted a scenario-based approach, and we proposed a method to support decision-making on robust design under a noisy environment. We validated the rationale of the proposed approach through numerical experiments. Future studies should be devoted to increasing the confidence of the evaluation toward supporting decisions at the tactical level.
References
Y. Shimizu & T. Wada, 2004, Proc. 8th PSE, 612-617, China
Y. Shimizu, S. Matsuda & T. Wada, 2006, Proc. 16th ESCAPE, 2051-2056, Germany
T. Santoso, S. Ahmed, M. Goetschalckx & A. Shapiro, 2005, Europ. J. of Oper. Res., 167, 96-115
P. Kall and S. W. Wallace, 1994, Stochastic Programming, John Wiley & Sons, Chichester
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Operating Procedure Synthesis Subject to Restricted State Transition Using Differential Evolution
Yoshiaki Shimizu
Toyohashi University of Technology, Toyohashi, Aichi 441-8580, Japan
Abstract
To obtain a near-optimal solution for operating procedure synthesis under safety constraints, this paper applies a meta-heuristic method termed differential evolution (DE). Compared with previous approaches, DE is shown to derive an operating sequence that adapts to various practical requirements and is easy to operate. Since the algorithm is straightforward and flexible enough to manage the various conditions appearing in real-world applications, the proposed approach holds great promise. In a case study, we applied the proposed method to the safe mixing operation of an acrylic acid synthesis process and validated its adaptability under various practically significant conditions.
Keywords: Operating procedure synthesis, Safety constraint, Differential evolution, Piece-wise constant, Mixing vessel process
1. Introduction
By virtue of the advanced progress of meta-heuristic optimization methods, this paper focuses on operating procedure synthesis (OPS) in chemical processes. OPS arises for plant start-up, normal operation and/or shutdown. These operations must swiftly move a system from an initial state to a desired final state without violating the prescribed safety constraints. Hence the problem can readily be formulated as a time-optimal control problem with state inequality constraints. When we try to cope with this problem by a variational approach such as the Maximum Principle, we encounter great difficulty associated with the state inequality constraints and the singularity of the control variables, besides the dimensionality problem. In recent studies, a new approach known as differential-algebraic equation (DAE) optimization has attracted much attention; however, it requires complicated procedures and vast calculations. With this understanding, the aim of this paper is to facilitate a wide application of simulation-based optimization methods such as DE to OPS problems. Finally, numerical experiments are provided to validate the manifold effects of the proposed approach.
2. Problem statement
Concerning the OPS, a two-layer method combining simulated annealing (SA) and the artificial-intelligence search method known as A* was proposed first and later modified in terms of tabu search (Asprey et al., 1999; Suzuki and Batres, 2009). For
deriving a primitive operating procedure in a dynamic environment, however, the ad hoc policy applied by the two-layer approach is unsuitable. Here, note that these control actions are operated discontinuously along the time or space coordinate. In particular, piece-wise constant control (PWCC) is a class that is easy to implement in real operation and suitable for evolutionary optimization. Along this line, a flexible and extensive method proposed recently (Shimizu, 2009) can readily be applied to the OPS problem mentioned below. To state the OPS problem generally, we first consider a system whose dynamics are described by
$$\dot{x} = f(x(t), u(t), t), \qquad x(t_0) = x_0 \qquad (1)$$
where x(t) denotes an n-dimensional state vector, u(t) an r-dimensional control vector, f(·) an n-dimensional vector function, and t a time or time analog. Assuming a scalar control variable u(t) for simplicity of description, the PWCC control policy is prescribed by a set of pairs (v_i, τ_i), where v_i is the value of u(t) on the i-th interval and τ_i is the i-th switching point. Then the continuous problem can be transformed into the following (2N-1)-dimensional optimization problem, where v and τ are the vector forms of the sequences v_i (i = 1, 2, ..., N) and τ_i (i = 1, 2, ..., N-1), respectively. Here, N denotes the number of divisions, which controls the fineness of the control policy. (p.1)
$$\max_{v, \tau} \; J = K(x(\tau_N), \tau_N)$$
$$\text{subject to} \quad \dot{x} = f(x(t), v_i), \;\; (\tau_{i-1} \le t < \tau_i) \qquad (2)$$
$$x(\tau_i^-) = x(\tau_i^+), \quad x(\tau_0) = x_0$$
$$\underline{u}_i \le v_i \le \overline{u}_i \quad (i = 1, \dots, N)$$
where $\underline{u}$ and $\overline{u}$ are the lower and upper bound vectors of u(t), respectively. As already mentioned, a typical OPS problem refers to the time-optimal control problem with state inequality constraints, which is described as follows. (p.2)
$$\min \; J = \tau_N$$
subject to Eq. (2) and
$$g(x(t), u(t)) \le 0, \quad \tau_0 \le t \le \tau_N \qquad (3)$$
$$x(\tau_N) = x_f \qquad (4)$$
Here, besides the saturation condition on the control, the state transition is restricted within a limited region for safety reasons. This requires augmenting the problem with additional algebraic inequality constraints such as Eq. (3). Moreover, note the additional boundary condition on the state variables at the final time, Eq. (4) (end condition).
3. Evolutionary approach
3.1. Simulation-based optimization
For solving (p.1) or (p.2), it is straightforward to apply DE (Storn and Price, 1997) in the following manner. The decision variables of DE are given by a sequence of control values v_i and/or their holding intervals z_i = (τ_i - τ_{i-1})/H, (i = 1, ..., N). Here,
H is a differential time unit. Since the interval length z_i must be an integer, it is rounded off to satisfy this condition. Once these values are prescribed, we can evaluate the objective function directly after solving the differential equations numerically, and this value serves simply as the fitness of DE. The fact that the final time is not given a priori causes considerable difficulty for conventional variational methods, while for the simulation-based approach it merely increases the number of decision variables by one. However, this simulation-based approach cannot always satisfy the end conditions and/or the state inequality constraints. To cope with this, we use the penalty function method, augmenting the boundary condition into the objective function as follows. (p.3)
$$\min \; t_f + P\, \| x(t_f) - x_f \| \,/\, \| x_f - x_0 \|$$
where ‖·‖ denotes a certain vector norm and P a penalty coefficient whose value should be increased adaptively. Regarding the state inequality constraints, it also suffices to degrade the fitness value Fit(v, z) according to the degree of violation of the state inequality conditions in the course of the transition, as
$$Fit'(v, z) = Fit(v, z) + P_g \sum_{i=1}^{N} \delta_i(g),$$
where P_g denotes the
penalty coefficient and δ_i takes the value one when any of the state inequality constraints is violated on the i-th interval, and zero otherwise.
3.2. Insensitive control against parameter deviations
Regarding safe operation, it is extremely important to account for uncertainty in the model. By applying a sensitivity approach (Shimizu, 1979), we can derive a control policy that is robust against the uncertain parameters. For this purpose, we derive sensitivity equations with respect to the system parameters and/or initial conditions and add them to the original system (the augmented system). Finally, after prescribing a certain norm of the sensitivity vector in an appropriate manner, the insensitivity condition is weighted into the original objective function, i.e., J' = w_1 J + w_2 S, where J and S represent the original objective and the sensitivity term, respectively, and w_i (i = 1, 2) are the weighting coefficients that adjust the trade-off between these two goals.
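As a concrete reading of Sections 3.1-3.2, the sketch below encodes a PWCC policy as DE decision variables (the levels v_i and the integer-rounded holding intervals z_i), simulates a stand-in first-order plant, and minimizes the penalized objective of (p.3) with SciPy's differential evolution (strategy rand/1/bin with F = 0.8 and CR = 0.5, matching the settings quoted in Section 5). The plant model, bounds and penalty value are placeholders, not the mixing-vessel model of Section 4.

```python
# Sketch: simulation-based OPS via differential evolution (toy 1-D plant).
import numpy as np
from scipy.optimize import differential_evolution

N, H = 3, 1.0                # number of control intervals, differential time unit
x0, xf = 0.0, 1.0            # initial and target states (placeholders)
P = 100.0                    # end-condition penalty coefficient (increase adaptively)

def simulate(v, z):
    """Piece-wise constant control: hold v[i] for round(z[i]) time units."""
    x, t = x0, 0.0
    for vi, zi in zip(v, z):
        for _ in range(max(1, int(round(zi)))):   # z must be integer (Sec. 3.1)
            x += H * (-x + vi)                    # Euler step of dx/dt = -x + u
            t += H
    return x, t

def objective(theta):
    v, z = theta[:N], theta[N:]
    x_end, t_f = simulate(v, z)
    # (p.3): min t_f + P * ||x(tf) - xf|| / ||xf - x0||
    return t_f + P * abs(x_end - xf) / abs(xf - x0)

bounds = [(0.0, 2.0)] * N + [(1, 20)] * N         # control levels, interval lengths
result = differential_evolution(objective, bounds, strategy="rand1bin",
                                popsize=7,        # (5~7) x decision variable count
                                mutation=0.8, recombination=0.5, seed=0)
```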
4. Application to acrylic acid synthesis process
The system under study is a part of a process for the synthesis of acrylic acid which involves mixing of propylene, steam, and air. The mixing vessel is a stirred tank with three inlet valves for propylene, air, and steam and one outlet valve for the mixture (Fig. 1). The outlet flow is regulated with a local controller. The reactor in that process operates with an air-propylene mixture avoiding the flammability region. In this system, the valve operations should not take the process through the flammable range.

Fig. 1 Mixing process under study (inlet flows F_in1-F_in3, vessel state P, V, T, outlet flow F_out with pressure controller PC)
Under mild assumptions, the completely stirred mixing vessel model mentioned above is given by taking a material balance for each component (1: steam, 2: air, 3: propylene):

$$\frac{dm_i}{dt} = F_{in,i} - \frac{m_i}{m_t} F_{out}, \qquad m_i(t_0) = m_{i0} \;\; (i = 1, 2, 3) \qquad (5)$$

where m_i and m_t are the mass of species i and the total mass, respectively, and F_in,i and F_out denote the inlet flow rate of species i and the outlet flow rate. Once F_in,i and F_out(t) are given, F_out(t+1) is determined by the controller as

$$F_{out}(t+1) = F_{out}(t) + \max\{K_G (P(t) - P_{set}),\, 0\} \qquad (6)$$

where P_set represents the set point of the pressure and K_G the gain of the controller. The model is solved using a Runge-Kutta method from time t_0 to t_f, iteratively over every interval. The flammability boundary imposes a state inequality constraint of the form of Eq. (3) in (p.2); it is approximated closely by a sixth-order polynomial fitted to experimental data reported in the literature (Zabetakis, 1965). Using this polynomial, the safety constraint is described as

$$m_1(t) \ge \sum_{i=1}^{6} a_i\, m_3(t)^i + b, \qquad t \in [t_0, t_f] \qquad (7)$$

where a_i (i = 1, ..., 6) and b are constants (a1 = 9.285E2, a2 = 2.6754E4, a3 = 3.9886E5, a4 = 3.259E6, a5 = 1.386E7, a6 = 2.4E7, b = 12.661).

5. Numerical Results and Discussions
Eventually, we need to solve the following problem. (p.4)

$$\min \; t_f \quad \text{subject to Eqs. (5)-(7)}$$

Using the parameters that appeared in the literature (Suzuki and Batres, 2009), we carried out the following numerical experiments. We set the parameters of DE/rand/1/bin as follows: population size N_p, (5~7)×(number of decision variables); stopping condition, (5~8)×N_p×2; differential coefficient, 0.8; crossover rate, 0.5.

Fig. 2 Profile of the result (N = 3, asynchronous mode): (a) trajectory of the state in the propylene-steam weight-fraction plane, with the flammable region marked; (b) profile of the valve operations, i.e., mass flow rates [kg/s] of v1, v2 and v3 against the time sequence (5 s per step)
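A minimal rendering of Eqs. (5)-(7): explicit Euler integration of the three component balances, the discrete controller update of Eq. (6), and the polynomial flammability check of Eq. (7) with the constants quoted above (taken with the signs as printed; some signs may have been lost in reproduction). The pressure model P proportional to the total mass, and all flow and gain values, are assumptions for illustration only.

```python
# Sketch: simulate Eqs. (5)-(6) and count violations of the safety constraint (7).
import numpy as np

a = [9.285e2, 2.6754e4, 3.9886e5, 3.259e6, 1.386e7, 2.4e7]   # signs as printed
b = 12.661

def flammable(m1, m3):
    """Constraint (7): the state is safe when m1 >= sum_i a_i*m3^i + b."""
    boundary = sum(ai * m3 ** (i + 1) for i, ai in enumerate(a)) + b
    return m1 < boundary

def simulate(F_in, K_G, P_set, m0, dt=1.0):
    """F_in: (steps, 3) inlet-flow profile (the PWCC policy); m0: initial masses."""
    m = np.array(m0, dtype=float)
    F_out, violations = 0.0, 0
    for F_k in F_in:
        m_t = m.sum()
        m += dt * (F_k - m / m_t * F_out)            # Eq. (5), Euler step
        P = m.sum()                                  # assumed: pressure ~ total mass
        F_out += max(K_G * (P - P_set), 0.0)         # Eq. (6), controller update
        if flammable(m[0], m[2]):                    # species 1: steam, 3: propylene
            violations += 1                          # feeds the penalty of Sec. 3.1
    return m, violations
```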
(1) Comparison of start-up operations
Here, we solved a variety of problems by combining the switching numbers and the operation modes, i.e., synchronous or asynchronous valve operations. Under the nominal boundary condition (NBC), we obtained the results shown in Table 1. Every result obtained there avoids the flammable region entirely. It also outperforms the previous results (Asprey et al., 1999; Suzuki and Batres, 2009) regarding the total operation time and the simplicity of operation, i.e., far fewer switching actions. Moreover, the normalized root mean square error (NRMSE) of the end condition is much smaller than in the previous studies. Considering the CPU time needed to derive the solution and the simplicity of the operation in practice, N = 3 or 4 is necessary and sufficient to operate this process properly. Figures 2(a) and (b) depict the state trajectory and the valve operations, respectively, for N = 3 in asynchronous mode.

Table 1 Results for the combined switching number and mode (NBC)
Switching #    Operation time [s]      CPU time [s]           NRMSE [-]
N              Synchro.  Asynchro.     Synchro.  Asynchro.    Synchro.   Asynchro.
2              275       230           34        263          <1.0E-7    <1.0E-7
3              235       220           73        653          4.37E-5    <1.0E-7
4              230       210           175       791          1.86E-5    <1.0E-7
(2) Robust operation
Since the initial conditions of a start-up operation often deviate from their nominal values, we sought an operation that is robust against these deviations by applying the approach stated in Section 3.2: (p.5)

$$\min \; J' = w_1 J + w_2 S = w_1 t_f + w_2 \Big\{ \sum_i y_i(t_f)^2 \Big\}^{1/2} \quad \text{subject to the augmented system}$$

The more heavily the sensitivity term is weighted relative to the original objective, the more closely the end conditions are met despite the deviation of the initial conditions. Thereby, we observe a trade-off between robustness and operation time.
6. Conclusion
To cope with the operating procedure synthesis (OPS) problem under safety conditions, we focused on the time-optimal control problem with piece-wise constant control and employed a meta-heuristic method, DE, for its optimization. We have shown the prospects and ability of the proposed approach for the OPS problem through various applications. Compared with previous methods, we have confirmed that it provides much better solutions for a range of problems. Future studies should analyze the existence of multiple local optima through comparison with other evolutionary methods.
References S. P. Asprey, R. Batres, T. Fuchino and Y. Naka, 1999, Ind. Eng. Chem. Res., 38, 6, 2364-2374 Y. Shimizu, 2009, J. Chem. Eng. Japan, 42, 4, 265-273 Y. Shimizu, 1979, Annual Report, Research Reactor Institute, Kyoto Univ., 12, 23-32 M. Suzuki and R. Batres, 2009, Proc. ICROS-SICE International Joint Conference, Japan, 5199-5202 R. Storn and K. Price, 1997, Journal of Global Optimization, 11, 341-359 M. G. Zabetakis, 1965, USBM, Bull. No.627
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
MILP Formulation for Resource-Constrained Project Scheduling Problems
Thomas S. Kyriakidis a, Georgios M. Kopanos b, Michael C. Georgiadis a,c
a Department of Engineering Informatics & Telecommunications, University of Western Macedonia, Karamanli and Lygeris Street, Kozani 50100, Greece
b Department of Chemical Engineering, Universitat Politecnica de Catalunya, Av. Diagonal 647, Barcelona 08028, Spain
c Department of Chemical Engineering, Aristotle University of Thessaloniki, Thessaloniki 54124, Greece
Abstract
This work presents a new mixed-integer linear programming (MILP) model for the deterministic single-mode resource-constrained project scheduling problem (RCPSP) with renewable and non-renewable resources. The overall modeling approach relies on the Resource Task Network (RTN) process representation, a network representation concept used in process scheduling problems, based on continuous-time models. First, we propose the new RTN-based network representation method and a continuous-time MILP formulation, and then we efficiently transform it into a set of constraints describing precedence relations, different types of resources and multiple objectives. Finally, the applicability and efficiency of the proposed formulation are illustrated by considering three example projects with 12, 16 and 20 activities under the most commonly addressed objective, makespan minimization.
Keywords: Resource-Constrained Project Scheduling, Mixed-Integer Linear Programming, Resource-Task Network, Project Management
1. Introduction
Project scheduling involves the construction of a precedence- and resource-feasible time schedule which identifies the planned starting and completion times of the individual activities. Minimizing the project makespan is undoubtedly the most popular time-based objective function discussed in the project scheduling literature (Hartmann and Briskorn, 2010). The proposed formulation supports two resource types, renewable and non-renewable. The former are periodically renewed, but their quantity is limited over each time period and may differ from one period to the next, whereas the availability constraints for the latter concern only the total consumption over the entire project duration. Two representations have been commonly used to capture project networks: the
Activity-on-Arc (AoA), which is event-based, and the Activity-on-Node (AoN), which is activity-based. Both diagramming methods depict the precedence relations between activities, which are only part of the project network. The network representation method proposed in Section 2 of this paper incorporates resource demand as well as more complex logical relations between activities. The MILP formulation and the solution method proposed here constitute a general mathematical model for the optimal deterministic scheduling of single-mode resource-constrained project scheduling problems (RCPSP). The formulation is based on the Resource Task Network (RTN) with the continuous-time representation of Schilling and Pantelides (1996).
2. Network representation for the RCPSP
The RTN process representation, although simple, can describe a very wide variety of process scheduling problems and has been used extensively in the process scheduling literature. Its bipartite directed graph for general processes consists of resources, represented as rectangles, and tasks/activities, represented as circles. A task/activity consumes and/or produces a set of resources that can be anything from raw materials, intermediate and final products to manpower and processing equipment. In order to use the RTN representation effectively, we introduce a new resource type, called logical. These logical resources are treated like physical ones, with a few modifications to some of their coefficients: the activity consumption and production coefficients (μ⁻_ri and μ⁺_ri) and the minimum/maximum excess levels (R_r^min / R_r^max). Besides its physical resource requirements, each activity needs to be assigned one unit of a logical resource. This quantity is produced by the activity's immediate predecessor(s) in equal shares, which is handled by properly adjusting the production coefficients (μ⁺_ri) of the preceding activities and the consumption coefficient (μ⁻_ri) of the activity to which the logical resource is assigned, as shown in the illustrative example of Fig. 1: Activities 1-4 each produce 1/4 unit of the logical resource LR (μ⁺_{LR,Activity1} = ... = μ⁺_{LR,Activity4} = 1/4), which Activity 5 consumes in full (μ⁻_{LR,Activity5} = 1).

Fig. 1. Illustrative example of logical resource usage.
The project end is modeled as a logical resource with no output activities. This resource can only be produced, not consumed. Completion of the project calls for the production of this logical resource in the required quantity.
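Building the logical-resource coefficients from a precedence list can be automated directly from the rule of Fig. 1: every immediate predecessor of activity j produces an equal share 1/|pred(j)| of j's logical resource, which j consumes in full, while terminal activities produce the project-end resource. The function and key names in this sketch are illustrative.

```python
# Sketch: build RTN logical-resource coefficients from precedence relations.
def logical_coefficients(predecessors, activities):
    """predecessors: {activity: [immediate predecessors]}, as in Fig. 1.
    Returns production/consumption coefficients keyed by (resource, activity)."""
    mu_prod, mu_cons = {}, {}
    for j, preds in predecessors.items():
        lr = f"LR_{j}"
        for p in preds:
            mu_prod[(lr, p)] = 1.0 / len(preds)   # equal shares from predecessors
        mu_cons[(lr, j)] = 1.0                    # activity j needs one full unit
    has_successor = {p for preds in predecessors.values() for p in preds}
    for j in activities:
        if j not in has_successor:                # terminal activities produce
            mu_prod[("LR_end", j)] = 1.0          # the project-end resource
    return mu_prod, mu_cons

# Fig. 1 example: activities 1-4 each contribute 1/4 unit of LR_5 to activity 5.
mu_prod, mu_cons = logical_coefficients({5: [1, 2, 3, 4]}, [1, 2, 3, 4, 5])
```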
3. MILP Formulation for the RCPSP The proposed mathematical model is based on the time-slot synchronized formulation
introduced by Schilling and Pantelides (1996), where the variable time horizon H is divided into T slots of variable duration. Although the RTN representation is simple and elegant for process scheduling, it can be simplified even further for project scheduling. We set the number of time slots T equal to the number of activities, to cover the worst case in which only one activity is executed in each slot. We assume that no more than one instance of an activity can be executed at any time point. This assumption converts the variable N_it from integer to binary, since the only possible values are 0 and 1, depending on whether none or one instance of the activity is being executed. Additionally, resource consumption and production do not depend on the size of the activity, so the corresponding size-dependent coefficients are zero, ν_ri = ν̄_ri = 0. Activities can be executed only once over the total duration of the project. If an activity is to be executed again, we represent it as a new one, as it is bound to have different precedence constraints; otherwise it would have been included in the first one, requiring double input and producing double output resources.
3.1. Decision Variables
To model the problem, we use the following variables:
H denotes the time horizon,
τ_t is the duration of time slot t,
R_rt expresses the excess quantity of resource r at time point t,
y_it = 1 if activity i starts at t, or 0 otherwise, and
ȳ_it = 1 if activity i is active over both slots t and t-1, or 0 otherwise.
The binary variable y_it replaces the number-of-instances variable N_it. Additionally, we can simplify the model by setting ȳ_{i,1} = 0, since no task can be active over t = 0.
3.2. Constraints
Schilling and Pantelides (1996) distinguish four types of constraints: (i) timing, (ii) excess resource balances, (iii) excess resource capacity, and (iv) slot constraints. In this work, we have modified these constraints for project scheduling problems.
3.2.1. Timing Constraints
The total duration of all time slots must be equal to the time horizon:
$$\sum_{t=1}^{T} \tau_t = H \qquad (1)$$
An activity i starting at time t may span a number of consecutive slots. The sum of the durations of these slots must be equal to the duration θ_i of the activity:
$$\sum_{t=1}^{T} (y_{it} + \bar{y}_{it})\, \tau_t = \theta_i \sum_{t=1}^{T} y_{it}, \quad \forall i \qquad (2)$$
The nonlinearities involved in constraint (2) can be removed using standard techniques. We define new continuous variables $\tau lin_{it} \equiv (y_{it} + \bar{y}_{it})\, \tau_t$ through:
$$\tau^{min} (y_{it} + \bar{y}_{it}) \le \tau lin_{it} \le \min\big[\tau^{max}, \theta_i\big] (y_{it} + \bar{y}_{it}), \quad \forall i, t \qquad (3)$$
$$\tau_t - \tau^{max} (1 - y_{it} - \bar{y}_{it}) \le \tau lin_{it} \le \tau_t, \quad \forall i, t \qquad (4)$$
where τ^min and τ^max are the minimum and maximum slot durations, respectively. The maximum slot duration can be set equal to the greatest activity duration, max_i θ_i. After applying this linearization technique, constraint (2) becomes linear:
$$\sum_{t=1}^{T} \tau lin_{it} = \theta_i \sum_{t=1}^{T} y_{it}, \quad \forall i \qquad (5)$$
3.2.2. Slot Constraints
Variable ȳ_{i,t+1} can take a value of 1 only if activity i started at an earlier time slot (< t+1) and is still active over t+1. For an activity to start at an earlier time, one of the variables y_it or ȳ_it must take a value of 1. These conditions are combined into:
$$y_{it} + \bar{y}_{it} \ge \bar{y}_{i,t+1}, \quad \forall i, \; t \in [1, T-1] \qquad (6)$$
As mentioned above, an activity can be executed only once over the time horizon H:
$$\sum_{t=1}^{T} y_{it} = 1, \quad \forall i \qquad (7)$$
3.2.3. Excess Resource Balances
The amount of a resource r at the start of slot t is equal to the excess amount R_{r,t-1} over the previous time slot, adjusted by any amounts consumed by starting activities or produced by ending activities interacting with r:
$$R_{r,t} = R_{r,t-1} + \sum_{i \in I_r} \big[\, \mu^-_{ri}\, y_{it} + \mu^+_{ri}\, (y_{i,t-1} + \bar{y}_{i,t-1} - \bar{y}_{it}) \,\big], \quad \forall r, \; t \in [1, T-1] \qquad (8)$$
The first term, μ⁻_ri y_it, in the summation represents the amount of resource r consumed by starting activities, while the second corresponds to the amount produced by ending ones. For t = 0, R_{r,0} is the quantity of resource r initially available, R_r^initial. To deal with renewable resources, we set the production coefficient of all activities requiring such a resource equal to the consumption coefficient, thus expressing that the amount of a renewable resource consumed (used) by an activity is released upon its completion.
3.2.4. Excess Resource Capacity Constraints
The excess amount of each resource r must lie within given lower and upper bounds, R_r^min and R_r^max. Moreover, the bounds on the target resource inventories may be different:
$$R_r^{min} \le R_{rt} \le R_r^{max}, \quad \forall r, t \qquad (9)$$
$$R_{r,Final}^{min} \le R_{r,T+1} \le R_{r,Final}^{max}, \quad \forall r \qquad (10)$$
3.3. Objective Function
The objective function of our formulation minimizes the makespan:
$$\text{Minimize} \;\; H \qquad (11)$$
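Constraints (1)-(7) and objective (11) translate almost line by line into an algebraic modelling layer. The sketch below, written with the open-source PuLP library, covers the timing, linearization and slot constraints and omits the resource balances (8)-(10) for brevity; the duration data are placeholders.

```python
# Sketch of the slot-based MILP core in PuLP (resource constraints omitted).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

theta = {1: 3, 2: 2, 3: 4}                    # activity durations (placeholders)
acts, T = list(theta), len(theta)             # one slot per activity (Sec. 3)
slots = range(1, T + 1)
t_min, t_max = 0, max(theta.values())         # slot duration bounds

prob = LpProblem("rcpsp_slots", LpMinimize)
H = LpVariable("H", lowBound=0)
tau = LpVariable.dicts("tau", slots, lowBound=t_min, upBound=t_max)
y = LpVariable.dicts("y", (acts, slots), cat="Binary")     # start indicators
yb = LpVariable.dicts("yb", (acts, slots), cat="Binary")   # active-over indicators
tl = LpVariable.dicts("tlin", (acts, slots), lowBound=0)   # tau_lin_{it}

prob += H                                                  # objective (11)
prob += lpSum(tau[t] for t in slots) == H                  # (1)
for i in acts:
    prob += yb[i][1] == 0                                  # no activity before t=1
    prob += lpSum(y[i][t] for t in slots) == 1             # (7)
    prob += (lpSum(tl[i][t] for t in slots)
             == theta[i] * lpSum(y[i][t] for t in slots))  # (5)
    for t in slots:
        s = y[i][t] + yb[i][t]
        prob += tl[i][t] >= t_min * s                      # (3), lower bound
        prob += tl[i][t] <= min(t_max, theta[i]) * s       # (3), upper bound
        prob += tl[i][t] >= tau[t] - t_max * (1 - s)       # (4), lower bound
        prob += tl[i][t] <= tau[t]                         # (4), upper bound
        if t < T:
            prob += s >= yb[i][t + 1]                      # (6)
prob.solve()
```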
4. Computational Results
The proposed MILP formulation was used to solve three single-mode RCPSP instances, with 12, 16 and 20 activities utilizing 4 renewable resources, on an Intel Core 2 Quad Q8300 @ 2.5 GHz with 4 GB RAM.

Table 1. Computational Results
Number of    CPU s   Number of   Binary      Continuous   Nodes   Makespan
Activities           Equations   Variables   Variables
12           2.79    847         276         364          440     51
16           20.34   1447        496         612          1983    73
20           49.54   2207        780         924          3427    92
As Table 1 shows, the MILP formulation managed to find the optimal solution for each instance. As the number of activities increases, so does the CPU time.
5. Conclusions
An efficient MILP model for the single-mode resource-constrained project scheduling problem (RCPSP) has been proposed and used to solve project scheduling problems with 12, 16 and 20 activities requiring 4 renewable resources. The computational results and the similarities between process and project scheduling problems (such as initial and target inventories, required resource types and precedence relations) suggest that exchanging solution techniques between the two research fields is useful and promising. Similarities between these two research areas are also explored by Colvin and Maravelias (2010) for the stochastic scheduling of clinical trials in the pharmaceutical research and development pipeline.
References
S. Hartmann, D. Briskorn, 2010, A survey of variants and extensions of the resource-constrained project scheduling problem, European Journal of Operational Research, 207, 1-14
M. Colvin, C.T. Maravelias, 2010, Modeling methods and a branch and cut algorithm for pharmaceutical clinical trial planning using stochastic programming, European Journal of Operational Research, 203, 205-215
G. Schilling, C. Pantelides, 1996, A simple continuous-time process scheduling formulation and a novel solution algorithm, Computers and Chemical Engineering, 20, S1221-S1226
J.C. Zapata, B.M. Hodge, G.V. Reklaitis, 2008, The Multimode Resource Constrained Multiproject Scheduling Problem: Alternative Formulations, AIChE Journal, 54, 8, 2101-2119
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Self-learning of fault diagnosis identification
José Luis de la Mata a, Manuel Rodríguez a
a Technical University of Madrid, José Gutiérrez Abascal 2, Madrid 28006, Spain.
Abstract
A good and early fault detection and isolation system, along with efficient alarm management and fine sensor validation systems, is very important in today's complex process plants, especially for safety enhancement and cost reduction. This paper presents a methodology for fault characterization. It is a self-learning approach developed in two phases: an initial learning phase, in which simulation of the process units, without and with different faults, lets the system detect, in an automated way, the key variables that characterize each fault; and a second, on-line phase, in which these key variables are monitored in order to diagnose possible faults. Using this scheme, faults are diagnosed and isolated at an early stage, before the fault turns into a failure.
Keywords: sensor validation, distributed simulation, modeling.
1. Introduction
The increasing complexity and reliability requirements of current industrial processes call for more demanding methodologies (theoretical and practical) for supervision, control and abnormal situation management. Good and early fault detection, an efficient alarm management procedure and a fine sensor validation process help to avoid faults turning into failures, and thus plant shutdowns and potential accidents. In this paper an integrated approach to the fault diagnosis and identification problem is presented. Most learning-based methods that address the fault diagnosis problem are based on statistical indicators [1,2,3], some of them incorporating artificial intelligence procedures [4,5,6]. Besides, many of these methods fail during plant transitions to another stationary state, although some work has addressed transitions [7,8]. The methodology presented in this paper is based on time simulation of the process in order to identify the key variables (which ones, and their evolution) that characterize a fault. The purpose is to have a pattern of variables associated with each fault. The rest of the paper is organized as follows. The next section introduces the learning methodology for fault characterization, the following section applies the presented procedure to an industrial example, and the last section draws some conclusions, including the limits of the proposed approach and how they can be overcome.
2. Methodology
The main idea of this methodology is the comparison between two time simulations of the same process: one faulty and the other working properly. Through this comparison, the key variables of the fault are identified and a pattern that characterizes the fault is created. The methodology can be applied to different situations such as normal working conditions (plant steady state), start-up, shutdown or set-point changes. This is so
because once a model of the process has been developed, its simulation under different conditions is an easy task to carry out.
2.1. Overall learning process
The first step of the method is to simulate the reference system. This means simulating the system as we want it to work, i.e., without any failure or fault. Then the model of the system is modified by introducing a fault (of interest) and the simulation is run again. The time evolution of the system variables in both cases is stored. With these data we obtain the residuals (described below) and certain parameters that characterize the behavior of the faulty system. These parameters are linked to the fault, and so the fault is learned. The process is repeated until there are no faults left to simulate. Note that the learning process is carried out off-line. Then, when the process is operating, residuals are generated on-line in order to obtain their parameters and find out whether they correspond to a fault already learned. If a new fault happens, it can be added to the detection system memory; in other words, it can be learned from the newly generated residuals.
2.2. Residuals generation
Let X_R, X_F ∈ R^{m×n} be the Reference Values Matrix and the Faulty Values Matrix, respectively, where m ∈ N is the number of samples considered and n ∈ N the number of process variables that are tracked. The columns of these matrices are the time evolutions of the process variables in a reference state for the first matrix and in a faulty state for the second. It should be noticed that for each fault the matrix of faulty values X_F changes, but X_R remains the same; that is, the matrix X_R is common to all of the faults. Let R ∈ R^{m×n} be the Residuals Matrix. It represents the difference between the reference values and the faulty values of the tracked process variables and is obtained as
$$R = X_R - X_F \qquad (1)$$
This difference provides the real deviation of each variable from its reference value at any time. However, in order to compare the residuals of different variables, it is better to use a dimensionless quantity. Let D ∈ R^{m×n} be the Dimensionless Residuals Matrix, given by
$$d_{ij} = r_{ij} \,/\, x^R_{ij}, \qquad i = 1, 2, \dots, m; \;\; j = 1, 2, \dots, n \qquad (2)$$
The elements of this matrix are the quotients of each residual and its reference value at that time. As we are taking the quotient of two physical magnitudes with the same units (the residual and the reference value), the value obtained is dimensionless, as desired.
2.3. Parameters to characterize a fault
The most relevant parameters of every residual are listed below:
• Higher maximum. It is the residual with the highest value at any time. Note that all of the residuals, according to Eq. (2), are taken as positive (d_ij > 0).
• Lower maximum time. It is the residual that reaches its maximum first.
• Higher initial slope. It is the residual that varies the most at the very beginning.
• Higher steady-state value. It is the residual with the highest value when the system reaches its steady state or when the simulation ends.
It is important that the residuals chosen have a fast response, so residuals with long time delays or slow responses are not considered to form the pattern. Every fault is characterized using these four parameters (each of which is related to a process variable), so when they are identified in an ongoing process, the fault can be detected and isolated before it turns into a failure.
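Sections 2.2 and 2.3 map directly onto a short NumPy routine: Eqs. (1) and (2) are elementwise matrix operations, and the four characterizing parameters are column statistics of the dimensionless residual matrix D. In the sketch below, the epsilon guard for near-zero reference values, the absolute values (which keep d_ij > 0 as stated in Section 2.3), and the length of the steady-state window are our assumptions.

```python
# Sketch: residual generation (Eqs. 1-2) and fault-pattern extraction (Sec. 2.3).
import numpy as np

def residual_matrices(X_R, X_F, eps=1e-12):
    """X_R, X_F: m samples x n tracked variables (reference / faulty runs)."""
    R = X_R - X_F                                    # Eq. (1)
    D = np.abs(R) / np.maximum(np.abs(X_R), eps)     # Eq. (2), kept positive
    return R, D

def fault_pattern(D, dt=1.0, tail=50):
    """Indices of the variables realizing the four characterizing parameters."""
    return {
        "higher_maximum":       int(np.argmax(D.max(axis=0))),     # largest peak
        "lower_maximum_time":   int(np.argmin(D.argmax(axis=0))),  # earliest peak
        "higher_initial_slope": int(np.argmax(np.abs(D[1] - D[0]) / dt)),
        "higher_steady_state":  int(np.argmax(D[-tail:].mean(axis=0))),
    }

# On-line detection then amounts to recomputing the pattern from live data and
# matching it against the dictionary of patterns learned off-line.
```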
3. Industrial example
3.1. Process description
The process consists of a jacketed reactor, a cooler and a storage tank. An exothermic isomerization reaction in the liquid phase is carried out in the reactor. Cooling water is circulated through the jacket to control the reactor temperature. The cooler is a shell-and-tube heat exchanger with one shell pass and two tube passes. There are also five control loops, controlling the reactor level and temperature, the cooler outlet temperature, the tank level and the recycle flow. The P&ID of the process is shown in Fig. 1.
Fig. 1: Process P&I diagram (feed to reactor R-01 with cooling-water jacket and loops LIC-01/TIC-01; pump P-01; cooler C-01 with loop TIC-02; storage tank T-01 with loop LIC-02; recycle via pump P-02 with loop FIC-01; valves V-01 to V-05).
3.2. Model description
The reactor and the tank are modeled using mass and energy balances; in both cases perfect mixing is assumed. The cooler has been modeled using a discretization of its internal flow dynamics. The five control loops are PI controllers tuned with the Ziegler-Nichols method. The overall mathematical system consists of 41 ODEs, 54 constitutive equations and 10 flow constraints. There are 43 variables of interest in the process, concerning flows, molar fractions and temperatures.
3.3. Fault analysis
3.3.1. Fouling in the reactor jacket
This is a particularly common and severe problem, because the heat transfer decreases and the reaction can run away. The time evolution of the residuals is depicted in Fig. 2; notice that only the most relevant residuals are shown. There are two plots: the first (left) shows the residuals until they reach steady state, and the second (right) shows the first three minutes of the simulation. The parameters of this fault are:
• Higher maximum: cooling water flow.
• Lower maximum time: B molar fraction in the reactor.
• Higher initial slope: jacket temperature.
• Higher steady-state value: cooling water flow.
Note that in the steady state, variables such as the reactor temperature and the molar fractions in the reactor return to zero residual, because the control system keeps the reactor variables under control. However, the flow to the jacket and the jacket temperature keep a non-zero residual in steady state. This information is consistent with the fault described.
Fig. 2: Fouling in the reactor jacket residuals. (Two panels: full simulation, and first 3 minutes. Residuals shown: x_A^R, x_B^R, the A and B molar fractions in the reactor; T_R, reactor temperature; T_J, jacket temperature; h_R, reactor level; F_out, reactor outlet flow; F_CW, cooling water flow to the jacket.)
3 MINUTES FIRST 30
1.4
xR A
xR B
xR A
xR B
1.2
TR
A molar fraction in the reactor
hR
:
B molar fraction in the reactor
Fout
TR TJ
:
Reactor temperature
:
Jacket temperature
hR
:
Reactor level
Fout
: :
Reactor outlet flow Cooling water flow to the jacket
FCW
0.6
TJ
1
Residuals
:
Residuals
xA R xB R
Fcw
TR
TJ
0.8
hR Fout
0.8
FCW
0.6
0.4 0.4 0.2 0.2
0
0
5 500 Time (min)
10 1000
0
0
1 2 10 20 Time (min)
330
Fig. 3. Residuals of a fault in the reactor level sensor.
3.3.2. Fault in the reactor level sensor
The faulty sensor reads a level lower than the real one. The time evolution of the residuals is shown in Fig. 3, and the fault parameters are:
• Higher maximum: reactor outlet flow.
• Lower maximum time: reactor outlet flow.
• Higher initial slope: reactor outlet flow.
• Higher steady-state value: cooling water flow.
The residuals shown in Fig. 3 are consistent with the fault described. At first, the outlet flow varies strongly to compensate the deviation from the level set point (control system action); then the cooling water flow also increases to compensate the increased volume in the reactor.
3.3.3. On-line detection
Comparing Figures 2 and 3, it is obvious that the residuals of the two faults are different, and the parameters used to characterize them also differ. So, if during normal operation of the plant we detect the pattern of any of these faults, we can identify it and take corrective actions.
4. Conclusions and further work
In this paper a new methodology for fault detection has been presented. The methodology is based on characterizing each fault type by associating it with a pattern of variables and their time evolution. It is a self-learning approach, as it can be programmed to generate every potential fault in the system, compute the residuals automatically and, following the given criteria, establish the corresponding pattern for that fault. When used on-line, it allows early fault detection while the learning process continues running in the background; this will generate new patterns as well as update existing (improperly working) ones. The methodology has been applied to an industrial case. Ongoing work applies the methodology to an industrial process and integrates it with a sensor validation procedure that will permit discriminating bad plant data from true faults. The methodology will also be integrated with the previously developed functional modeling methodology (D-higraphs), which provides an easy way to follow how the status of the overall plant is changing and what the consequences of the fault will be (if not fixed in time).
References
[1] A. Kulkarni, V.K. Jayaraman, B.D. Kulkarni (2005). Knowledge incorporated support vector machines to detect faults in Tennessee Eastman Process. Computers and Chemical Engineering, 29, 2128-2133.
[2] X. Bin He, W. Wang, Y. Pu Yang, Y. Hong Yang (2009). Variable-weighted Fisher discriminant analysis for process fault diagnosis. Journal of Process Control.
[3] S. Yoon, J.F. MacGregor (2001). Fault diagnosis with multivariate statistical models part I: using steady state fault signatures. Journal of Process Control, 11, 387-400.
[4] L. Rokach (2007). Genetic algorithm-based feature set partitioning for classification problems.
[5] N. Vora, S.S. Tambe, B.D. Kulkarni (1997). Counterpropagation neural networks for fault detection and diagnosis. Computers and Chemical Engineering, 21, 2, 177-185.
[6] M. Yong, X. Zheng, Y. Zheng, S. Youxian, W. Zheng (2007). Fault diagnosis based on Fuzzy Support Vector Machine with parameter tuning and feature selection. Chinese Journal of Chemical Engineering, 15, 2, 233-239.
[7] A. Sundarraman, R. Srinivasan (2003). Monitoring transitions in chemical plants using enhanced trend analysis. Computers and Chemical Engineering, 27, 1455-1472.
[8] R. Srinivasan, P. Viswanathan, H. Vedam, A. Nochur (2005). A framework for managing transitions in chemical plants. Computers and Chemical Engineering, 29, 305-322.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Complex Network Optimization in FMCG
Ali Mehdizadeh a, Nilay Shah a, Peter M.M. Bongers b, Cristhian Almeida-Rivera b
a Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College of Science, Technology and Medicine, London, United Kingdom
b Unilever Research Vlaardingen, Oliver van Noortlaan 120, PO Box 114, 3130 AC Vlaardingen, The Netherlands
Abstract
Quantitative supply chain modelling has contributed substantially in fields such as automotive, logistics and hardware. However, these modelling and optimization methods have not been employed widely in the food industry, despite all the potential benefits they may bring to this sector. The objective of this work is to develop a supply chain model for the food industry based on the relationships between raw material quality, processing conditions, final product quality and customer satisfaction. Based on the developed model, we have determined the key factors in the whole chain that are most likely to affect customer satisfaction and, consequently, overall demand, and we have determined the optimum conditions to maximize overall profit. Two main quality indicators are considered in the model and optimized: microbiological content and nutrient content. At this stage, the model takes two different processing types into account (air-drying and freeze-drying routes), and the interrelationship between quality and processing conditions has been developed. Thus, this model enables finding the optimum processing conditions that obtain maximum quality at minimum cost.
Keywords: food processing, quality optimization, supply chain model
1. Introduction
In the FMCG (Fast Moving Consumer Goods) sector, the product improvement process is based on fundamental research and development, which is the key driver of technology development. Technology development uses product innovation to improve existing products and services or create new ones, while process innovation aims to reduce the cost of production. Supply chain management is the research area that develops techniques to find the bottlenecks and optimize all the steps from supplier to distribution. The supply chain is a series of internal and external logistic activities between companies that increases efficiency by sharing information between entities, so as to achieve customer satisfaction at the lowest possible cost [1]. The model covered in this contribution considers two processing types: air-drying and freeze-drying routes. At each state of each processing route, the quality of certain product indicators must be controlled and measured. Two main indicators are considered in this model: microbiological content and nutrient content. Microbiological content captures the time-dependent effects on the quality of the product, while nutrient content captures the static effects. As a result, the two together produce a good
indicator of product quality, and by optimizing both in the model we can optimize a representation of product quality. Quality control in the food supply chain has not been widely studied: with only a limited number of publications, in the form of scientific and industrial working papers, there are no structural models for food product and process quality modelling [2]. However, there are some case studies on improving the quality of tomato ketchup [3-5]. In this contribution we have developed a methodology to increase the quality of the products and to relate quality to the supply chain optimization model in a quantitative way so as to minimize cost.
2. Mathematical Model
There are three systems to be mathematically modelled:
Manufacturing system: models the production, reengineering and quality of the product.
Logistics system: models distribution and transportation of the product.
Inventory system: hedges against the uncertainties of supply and demand.
The current model focuses exclusively on the processing part, i.e., the manufacturing and inventory systems; its scope will be extended to sourcing and logistics in the future. These systems are fed with the initial data and provide feedback to the planning and scheduling echelons, which can later be used as an input to increase the accuracy of the results.
2.1. Modelling quality propagation (linear)
The first step is to model and calculate the quality of the product at each state. The following linear model has been introduced to measure the quality, which is affected only by the process:
$$q_{out} = A_{process} + B_{process}\, q_{in} + C_{process}^{T}\, u_{process}$$
where q_out represents the quality of the product, for the specific indicator, after each task. The q_out at each state becomes the input quality of the next state. In this model different processes have different effects. Figure 1 shows the effect of the process on the quality: during a task the product goes through the process described by the constant parameters A, B and C, and the quality of the product can also be affected by the process degrees of freedom (e.g. temperature, processing time), u.
Figure 1. Modelling quality and process
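Propagating an indicator through a processing route is then a simple fold of the linear model over the sequence of tasks. In the sketch below, u is treated as a single scalar degree of freedom per task, and all (A, B, C, u) values are placeholders chosen only so that quality decreases through the route, as the text describes.

```python
# Sketch: propagate an indicator's quality through a route of tasks
# with q_out = A + B*q_in + C*u per task (placeholder parameter values).
def propagate_quality(q_in, route):
    """route: list of (A, B, C, u) tuples, one per task in processing order."""
    q = q_in
    for A, B, C, u in route:
        q = A + B * q + C * u        # linear quality model of Sec. 2.1
    return q

# Example: two-task route, with B negative as in the text's Figure 2 discussion.
q_final = propagate_quality(1.0, [(0.9, -0.2, 0.3, 0.8), (0.8, -0.2, 0.2, 0.6)])
```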
This model linearizes the nonlinear behavior shown in Figure 2; in the model, C represents the slope of the linear relation. Figure 2 (left) depicts the model without input quality (B = 0), while Figure 2 (right) shows the graphical shape of the model with input quality, setting the initial point to B·q (i.e., A = 0). As shown in the figures, quality depreciates as we progress through tasks; the slope C of the graph also decreases, as B is negative and causes a downward movement along the y-axis and a reduction in slope.
Figure 2. Left: linearity of the model, B = 0; right: linearity of the model, A = 0
In addition to the two processing types, blending and mixing are considered in the model. Usually the end product with quality that best matches the consumer’s expectation should be chosen, which might be achieved by taking a portion of each processing type.
$$q_{AB} = x_A\, q_A + x_B\, q_B$$
A correction must be applied to this linear model, for converting a nonlinear model into a linear one will most likely generate an inaccurate result. Therefore, the following calculation is applied:

$$q_{corr,k} = x_k\, q_A + (1 - x_k)\, q_B + M (1 - y_k)$$

where x_k belongs to the set of allowed blend fractions of A and y_k = 1 only if the k-th value of x is chosen, together with

$$q_{corr,k} \ge q_{AB_k} - M (1 - y_k)$$

where the q_ABk are data points. This set of constraints is applied to each allowed blend fraction and implies that the quality calculated with the correction is at best equal to the quality calculated at the blended fraction without the correction factor. The sign of this constraint changes depending on the shape of the curve of the nonlinear model (e.g. concave, convex or sinusoidal).
The objective of the supply chain model is to develop a new model of the current production line that meets consumer demand in both quality and quantity. The model has to meet the demand at the highest quality with the lowest possible cost, while in parallel maximizing revenue. Cost optimization concentrates on minimizing the cost of production, transportation, etc.; revenue maximization, on the other hand, includes sales and marketing factors. This objective generates results for the following year's adjustments to increase profitability. The main challenge in importing quality models into supply chain models is to measure the quality factors and use them quantitatively. One way is to minimize wastage so as to reduce the disposal cost: adding the quality factor and maximizing the quality of the product will reduce wastage and, subsequently, cost.
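In the full optimization, the choice among allowed blend fractions is made by the binary y_k together with the big-M constraints above. When the candidate set is small, the same selection can be sketched by plain enumeration, picking the corrected data-point quality closest to a consumer target; all data values below are placeholders.

```python
# Sketch: choose the allowed blend fraction whose corrected quality best
# matches a consumer target (enumeration stand-in for the y_k/big-M selection).
def best_blend(fractions, q_data, target):
    """fractions: allowed x_k values; q_data: {x_k: corrected data point q_ABk}."""
    return min(fractions, key=lambda x: abs(q_data[x] - target))

x_star = best_blend([0.0, 0.2, 0.4, 0.6, 0.8, 1.0],
                    {0.0: 0.52, 0.2: 0.61, 0.4: 0.68, 0.6: 0.66,
                     0.8: 0.58, 1.0: 0.49},
                    target=0.65)
```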
3. Representative results Regression is required to fit a linear quality model. The result of the regression will be fed back to planning and scheduling step to update the data and as results generate a more accurate outcome. The quality of the product reduces as we progress in states, after each task. Thus, for one state after a given task for each processing type and for one state after blending/mixing the following results are obtained. Firstly, the results of each of the processing types are presented in Figure 3, where a linear quality model with minimum u of 0.5 and maximum of 1 is presented. By using the blending/mixing processing type, these two processes are combined to make the most out of each processing type. The two processes are mixed with the addition of the correction factor. The correction factor will convert the produced linear model to the real nonlinear model, as depicted in Figure 4. Six different curves are shown in the figure, where each curve takes a portion of each processing types. The proportions of mixing are: [(0, 1), (0.2, 0.8), (0.4, 0.6), (0.6, 0.4), (0.8, 0.2), (1, 0)]. This figure represents the mixed quality of both processing types.
Figure 3. Linear quality for processing types (quality vs. u)
Figure 4. Blending results with correction factor (quality vs. u)
4. Conclusions
A methodology has been developed to model and optimize the quality of a current product to match consumer requirements. The modelling of food quality control in industry has not been widely studied, and its implementation in the supply chain is the further challenge addressed in this paper. The quality factors have been studied in a linear model, and a correction has been added to increase the accuracy of the result. This quality model has been implemented in the production system of our supply chain model, which in the future will be extended to address transportation and logistics systems.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Freshwater Production by MSF Desalination Process: Coping with Variable Demand by Flexible Design and Operation
Ebrahim A. Hawaidi and Iqbal M. Mujtaba*
School of Engineering Design & Technology, University of Bradford, West Yorkshire BD7 1DP, UK, E-mail: [email protected]
Abstract
In general, freshwater demand varies throughout the day, resulting in dynamic operating conditions of processes (such as desalination) making freshwater. This work investigates how the design and operation of an MSF desalination process are to be adjusted in order to cope with a variable demand for freshwater throughout the day. To avoid dynamic changes in the operating conditions of the process and restrict these changes to discrete times only, a storage tank is added at the end of the MSF process. For different process configurations (designs) in terms of the number of flash stages, operating parameters such as make-up and brine recycle flow rates are optimized at discrete time intervals (based on the storage tank level, which is monitored dynamically and maintained within limits) while the total operating cost is minimised. It is observed that the storage tank adds significant flexibility to the operation and maintenance of the process while coping with the variable freshwater demand.
Keywords: MSF process, storage tank, variable freshwater demand, optimization
1. Introduction
The shortage of freshwater is not a temporary problem in a specific area or in one country, but a long-term and substantial problem concerning the survival of human beings and the development of society in most countries (Buros, 2000). Among various desalination processes, the multistage flash (MSF) desalination process (Fig. 1) is a major source of fresh water around the world (EL-Dessouky and Ettouney, 2002). In general, the selection of optimal design and operation of an MSF desalination process is aimed at reducing energy and operating costs such as steam, electric power, antiscale, etc. Most recently, a number of authors (Tanvir and Mujtaba, 2008; Hawaidi and Mujtaba, 2010) focused on the optimal design and operation of MSF processes based on a fixed freshwater demand 24 hrs a day, 7 days a week and 365 days a year. However, in reality the demand (Alvisi et al., 2007) and also the seawater temperature (Yasunaga et al., 2008) vary throughout the day and throughout the year. With the design and operating conditions fixed, the freshwater production varies considerably with the variation of the seawater temperature, producing more freshwater at night (low seawater temperature) than during the day (high temperature) (Tanvir and Mujtaba, 2008). Unfortunately, this goes exactly counter to demand, which is greater during peak hours (morning, noon and evening) than after midnight. For different designs (MSF configurations) and with varying seawater temperatures, this work shows how the operation is to be optimally adjusted to meet a variable demand for freshwater throughout the day at low cost. An intermediate storage tank (Fig. 1) between the plant and the client is added for increased flexibility in design and
operation in terms of maintenance of the MSF process throughout the day. For each design, the total operating cost of the MSF process is minimised while optimizing the operating parameters, such as make-up and brine recycle flow rates, at discrete time intervals based on the storage tank level. The software gPROMS ModelBuilder 2.3.4 is used in this work for model development and optimization.
Fig.1 A typical MSF desalination process with storage tank
2. Dynamic Seawater Temperature and Freshwater Consumption Profiles
Fig. 2 (a and b) shows the variation of seawater temperature (˚C) and the variable demand for freshwater (Flow_out) with time. These data were fitted using the following correlations:

Tseawater = −2×10^−16 t^6 + 6×10^−6 t^5 − 0.0003 t^4 + 0.00032 t^3 + 0.007 t^2 − 0.1037 t + 28.918   (1)

Flow_out = −0.00059 t^5 + 0.0355 t^4 − 0.757 t^3 + 6.646 t^2 − 17.56 t + 40.88   (2)
Fig. 2 (a) Seawater temperature and (b) Fresh water consumption profiles
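A short Python sketch shows how the fitted correlations, Eqs. (1)-(2), can be evaluated over a 24 hr horizon; it simply transcribes the coefficients above, and only the time grid is an assumption.

```python
import numpy as np

def t_seawater(t):
    """Seawater temperature [C] at time t [hr], Eq. (1)."""
    return (-2e-16 * t**6 + 6e-6 * t**5 - 0.0003 * t**4
            + 0.00032 * t**3 + 0.007 * t**2 - 0.1037 * t + 28.918)

def flow_out(t):
    """Freshwater consumption at time t [hr], Eq. (2)."""
    return (-0.00059 * t**5 + 0.0355 * t**4 - 0.757 * t**3
            + 6.646 * t**2 - 17.56 * t + 40.88)

t = np.linspace(0.0, 24.0, 25)   # hourly grid over one day (assumed)
for ti in t[::6]:
    print(f"t = {ti:5.1f} hr  T = {t_seawater(ti):6.2f} C  demand = {flow_out(ti):7.2f}")
```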
3. Storage Tank Level Constraints
In Fig. 3, the storage tank is assumed to operate without any control on the level, and therefore h goes above the limit hmax or below the limit hmin during the operation of the MSF process. At any time, this violation (v1, v2) of safe operation can be defined as:

v1 = (h(t) − hmax)^2 if h > hmax, and v1 = 0 otherwise
v2 = (h(t) − hmin)^2 if h < hmin, and v2 = 0 otherwise
Fig. 3 A typical storage tank level profile
Fig. 4 Violations during the process
A typical plot of v1 and v2 versus time t is shown in Fig. 4. The total accumulated violation for the entire period can be calculated using:

VT = ∫[t=0 to tf] (v1(t) + v2(t)) dt

Therefore,

dVT/dt = v1(t) + v2(t) = (h(t) − hmax)^2 + (h(t) − hmin)^2   (3)

This equation is added to the overall process model equations. Also, an additional terminal constraint 0 ≤ VT ≤ ε is added to the optimisation formulations, where ε is a very small finite positive number (10^−6). The above constraint will ensure that h(t) will always be equal to or less than hmax and equal to or above hmin throughout the 24 hrs, thus making both v1 and v2 close to zero in equation (3).
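As a numerical illustration, the accumulated violation VT can be approximated with the trapezoidal rule; the level trajectory h(t) and the limits below are assumed values, not plant data.

```python
import numpy as np

hmax, hmin = 8.0, 0.5                            # assumed level limits [m]
t = np.linspace(0.0, 24.0, 2401)                 # fine time grid over 24 hr
h = 4.0 + 4.5 * np.sin(2.0 * np.pi * t / 24.0)   # assumed uncontrolled level h(t)

v1 = np.where(h > hmax, (h - hmax) ** 2, 0.0)    # violation above hmax
v2 = np.where(h < hmin, (h - hmin) ** 2, 0.0)    # violation below hmin
VT = np.trapz(v1 + v2, t)                        # accumulated violation VT

print(f"VT = {VT:.4f} -> terminal constraint 0 <= VT <= 1e-6 satisfied: {VT <= 1e-6}")
```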
4. Optimization Problem Formulation
The optimization problem is described below.
Given: the MSF plant configuration, the fixed design specification of each stage, the volume of the storage tank, the seawater flow, the seawater temperature, the top brine temperature (TBT) and the freshwater demand profile.
Determine: the optimum recycled brine flow rate (R) and make-up seawater flow rate (F) at different intervals within 24 hrs.
Minimize: the total operating cost (TOC).
Subject to: process constraints.
The Optimization Problem (OP) is described mathematically for any interval as:

OP:  min (over R, F)  TOC
s.t. model equations
     TBT = TBT* = 90 °C
     0 ≤ VT ≤ ε
     RL (2×10^6) ≤ R ≤ RU (6.55×10^6)
     FL (2×10^6) ≤ F ≤ FU (6.55×10^6)

The Total Operating Cost (TOC) is composed of many components, as shown in Fig. 5 (Helal et al., 2003). The MSF process model is given in Hawaidi and Mujtaba (2010) and the dynamic tank model is given in equations (5)-(6). Note the overall process model is dynamic.

Mass balance:  dM/dt = Flow_in − Flow_out   (5)
Relation between liquid level and holdup:  M = ρAh   (6)
TOC (Total Annual Operating Cost) = C1 + C2 + C3 + C4 + C5
C1 (steam cost) = 8000 × Wsteam × [(Tsteam − 40)/85] × 0.00415
C2 (chemical cost) = 8000 × [Dj/1000] × 0.025
C3 (power cost) = 8000 × [Dj/1000] × 0.109
C4 (maintenance and spares cost) = 8000 × [Dj/1000] × 0.082
C5 (labour cost) = 8000 × [Dj/1000] × 0.1
Fig. 5 Cost Functions (all costs are in $/year)
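A direct Python transcription of these cost functions can be useful for quick what-if checks; the Wsteam, Tsteam and Dj values passed in below are placeholders, not data from the case study.

```python
def total_operating_cost(w_steam, t_steam, d_j):
    """Annual operating cost [$/year] from the Fig. 5 cost functions."""
    c1 = 8000 * w_steam * ((t_steam - 40) / 85) * 0.00415  # steam cost
    c2 = 8000 * (d_j / 1000) * 0.025                       # chemical cost
    c3 = 8000 * (d_j / 1000) * 0.109                       # power cost
    c4 = 8000 * (d_j / 1000) * 0.082                       # maintenance and spares
    c5 = 8000 * (d_j / 1000) * 0.1                         # labour cost
    return c1 + c2 + c3 + c4 + c5

# placeholder steam flow, steam temperature and distillate flow:
print(total_operating_cost(w_steam=5.0e4, t_steam=100.0, d_j=1.0e6))
```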
This optimisation problem minimises the total operating cost while considering the variable seawater temperature (Eq. 1) and customer demand (Eq. 2), and optimises R and F over 3 intervals (within a 24 hr period) while meeting the storage tank height constraints. The overall process model leads to a coupled system of differential and algebraic equations. The transient states which would occur in the MSF process due to changes in seawater temperature, R and F have been ignored in this work. It is assumed that the lengths of these unsteady states are small.
5. Case Study
In this work, the actual freshwater demand at any time is 5 times that shown in Fig. 2. The storage tank has diameter D = 18 m and aspect ratio L/D = 0.5. The feed seawater flow rate is 1.13×10^7 kg/hr with salinity 5.7 wt%. The rejection section consists of three stages and the number of stages in the recovery section varies from case to case. The initial value of h is 0.5 m. Three intervals within 24 hr are considered, within which F and R are optimised together with the interval lengths. Table 2 summarises the cost components and total operating cost on a daily basis. Fig. 6 presents the optimum recycle flow rate (R) and make-up flow rate (F) at discrete time intervals. Fig. 7 illustrates the variations of steam temperature and the consumption profile. The dynamic storage tank levels for different cases are shown in Fig. 8. For N = 16 stages, Fig. 9 shows the freshwater production and consumption profiles.

Table 2 Summary of optimisation results

N    C1 ($/d)    C2 ($/d)   C3 ($/d)   C4 ($/d)   C5 ($/d)   TOC ($/d)
16   10175.18    609.80     2697.86    2029.58    2475.10    17987.52
15   10929.27    609.67     2697.28    2029.15    2474.57    18739.95
14   11816.78    609.52     2696.62    2028.65    2473.97    19625.55
Fig. 6 Optimum seawater make-up and brine recycle flow rate profiles
Fig. 7 Steam temperature and consumption profile
As the total number of stages decreases, the total operating cost increases (Table 2) due to higher F, R and steam consumption (Figs. 6 and 7). Although there is a slight decrease in chemicals, maintenance & spares and labour costs, the
contribution of the steam cost is relatively higher. The intermediate storage tank adds operational flexibility; e.g. maintenance could be carried out without interrupting the production of water or fully shutting down the plant at any time throughout the day, just by adjusting the number of stages. The optimised interval lengths are found to be 0-6 hr, 6-19 hr and 19-24 hr (Figs. 6-7), within which R and F are optimised. The results show that when the freshwater demand is more than the freshwater production rate (Fig. 9), the storage tank level falls (Fig. 8), and in order to meet the demand the make-up and brine recycle flow rates need to be increased (Fig. 6). The opposite happens when the freshwater consumption rate is less than the freshwater production rate, and consequently the tank level increases. The highest tank level h is noted at 8 am and the lowest at 11 pm.
Fig.8 Storage tank level profiles at different number of stages
Fig.9 Freshwater production and consumption profile (N=16)
6. Conclusion
For a given design, an optimal operation strategy for an MSF desalination process subject to variable seawater temperature is outlined and demonstrated in this work to cope with variable freshwater demand. An intermediate storage tank is considered between the MSF process and the client to add flexibility in meeting the customer demand. A steady state process model for the MSF process coupled with a dynamic model for the storage tank is developed within the gPROMS modelling software. The intermediate storage tank helps to avoid dynamic changes in the operating conditions of the process and restricts these changes to discrete times only. In this work, three discrete time intervals are used; however, more than 3 intervals could have been used. For several process configurations (designs), the operation of the MSF desalination process at discrete time intervals is optimized while minimizing the total operating cost. Although the optimisation results show an increase in total operating costs with a decreasing number of MSF stages, the intermediate storage tank adds flexible scheduling and maintenance opportunities for individual flash stages and makes it possible to meet variable freshwater demand with varying seawater temperatures without interrupting or fully shutting down the plant at any time during the day.
References
A.M. Helal, A.M. El-Nashar, E. Al-Katheeri and S. Al-Malek, (2003), Desalination, 154, 43-66.
E.A.M. Hawaidi and I.M. Mujtaba, (2010), Chemical Engineering Journal, (available online).
H.T. EL-Dessouky and H.M. Ettouney, (2002), Amsterdam: Elsevier Science Ltd.
K. Yasunaga, M. Fujita, T. Ushiyama, K. Yoneyama, Y.N. Takayabu and M. Yoshizaki, (2008), SOLA, 4, 97-100.
M.S. Tanvir and I.M. Mujtaba, (2008), Desalination, 222, 419-430.
O.K. Buros, (2000), Desalination Association, Riyadh, Saudi Arabia.
S. Alvisi, M. Franchini and A. Marinelli, (2007), Journal of Hydroinformatics, 9(1), 39-50.
S.F. Mussati, P.A. Aguirre, N.J. Scenna, (2004), Desalination, 166, 339-345.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Optimal run length in Factory operations to reduce overall costs
Peter Bongers a,b, Cristhian Almeida-Rivera a
a Structured Materials & Process Science, Unilever Research, Olivier van Noortlaan 120, 1330AC, Vlaardingen, The Netherlands. [email protected]
b Technical University of Eindhoven, Department of Chemical Engineering and Chemistry, PO Box 513, 5600 MB Eindhoven, The Netherlands
Abstract
Unilever as a fast moving consumer goods company needs to have its supply chain deliver a reduction of inventory and increased flexibility of producing 'on demand', whilst reducing the overall costs. Furthermore, as intermediate products are made batch-wise, a minimum batch size is needed when manufacturing the product. The batch vessels have a limited capacity, as well as a minimum quantity that should be inside the vessel to ensure proper mixing. Excess intermediate products need to go to waste, resulting in increased costs for the products. As such, factories prefer to work with fixed batch sizes. The trade-off between short run lengths with frequent changeovers and fixed batch sizes in the factory, inventory level, on shelf availability and overall costs is not straightforward. In this paper we present a MILP model to assist in this trade-off and compare the results with actual data.
Keywords: run-length, MILP, factory operations, batch size, ice cream
1. Introduction
Unilever as a fast moving consumer goods company needs to have its products available when the shopper wants to buy them (on shelf availability), to have product propositions affordable for the consumers (value for money), as well as to be innovative in its product portfolio. In order to have the products available on shelf, a large inventory of products would be required. However, substantial costs are associated with product inventory. At large product inventories, novel product innovations are either costly due to a write-off of obsolete products, or have a long lead time before the old inventory has been sold. On the other hand, the factories manufacturing the products have to be efficient. The overall factory efficiency is increased when the products are manufactured continuously for longer and the number of changeovers from one product to the next is minimal. In operational terms, the economic batch quantity (EBQ) is used to determine the minimal production quantity of a product (SKU) [1]:

EBQ = sqrt( 2 · ChangeOverCosts · TotalDemand / StorageCosts )
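A one-line Python helper makes the EBQ formula concrete; the cost and demand numbers in the example call are purely illustrative.

```python
from math import sqrt

def ebq(changeover_costs, total_demand, storage_costs):
    """Economic batch quantity for one SKU."""
    return sqrt(2.0 * changeover_costs * total_demand / storage_costs)

# illustrative numbers only (not Unilever data):
print(ebq(changeover_costs=500.0, total_demand=1000.0, storage_costs=0.2))
```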
It is well known that in theory significantly lower costs can be obtained when: (i) seasonality, or unsteadiness, of the demand is relevant and taken into consideration, and (ii) additional cost drivers (e.g. changeover costs, storage costs, etc.) are taken into account. Such models are well described in [2-5]. One aspect that is often overlooked is local constraints in the manufacturing sites. In most of the FMCG factories, the final product is made from intermediate product(s). The final product manufacture is often continuous, whilst the intermediate products are made batch-wise. Thus, a minimum batch size is needed when manufacturing the product. The batch vessels have a limited capacity, as well as a minimum quantity that should be inside the vessel to ensure proper processing (e.g. mixing, heating or cooling). Excess intermediate products need to go to waste, resulting in increased costs for the products. In addition, factories prefer to work with fixed batch sizes.

2. Problem description
Two models have been formulated, representing the mid-term planning of an ice cream production and warehousing. The following nomenclature has been used for the baseline model and for the extended model:

Indices: lines l, products p, weeks w
Sets: lines L, products P, weeks W

Parameters:
AvailableTime(w,l): available production time in week w on line l [hr]
ChangeOverCost(p,l): change-over cost from product p on line l [€]
ChangeOverTime(p,l): change-over time from product p on line l [hr]
Demand(w,p): demand in week w for product p [liton]
IWCC: inflation on working capital [€/liton/week]
ProductionRate(p,l): production rate for product p on line l [liton/hr]
StorageCost(p): storage costs for product p [€/liton/week]
InitialInventory(p): initial inventory of product p [liton]
IncrementRunlength(p,l): incremental runlength for product p on line l [liton]
MaxIntegerRunlength(p,l): maximum number of batches for product p on line l [#]
MinIntegerRunlength(p,l): minimum number of batches for product p on line l [#]

Continuous variables:
Production(w,p,l): amount of product p being manufactured on line l in week w [liton]
Inventory(w,p): amount of product p held in inventory in week w [liton]

Integer variables:
IsProduction(w,p,l): indicates if in week w product p is being manufactured on line l [0 or 1]
ProductionSize(w,p,l): size of the batch in week w for product p manufactured on line l [0 .. MaxIntegerRunlength(p,l)]
2.1. Base-line model
When a desired quantity (Production) of product p has been manufactured in week w on a line l with production rate (ProductionRate), a certain time is needed for cleaning and change-over (ChangeOverTime) of the line. The binary variable IsProduction indicates whether the product has been manufactured during that week. When a product pi cannot
be produced on line lj, ProductionRate(pi,lj) = 0. To prevent a zero denominator, the term ε (ε > 0) has been added. For each week w and line l, the total production time and change-over time need to be less than the available time:

AvailableTime(w,l) ≥ Σp [ Production(w,p,l) / (ProductionRate(p,l) + ε) ] + Σp IsProduction(w,p,l) · ChangeOverTime(p,l)   ∀ w, l

Two constraints are needed to force the binary variable IsProduction = 0 when there is no production and IsProduction = 1 when there is production:

M · Production(w,p,l) ≥ IsProduction(w,p,l)   and   Production(w,p,l) ≤ M · IsProduction(w,p,l)   ∀ w, p, l
where M is a sufficiently large number. The demand of product p in week w will be fulfilled either by the production of the products in week w, or consumed from the inventory up to week w−1. Any excess production will be stored in the inventory:

Inventory(w,p) = InitialInventory(p) + Inventory(w−1,p) + Σl Production(w,p,l) − Demand(w,p)   ∀ w, p
Based on the above equations and constraints, the following MILP has been formulated and is solved:

min [ TotalStorageCost + TotalIWCCost + TotalChangeOverCost ]

with the total costs estimated as follows:

TotalStorageCost = Σw Σp StorageCost(p) · Inventory(w,p)
TotalChangeOverCost = Σw Σp Σl ChangeOverCost(p,l) · IsProduction(w,p,l)
TotalIWCCost = IWCC · Σw Σp Inventory(w,p)
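The base-line model is small enough to sketch end-to-end. The following PuLP fragment is a minimal sketch for a toy instance (all data are invented, and the initial inventory is applied only in the first week, which is how we read the balance above):

```python
import pulp

weeks = range(4)
products = ["p1", "p2"]
lines = ["l1"]
demand = {(w, p): 10.0 for w in weeks for p in products}
rate = {("p1", "l1"): 20.0, ("p2", "l1"): 15.0}          # liton/hr (assumed)
co_time, co_cost = 2.0, 50.0                             # hr, euro (assumed)
storage, iwcc = 0.1, 0.05                                # euro/liton/week (assumed)
avail, bigM, eps, init_inv = 60.0, 1e5, 1e-6, 10.0

m = pulp.LpProblem("baseline", pulp.LpMinimize)
prod = pulp.LpVariable.dicts("prod", (weeks, products, lines), lowBound=0)
inv = pulp.LpVariable.dicts("inv", (weeks, products), lowBound=0)
isp = pulp.LpVariable.dicts("isp", (weeks, products, lines), cat="Binary")

for w in weeks:
    for l in lines:
        # production time plus change-over time must fit in the available time
        m += (pulp.lpSum(prod[w][p][l] * (1.0 / (rate[p, l] + eps)) for p in products)
              + pulp.lpSum(co_time * isp[w][p][l] for p in products)) <= avail
    for p in products:
        for l in lines:
            m += prod[w][p][l] <= bigM * isp[w][p][l]    # force isp = 1 when producing
            m += bigM * prod[w][p][l] >= isp[w][p][l]    # force isp = 0 when idle
        prev = inv[w - 1][p] if w > 0 else init_inv      # initial inventory in week 0
        m += inv[w][p] == prev + pulp.lpSum(prod[w][p][l] for l in lines) - demand[w, p]

# objective: storage + working-capital + change-over costs
m += (pulp.lpSum((storage + iwcc) * inv[w][p] for w in weeks for p in products)
      + pulp.lpSum(co_cost * isp[w][p][l] for w in weeks for p in products for l in lines))
m.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[m.status], pulp.value(m.objective))
```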
2.2. Extended model In the formulation of the base-line model the production quantity can have any value. As intermediate products are made batch-wise, a minimum batch size is needed when manufacturing the product. The batch vessels have a limited capacity, as well as a minimum quantity that should be inside the vessel to ensure proper processing. As any excess intermediate products need to be discarded to waste and as factories prefer to operate with fixed batch sizes, an extended model has been formulated to accommodate these constraints. The production quantity of a product p during week w on line l has to satisfy certain size rules with respect to the equipment: (i) a minimum quantity; (ii) a maximum quantity and (iii) fixed increments. Suppose the product is manufactured using batch vessels for either storage or intermediate product manufacture and the size of the vessel is IncrementRunLength. A production of product p on line l in week w should only happen if a minimum amount of
the product will be produced: IncrementRunLength*MinIntegerRunLength. Likewise, the maximum amount will be IncrementRunLength*MaxIntegerRunLength. The minimum production quantity is determined by the binary variable IsProduction and the minimum amount to produce, Production(w,p,l) ≥ IsProduction(w,p,l) ⋅ IncrementRunLength(p,l) ⋅ MinIntegerRunlength( p, l ) ∀w,p,l
Likewise, the maximum production quantity is determined by the binary variable IsProduction and the maximum amount to produce: Production(w,p,l) ≤ IsProduction(w,p,l) · IncrementRunLength(p,l) · MaxIntegerRunlength(p,l)   ∀ w, p, l
The incremental production quantity is determined by the integer variable ProductionSize and the incremental quantity IncrementRunLength (a sketch of all three sizing rules follows below): Production(w,p,l) = ProductionSize(w,p,l) · IncrementRunLength(p,l)   ∀ w, p, l
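Continuing the PuLP sketch above, the three sizing rules of the extended model can be appended to the same model m; the vessel increment and batch-count bounds used here are assumed values:

```python
inc = {("p1", "l1"): 8.0, ("p2", "l1"): 8.0}   # assumed vessel size [liton]
n_min, n_max = 1, 6                            # assumed batch-count bounds
size = pulp.LpVariable.dicts("size", (weeks, products, lines),
                             lowBound=0, upBound=n_max, cat="Integer")
for w in weeks:
    for p in products:
        for l in lines:
            m += prod[w][p][l] >= isp[w][p][l] * inc[p, l] * n_min  # minimum quantity
            m += prod[w][p][l] <= isp[w][p][l] * inc[p, l] * n_max  # maximum quantity
            m += prod[w][p][l] == size[w][p][l] * inc[p, l]         # fixed increments
m.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[m.status], pulp.value(m.objective))
```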
3. Results
In order to evaluate the model, a set of data from an ice cream factory has been used [6]. This sub-set of the ice cream factory consists of 50 SKU's that can be manufactured on two packing lines with different characteristics (e.g. line speed, change-over time/cost, etc.). Storage costs and inflation on working capital are fixed by SKU and independent of the packing lines. For each of the SKU's there is a minimum and an incremental batch size of the intermediate product (given by the size of the storage vessels). The maximum batch size is given by cleaning rules. For each of the SKU's a monthly demand forecast is available. Figure 1 shows the differences in production quantities for (i) the actual production; (ii) the base-line model; and (iii) the extended model with the batch constraints. The horizontal lines in the graph represent the size of the storage vessel (8 tonnes) for this SKU. It can be seen that the base-line model generates quantities that deviate from this amount, as can be expected. The extended model generates the desired quantities. Furthermore, the extended model generates production earlier, which should lead to higher costs.
amount
48
optimal batch constraint ebq
40 32 24 16 8 0 1 3 5 7
9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 week
Figure 1. Comparison between the actual production, base-line model and extended model
Table 1 Cost comparison of the extended model and the base-line model with respect to actual production data. Negative values denote a reduction with respect to the actual production data.

                       base-line model   extended model
TotalCost              -52%              -36%
TotalStorageCost       -73%              -49%
TotalIWCCost           -73%              -49%
TotalChangeOverCost    -6%               -8%
As expected, by setting fixed increments on the production quantities the total costs are increased (Table 1) with respect to the base-line model results. The increase in storage costs (from -73% to -49%) as a consequence of the larger increments is not compensated by the additional reduction in changeover costs (from -6% to -8%). The model results for one SKU are compared with the actual production data (Table 2). For this specific SKU the EBQ is 72. We observe that the factory is running with even larger batch sizes to reduce its change-over time and costs. The model predicts that smaller batch sizes and more changeovers lead to lower overall costs (small 'pain' for the factory, large 'gain' for the company).

Table 2 Production comparison between the extended model and actual production data

                 batch sizes   number of times produced in year
actual data      100-150       8
Extended model   49-105        12
4. Conclusions
In this paper we presented a MILP model to optimise the trade-off between short run lengths with frequent changeovers in the factory, inventory level and on shelf availability with respect to overall costs. The model is extended to deal with the complicating constraint of fixed batch sizes for the intermediate products. The final results are compared with actual data, and it is concluded that by implementing this extension lower overall costs can be obtained.

5. References
[1] Grubbstrom, R.W. (1995). Modelling production opportunities - an historical overview. Int. J. Production Economics, vol. 41, p. 1-14.
[2] Jain, A., P.K. Jain, L.P. Singh (2006). An integrated scheme for process planning and scheduling in FMS. Int. J. Adv. Manuf. Technol., vol. 30, p. 1111-1118.
[3] Kallrath, J. (2002). Combined strategic and operational planning - an MILP success story in chemical industry. OR Spectrum, vol. 24, p. 315-341.
[4] Neumann, K., C. Schwindt, N. Trautmann (2002). Advanced production scheduling for batch plants in process industries. OR Spectrum, vol. 24, p. 251-279.
[5] Verderame, P.M. and C.A. Floudas (2009). Operational planning framework for multisite production and distribution networks. Computers and Chemical Engineering, vol. 33, p. 1036-1050.
[6] Bongers, P.M.M., Bakker, B.H. (2008). Validation of an ice cream factory operations model. Computer Aided Process Engineering, 29, 375-380.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Batch sizing in multi-stage, multi-product batch production systems
Norbert Trautmann a, Philipp Baumann a, Nadine Saner a, Tobias Schäfer a
a University of Bern, Department of Business Administration, Schützenmattstrasse 14, CH-3012 Bern
Abstract We study the problem of computing a minimum-makespan schedule for a multi-stage, multiproduct batch-production system. Due to the computational intractability of this problem, various decomposition approaches have been proposed in the literature. We refer to the decomposition into a batching subproblem and a batch-scheduling subproblem proposed in [7]. The former subproblem consists in computing a set of batches with fixed sizes, and the latter subproblem consists in allocating the production equipment to the processing of these batches over time. In [7] the batching subproblem is solved such that the batch size is constant for all executions of a task. In this paper, we show that under this restriction, the schedule obtained may not be optimal even if both sub-problems are solved to optimality. We propose an LP model for adapting the batch sizes such that optimal schedules can be computed with this decomposition approach. We report on computational results for the Westenberger-Kallrath sample process [2]. Keywords: batch production, scheduling, decomposition, MILP, case study
1. Introduction In a batch production system, value is added to materials by successive transformation tasks such as unification, separation, or chemical reactions. We consider the case of multi-product systems, which are usually chosen in make-to-order production when the product range spans over different product families with relatively small demands. As each product family requires a specific plant configuration, short customer lead times and a high resource utilization can only be attained by minimizing the production makespan. In batch production, the total requirements for the final products and the intermediates are split into batches. To process a batch, first the inputs are loaded into a processing unit, then a transformation process (called task in the following) is executed, and finally the output is unloaded from the unit. We consider multi-purpose processing units, which can operate different tasks. The minimum and maximum filling levels of a processing unit give rise to a lower and an upper bound on the batch size. The duration of a task depends on the processing unit used, but not on the batch size. In general, the intermediates can be stored in storage facilities of finite capacity; some intermediates, however, are perishable and must be consumed immediately after production. The planning problem studied in this paper consists in computing a feasible schedule such that a given demand is fulfilled and the production makespan is minimized. For a review of integrative solution approaches to this problem see [6]. In the present paper, we focus on decomposition approaches, which have been proposed e.g. in [4,5,7]. Neumann et al. [7] decompose the problem into a batching and a batch-scheduling subproblem. Batching determines the batch size, the input/output proportions, and the number of executions of each task; batch scheduling allocates the processing units and storage facilities over time to the processing of the corresponding batches. Such decomposition strongly decreases the CPU time required for computing a good feasible solution.
Fig. 1. Motivating example: constant (left) and individual batch sizes (right). The task operated by unit U1 (limits [2, 3]) produces the intermediate required by the task operated by unit U2 (limits [1, 7])
Fig. 2. Alternative solution approaches: approach I = integrative approach (cf. [1]); approach DE = batching (cf. [7]) followed by batch scheduling (cf. [1]); approach DI = batching (cf. [7]), modification of batch sizes (this paper), then batch scheduling (cf. [1])
The batching formulation in [7] implies an equal batch size (strategy E) and equal input/output proportions for all executions of a task. In the present paper, we show by means of an experimental study that, as a consequence, even when both sub-problems are solved to optimality, the makespan obtained may be longer than in an optimal solution obtained by an integrative approach. We address the question whether this gap can be reduced by computing an individual batch size (strategy I) for each execution of a task, such that some large-size batches and some small-size batches are formed (cf. Fig. 1). Our computational results for 14 instances of the Westenberger-Kallrath sample process [2] indicate that strategy I allows computing an optimal solution in cases where strategy E fails to do so. The remainder of this paper is organized as follows. In Section 2, we differentiate the alternative solution approaches compared in this paper. In Section 3, we implement batching strategy I as an LP. In Section 4, we report on the computational results.
2. Alternative Solution Approaches In this paper, we compare the following alternative solution approaches (cf. Fig. 2): – Integrative approach (approach I): we solve the problem by using the integrative formulation of [1], which is based on the model of [3]. The schedule obtained represents an optimal solution to the instance at hand. – Decomposition approach with batching strategy E (approach DE): we solve the batching subproblem as proposed in [7] as a mixed-binary linear program, and we solve the corresponding batch-scheduling subproblem using the formulation of [1]. Both subproblems are solved to optimality. – Decomposition approach with batching strategy I (approach DI): first, we solve the batching subproblem as proposed in [7] as a mixed-binary linear program. The solution obtained defines the number of executions, the batch size, and the input/output proportions of each task. Then, we solve the LP presented in Section 3 for computing appropriate individual batch sizes for each execution of each task. Eventually, we solve the corresponding batch-scheduling subproblem using the formulation of [1]. The LP and both sub-problems are solved to optimality.
Table 1 Sample process: initial and maximum stock levels of storable intermediates

                        S2     S3     S4    S5     S7    S8    S9    S12   S14
Initial/maximum stock   10/30  10/30  0/15  10/30  0/10  0/10  0/10  0/10  0/10
3. LP Model for Batching Strategy I
In this section, we formulate batching strategy I as a linear program (LP). Let T denote the set of tasks, P the set of products, and P^p the set of perishable products. The solution of the batching subproblem as formulated in [7] provides for each task τ ∈ T the number of executions ε_τ, the batch size β_τ, and the input/output proportions α_τπ of product π (with α_τπ < 0 for input products and α_τπ > 0 for output products). Let β_τ^min and β_τ^max denote the minimum and the maximum batch size of task τ, respectively. Furthermore, let T_π^− and T_π^+ be the set of tasks producing and consuming product π. We aim at computing individual batch sizes β̂_τμ of the tasks τ ∈ T, where μ refers to the μ-th execution (μ = 1, …, ε_τ) of task τ. The optimization problem reads as follows:

Max.  Σ(τ ∈ T) Σ(μ = 1 .. ε_τ − 1) (ε_τ − μ) · β̂_τμ

s.t.  β_τ^min ≤ β̂_τμ ≤ β_τ^max   (τ ∈ T; μ = 1, …, ε_τ)   (1)

      α_τπ · β̂_τμ + α_τ'π · β̂_τ'μ = 0   (π ∈ P^p; (τ, τ') ∈ T_π^− × T_π^+; μ = 1, …, ε_τ)   (2)

      Σ(τ ∈ T_π^−) Σ(μ = 1 .. ε_τ) α_τπ · β̂_τμ = Σ(τ ∈ T_π^−) α_τπ · ε_τ · β_τ   (π ∈ P)   (3)
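As a minimal illustration, the following PuLP sketch solves this LP for the two-task example of Fig. 1 (task A on U1, 3 executions, limits [2, 3]; task B on U2, 2 executions, limits [1, 7]); the intermediate is treated as storable, so constraint (2) is omitted, and the totals in constraint (3) are taken from an assumed batching solution.

```python
import pulp

execs = {"A": 3, "B": 2}                     # number of executions per task
bounds = {"A": (2.0, 3.0), "B": (1.0, 7.0)}  # min/max batch size per task
total = {"A": 8.0, "B": 8.0}                 # epsilon_tau * beta_tau (assumed)

lp = pulp.LpProblem("strategy_I", pulp.LpMaximize)
beta = {(t, mu): pulp.LpVariable(f"beta_{t}_{mu}",
                                 lowBound=bounds[t][0], upBound=bounds[t][1])
        for t in execs for mu in range(1, execs[t] + 1)}

# objective weight (eps_tau - mu) favours large early batches, i.e. a large/small mix
lp += pulp.lpSum((execs[t] - mu) * beta[t, mu]
                 for t in execs for mu in range(1, execs[t] + 1))
for t in execs:
    # constraint (3): keep the total amounts of the original batching solution
    lp += pulp.lpSum(beta[t, mu] for mu in range(1, execs[t] + 1)) == total[t]

lp.solve(pulp.PULP_CBC_CMD(msg=False))
print({k: pulp.value(v) for k, v in beta.items()})   # expect A: 3, 3, 2 and B: 7, 1
```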
The objective is to maximize the weighted sum of the batch sizes; the weight of the μ-th execution of task τ is ε_τ − μ. Thus, in an optimal solution the batch size of the μ-th execution will be greater than or equal to the batch size of the (μ+1)-th execution; moreover, a combination of large batch sizes and small batch sizes for a task results in a higher objective function value than a combination of medium-sized batches. By (1), the batch sizes must be chosen within the prescribed bounds. By (2), for each perishable product, the amounts produced and consumed by the corresponding tasks must coincide. By (3), the total amount produced of each intermediate and each final product must be the same as in the original solution of the batching subproblem.

4. Computational Results
For our experimental study, we used the Westenberger-Kallrath sample production process [2]; the state-task network is shown in Fig. 3. Operating modes of unit U2 may be changed such that the yield of state S3 varies between 20% and 70%; state S4 is considered a by-product. Table 1 displays the initial and the maximum stock levels of the non-perishable intermediates; for comparison reasons, we used the same initial levels as in [1]. Tables 2 and 3 provide the minimum and the maximum batch sizes of each processing unit, and the suitable processing units and corresponding processing times of each task. States S6, S10, S11 and S13 represent perishable intermediates, and states S15-S19 are considered final products. The individual problem instances (cf. Table 4) are defined by the demand vector; the instances used here have been proposed in [1]. We applied the three approaches on a HP Z600 workstation with 2 Intel Hex-Core CPUs using AMPL and Gurobi 3. Table 4 indicates the makespans obtained and the CPU times [sec] required by the three alternative approaches.
Fig. 3. State-task network of the Westenberger-Kallrath sample production process

Table 2 Sample process: minimum and maximum batch sizes

                             U1    U2    U3    U4    U5    U6   U7   U8    U9
Minimum/maximum batch size   3/10  5/20  4/10  4/10  4/10  3/7  3/7  4/12  4/12

Table 3 Suitable processing units and processing times (unit: processing time)

T1: U1 (4); T2: U2 (8); T3: U3 (4); T4: U4 (8); T5: U4 (8); T6: U4 (8); T7: U4 (8); T8: U5 (12); T9: U5 (12); T10: U6 (8), U7 (10); T11: U6 (10), U7 (12); T12: U6 (12), U7 (12); T13: U8 (8), U9 (12); T14: U8 (8), U9 (12); T15: U8 (8); T16: U8 (12), U9 (12); T17: U8 (12), U9 (12)

Table 4 Computational results

Instance  Demand                  Approach I         Approach DE        Approach DI
No                                makespan  tCPU     makespan  tCPU     makespan  tCPU
1         (20, 20, 20, 0, 0)      28        16.23    30        2.20     28        3.10
2         (20, 20, 0, 20, 0)      30        17.60    39        8.78     33        18.33
3         (20, 20, 0, 0, 20)      30        44.34    40        18.63    36        9.14
4         (20, 0, 20, 20, 0)      28        31.03    32        7.93     32        18.49
5         (20, 0, 20, 0, 20)      28        38.60    32        8.81     32        18.56
6         (20, 0, 0, 20, 20)      35        72.19    41        30.83    38        70.15
7         (0, 20, 20, 20, 0)      30        32.26    36        15.51    33        29.55
8         (0, 20, 20, 0, 20)      30        34.01    36        22.31    34        57.55
9         (0, 20, 0, 20, 20)      37        131.69   42        43.09    40        61.40
10        (0, 0, 20, 20, 20)      37        151.80   41        34.85    39        49.03
11        (10, 10, 20, 20, 30)    42        485.36   48        247.12   44        348.73
12        (30, 20, 20, 10, 10)    36        423.79   38        20.94    38        58.77
13        (10, 20, 30, 20, 10)    38        463.44   42        52.92    40        133.12
14        (18, 18, 18, 18, 18)    38        679.97   44        166.17   40        853.07
For 11 of the 14 instances, a smaller makespan is obtained by approach DI than by approach DE. Figures 4 to 6 visualize the schedules obtained by the three approaches for problem instance 1.

References
[1] F. Blömer and H.O. Günther. Scheduling of a multi-product batch process in the chemical industry. Computers in Industry, 36:245-259, 1998.
[2] J. Kallrath. Planning and scheduling in the process industry. OR Spectrum, 24:219-250, 2002.
[3] E. Kondili, C.C. Pantelides, and R.W.H. Sargent. A general algorithm for short-term scheduling of batch operations - I. MILP formulation. Comput Chem Eng, 17:211-227, 1993.
[4] G.M. Kopanos, L. Puigjaner, and M.C. Georgiadis. A bi-level decomposition methodology for scheduling batch chemical production facilities. In R.M. de Brito Alves, C.A.O. do Nascimento, and E.C. Biscaia, editors, 10th Int Symp on Process Systems Engineering, pages 681-686. Elsevier, 2009.
[5] C. Maravelias. A decomposition framework for the scheduling of batch processes. Comput Chem Eng, 30:407-420, 2006.
[6] C. Méndez, J. Cerdá, I. Grossmann, I. Harjunkoski, and M. Fahl. State-of-the-art review of optimization methods for short-term scheduling of batch processes. Comput Chem Eng, 30:913-946, 2006.
[7] K. Neumann, C. Schwindt, and N. Trautmann. Advanced production scheduling for batch plants in process industries. OR Spectrum, 24:251-279, 2002.
Fig. 4. Schedule obtained with approach I
Fig. 5. Schedule obtained with approach DE
Fig. 6. Schedule obtained with approach DI
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Decision Support System for Multiproduct Pipeline and Inventory Management Systems
Susana Relvas a, Ana Paula F.D. Barbosa-Póvoa a, Henrique A. Matos b, Pedro Pinto a
a CEG-IST, UTL, Av. Rovisco Pais, Lisboa, Portugal
b CPQ-IST, UTL, Av. Rovisco Pais, Lisboa, Portugal
Corresponding author: susanaicr@ist.utl.pt
Abstract
The oil industry faces fierce and growing competition. Process indicators and margins' maximization require robust decision methods. In this paper we present a decision support system tool that addresses the scheduling of operations in a system comprising a multiproduct pipeline that connects one refinery to a distribution centre that supplies a local market. No similar tool is available in the market. The decision support system is a framework including a user interface developed on the Seam Framework from JBoss and using the MILP based solution strategy reported in Relvas. The system is being developed in collaboration with a Portuguese oil products distribution company.
Keywords: MILP, Multiproduct Pipeline, Case Study, Software tool
1. Introduction
Pipelines have long been recognized as major solutions for the transportation of oil products due to their efficient and reliable operation at low cost. In the EU, the density of oil pipelines was … km/km², corresponding to more than … km of lines, as stated in the Eurostat Pocket Book, representing a transportation of … million km of oil and oil products (Noreland). The operations scheduling of this type of equipment is a complex problem. However, companies still maintain trial and error methods based on spreadsheets to perform the scheduling of such operations, since commercial tools still remain scarce. This leaves the opportunity for the academic community to develop advanced tools with real world applicability.
The pipeline scheduling problem has been addressed since the late 1990's. The pioneer works by Shah and by Sasikumar et al. first proposed solution approaches for this problem. Shah used a Mixed Integer Linear Programming (MILP) approach, whereas Sasikumar et al. used a knowledge based heuristic search. Magatão, Arruda and Neves and Rejowski and Pinto employed discrete representation approaches to model the pipelines through MILP. The former authors used decomposition techniques. The latter authors studied different scenarios for product demand on a real world problem, but with high computational effort. Cafaro and Cerdá proposed a novel continuous MILP formulation that explores a sequential spatial transportation along the pipeline. This framework proved to solve the real world scenario of the work of Rejowski and Pinto in less computational time. The same authors later presented a multiperiod approach making use of rolling horizons, reducing the impact of the operational lag added by the pipeline. Relvas et al. proposed a continuous time MILP model that addresses an integrated system comprising a pipeline and operations management at the destination
distribution centre. Since scheduling plans must be flexible to accommodate unforeseen situations, Relvas et al. proposed a reactive scheduling procedure for pipeline systems tackling common situations. They also proposed a sequencing heuristic that develops scenario based product pumping sequences, reducing model complexity. In the present work the authors use their previously published work as a base and develop a system tool that can solve real pipeline planning and scheduling problems, accounting for the destination constraints. A real Portuguese case study is used as test base.
2. Operating System
Fig. 1 shows the operating system, which comprises a pipeline transporting several oil products between one refinery and one distribution centre. The pumping task has to respect possible sequences of products and flowrate limits. Each product has a fixed set of storing tanks. Each new batch of product has to settle for a minimum period. Clients are satisfied on site and forecasts are available on a daily basis. A pipeline schedule should satisfy all the demands and includes the products' sequence, pumping flowrates, volumes, timing issues and pipeline stoppages. The inventory management includes pipeline output allocations, inventory profiles, settling periods and outputs management.
Fig. 1 Multiproduct pipeline operating system
Fig. 2 Scheduling framework architecture (inputs: initial conditions, market forecasts, scenario parameters; decision support tool: choice of detail level, initialisation heuristic, ATF/DTF MILP models, reactive scheduling on disruption data; outputs: pipeline schedule, inventory management)
3. Scheduling Framework
3.1. Framework Architecture
Fig. 2 represents the architecture of the scheduling framework, as well as the required inputs, the outputs obtained and the interaction with the MILP algorithm. The inputs required have three origins: (i) initial conditions, (ii) market forecasts and (iii) operational parameters. The initial conditions include the initial contents of the pipeline as well as the initial inventory for each product. The market forecasts comprise the daily demand by product. The operational parameters include the time horizon extent, the maximum number of batches to pump, flowrate limits and pipeline stoppages information. From this point the heuristic (Relvas et al.) is run. The heuristic uses the initial conditions and market forecasts to analyze the current system situation and to obtain parameters to develop sequences of products that aim to replenish the products supplied to clients. The MILP models are then run, using the operational inputs and the heuristic's outcomes. The model results provide the pipeline schedule for the current time horizon, where the inventory management is also considered. The scheduling framework includes two MILP models: (i) the Aggregated Tanks Formulation (ATF) and (ii) the Detailed Tanks Formulation (DTF). The ATF (Relvas et al.) models the available storage capacity at
the tank farm in an aggregated manner, i.e. each product is stored in a single tank which has as maximum capacity the corresponding sum of the individual tanks' capacities. The DTF models each tank individually, enlarging the model size (Relvas). During the time horizon, disruptions or unforeseen situations may occur. In this case new inputs are provided for each unpredicted situation and the reactive scheduling approach is used. Additionally, the objective function includes penalizing terms that guarantee that the new solution has the minimum changes when compared to the previous one, minimizing the scheduling nervousness. The models and the reactive scheduling have been written and tested using GAMS (General Algebraic Modelling System) and are solved through a suitable algorithm, such as CPLEX. The user can select the level of detail to obtain regarding the inventory management, choosing between the two MILP models referred. The architecture presented in Fig. 2 was built using the optimization tools described above and developed by the authors. The current objective is to systematize and simplify their usage for schedulers with reduced optimization knowledge, to gain potential among other companies.
3.2. Prototype Development
Several aspects were taken into account to develop a prototype for the presented framework, such as the interface type, the libraries to be used and the programming language. To implement a robust prototype one has to consider the user/server operating system (OS), memory and computational capacity, internet or Ethernet, user privileges, current user applications, developer skills, etc. If web applications are selected, only a web browser is required to interact with the application, avoiding OS incompatibility. User privileges can be configured in the application and the data processing is installed on the server side, simplifying system maintenance/update. Accounting that GAMS is a cross-platform application, the only restriction for the developer is the server's OS. The choice of a programming language is critical for application success. The cost of the application will thus be mainly due to the algorithm and the programming language, which have to be powerful solving tools. Finally, a correct choice of libraries and programming language will enhance the software's and interface quality and success. Based on the above, for the current tool we used several free applications such as JBoss Application Server, JBoss RichFaces, JBoss Ajax4jsf, JBoss Seam, JBoss Tools CR, EJB, JFreeCharts and MySql.
3.3. Technology Transfer
The development of an interface of a scheduling system for a real problem, to be used by the business world, requires the analysis of issues spanning from the conceptual architecture to the real implementation, systems' integration and validation. The major objectives of the software are reliability and being easily usable without a deep knowledge of optimization. However, the tool navigation and the results displayed have to be meaningful for the end users. Another goal is to obtain good solutions in reduced time. A real implementation of a tool of this type requires a high level of customer integration and a timeline definition, where the resources required for usability are transferred from the developer to the customer. For this reason, shared planning and control is essential. An initial stage must include the leverage of information systems present at the customer, relying on data required, results reporting, data base maintenance, formats, security, resource requirements, compatibility or company procedures. It is also important to think about future maintenance, which can include not only interface updates but also model updates. Finally, the applicability of the tool must be tested on site, using real data and deadlines, comparing the results obtained with the ones developed by the current schedulers' procedure.
4. Implementation and Discussion
This work has been promoted through a technological development project with a Portuguese oil products distribution company, Companhia Logística de Combustíveis (CLC). This company transports oil products from a southern refinery located in Sines to their distribution centre with storage tanks located in the central region of Portugal. Their scheduling activity highly influences the production planning at the refinery, since CLC represents close to …% of monthly production. Thus, the scheduling is usually developed for a time horizon of one month. Up to now the system has been scheduled using CLC's schedulers' knowledge of the system as well as the support of spreadsheets. In this section a brief overview of the current version of the prototype is given, which has been implemented in Portuguese. Fig. 3 shows one of the initial browsing pages, where the user specifies initial scenario data. This data includes the time horizon to consider for the scheduling, the initial inventory available per product and the current daily demands by product. Demand can be accessed from company databases. Fig. 4 shows the sequencing heuristic page, which provides to the user a set of possible pumping sequences produced by the initialization heuristic. These sequences take into account priorities to pump a given product and the amount consumed in the time horizon, which sets the supply target. The user can select which sequences to test and define the maximum computational time to run the model for each sequence. All the selected sequences and respective models run in sequence. The results are obtained in two different pages: the results summary (Fig. 5) and the results pages (Figs. 6 and 7). The results summary contains a brief description of each model run for each selected pumping sequence. This information includes the sequence, the time and date of the run, the model status and the computational time; moreover, it includes the most relevant operational indicators of the solution that can support the decision on which solution to use. These operational indicators span from medium flowrates, stoppage durations and the balance between inputs and outputs to inventory levels. Finally, in the results pages each solution can be analyzed in detail, such as inventory profiles by product or pumping rate per batch (Fig. 6), and the pumping schedule, including the sequence of products, volumes, pumping timings, unloading timings and travel times (Fig. 7). The technology transfer process enabled the improvement not only of the interface but also of the ATF model. Several solutions obtained have proven that to operationalize a model solution it is necessary to improve model representation, assumptions, simplifications and strategies. A straightforward example is the continuous time scale depending on each batch's pumping events, which can have dissimilar time interval durations and thus include model solutions with reduced operational flexibility. Since the DTF model is not implemented yet, a simple algorithm was developed that distributes the products pumped through the available tanks, enabling analysis of the utilization of CLC's resources during the time horizon. This algorithm follows the usual rules used by CLC's schedulers and the system limitations.
5. Conclusion and Future Work
We presented a prototype software tool that acts as a decision support system to help the scheduling of real multiproduct pipeline systems. This tool uses MILP branch and bound and tailored heuristics to optimize the scheduling of operations, enhancing real world applicability. The prototype has already been installed at the partner company. At the moment the scheduling is being developed in parallel with the previously used procedures, so as to test the prototype and improve its performance and the results it produces. As future work it is expected to implement the DTF model and the rescheduling procedure.
Fig. 3 Data insertion section
Fig. 4 Heuristic generation of sequences of products
Fig. 5 Results summary page
Fig. 6 Results page: major operational results
Fig. 7 Results page: pumping schedule
Acknowledgements
The authors gratefully acknowledge the case study and the financial support provided by Companhia Logística de Combustíveis.
References
D.C. Cafaro and J. Cerdá, "Optimal scheduling of multiproduct pipeline systems using a non-discrete MILP formulation", Comput. & Chem. Eng. (2004).
D.C. Cafaro and J. Cerdá, "Dynamic scheduling of multiproduct pipelines with multiple delivery due dates", Comput. & Chem. Eng. (2008).
Eurostat Pocket Book, "Energy, transport and environment indicators".
L. Magatão, L.V.R. Arruda and F. Neves Jr., "A mixed integer programming approach for scheduling commodities in a pipeline", Comput. & Chem. Eng. (2004).
J. Noreland, "Eurostat Statistics in Focus: Transport".
R. Rejowski and J.M. Pinto, "Efficient MILP formulations and valid cuts for multiproduct pipeline scheduling", Comput. & Chem. Eng. (2004).
S. Relvas, H.A. Matos, A.P.F.D. Barbosa-Póvoa, J. Fialho and A.S. Pinheiro, "Pipeline Scheduling and Inventory Management of a Multiproduct Distribution Oil System", Ind. Eng. Chem. Res. (2006).
S. Relvas, H.A. Matos, A.P.F.D. Barbosa-Póvoa and J. Fialho, "Reactive Scheduling Framework for a Multiproduct Pipeline with Inventory Management", Ind. Eng. Chem. Res. (2007).
S. Relvas, A.P.F.D. Barbosa-Póvoa and H.A. Matos, "Heuristic Batch Sequencing on a Multiproduct Oil Distribution System", Comput. & Chem. Eng. (2009).
S. Relvas, Optimal Pipeline Scheduling and Inventory Management of a Multiproduct Oil Distribution Centre, PhD Thesis, Instituto Superior Técnico, Technical University of Lisbon.
M. Sasikumar, P.R. Prakash, S.M. Patil, S. Ramani, "PIPES: A heuristic search model for pipeline schedule generation", Knowledge-Based Systems (1997).
N. Shah, "Mathematical programming techniques for crude oil scheduling", Comput. & Chem. Eng. (1996).
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Ice Cream Scheduling: Modeling the Intermediate Storage
Martijn A.H. van Elzakker a, Edwin Zondervan a, Cristhian Almeida-Rivera b, Ignacio E. Grossmann c, Peter M.M. Bongers b,d
a Dep. Chem. and Chem. Eng., Eindhoven University of Technology, P.O. Box 513, 5600MB Eindhoven, the Netherlands
b Unilever R&D Vlaardingen, the Netherlands
c Dep. Chem. Eng., Carnegie Mellon University, Pittsburgh, USA
d Hoogewerff Chair for Product-driven Process Technology, Dep. Chem. and Chem. Eng., Eindhoven University of Technology
Abstract In this work, two problem specific models are developed for the ice cream scheduling problem. These models are based on product type dedicated time slots and more efficient methods of handling the intermediate inventory limitations. The first model aggregates the intermediate inventory tanks and the second model directly relates the time slots of the packing and the mixing stages. Both models are shown to be computationally more efficient than a general RTN model using two example problems. All three models are based on a unit specific continuous time approach. Keywords: Scheduling, Intermediate Storage, Ice cream production
1. Introduction Production scheduling has been studied extensively in the last twenty years. General formulations, such as the STN or RTN, have been successful to a certain extent in solving a wide variety of scheduling problems. However, sometimes a problem specific approach that exploits the process properties can provide a computationally more efficient model. We have studied such an approach for the ice cream scheduling problem, which was introduced by Bongers and Bakker [1,2]. As explained in Bongers and Bakker [1], the ice cream scheduling problem can be simplified into a two stage scheduling problem with limited intermediate storage. In the first stage the ingredients are mixed and pasteurized. In the intermediate storage the products are aged, and in the second stage they are frozen and packed. The choice of time representation is very important in any scheduling problem [3]. We have chosen a continuous time representation with unit specific time slots, because discrete time or global continuous time slot approaches are unsuitable since they require too many time slots. This is mainly caused by the varying batch production times, the sequence dependent changeovers and the unaligned start and end times between the different stages due to the required ageing time in the intermediate storage. The main challenge in developing the ice cream scheduling model was to increase the computational efficiency in order to be able to solve the example case for the medium sized ice cream factory described in Bongers and Bakker [1]. Both the way of modeling
the intermediate inventory and the time slots were investigated in an attempt to increase the computational efficiency.
2. Dedicated Time Slots The first new feature of our models is the dedication of time slots to specific product types. Since the number of packers is larger than the number of mixers and because of the limited intermediate storage, the mixers need to alternate between different product types to prevent the packers from idling. As a result, two mixing runs of the same type of product in a row are uncommon. This can be exploited by only allowing subsets of product types to be mixed in each period. In this manner, the size of the model can be reduced, and as a result the required computational time is significantly shorter. For the example problem, there are two product types and we can thus dedicate one product type to each period. This is depicted in Figure 1. By allowing periods to be empty it is still possible to produce the same product type in two consecutive periods, and therefore the flexibility is retained.
Figure 1: Dedicated time period overview. In periods Tm1 only product type 1 can be processed and in periods Tm2 only product type 2 can be processed. For simplicity uniform length timeslots have been depicted, even though the starting and ending times are variables in the model.
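In constraint form, the dedication can be sketched as follows (a minimal sketch in our own notation, not the paper's): let $X_{i,n}$ indicate that product $i$ is assigned to mixing period $n$, and let $P(n)$ be the set of products whose type is dedicated to period $n$; then

$$ X_{i,n} = 0 \qquad \forall\, i \notin P(n), $$

so only products of the dedicated type can occupy a period, while empty periods preserve the flexibility described above.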
3. Intermediate Inventory The biggest computational challenge is the modeling of the intermediate inventory. The intermediate inventory must be coupled with the mixing and packing stages, and if modeled in a straightforward manner, it would require its own time slots. In addition, the coupling of the start and end times of the time periods would split the mixing and packing runs into single tank runs, thereby greatly increasing the required number of periods. Therefore, we have studied different methods to model the inventory. 3.1. Aggregated Storage Model (ASM) The first method we have studied is an aggregated inventory, where we combine all storage tanks of one type into a large aggregated storage. The main advantages are that the mixing and packing periods are no longer forced to be single tank periods, and that we only need to model one inventory per product type instead of a number of individual tanks. As a result, the size of the model is reduced. A schematic overview of the problem when using this model formulation is given in Figure 2. It should be noted that the time slots of the aggregated storage tanks are directly related to the time slots of the mixing. At the start of each mixing period, constraints ensure
that there will be enough storage available without having to mix different batches in the same tank. Similarly, a constraint also ensures that there is always enough of the previous batch left to continue to feed the packer while the current batch ages.
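As an illustration, the storage-availability condition could be sketched as follows (our notation and an assumption about the exact form, not the authors' constraint): with $I_{p,t}$ the aggregated inventory of product type $p$ at the start of mixing period $t$, $B_{p,t}$ the amount mixed in that period, and $N_p V_p$ the total capacity of the $N_p$ tanks of volume $V_p$,

$$ I_{p,t} + B_{p,t} \le N_p V_p \qquad \forall\, p, t. $$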
Figure 2: Schematic overview of the Aggregated Storage Model
3.2. Related Period Model (RPM)
The second method to model the inventory involves directly linking the mixing and packing periods and limiting the mixing based on the progress of the packing. A schematic overview of this method is depicted in Figure 3. For each product type, a constraint enforces that the nth mixing period can only start after the (n minus "number of storage tanks of this type")th packing period has finished. This constraint ensures that a mixing run cannot start unless there is a storage tank available.
Figure 3: Schematic overview of the Related Period Model
For example, if there are two storage tanks for product type one, then packing period 1 must finish before mixing period 3 can start because otherwise there would be no storage tank available to store the product mixed in period 3. It should be noted that a distinction is made between the periods dedicated to product type one and those dedicated to product type two. The periods dedicated to the other product type are not considered when linking the periods since there are different storage tanks for each product type. In addition, empty periods are ignored when relating periods since no storage tanks are being used in an empty period. For the example problems, one period dedicated to type 1 is followed by two periods dedicated to type 2 because the single storage tank capacity of type 1 is twice as big as that of type 2. Since the mixer and packer throughput of both product types is comparable, the mixing runs of both product types are of similar length. Therefore, the
mixing runs of product type 2 involve approximately twice as many tanks as those of product type 1.
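Formally, the linking can be sketched as follows (a minimal sketch in our own notation: $S^{mix}_{j,n}$ and $E^{pack}_{j,n}$ denote the start and end times of the $n$-th mixing and packing periods dedicated to product type $j$, and $NT_j$ the number of storage tanks of that type):

$$ S^{mix}_{j,n} \ge E^{pack}_{j,\,n-NT_j} \qquad \forall\, j,\; n > NT_j. $$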
4. Example
Two small example problems were used to compare the computational efficiency of the models with each other and with a more general RTN formulation. The RTN model was based on the unit-specific event-based continuous time model of Shaik and Floudas [4]. The example problems are based on the one from Bongers and Bakker [1]. However, the size of the example problems was reduced since not all of the models were able to solve the full-sized problem within reasonable CPU time. Both example problems have a 48 hour horizon and require a 2 hour cleaning period once every 36 hours. The product demand of the two example problems is given in Table 1. This demand must be met before the end of the time horizon. All other data is identical to that of the example problem from Bongers and Bakker [1].
Table 1: Demand for products in example problems

Product     A        B        C        E        F        G
Example 1   40,000   16,000   -        40,000   28,000   -
Example 2   24,000   16,000   16,000   24,000   24,000   24,000
Feasibility (Feas.) and makespan (MS) objective functions were used for both examples. In the feasibility optimization, the objective was to maximize production with the demand as upper bound, whereas in the makespan minimization the demand was set as a lower bound. All optimizations were performed with Gurobi 3.0 and the computational results are given in Table 2. Table 2: Computational results of the example problems
Ex.  Obj.   Model  Time slots  Variables (integer)  Const.  CPU Time  Nodes   Value Obj. Fun.
1    Feas.  ASM    22          1114 (324)           2254    51 s      805     128k
1    Feas.  RPM    27          1946 (108)           2088    1 s       0       128k
1    Feas.  RTN    27          2640 (567)           12258   122 s     1188    128k
1    MS     ASM    22          1114 (324)           2254    566 s     26k     42.63
1    MS     RPM    27          1946 (108)           2088    109 s     16k     42.63
1    MS     RTN    27          2640 (567)           12258   2446 s    125k    42.63
2    Feas.  ASM    19          1559 (302)           2216    320 s     2701    128k
2    Feas.  RPM    36          3170 (144)           3170    7 s       62      128k
2    Feas.  RTN    36          4979 (837)           26103   4773 s    3540    128k
2    MS     ASM    19          1559 (302)           2216    411 min   916k    45.61
2    MS     RPM    36          3170 (144)           3170    20 min    59k     45.61
2    MS     RTN    36          4979 (837)           26103   >24 hr    1105k   46.05
The aggregated storage model (ASM) requires the fewest time slots since it is the only model that can mix and pack multiple storage tanks of the same product in one time slot. However, the number of integer variables for the ASM is still larger than that of the related period model (RPM) because the ASM requires integer variables to model the aggregated storage tanks. The RTN model requires even more integer variables since it uses binary variables for individual storage tanks. All three models were able to meet the demand for both examples and they all found the same minimum makespan for example 1. However, even after 24 hours, the RTN model could not find the optimal makespan for example 2. The best makespan at that point was almost half an hour larger than the minimum makespan and the lower bound was still at 43.16 hours. The Gantt chart from the RPM model of example 2 is given in Figure 4. The Gantt chart of the ASM model was similar, with only a small difference in the mixing stage. The RTN model, however, produced a schedule with a different order on the second packer, which led to the larger makespan.
Figure 4: Gantt chart for example 2 from the RPM model
5. Conclusions As expected, the very general RTN model is considerably less computationally efficient than the two problem specific models. Especially for the larger second example the CPU time for the RTN model becomes prohibitive. For this larger example problem, the difference between the RPM and ASM models also becomes clearer. Therefore, it could be concluded that the RPM model is the most efficient formulation for the ice cream scheduling problem.
6. Acknowledgements This research is supported by Unilever, which is gratefully acknowledged.
References [1] Bongers and Bakker, 2006, Application of multi-stage scheduling, Escape 16 Proceedings [2] Bongers and Bakker, 2007, Modelling an Ice cream factory for de-bottlenecking, Escape 17 Proceedings [3] Méndez, Cerdá, Grossmann, Harjunkoski and Fahl, 2006, State-of-the-art review of optimization methods for short-term scheduling of batch processes, Computers and Chemical Engineering 30, 913-946 [4] Shaik and Floudas, 2008, Unit-specific event-based continuous-time approach for short-term scheduling of batch plants using RTN framework, Computers and Chemical Engineering 32, 260-274
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Production Optimization and Scheduling across a Steel Plant Iiro Harjunkoskia*, Sleman Salibaa, Matteo Biondib
a ABB Corporate Research, Wallstadter Str. 59, 68526 Ladenburg, Germany
b ABB S.p.A, Via Albareto 35, 16153 Genova, Italy
Abstract This paper focuses on the planning and scheduling optimization of two processes in steel production: the melt shop and hot rolling. Important steps and challenges of bringing a scheduling model from a conceptual idea to an industrial solution that steers the complete production of a steel plant on a day-to-day basis are covered. Moreover, we discuss the main development steps and the embedding of a mathematical-model-based solution into a production system. Finally, we present some results based on real-world data from an industrial implementation. Keywords: Enterprise-wide optimization, melt shop, hot rolling, MILP.
1. Introduction
Scheduling the production of a steel plant is far from trivial, since the solution needs to consider various production steps, each of which may be subject to very specific physical, chemical and economic rules and constraints. This poses a number of challenges. An ideal situation would be to combine all energy-intensive steps of the metals supply chain (Fig. 1) into one single optimization model. However, this is currently not yet possible, due both to computational and modeling limitations and to the different methodologies required. The initiative of Enterprise-Wide Optimization (EWO) (e.g. Grossmann, 2009) is taking steps towards resolving these types of problems by applying true optimization capability across traditional system borders.
Figure 1. The steel production supply chain
In this paper, we focus on the first two production steps: the melt shop and the hot rolling of steel. In a melt shop, typically, either scrap is melted into heats in an electric arc furnace (EAF) or liquid iron is tapped from a blast furnace (BF) and, after some further processing, the heats (typically 100-150 tons of liquid steel) are cast into slabs.
*) Corresponding author: [email protected]
The melting process in an EAF is very energy-intensive. In Harjunkoski and Sand (2008), an MILP-based model for melt-shop scheduling was proposed for this type of steel plant. In that paper, melt-shop-specific production requirements are discussed and a decomposition strategy is suggested. Complex production rules, combined with a large product portfolio, make it impractical to formulate one single mathematical model even for this first step of the metals supply chain.
2. The hot rolling process
In our present work, we focus more on the hot rolling mill and its production planning and scheduling. In a hot rolling mill, the slabs from a melt shop are re-heated and rolled into coils. It is important to note that normally one slab is transformed into exactly one coil. The steel slabs, with a thickness of around 200 mm, result in coils with a thickness of typically between 2-6 mm. The thickness is reduced through the impact of high pressure, for instance in a tandem mill, where the slab goes through subsequent pairs of work rolls that gradually reduce the thickness. A very simplified example can be seen in Fig. 2. Finally, the "long and thin" slab is rolled into a coil.
Figure 2. Part of a tandem rolling mill with four work rolls
The main planning and scheduling problem considered comprises two steps:
• Build the rolling programs
• Sequence the built programs
2.1. Building hot rolling programs
Rolling programs consist of a sequence of slabs that are rolled into coils in a campaign, after which the work rolls need to be maintained, mainly due to wear on the edges caused by high pressure. The hot rolling process is by itself a very demanding control problem, including pressure control at high rolling speed under extreme temperature, and it is important to ensure stability in order to achieve the desired steel quality. The control part is not in the scope here, but the process heavily affects the constraints that need to be considered while defining rolling sequences. Some of these are:
• Physical constraints, e.g. restricted thickness and width changes between subsequent slabs. Changes should be small in order to avoid instabilities that can result, among other things, in a coil break or surface quality problems.
• An upper limit on the rolling program length, caused by wear on the work rolls. At the edge of the slab the pressure difference on the work roll is very high, carving a mark into the work roll. Therefore a common rule is to change only to narrower coils, such that the mark always stays outside the final product.
• Chemical or metallurgical constraints, e.g. mixing steel types of different grades (subsequent hard and soft steel types) may cause instabilities.
Thus the sequence of slabs (coils) is very important. Most commonly, programs follow a so-called coffin-shape pattern, see Fig. 3, where the rolling starts with the uppermost slab (a thin and wide rectangle), then the width increases stepwise for stability, after which the slab widths are monotonically decreasing. The first part with increasing width
is called the "header", normally containing coils with moderate quality requirements. In Fig. 3, the header is followed by three "bodies", each representing a different width class.
Figure 3. Width profile of a rolling program (starting from the top)
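As a small illustration of the coffin-shape rule, the sketch below checks a width profile for a stepwise increasing header followed by non-increasing body widths; the function name, the list encoding and the header length are our assumptions, not part of the paper.

def is_coffin_shaped(widths, header_len):
    # Header: widths may only increase stepwise (stability).
    header = widths[:header_len]
    # Body: widths must be non-increasing so that the mark carved at the
    # slab edge stays outside the final product; include the widest slab.
    body = widths[header_len - 1:]
    header_ok = all(a <= b for a, b in zip(header, header[1:]))
    body_ok = all(a >= b for a, b in zip(body, body[1:]))
    return header_ok and body_ok

print(is_coffin_shaped([900, 1100, 1250, 1200, 1150, 1000], header_len=3))  # True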
2.2. Sequencing the hot rolling programs
After building the programs they need to be sequenced. The most critical sequencing task is done within a program, but the sequence in which the programs are rolled also plays an important role. The main criteria for the sequencing of programs emerge e.g. from customer due dates, steel types, program lengths, product mix etc. The sequencing needs to balance these and many more factors while maximizing the throughput and minimizing the costs. The problem is a more traditional scheduling-type problem that can be solved by restricting the search space of known mathematical programming approaches. In the following, we refer to the described problem as the hot rolling scheduling optimization problem (HSO problem).
3. Planning and scheduling solution for the hot rolling
The optimization problem involves both planning and scheduling decisions. The solution should generate a schedule for up to 5000 coils. Due to the complexity, and the fact that the sequencing part depends on the resulting rolling programs, it is very difficult to solve the complete problem at once. Lopez et al. (1998) formulated the HSO problem as a prize-collecting traveling salesman problem and suggested a heuristic based on Tabu Search. This approach was successfully applied at Dofasco, a Canadian steel producer, but failed to be applied to other steel plants. Cowling (2003) developed a commercial system, which provides semi-automatic schedules. Most recently, Zhao et al. (2009) applied a two-stage scheduling method to the hot rolling area of Baosteel, China, generating schedules for about 1000 slabs. This paper introduces a combined heuristic-mathematical programming approach comprising the following steps:
1. A heuristic method to construct valid parts of a rolling program (called bodies) that follow the operational rules and use coils of the same width class.
2. A minimum cost flow problem (LP) to optimally assemble the above generated parts into rolling programs.
3. A sequencing problem (MILP) to determine the sequence of the programs.
4. Post-optimization to improve the solution.
A systematic approach is crucial for being able to manage the multitude of rules and constraints for constructing rolling programs. Often the operational rules (mainly related to avoiding equipment wear and ensuring product quality) are described qualitatively, e.g. in a free-form document. In the worst case they must be obtained through interviews with operational people. We systemized the rules by structuring and formulating them as XML files, which allows editing and computer-based operations. The organization of rules is illustrated by the XML snapshots in Fig. 4. Some optimized rolling programs generated by our production planning and scheduling system are visualized in Fig. 5.
Figure 4. XML-based rule file
The first picture on the left shows the overall structure of the rule file, containing e.g. rules for programs, headers, bodies, jumps (thickness changes) and default values. Next to it, the definition of a jump rule is shown, which describes the change of coil thickness (restricted to 1 mm within starting thicknesses of 4-8 mm). The two rightmost pictures show the rules for constructing a body, including, among others, the above jump rule.
Figure 5. Optimized hot rolling programs
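The jump rule shown in Fig. 4 can be expressed in a few lines of code. The sketch below is a hypothetical encoding (function name and defaults are ours) of a thickness-jump check restricted to 1 mm for starting thicknesses of 4-8 mm:

def jump_allowed(thickness_from, thickness_to, max_jump=1.0, window=(4.0, 8.0)):
    # The rule applies only when the starting thickness lies in the window.
    lo, hi = window
    if lo <= thickness_from <= hi:
        return abs(thickness_to - thickness_from) <= max_jump
    # Outside the window this particular rule does not restrict the jump.
    return True

print(jump_allowed(5.0, 5.8))  # True: 0.8 mm jump within the 1 mm limit
print(jump_allowed(5.0, 6.5))  # False: 1.5 mm jump exceeds the limit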
4. Solution deployment aspects Having around 5000 coil orders, lots of specific rules and a need for frequent replanning, it is not enough to do an implementation in a modeling environment (e.g., GAMS), since the data amount is very large and its format may vary case-by-case. Reusability, flexibility as well as configurability are important aspects when building a software application. Especially this applies to advanced solutions such as planning and scheduling, since the amount of experts is limited and in the course of time, any industrial system will need to be maintained and updated. The implementation approach of both the melt shop and hot rolling mill schedule optimization solutions is shown in Fig. 6. The only communication to the outside is done through XML-files (following the ISA-95 standard). The models and algorithms are separated from the data handling, which makes extensions or changes easy to implement. XML (ISAͲ95)
XML(input)
XML(output)
Model3…
XML(output)
Model2
Solutionalgorithm
SolverEngine ILOGCPLEX
MathematicalOptimization
XML(input)
Dataprocessing
Model1
MSO Client
HSO Client
Inputdata(ISAͲ95)
HSOWS +Mngr
MSOWS +Mngr
HSO Module
MSO Module
Resultprocessing XML (ISAͲ95)
Output(ISAͲ95)
WebͲserver
Figure 6. System implementation (core modules and web-service)
The solutions are implemented as web services. Following the web-service architecture, the optimization engine (PC) can be located anywhere within the intra-/internet. The respective web client can connect to the web server and call its optimization service on demand. This approach allows for maximum flexibility, and linking the optimization solution into an existing PC environment does not require any changes to the solution. In integrated steel mills (plants with a melt shop and hot rolling mill) the slabs are stored in a slab yard until taken into rolling. A slab yard containing 500-2000 slabs corresponds to capital tied to stock of the order of hundreds of millions of EUR. There are many studies on how to jointly optimize the continuous caster and the hot rolling mill, e.g. Cowling and Rezig (2000) or Tang et al. (2001), but the main deficit of these approaches is that the continuous caster schedule may not be feasible for the entire melt shop. An infrastructure as shown in Fig. 6 can enable a true collaborative optimization opportunity, where both plant areas have feasible schedules. Potential benefits also come from hot charging (a slab from the caster is rolled directly before it cools down), which can save a significant amount of energy, as the re-heating of one slab typically requires 1000 m3 of natural gas.
5. Benefits and conclusion
The main benefits of using the presented optimization for planning the discussed steel production are:
• Melt shop: Optimal casting sequences resulting in reduced setup times (20%), synchronization of all processing steps, reduced planning effort (>95%), and more stable planning quality and repeatability, especially during disturbances.
• Hot rolling mill: Higher productivity (throughput) owing to fewer work roll changes through longer production programs, better use of existing slabs in the slab yard, the possibility to consider the full month of production, fewer thickness and profile changes improving quality, and increased due-date reliability.
• Potential plant-wide benefits: Increased overall productivity, a higher ratio of hot-charged slabs (energy savings), and lower slab yard inventory.
Additionally, maintenance operations can also be part of the scheduling optimization to ensure that the impact of maintenance on productivity is minimized both in the melt shop and in the hot rolling mill. The solutions presented do not only contribute to better local production schedules but also create potential for a true plant-wide or even enterprise-wide optimization (EWO). Owing to the flexibility of the methodologies, there are many ways to implement the collaboration, and this poses many research challenges for the future.
References
Cowling P., Rezig W., 2000, Integration of continuous caster and hot strip mill planning for steel production. Journal of Scheduling 3, pp. 185-208.
Cowling P., 2003, A flexible decision support system for steel hot rolling mill scheduling. Computers & Industrial Engineering, Volume 45, Issue 2, pp. 307-321.
Grossmann I.E., 2009, Research Challenges in Planning and Scheduling for Enterprise-Wide Optimization of Process Industries. Computer Aided Chemical Engineering 27, pp. 15-21.
Harjunkoski I., Sand G., 2008, Flexible and configurable MILP-models for melt shop scheduling optimization. Computer Aided Chemical Engineering 25, pp. 677-682.
Lopez L., Carter M.W., Gendreau M., 1998, The hot strip mill production scheduling problem: A tabu search approach. European Journal of Operational Research, Vol. 106, pp. 317-335.
Tang L., Liu J., Rong A., Yang Z., 2001, A review of planning and scheduling systems and methods for integrated steel production. European Journal of Operational Research, Vol. 133, Issue 1, pp. 1-20.
Zhao J., Wang W., Liu Q., Wang Z., Shi P., 2009, A two-stage scheduling method for hot rolling and its application. Control Engineering Practice, Vol. 17, Issue 6, pp. 629-641.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Simultaneous Optimization of Planning and Scheduling in an Oil Refinery Edwin Zondervana, Tijn P.J. van Boekela, Jan C. Fransooa, André B. de Haana
Eindhoven University of Technology, P.O. Box 513, 5600MB Eindhoven, The Netherlands
Abstract In earlier work we have developed and tested a scheduling model [1] in the AIMMS software. In this follow-up contribution we develop a planning model. Next, we identify the information flow between the scheduling model and the planning model. Lastly, we integrate the two models in a straightforward fashion using run-modes and a rolling-horizon methodology. With a case study we show that the overall strategy aids systematic integration of the two levels, which allows fast optimization of the short-term as well as the long-term decisions without significantly affecting the quality of the solution. Keywords: Oil refinery, Optimization, Planning, Scheduling, Rolling-Horizon, MILP
1. Introduction
Optimization in an oil refinery can be performed at three decision levels: the planning, scheduling and control level. Traditionally, models for each decision level in the oil refinery were developed and used separately. This resulted in different modeling and solution approaches, which restricted integration of these levels. As integration of the decision levels progresses, the complexity of the problem also increases, resulting in problems that cannot be sufficiently solved with current computer power and available soft computing techniques. For this reason, sequential approaches to solve hierarchically structured problems are preferred to parallel methodologies. It is generally believed that independent optimization of the decision levels yields suboptimal results, suggesting major improvements when an integrated approach is followed. While scheduling concerns short-term decisions, the planning level determines the long-term operational strategy of the refinery. The length of this time horizon is not always the same, but common planning horizon lengths are one or more months or quarters, and in some cases even over one year. Planning decisions typically concern: which crude types to buy, how much of the chosen crude types should be bought and when to buy them. Planning also determines which run-modes should be used in the processing units in such a way that profit is maximized. Information about market demand and the effect of the run-modes on the product yields is vital for making the right planning decisions. Once decisions have been made at the planning level, the optimized planning is sent to the scheduling level, which normally fills in the shorter time horizon, consisting only of the first part of the planning horizon.
The scientific literature offers several oil refinery planning models, for example the models by Pinto et al. [2]. A disadvantage of Pinto's model is that it contains nonlinearities (as a result of the crude oil properties), which gives rise to computational complexity. Gao et al. [3] as well as Göthe-Lundgren et al. [4] propose a simpler model, in which the crude oil property terms are replaced by so-called run-modes. The disadvantage of this model is that it does not include the possibility to change the incoming flows and the product flows. Zhang et al. [5] also developed planning models, in which the production can be influenced by different operating conditions. Zhang's model includes a large number of operating scenarios because the production level can vary around a feed rate centre level using very small step sizes within a 10% bandwidth, which increases the size of the problem considerably. In our work we combine the planning models by Gao [3], Göthe-Lundgren [4] and Zhang [5]: we include run-modes which affect the basic production level that depends on the feed flow (to counter the disadvantage of the models by Gao and Göthe-Lundgren). We allow only a limited number of run-modes (unlike the model of Zhang), with fixed costs and influences on the product flows, to keep the problem size manageable.
2. Problem statement and proposed model
Given the allowed run-modes, feedstock and product prices and the initial inventories, calculate a plan that decides which type of crude each parcel should contain and which run-mode should be chosen for each of the CDUs at each time period. Figure 1 gives an overview of the refinery operations that are included in our model.
Figure 1: Schematic representation of the problem
The outputs of this planning model form the inputs for our scheduling model. The scheduling model needs data such as which vessels arrive at which point in time. Using the run modes that are obtained from the planning model, the production rate and
corresponding profit can be calculated. The scheduling model calculates which vessel should unload which parcel to which storage tank in each period. The planning model determines for the scheduling level which run-mode, which parcel size, which initial inventory level and which CDU demand is appropriate. Planning calculates this on the basis of product and crude prices and on oil composition. The complete picture is depicted in figure 2. The scheduling model will recalculate the inventory level and feed this data back to planning.
Figure 2: Link between planning and scheduling model
The arrow pointing from scheduling to planning represents the feedback which the planning model receives after the scheduling model has finished detailed calculations for the first planning period. The inventories in the planning model are subsequently updated and the planning model is rerun. It is noted that this addition does not make the model iterative, as it does not change the solution of either the planning or the scheduling model at the time it is being solved. The actual updating of the planning model is done by a rolling horizon approach. We use the rolling horizon strategy to update the planning model, but also to keep the problem size manageable. The planning model is solved for the whole horizon, but the scheduling model will only be solved for the current time period. This reduces the CPU time and does not significantly affect the solution because, by the time the next period is started (which was not solved for the current horizon yet), there is most likely new information available (e.g. better demand forecasts), which makes it natural to rerun the model. In figure 3 the rolling horizon approach is illustrated: at the current time, the planning model is solved until the end of the specified horizon, with the information that is available. Once the planning model outcome is known, it is fed to the scheduling model, which uses timeslots that are significantly smaller, to accommodate the higher frequency at which decisions are taken. The objective of the planning model is to maximize the profit for the planning horizon, and it will keep the inventory costs as low as possible. This raises two problems: first, a low inventory could lead to situations where there is not enough crude oil to feed the CDU; secondly, low inventory levels complicate blending of crudes with high or
low density or sulphur level. Because the scheduling gives a more detailed description of the refinery operations, there has to be feedback to the planning model, for example the updated crude oil inventory. With the updated numbers, the planning model can calculate the optimal solution for the next planning horizon that is now shifted one period in time.
Figure 3: Rolling horizon approach
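The interaction in figure 3 can be summarized as a loop. The sketch below only illustrates the information flow; the two solver callables stand in for the AIMMS planning and scheduling models and the dictionary keys are our assumptions, not the authors' implementation:

def rolling_horizon(solve_planning, solve_scheduling, n_periods, horizon, inventory):
    results = []
    for t in range(n_periods):
        # Plan the full (shifted) horizon with the latest inventory data.
        plan = solve_planning(start=t, end=t + horizon, inventory=inventory)
        # Schedule only the first planning period in detail, using the
        # run-modes and parcel assignments fixed by the plan.
        schedule = solve_scheduling(period=t, run_modes=plan["run_modes"],
                                    parcels=plan["parcels"])
        # Feed the detailed inventory levels back before the next roll.
        inventory = schedule["inventory"]
        results.append((plan, schedule))
    return results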
3. Numerical results and discussion
We have developed optimized production plans and schedules for a test problem similar to our motivating example in [1], containing four planning periods, two crude types, four run-modes, two CDUs, four storage tanks, two products, sixteen parcels and eight vessels. The initial tank volumes and CDU demands are considered known model inputs. The objective function is defined as a profit. By introducing product flows (both in the planning and scheduling models), profit can be calculated as revenue minus costs, as opposed to working with crude processing margins. All optimizations have been performed in AIMMS 3.10 with the CPLEX 11.2 solver on a computer with an Intel Core 2 Duo T7250 with 2GB RAM. The formulated MILPs contain, for the planning model, 203 equations and 198 variables of which 80 are discrete, and, for the scheduling model, 1868 equations and 1514 variables of which 76 are discrete. The planning model determines which crude type should be present at each predefined parcel-vessel-period combination. This data is then fed to the scheduling model, which subsequently calculates the crude oil flows from the vessels to the intermediate storage tanks and then to the CDUs, and after that the product flows from the CDUs. Figure 4 shows the optimized profiles of crudes going to the CDUs. Initially, the feed is at its maximum level, as the model is trying to lower the inventory costs by decreasing the tank holdups as fast as possible. As the feed ramps down, the total feed over the whole horizon should not exceed the demand. With the details of the inventory determined by the scheduling model, the planning model is run again. As the planning moves over the new, shifted horizon, the crude-parcel combinations and run-modes are recalculated for the final three weeks of the previous planning horizon, as well as a new week that is now part of the planning horizon.
The total profit computed with our integrated planning-scheduling model equals 41,532,800 USD. The CPU time was less than 10 min (581 sec). These values relate very well to the findings of the separate scheduling model which we reported earlier [1] (~5%). As an example, figure 4 shows the optimized crude oil flows to the CDUs.
Figure 4: Optimized crude oil flows from the (aggregated) tank inventory to the CDU’s after the first optimization run.
4. Conclusion
In this contribution we have extended the model developed in earlier work [1] for the scheduling of crude unloading, blending and charging in an oil refinery with a planning model. This new model, which utilizes run-modes and a rolling horizon methodology, is clearly an improvement, as the CPU time remained manageable and the solution integrity of the separate scheduling model was maintained, while the optimization procedure was extended to a larger time horizon. We also modified the model by changing the objective from a cost minimization to a profit maximization, as the cost minimization did not directly influence the turnover.
References [1] van Elzakker, M., Zondervan, E., Fransoo, J., de Haan, A. Optimization of Crude Unloading, Charging and Blending in an Oil Refinery, Proceeding of ESCAPE 20, Naples, Italy, 2010 [2] Pinto, J.M., Joly, M., Moro, L.F.L. Planning and scheduling models for refinery operations (2000) Computers and Chemical Engineering, 24 (9-10), pp. 2259-2276. [3] Gao, Z., Tang, L., Jin, H., Xu, N. An Optimization Model for the Production Planning of Overall Refinery (2008) Chinese Journal of Chemical Engineering, 16 (1), pp. 67-70. [4] Göthe-Lundgren, M., T. Lundgren, J., A. Persson, J. An optimization model for refinery production scheduling (2002) International Journal of Production Economics, 78 (3), pp. 255-270. [5] Zhang, J., Zhu, X.X., Towler, G.P. A simultaneous optimization strategy for overall integration in refinery planning (2001) Industrial and Engineering Chemistry Research, 40 (12), pp. 2640-2653.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Efficient Scheduling of Batch Plants Using Reachability Tree Search for Timed Automata with Lower Bound Computations Subanatarajan Subbiaha, Christian Schoppmeyera, Sebastian Engella
Process Dynamics and Operations Group, Department of Biochemical and Chemical Engineering, Technische Universität Dortmund, Emil-Figge Str. 70, 44227 Dortmund, Germany
Abstract In this paper we discuss an extension of a recent approach to solve batch scheduling problems using reachability analysis for timed automata (TA) by embedding lower bound computations in the reachability algorithm. We propose an extension of the embedded LP-formulation to handle scheduling problems with NIS policy and an improved minimum remaining processing time (MRPT) procedure to compute the lower bounds. The proposed bounding procedures are tested on typical batch scheduling problems. The comparative study shows that the MRPT-based bounding procedure is efficient and increases the overall performance significantly in comparison to the LP-based bounding procedure. Keywords: Batch scheduling, branch-and-bound, timed automata, reachability analysis.
1. Introduction
Planning and scheduling of batch plants are complex tasks and require the use of intelligent decision-support systems. The state-of-the-art approach to handle scheduling problems in batch plants is to model them as MI(N)LP and to solve them using branch-and-bound or cutting-plane methods. An alternative approach is to model the problem as timed automata (TA) and to solve it using cost-optimal reachability analysis [1]. The advantages of the TA-based approach include: an intuitive graphical formalism to specify the problem, modular modeling of the recipes and of the resources individually as sets of automata, and the use of efficient reachability algorithms. The TA-based approach is also confronted with the problem of combinatorial explosion, thus demanding efficient procedures to reduce the search space. Several reduction techniques were proposed in the past to enhance the efficiency of the search [2]. In order to speed up the solution procedure and to provide a measure of the quality of the solutions, lower bounds are widely used in branch-and-bound algorithms. Panek et al. extended the reachability algorithm for TA by embedding a specialized LP formulation to generate lower bounds [1,3]. Computational results showed that embedding LPs reduces the search space considerably, but a substantial amount of the computation time is needed to create and to solve the LPs in the nodes. This motivated the search for a bounding scheme which is computationally cheap and can still provide efficient lower bounds. In this contribution, we revisit the LP-based bounding scheme, which is extended by additional equations to handle non-intermediate storage policies. We then explain an alternative bounding scheme where the lower bounds are computed using a minimum remaining processing time (MRPT) procedure. We compare the performance of the proposed bounding procedures for standard job-shop problems as well as a benchmark example for batch scheduling [5].
2. Timed Automata Based Scheduling
2.1. Modeling approach
The conceptual idea of modeling a scheduling problem as TA is as follows. Consider a simple example process where two products A and B have to be produced using processing units U1 and U2. The operational sequence and the durations for product A are (o1, U1, 2) → (o2, U2, 5). Similarly, for product B they are (o3, U2, 5) → (o4, U1, 2). It is assumed that there exist no storage units to store the intermediate materials (NIS) and the objective is to minimize the makespan. For each processing unit 'k' a separate resource automaton is created and for each product 'j' a separate recipe automaton with a clock is created (see Fig. 1). Each resource automaton consists of an idle location representing that the resource is not executing a recipe operation and a busy location representing that it is executing an operation. Allocation of the resource to perform a recipe operation is represented by a transition from the idle to the busy location and release of the resource after executing the operation is represented by a transition from busy to idle. In the recipe automaton of a product 'j' each recipe operation 'i' is represented by two locations, namely waiti - indicating that the operation is waiting to be executed in the corresponding resource and execi - indicating that the operation is executing. An additional location fj is defined to indicate the termination of recipe 'j'. Starting the execution of an operation 'i' by occupying a resource is represented by a transition labeled Di and finishing an operation by releasing a resource is represented by a transition labeled Mi. A clock cj is introduced in the recipe automaton to model the elapse of time during the operations. The invariants in the execi location of the corresponding operation force the automaton to leave the location once the durations have expired. The guard conditions on the clocks on the transitions labeled with M ensure that the task is executed for the corresponding duration only. The synchronization of the transitions in the resource automata and the recipe automata is realized by the corresponding synchronization labels of the transitions. This ensures that the resource does not perform more than one operation at a time. Note that if the resource is in its idle location it could either be holding the material that it has processed lately and waiting for the succeeding unit to start processing or it could be empty. In order to model the NIS policy, for each resource 'k' a global shared binary variable Sk is introduced. The initial value of the global variable Sk is set to '0' indicating that the resource is empty and '1' if it is holding the material processed. In the illustrative example, the shared variable S1 of resource U1 denotes its state. In the transitions, the guards on the shared variables ensure that an operation can start only if the corresponding resource is empty, and the actions update the status of the resources.
2.2. Reachability analysis
The interacting resource automata and recipe automata are composed to form the complete model of the scheduling problem. This is realized using a technique known as parallel composition, which is usually performed on-the-fly during the reachability analysis to reduce space complexity. The composed automaton is a directed graph, called the reachability graph, with nodes representing the state of the system and arcs representing the transition from one state to another.
(Fig. 1: recipe automaton for product A, with locations wait o1, exec o1, wait o2, exec o2 and fA, clock guards and invariants on c1, and updates of the shared variables S1 and S2; resource automata for units U1 and U2, each with idle and busy locations.)
A cost-optimal reachability
analysis is performed on the composed automaton searching for a path from the initial location linit , which represents that all recipe operations are waiting to be executed and all resources are idle, to the final target location ltar , which represents that all the operations have finished execution, with minimum cost. Every path from linit to ltar ensures a feasible schedule and the path with minimal cost among all the paths from linit to ltar represents the optimal schedule. The cost is computed by introducing a global clock which is started at linit , never reset to zero, and stopped when ltar is reached. The time elapsed on the global clock at a node in the tree indicates the cost incurred to reach this node, and at the target node ltar its value is the makespan.
3. Embedded Lower Bound Computation Scheme
The overall structure of the implementation with the embedded steps of the LP-based and the MRPT-based bounding procedures to compute lower bounds is shown in Fig. 2.
3.1. Embedded Linear Programming Routine
The structure of the MILP model with the detailed algebraic formulation is given below.
Sets: The sets declared and used are as follows: J - recipe orders, O - operations, K - units, Oj - operations of order j where j ∈ J, Rk - operations to be executed on unit k where Rk ⊆ O, Oend - operations which have no successors with respect to a recipe.
Parameters and Constants: Durations of the operations: d(o) ≥ 0, ∀o ∈ O. A conservative, safe time horizon and an upper bound for the makespan: H ≥ 0.
3.1.1. Variables
Continuous variables: The starting times of the operations: s(o) ≥ 0, ∀o ∈ O.
Binary variables: These encode the precedence relation between operations that bid for the same resource. For each resource k ∈ K and for each pair of operations o, o' where o, o' ∈ Rk and o ≠ o', a binary variable p(o, o') indicates the order in which the operations are executed: p(o, o') = 1 if o is executed before o' and p(o, o') = 0 otherwise.
3.1.2. Equations
Starting time and finishing time of operations: Every operation must be started at or after time zero and must be completed before H: ∀o ∈ O, s(o) ≥ 0 and s(o) + d(o) ≤ H.
Mutual exclusion properties: Simultaneous execution of two operations on the same resource is excluded: ∀o, o' ∈ Rk with o ≠ o' and k ∈ K: p(o, o') + p(o', o) = 1.
Precedence constraints: All operations belonging to a recipe must be executed in the specified order: ∀om, on ∈ Oj where j ∈ J and m < n: s(om) + d(om) ≤ s(on).
Sequencing constraints: The starting times and the finishing times of the operations executed on resource k, k ∈ K, are ordered by the following conditions, ∀o, o' ∈ Rk with o ≠ o': s(o) + d(o) - s(o') ≤ H·(1 - p(o, o')) and s(o') + d(o') - s(o) ≤ H·(1 - p(o', o)). These inequalities link the continuous and the binary variables.
Blocking constraints: If the storage policy between the units is considered to be NIS, then an operation o' can start in unit k only if unit k is unoccupied and the content processed lately in unit k has been transferred to another unit. Let o+ denote the immediately succeeding operation of operation o of recipe j; then the blocking constraint is modeled as follows: ∀k ∈ K, ∀o, o' ∈ Rk with o ≠ o': s(o+) - s(o') ≤ H·(1 - p(o, o')) and s(o'+) - s(o) ≤ H·(1 - p(o', o)).
Objective function: min ψ, with ψ ≥ s(o) + d(o), ∀o ∈ Oend. At each node, whenever a new operation o has been started on a resource k and o' has not been scheduled yet, the corresponding LP is updated by fixing the binary variables p(o, o') and the starting times s(o) and solved to obtain the lower bound. The trace of the scheduled and unscheduled operations is stored in every node of the tree.
3.2. Embedded Minimum Remaining Processing Time Routine (MRPT)
In the reachability tree, whenever an operation finishes, using the information from the current node, the MRPT bounding scheme computes the maximum among the following two expressions and provides a lower bound.
Recipe based lower bound expression: For all given recipe orders 'j' where j ∈ J,
lb_j = GC + Σ dur(o_unscheduled), o_unscheduled ∈ Oj,
where GC is the value of the global clock when the node was reached, and the second term is the sum of the durations of all unscheduled operations that belong to recipe order 'j'. In case of recipes with alternative paths, the minimum durations among the alternative operations which are unscheduled are chosen.
Resource based lower bound expression: For all given resources 'k' where k ∈ K,
lb_k = GC + min(t_unscheduled operation) + Σ dur(o_unscheduled), o_unscheduled ∈ Rk,
where the second term is the earliest time at which the resource could start an operation from the set of unscheduled operations and the third term is the sum of the durations of all unscheduled operations that have to be performed in 'k'. The lower bound is then given by max(lb_j, lb_k), ∀j ∈ J and k ∈ K. In case of recipes with alternative paths, the duration of the unscheduled operations is considered to be zero as it is not sure whether the resource will execute the operation or not. Since the lower bound for a problem with UIS is a valid and safe (but conservative) lower bound for the same problem with NIS, the MRPT-based lower bound computation for UIS can be used for the NIS case too.
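For illustration, the MRPT bound can be computed in a few lines. The sketch below uses our own naming; the earliest times at which each resource could start an unscheduled operation are assumed to be available from the current node:

def mrpt_lower_bound(gc, recipe_ops, resource_ops, dur, earliest_start, unscheduled):
    lb = gc
    # Recipe-based bound: global clock plus the remaining work of order j.
    for ops in recipe_ops.values():
        lb = max(lb, gc + sum(dur[o] for o in ops if o in unscheduled))
    # Resource-based bound: earliest possible (re)start of unit k, taken
    # here as an offset from the clock, plus the total duration of the
    # unscheduled operations that have to be performed on k.
    for k, ops in resource_ops.items():
        remaining = [o for o in ops if o in unscheduled]
        if remaining:
            lb = max(lb, gc + earliest_start[k] + sum(dur[o] for o in remaining))
    return lb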
4. Numerical Results
The lower bound computation schemes were implemented in the TA-based scheduling tool TAOpt developed in our group. The search algorithm chosen is a combination of a depth-first and a best-first search strategy. The reduction techniques weak non-laziness (NL) and the safe and the unsafe sleep-set method (SSM) [2] are used for the tests. For all tests the computation equipment used is a Linux machine with 2×2.4 GHz speed and 16GB memory, and the embedded LPs are solved using CPLEX 12.2. In order to investigate the performance of the bounding schemes, two series of instances are used. The first series consists of a set of job-shop problems that were generated using the benchmark instance generator [4]. The generator creates as many operations as there are resources for each job; the durations of the operations are uniformly distributed within the range [10-50]. Table 1 shows the number of nodes explored and the computation times required to prove the optimality of the best solution obtained in the reachability analysis. The results clearly show that while both bounding schemes in combination with the SSM and NL reduction schemes lead to a better reduction than the SSM + NL scheme alone, the MRPT-based bounding technique significantly decreases both the number of nodes explored and the computation time in comparison to the LP-based bounding technique.
Table 1: Comparison results on the self-generated job-shop instances: (EN) - number of nodes explored to prove optimality; (TCPU) - computation time in CPU s required to prove optimality.

           Safe (SSM + NL)        LP + Safe (SSM + NL)   MRPT + Safe (SSM + NL)
Jobs/Res   EN           TCPU     EN          TCPU       EN         TCPU
5/5        4,561        0.04     197         0.04       221        0.02
5/6        69,524       0.78     2,895       1.40       2,235      0.07
5/7        270,354      3.50     31,865      13.15      15,211     0.37
6/5        46,071       0.50     1,950       1.11       623        0.04
6/6        1,853,996    25.30    24,780      14.67      14,939     0.41
6/7        2,431,789    36.64    180,138     91.49      65,263     2.05
7/5        2,215,662    30.01    24,368      17.28      6,346      0.19
7/6        10 million*  169.33*  217,501     166.71     72,275     2.27
7/7        10 million*  175.59*  1,620,984   1034.92    558,339    19.59
Table 2: Comparison results on the case studies from [5]: (EN) - number of nodes explored to prove optimality; (TCPU) - computation time in CPU s required to prove optimality.

Case study   LP+Safe(SSM+NL)   MRPT+Safe(SSM+NL)   LP+Unsafe SSM   MRPT+Unsafe SSM
             EN      TCPU      EN       TCPU       EN      TCPU    EN       TCPU
CS1UIS       1,175   0.57      699      0.01       617     0.20    356      0.01
CS2UIS       244     0.12      176      0.01       211     0.08    186      0.01
CS3UIS       -       -         28,540   1.14       -       -       5,281    0.23
CS1NIS       1,422   0.58      1,071    0.03       384     0.09    376      0.02
CS2NIS       857     0.25      863      0.02       362     0.10    378      0.01
CS3NIS       -       -         69,421   2.53       -       -       9,905*   0.44*
* sub-optimal solution (unsafe SSM pruned the optimal trace)
The second series of instances considered is from the paper [5], where 3 case studies are investigated. In the interest of space the problem data are not given here. The first case study, CS1, is a 3-stage process with 4 units where 4 different products are produced. The demand is to produce 2 batches of product 1 and 1 batch each of the other 3 products. The second case study, CS2, is a 4-stage process with 4 units where 4 different products are produced. The total demand is to produce 1 batch each of the 4 products. The third case study, CS3, is a 5-stage process with 6 units where 5 different products have to be produced. The recipes have alternative production paths and the total demand is to produce 10 batches (2 batches each). Each of the case studies has been solved using the UIS storage policy and the NIS storage policy. Table 2 shows the number of nodes explored and the computation time required to prove optimality in the reachability analysis with the combination of the safe and unsafe SSM + NL and the proposed bounding schemes. Since the third case study contains alternative production paths, the LP-based bounding shown above cannot be performed. Both bounding schemes with the safe reduction technique obtain the optimal solution. In the tests where the bounding schemes were combined with the unsafe technique, the search space was considerably reduced; however, in CS3 with NIS the optimal solution was pruned due to the unsafe reduction technique. The results clearly show that the MRPT bounding scheme significantly decreases both the number of nodes explored and the computation time in comparison to the LP-based bounding technique in all cases.
5. Conclusions
An extension of the TA-based approach to batch scheduling by embedded lower bound computation schemes has been presented. The LP-based procedure and the MRPT-based procedure were tested on pure job-shop problems and on batch scheduling problems with different storage policies. The results show that the latter is substantially more efficient than the former. In contrast to LP-based bounding, the MRPT-based bounding procedure reduces both the space complexity in terms of the number of nodes explored and the time complexity in terms of the computation time required to prove optimality or to find the best schedule. The main advantage of the proposed approach in comparison to the use of MILP solvers is that it can compute first feasible schedules very quickly. The closeness to the optimum, however, is not quantified if the search is terminated early, and continuous degrees of freedom (e.g. batch sizes) can only be modeled approximately. Current work includes the use of the lower bound calculations in batch scheduling problems with changeovers and with other objective functions.
References
1. S. Panek, et al., 2006, Control Engineering Practice, (14), 1183-1197.
2. S. Panek, et al., 2008, Computers and Chemical Engineering, (32), 275-291.
3. A. S. Manne, 1960, Operations Research, (8), 219-223.
4. E. Demirkol, et al., 1998, European Journal of Operational Research, (109), 137-141.
5. S. Ferrer-Nadal, et al., 2008, Industrial Engineering Chemistry Research, (47), 7721-7732.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Robust Market Launch Planning for a Multi-Echelon Pharmaceutical Supply Chain Klaus Reinholdt Nyhuus Hansena, Martin Grunowb, Rafiqul Gania
a Technical University of Denmark, Kgs. Lyngby 2800, Denmark
b Technische Universität München, München 80804, Germany
Abstract
It is well known that the pharmaceutical industry is struggling with the increasing cost and length of R&D projects. Earnings of a drug drop drastically after patent expiration. Thus, the industry spends much effort on reducing Time-to-Market. In the literature, little attention is given to drug launching activities after the drug has been approved. In this paper, we present a recourse-based stochastic model, which allows for time phasing the market entries to balance the fluctuating demand with the fixed and periodic production of the active pharmaceutical ingredient. The two major risk elements during launch are forecasting inaccuracy and the risk of a required label change from local regulatory authorities. Robust solutions are found by implementing the Robust Optimization framework.
Keywords: Market Launch Planning, Multi-Echelon Pharmaceutical Supply Chain, recourse-based stochastic programming, Robust Optimization
1. Introduction
The process of launching a new drug is receiving a lot of attention from the pharmaceutical companies, as they are under pressure to launch new drugs faster to counter the increasing R&D cost. To help address this issue, three distinct planning problems related to the introduction of new products have been identified in [1]: planning (1) the portfolio of investigational new drugs, (2) the capacity level which will satisfy future requirements, and (3) the production of the drug. The first two planning problems include the uncertainty of obtaining an approval for the drug. Here attention is given to the two main activities after the approval is obtained: the preparation of the supply chain for the launch and the negotiations to determine sales price and subsidies with local authorities. First, supply chain preparations are necessary due to the fixed and periodic supply of the active pharmaceutical ingredient, which is insufficient to supply the initial
surge for the drug required to fill the entire downstream supply chain. Secondly, it is required that negotiations are carried out with local authorities before entering a market, since packaging material has to be approved and subsidies and sales prices negotiated. Two central risks influence operations. First, the demand forecast uncertainty for a new drug is very large. The drug manufacturer has to find a balance between investing in inventory and the risk of supply shortage, which is difficult to overcome due to the limited flexibility of the production processes. Secondly, the regulatory authorities can demand changes in the packaging material. As a result, any finished, i.e. packaged, products on inventory have to be scrapped; they cannot be repackaged. A pharmaceutical company has to decide whether to place the stock further upstream; see the divergent supply chain in figure 1 (a). That would reduce the scrapping risk but increase the Time-to-Market (TTM). Alternatively, the company can focus on a short TTM by packaging the product prior to the negotiations; aptly called risk packaging because of the risk of scrapping the packaged products.
Figure 1: (a) The divergent supply chain and (b) the scenario tree structure for each market.
In our approach we handle both risk elements by use of a stochastic program with recourse over the six scenarios per market shown in figure 1 (b). In programming with recourse, variables are divided into design variables, which are independent of the uncertain parameters, and control variables, which allow for corrective measures as the uncertain data becomes known. Since production must be planned ahead, flow variables before and at the packaging site are design variables; see figure 1 (a). The remaining downstream variables are control variables. It is assumed that both uncertainties are revealed simultaneously. A good review of the recourse terminology is given in [2]. Solving a program with recourse gives the best expected solution, but may lead to solutions that are highly vulnerable to scenarios with a large downside. Managers in the pharmaceutical industry are often risk averse and prefer stable solutions despite reduced expected profit. To get solutions that are less sensitive to variations in the uncertain parameters, [3] developed a Robust Optimization framework. By penalizing the objective function with the variance of the
solution, solutions are found where the objective function value stays close to the expected solution despite variations in the uncertain parameters. In [4] the framework is applied to a planning model of a network of chemical processes.
2. Model Formulation
Due to space limitations we only present key equations from the MILP model. The sets used are m ∈ M for markets, t ∈ T for time and s ∈ S for scenarios.
2.1. Market Access and Market Launch Constraints
Equations 1–3 below are used for time phasing the market launches, l_mt, and starting the reimbursement negotiations, c_mt. The drug can be launched only once in each market according to Eq. 1. Eq. 2 forces reimbursement negotiations of length N_m to be completed before a launch. Eq. 3 limits the number of negotiations carried out at any given time, to reflect resource limitations, e.g. on the personnel available to conduct the negotiations.

$$\sum_{t=1}^{T} l_{mt} \le 1 \qquad \forall m \in M \qquad (1)$$

$$\sum_{t'=1}^{t-N_m} c_{mt'} \ge l_{mt} \qquad \forall m \in M,\ t \in T \qquad (2)$$

$$c_{mt} + \sum_{m' \in M \setminus \{m\}} \ \sum_{t'=t-N_{m'}+1}^{t} c_{m't'} \le NC \qquad \forall m \in M,\ t \in T \qquad (3)$$
2.2. Supply Chain Flow Constraints
The inventory balance in Eq. 4 is the only supply chain flow constraint shown here. The added scrap variable, sc_mts, is needed for the scenarios in which a label change is required, forcing the company to dispose of the finished product inventory. r_mts is the amount of products required given the demand. im_mts and pm_mt are the market-specific amounts of inventory and packaged drug, respectively, as the drug becomes market specific in the packaging stage.

$$im_{m,t-1,s} - sc_{mts} + pm_{mt} = r_{mts} + im_{mts} \qquad \forall m \in M,\ t \in T,\ s \in S \qquad (4)$$
2.3. Scrap Constraint
The scrap variable from Eq. 4 is set through Eq. 5. The variable equals the inventory on hand after the previous period only if the reimbursement negotiations are just concluded in t, i.e. started N_m periods before, and the scenario prescribes a label change via IP_ms. Here K is a large number.
$$sc_{mts} \ge im_{m,t-1,s} - K\,(1 - c_{m,t-N_m}\,IP_{ms}) \qquad \forall m \in M,\ t \in T,\ s \in S \qquad (5)$$
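To make the structure of Eqs. 1–5 concrete, the following minimal sketch builds the launch-timing, inventory and scrap constraints with the open-source PuLP library. It is an illustration only, not the authors' implementation: the set sizes, negotiation lengths, demand bound, label-change indicator IP and the big-M value K are invented placeholders, and the cash-flow objective of Eq. 6 is replaced by a dummy.

```python
import pulp

# Hypothetical dimensions and data (placeholders, not from the paper)
M_, T_, S_ = range(3), range(1, 13), range(6)   # markets, periods, scenarios
N = {m: 3 for m in M_}                          # negotiation lengths N_m
NC = 2                                          # max concurrent negotiations
IP = {(m, s): int(s % 2 == 0) for m in M_ for s in S_}  # label change indicator
K = 1e5                                         # big-M constant

mdl = pulp.LpProblem("launch_planning_sketch", pulp.LpMaximize)
l  = pulp.LpVariable.dicts("l", (M_, T_), cat="Binary")            # launch
c  = pulp.LpVariable.dicts("c", (M_, T_), cat="Binary")            # start negotiation
im = pulp.LpVariable.dicts("im", (M_, T_, S_), lowBound=0)         # inventory
pm = pulp.LpVariable.dicts("pm", (M_, T_), lowBound=0)             # packaged drug
sc = pulp.LpVariable.dicts("sc", (M_, T_, S_), lowBound=0)         # scrap
r  = pulp.LpVariable.dicts("r", (M_, T_, S_), lowBound=0, upBound=10.0)  # sales (control)

# Dummy objective standing in for the Eq. 6 cash flows
mdl += pulp.lpSum(l[m][t] for m in M_ for t in T_)

for m in M_:
    mdl += pulp.lpSum(l[m][t] for t in T_) <= 1                    # Eq. 1
    for t in T_:
        # Eq. 2: negotiation of length N_m completed before launch
        mdl += pulp.lpSum(c[m][tp] for tp in T_ if tp <= t - N[m]) >= l[m][t]
        # Eq. 3: cap on concurrently running negotiations
        mdl += c[m][t] + pulp.lpSum(
            c[mp][tp] for mp in M_ if mp != m
            for tp in T_ if t - N[mp] + 1 <= tp <= t) <= NC
        for s in S_:
            prev = im[m][t - 1][s] if t - 1 in T_ else 0
            # Eq. 4: market-level inventory balance
            mdl += prev - sc[m][t][s] + pm[m][t] == r[m][t][s] + im[m][t][s]
            # Eq. 5: scrap forced to the previous inventory after a label change
            if t - N[m] in T_:
                mdl += sc[m][t][s] >= prev - K * (1 - c[m][t - N[m]] * IP[m, s])

mdl.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[mdl.status])
```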
2.4. Objective Function
The profit as given in Eq. 6 is to be maximized, where H_ms(l_mt, im_mts, sc_mts) is the cash flow as a function of (i) the revenue of sales given l_mt, (ii) the holding cost given im_mts plus the upstream inventory, and (iii) the scrapping cost given sc_mts. L_ms denotes the probability of each scenario for each market.

$$\max \sum_{m=1}^{M} \sum_{s=1}^{S} L_{ms}\, H_{ms}(l_{mt}, im_{mts}, sc_{mts}) \qquad (6)$$
All continuous variables are positive, and the inventory variables and the flow variables describing packaged quantities have limited capacity. The design variables presented here are (c_mt, l_mt, pm_mt) and the control variables are (im_mts, sc_mts, r_mts).
2.5. Inclusion of Robust Optimization
The model presented above is a typical MILP model with recourse. Its solution is exposed to scenarios with potentially low profit or even loss. To reflect risk aversion, the Robust Optimization framework from [3] is implemented. Here the term in Eq. (7) is added to Eq. (6), since profit should be maximized:

$$-\lambda \sum_{m=1}^{M} \sum_{s=1}^{S} L_{ms}\left(\left(H_{ms} - \sum_{s'=1}^{S} L_{ms'} H_{ms'}\right) + 2\,\theta_{ms}\right) \qquad (7)$$
λ is a weight which can be set freely and is a measure of the risk aversion: a higher λ means a higher penalty in the objective function and reflects higher risk aversion. Eq. 7 penalizes the effect of radical scenarios through the difference between the scenario and the expected solution. θ_ms, used as in [5], ensures that the expression is positive; θ_ms is found through Eq. 8.

$$\theta_{ms} \ge \sum_{s'=1}^{S} L_{ms'} H_{ms'} - H_{ms} \qquad \forall m \in M,\ s \in S \qquad (8)$$
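As a quick numeric illustration of Eqs. 7–8 (a sketch with made-up numbers, not code from the paper): θ_ms picks up the negative part of the deviation, so the penalized term behaves like an absolute deviation around the expected cash flow.

```python
import numpy as np

def robust_penalty(H, L, lam=1.0):
    """Yu-Li style robustness penalty for one market.

    H   : array of scenario cash flows H_ms
    L   : array of scenario probabilities L_ms (sums to 1)
    lam : risk-aversion weight lambda
    """
    expected = float(np.dot(L, H))           # sum_s' L_ms' H_ms'
    theta = np.maximum(expected - H, 0.0)    # Eq. 8, tight at the optimum
    # Eq. 7: (H_ms - expected) + 2*theta_ms equals |H_ms - expected|
    return lam * float(np.dot(L, (H - expected) + 2.0 * theta))

# Made-up example: six scenarios for one market
H = np.array([120.0, 100.0, 60.0, 110.0, 95.0, 40.0])
L = np.array([0.1, 0.3, 0.1, 0.2, 0.2, 0.1])
print(robust_penalty(H, L, lam=1.0))  # equals the mean absolute deviation of H
```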
Presentations of the Restricted Recourse framework and of model robustness have been omitted due to lack of space; please refer to [4] and [3], respectively.
3. Illustrative Example
Data has been created to reflect a realistic case study, which includes 10 markets over a 30-period time horizon. Figure 2 shows the markets covered by both the Market Launch Planning model [MLP], i.e. including equations 1 to 6, and the Robust Market Launch Planning model [RMLP], i.e. including equations 1 to 8. Here, delays are gaps between completion of negotiations and market launch. The risk aversion weight, λ, is set to 1. As the approval of the
drug is given in period 12, negotiations cannot start before that period. After period 17, no new markets are entered for either model, as the ramp-up of capacity follows the increase in demand.
Figure 2: Illustration of selected markets and delay between launch and negotiation completion.
As can be seen, the main difference between the solutions is that RMLP launches the drug in more and smaller markets and awaits the outcome of the negotiations in the larger markets 4 and 5. The profit of the two solutions is almost the same, while revenue is reduced by less than 1 % for RMLP compared to MLP. In the RMLP solution, the expected scrap is reduced by 33 %, though holding cost rises by 13 % compared to MLP. The percentage of total inventory held at the upstream stocking point increases by less than 1 % for the RMLP, since more inventory of packaged drug is held to supply more markets. In the RMLP model the variance and the maximum deviation are reduced by 31 % and 28 %, respectively, compared to the MLP model. Computation times are 26 seconds and 39 seconds for the MLP and the RMLP, respectively.
4. Conclusion and Further Work
We presented a modeling approach for planning market launches of new pharmaceutical products, which includes a robust optimization framework. The model was tested on a case study to illustrate its effectiveness. Future work will focus on linking the modeling approach to capacity planning and production planning.
References
[1] N. Shah, Computers and Chemical Engineering 28 (2004) 929.
[2] N.V. Sahinidis, Computers & Chemical Engineering 28 (2004) 971.
[3] J.M. Mulvey, R.J. Vanderbei, S.A. Zenios, Operations Research 43 (1995) 264.
[4] S. Ahmed, N.V. Sahinidis, Industrial & Engineering Chemistry Research 37 (1998) 1883.
[5] C.S. Yu, H.L. Li, International Journal of Production Economics 64 (2000) 385.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A new Coordination Heuristic for Plant-wide Planning and Scheduling
Chaojun Xu,a,c Christian Staud,b Guido Sand,a Sebastian Engellc
a Process and Production Optimization, ABB AG Corporate Research Center Ladenburg, Germany (Tel: +49 (0)6203-716044; e-mail: [email protected], [email protected])
b Faculty of Mathematics and Computer Sciences, Heidelberg University, Germany (e-mail: [email protected])
c Process Dynamics and Operations Group, Biochemical and Chemical Engineering, TU Dortmund, Germany (e-mail: [email protected])
Abstract
The work is motivated by a large-scale steel making process that consists of a meltshop as the first production section and a hot rolling mill as the second production section. The first section produces intermediate products (steel slabs of different qualities) which can be stored in an intermediate storage (a slab yard) and which are consumed by the second section. Production planning for each of the sections constitutes a difficult multistage scheduling problem, and the two problems are currently solved independently. In this paper, approaches to the coordination of the formerly uncoordinated schedulers are discussed. The objective is to reduce the lead time of stored intermediates. Shortening the lead time can significantly reduce the energy consumption, since the hot slabs produced by the melt shop cool down over time and need to be reheated for the hot rolling mill, consuming large amounts of natural gas. A generic model formulation for batch schedule coordination problems is presented. The model is based on due date / release date constraints for each set of slabs. Three different algorithms to optimize the due / release dates such that the storage times of intermediates are minimized are discussed and compared: an algorithm based on Lagrangean decomposition, the black-box algorithm MCS, and a new coordination heuristic. The comparison shows the advantage of the proposed heuristic in terms of solution quality and computational effort. Finally, the problems encountered when applying Lagrangean decomposition techniques to the scheduling coordination problem with a piecewise linear objective function are discussed.
Keywords: Optimization and control, production operations, plant-wide planning and scheduling, MILP, coordination, horizontal integration.
1. Problem definition
1.1. Steel plant coordination problem
The large-scale steel making production scheduling problem with two consecutive production sections is illustrated in Figure 1. The meltshop scheduler (MSO) and the hot rolling scheduler (HSO) are formulated and solved as mixed-integer linear optimization problems (MILP). Currently the operators can generate an overall production schedule for the two sections either by a pull strategy, running the hot rolling scheduler first and then the MSO, or by a push strategy, first scheduling the melt shop and then performing the HSO. In both strategies one scheduler is adapted to the result of the
other section, and the potential of coordination, in particular of charging slabs hot into the reheating furnace (energy saving), is wasted. However, it is unrealistic to merge the existing MSO and HSO into one centralized scheduling optimization model of the entire plant including the aspects above, due to the large computational cost of solving the merged MILP model. In a previous paper (Xu et al. 2010) it was demonstrated that a coordination scheme can be advantageous over pull, push and centralized strategies with respect to solution quality and computational effort. In this paper the coordination concept is explored further and a new algorithm for the coordination of schedulers of sequential production steps is proposed.
Figure 1: Distributed metal production scheduling modules
1.2. Mathematical formulation
In this section a concise mathematical formulation of the generalized scheduling coordination problem is proposed, as illustrated in Figure 2, where schedulers A and B are assumed to be minimizing the makespan of their respective sections. It is proposed to use due date (for the upstream section) and release date (for the downstream section) constraints to coordinate the schedulers. The coordinator provides new due and release dates of the intermediate products (jobs) to the schedulers A and B, denoted as the common linking variables (b). The coordinator aims at minimizing the coordination objective function F_Coord(b) in (1), which depends on the objective function values (F_A, F_B) of the schedulers in (2) and (3). The schedulers A and B aim at minimizing the makespans in their sections as well as the sums of the slacks (∑_{i∈I} e_i, ∑_{i∈I} l_i) between the linking variable b_i ∈ (b) of each job i and its corresponding completion time C_i^A in section A and its start time S_i^B in section B. From the point of view of the coordinator, each job i has to arrive at point b_i in the storage and to be processed further as soon as possible after this date. This implies minimizing the total lead time in the storage area, which is one part of the coordination objective: F_Coord(b) = F_A(b) + F_B(b) = ∑_{i∈I} e_i + ∑_{i∈I} l_i = ∑_{i∈I} (S_i^B − C_i^A).
Figure 2: Architecture of the proposed coordination system

$$\min_{(b)} \ F_{Coord}(b) = F_A(b) + F_B(b) \quad \text{s.t. other constraints of storage} \qquad (1)$$
$$F_A(b) = \begin{cases} \min_{(var_A)} \ makespan_A + \sum_{i \in I} e_i \\ \text{s.t. } e_i = b_i - C_i^A \quad \forall i \in I \\ \text{other constraints of } A \end{cases} \qquad (2)$$

$$F_B(b) = \begin{cases} \min_{(var_B)} \ makespan_B + \sum_{i \in I} l_i \\ \text{s.t. } l_i = S_i^B - b_i \quad \forall i \in I \\ \text{other constraints of } B \end{cases} \qquad (3)$$
This coordination problem can be considered as a bilevel programming problem (BLPP), where (b) are the upper-level variables in (1) and (var_A) and (var_B) are the sets of lower-level variables in (2) and (3), with C_i^A ⊂ var_A and S_i^B ⊂ var_B. The advantage of this formulation is that several off-the-shelf BLPP algorithms, see (Colson et al. 2007), can be applied according to the nature of the lower-level optimization problems, here (2) and (3). The most often used methods are Lagrangean-based methods, which either replace the lower-level objective value functions (F_A, F_B) by KKT conditions and solve these as one optimization problem, or reformulate the whole BLPP as a centralized problem applying Lagrangean relaxation techniques. The latter approach is discussed in the next section; a new intersection coordination heuristic to solve this BLPP is introduced in Section 3. Additionally, the whole BLPP can be considered as a black-box problem with two implicit constraints, (2) and (3), which can be evaluated via CPLEX for given (b); the upper-level optimization problem can then be solved with derivative-free optimization algorithms, e.g. MCS (Huyer & Neumaier 1999). In this paper we focus on the Lagrangean decomposition approach and the new heuristic. In the numerical results section the MCS algorithm is also included.
2. Lagrangean Decomposition approach (LD)
The idea of LD is to split a large-scale problem by Lagrangean relaxation and decomposition into more easily solvable sub-problems and then to approach a good solution of the primary problem by iteratively adjusting the Lagrangean multipliers. In our case, LD can be applied after reformulating the coordination problem in the following way: a centralized mathematical model (4) is formulated based on the original problem (1), (2) and (3) with duplicated variables b_i^A = b_i^B. Then the equations b_i^A = b_i^B are relaxed and the problem is decomposed into two sub-problems (5) and (6) by introducing Lagrangean multipliers (λ); the sub-problems are solved by schedulers A and B. An LD heuristic adjusts (λ) in an iterative procedure in order to establish a feasible solution of the initial problem (4). The update mechanism for the Lagrangean multipliers (λ) used here is a sub-gradient method (Guignard 2003).

$$\min_{(b,\,var_A,\,var_B)} \ F_{Coord}(b) = F_A(b) + F_B(b) \quad \text{s.t. } e_i = b_i - C_i^A,\ l_i = S_i^B - b_i,\ b_i^A = b_i^B \ \ \forall i \in I, \ \text{production constraints of } A, B \text{ and storage} \qquad (4)$$

$$F_A^{\lambda} = \min_{(b^A,\,var_A)} \ makespan_A + \sum_{i \in I} e_i + \sum_{i \in I} \lambda_i b_i^A \quad \text{s.t. } e_i = b_i^A - C_i^A \ \ \forall i \in I, \ \text{production constraints of } A \qquad (5)$$

$$F_B^{\lambda} = \min_{(b^B,\,var_B)} \ makespan_B + \sum_{i \in I} l_i - \sum_{i \in I} \lambda_i b_i^B \quad \text{s.t. } l_i = S_i^B - b_i^B \ \ \forall i \in I, \ \text{production constraints of } B \qquad (6)$$
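A minimal sketch of the sub-gradient multiplier update used in such LD schemes (following the generic recipe in Guignard 2003, not the authors' code): the relaxed coupling constraint b_i^A = b_i^B yields the sub-gradient b^A − b^B, and a typical diminishing step-size rule is assumed. The solver calls are stand-in stubs for the MILP sub-problems (5) and (6).

```python
import numpy as np

def ld_iteration(solve_A, solve_B, n_jobs, max_iter=50, step0=1.0):
    """Generic sub-gradient loop for the relaxed coupling b_A == b_B.

    solve_A(lam) -> (objective F_A^lam, b_A vector)
    solve_B(lam) -> (objective F_B^lam, b_B vector)
    Both stand for MILP sub-problem solves, e.g. via CPLEX.
    """
    lam = np.zeros(n_jobs)
    best_lower_bound = -np.inf
    for k in range(1, max_iter + 1):
        fA, bA = solve_A(lam)
        fB, bB = solve_B(lam)
        lower_bound = fA + fB            # Lagrangean dual bound for problem (4)
        best_lower_bound = max(best_lower_bound, lower_bound)
        g = bA - bB                      # sub-gradient of the dual function
        if np.allclose(g, 0.0):          # coupling satisfied: feasible for (4)
            break
        lam = lam + (step0 / k) * g      # diminishing step-size update
    return lam, best_lower_bound
```

It is exactly this loop that oscillates for piecewise linear sub-problem objectives, as discussed in the numerical results section below.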
3. Intersection coordination heuristic (IC heuristic)
The sequence of the jobs in A and B does not affect the lower-level objectives (makespans) of the sub-problems (2) and (3) very much, but it has a strong influence on the coordination objective F_Coord in (1). Additionally, the sequence of jobs within one sub-problem does not change much from stage to stage. The idea of the IC heuristic is to formulate a new optimization problem, F_IC, which is constructed from the storage constraints of (1) and those constraints of (2) and (3) which represent the scheduling problem of the last production stage in A and of the first production stage in B. F_IC(b, var'_A, var'_B) is an approximation of the optimization problem of the original BLPP (1), (2) and (3). Therefore F_IC includes, besides (b), subsets of the variables from (2) and (3), var'_A ⊂ var_A and var'_B ⊂ var_B; see Figure 3. F_IC focuses only on the stages of the overall process around the intersection area (last stage in A, first stage in B and storage), and is therefore termed the intersection coordination heuristic.
Figure 3: IC approach to the coordinated plant-wide planning and scheduling
First, F_IC is solved, yielding a lower bound (LB) for the original BLPP and a first solution (b). Then, the original lower-level schedulers A (2) and B (3) are called for the given (b). The schedules for sections A and B are combined into an overall schedule, which usually is suboptimal because of removable idle times of some jobs in the storage. To improve the solution, the schedule for section B is shifted to the left on the time axis as much as possible, such that the feasibility conditions C_i^A ≤ S_i^B ∀i ∈ I still hold, without changing the sequences in A and B. After this shifting heuristic has been applied, an upper bound (UB) of the BLPP is obtained. If the gap between LB and UB is sufficiently small, the iteration is stopped and the shifted solution is taken as the overall schedule. Otherwise an iterative process is carried out that adds integer or logical cuts to F_IC in order to obtain new values of (b). When the underlying scheduling problems have a complex structure, as in real-world applications, we suggest formulating the cuts by extracting information from the solutions of previous iterations. For instance, the cuts can be formulated as pre-assignments of certain jobs to certain machines or by building campaigns of jobs based upon the observation of certain precedence relations in the lower-level schedules.
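The left-shift step can be stated compactly. The sketch below is an illustration with toy data, not the authors' Matlab implementation: it shifts the whole section-B schedule earlier by the largest common offset that keeps every job's B-start at or after its A-completion, then reports the resulting total storage lead time.

```python
def left_shift(C_A, S_B):
    """Shift the section-B schedule left without breaking C_i^A <= S_i^B.

    C_A : dict job -> completion time in section A
    S_B : dict job -> start time in section B (sequence kept unchanged)
    Returns the shifted starts and the total lead time sum_i (S_i^B - C_i^A).
    """
    # Largest uniform shift that keeps every job feasible
    shift = min(S_B[i] - C_A[i] for i in S_B)
    shifted = {i: S_B[i] - shift for i in S_B}
    lead_time = sum(shifted[i] - C_A[i] for i in shifted)
    return shifted, lead_time

# Toy data: three slab sets
C_A = {1: 4.0, 2: 6.0, 3: 9.0}
S_B = {1: 7.0, 2: 10.0, 3: 11.0}
print(left_shift(C_A, S_B))  # shift = 2.0; total lead time drops to 3.0
```

A uniform shift is the simplest variant guaranteed not to violate the fixed sequences inside section B; job-wise shifting would additionally require the machine-capacity constraints of B.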
4. Computational results and conclusions
In this section a comparison of the computational results of the different approaches mentioned above is given. The MCS algorithm, the LD-based approach described in Section 2 and the IC heuristic were tested on the same problem instance, a simplified model of the metal plant. The model consists of two flexible flow shop problems
where the first production section A has four production stages with two parallel machines in the first and last stages, and the second production section B has three production stages with two parallel machines in the last stage. The IC heuristic is applied to this simplified model without solving the upper-level problem iteratively. The simulation hardware is equipped with a 2.79 GHz Intel Pentium processor with 1.99 GB RAM. The MILP models were implemented in GAMS® build 23.0.2 and solved by CPLEX® 11.2.1. The shifting heuristic and the iteration process were performed in Matlab® R2009b.

Table 1: Computational results comparison of the MCS, LD and IC approaches

Number   Objective Function Value (F^coord)     Optimality Gap       Computational Time [sec]
of jobs  MCS    LD     IC     Global Optimum    MCS   LD    IC       MCS       LD         IC
n=2      1112   1118   1112   1112              0%    1%    0%       107.10    111.20     8.92
n=3      1452   1569   1479   1452              0%    7%    2%       142.30    115.70     9.74
n=4      1836   2261   1856   1824              1%    19%   2%       157.40    149.90     10.11
n=5      2281   3065   2187   2175              5%    29%   1%       285.80    303.40     14.98
n=6      2800   3820   2559   2550              9%    33%   0%       1723.30   12861.20   73.29
Table 1 shows a comparison of the coordination algorithms. The IC heuristic provides good solutions in terms of the optimality gap, even when the problem size becomes larger. The computation times of the algorithms depend strongly on the number of calls of the lower-level schedulers, where MILPs have to be solved. Since the IC heuristic needs a small number of calls, its computation time is very short compared to the other approaches. From n=7 on, the global optimum could not be found anymore by solving the centralized formulation (4) within a reasonable amount of time. We set a time limit of 600 seconds for the CPLEX solver for each sub-problem in one iteration. For n=15, the solution quality of the IC heuristic is 50% better than the solution of the centralized formulation, while MCS and LD are not able to find a feasible overall solution within reasonable time. In the numerical tests, LD leads to oscillations of the solutions during the iterations and sometimes to convergence problems, for several different Lagrangean multiplier update mechanisms. A large number of LD iterations is required for the scheduling coordination problem, which has also been observed in other papers, e.g. (Jose & Ungar 2000). The problem is caused by the piecewise linear objectives of the MILP sub-problems. The mentioned papers propose to introduce augmented LD methods, which add a quadratic penalty term to the objective function of the sub-problems, or to add inequality resource constraints to the sub-problems in order to suppress the oscillations. If the sub-problems themselves are expensive to solve, the BLPP formulation with the intersection coordination heuristic is a better way to address plant-wide scheduling problems than Lagrangean decomposition.
References
Colson, B., Marcotte, P. & Savard, G., 2007. An overview of bilevel optimization. Annals of Operations Research, 153(1), pp. 235-256.
Guignard, M., 2003. Lagrangean relaxation. TOP, 11(2), pp. 151-200.
Huyer, W. & Neumaier, A., 1999. Global optimization by multilevel coordinate search. Journal of Global Optimization, 14(4), pp. 331-355.
Jose, R. & Ungar, L., 2000. Pricing interprocess streams using slack auctions. AIChE Journal, 46(3), pp. 575-585.
Xu, C., Sand, G. & Engell, S., 2010. Coordination of Distributed Production Planning and Scheduling Systems. In: 5th Conference on Management and Control of Production and Logistics. Coimbra: IFAC.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Optimization of Closed-Loop Supply Chains under Uncertain Quality of Returns
M. Isabel Gomes,a Luis J. Zeballos,b,c Ana P. Barbosa-Povoa,d Augusto Q. Novaisb
a CMA, FCT, Universidade Nova de Lisboa, Monte de Caparica, 2825-114 Caparica, Portugal
b Unidade Modelação e Optimização de Sistemas Energéticos, Laboratório Nacional de Energia e Geologia, Lisboa, Portugal
c Universidad Nacional del Litoral – Fac. de Ingeniería Química, Santa Fe, Argentina
d Centre for Management Studies, Instituto Superior Técnico, UTL, Lisboa, Portugal
Abstract
The efficient design and operation of supply chains with return flows represent a major optimization challenge, given the high number of factors involved and their intricate interactions. In particular, the quality level of the returned products has strong economic and societal implications and depends greatly on the type of product (glass, paper, electronics, oil, etc.) and on the degree of consumers' readiness, frequently promoted by various kinds of awareness-raising campaigns. A multi-product, multi-period model was previously developed by the authors [1] for closed-loop supply chain (CLSC) design and planning, where strategic and tactical decisions were comprehensively considered. This model is now extended to handle the uncertainty related to the quality of the returned products, which at this stage is modeled by a two-stage scenario-based stochastic approach. General strategies to solve optimization problems involving uncertainty tend to exhibit poor computational performance, due to the NP-hard complexity of the problem, which tends to worsen with the problem size. Therefore, an enhancement of the model's solution performance is also explored: to increase the efficiency of the solution approach, an alternative representation of some of the integer variables employed in the mathematical formulation was developed, which is tested by means of computational experiments performed on illustrative real-sized examples.
Keywords: Closed-Loop Supply Chain, Design and Planning, Two-stage Stochastic Optimization
1. Introduction
Closed-loop supply chains (CLSCs) are an extension of traditional supply chains (SC), since they do not end at the final customers but also account for the return of used products. One of the major difficulties in CLSCs is the uncertainty associated with the return flows. In traditional supply chains, uncertainty comes mostly from the demand side, while in reverse chains, where end-of-life products are sent back to be remanufactured or recycled, the sources of uncertainty include, among others, the quantity and quality of the returned products. In comprehensive reviews, Melo et al. [2] and Papageorgiou [3] conclude that there is a lack of multi-product, multi-period supply chain design models that deal with uncertainty. In addition, most of the relevant and recent works considering SC design and planning with uncertain parameters are related to the forward flow of the products. It is worth noting that very few authors have addressed uncertain parameters
other than demands. You and Grossmann [4] proposed a model for multi-product supply chain design with demand uncertainty, which was formulated as an MINLP model and solved by a spatial decomposition algorithm based on the integration of Lagrangean relaxation and piecewise linear approximation. Fonseca et al. [5] proposed a bi-objective stochastic model in the context of reverse networks, where some of the facilities are considered as semi-obnoxious; the two objectives considered are the total cost and the total obnoxious effect. In this work, the uncertainty related to the quality of returned products is considered and handled by two-stage scenario-based stochastic programming. Its effect on the supply chain configuration and performance is evaluated against a reference case, earlier employed by the authors to illustrate the applicability and adequacy of the multi-product, multi-period CLSC model previously proposed [1].
2. Problem description
This problem is an adaptation of the case study of a Portuguese glass company, previously presented by the authors and described in [1], together with the full mathematical CLSC formulation. Due to space limitations, only the main modifications are reported in the current paper; for the remainder the reader should refer to [1]. In summary, the supply chain comprises four entities: factories (F), warehouses (A), customers (C) and sorting centers (R). Raw-material supply and the disposal of poor quality returns are not treated as entities since they have no geographical location. After use, the company collects all glass products as undifferentiated glass at the customers and, after a sorting process performed by the sorting centers, high quality returns are sent back to the factories to be remanufactured. No distinction is made between new and remanufactured products. In terms of time, a 5-year horizon is set as the strategic time, with a macro-time unit of 1 year and a micro-time unit of 2 months. The original multi-period, multi-product, multi-echelon supply chain model is now modified in order for the mathematical formulation to accommodate the treatment of uncertainty in the returns' quality. It is assumed that quality scenarios can be established and serve as the basis to define a two-stage stochastic model. Therefore, location variables are considered as the first-stage decisions, while all the supplying, transportation, storage and collection decisions are second-stage decisions.
Given: the network super-structure; the cost structure (transportation, production, supplying, collection and sorting); a penalization cost when customers are not fully supplied or are left out of the supply chain; customer demands and return rates; production and storage capacities; and an estimation of the returns' quality in terms of scenarios.
Determine: the best facility location and allocation for the entire time horizon, and the best supply, production, transportation and collection planning for each time unit.
So as to: minimize the total supply chain cost.
Regarding decision variables, the model is composed of one set of location variables (binary), one set of flow variables (that include all transportation, production, supplying and collection), one set of storage variables and one set of non-satisfied demand variables.
3. Model formulation
The model formulation presented in [1] still holds, with the following modifications:
- The objective function expressed by eq. 1 is now replaced by
the expectation of the total cost over the scenario space, $\min \sum_{\omega \in \Omega} p_\omega \, Cost_\omega$, where $p_\omega$ stands for the probability of scenario $\omega$ of the scenario space $\Omega$.
- The binary variables which were defined in [1] as "auxiliary (binary) variables that allow the modeling of minimum limits imposed on model flows, i.e. whether the flow between entities i and j occurs at time t", are no longer defined as such but rather as semi-continuous. Hence, these variables can either be 0, or can behave as continuous variables between the lower and upper bound values for the flows connecting entities i and j.
- In terms of model constraints, the flow inequalities are replaced by the semi-continuous domain condition $X_{ijpt} \in \{0\} \cup [\underline{X}_{ij}, \overline{X}_{ij}]$,
where $X_{ijpt}$ is the continuous variable representing the flow volume of product p between entities i and j that occurs at micro time t. Based on this modified model, a two-stage stochastic approach is followed to account for the return quality uncertainty. This is modeled through the recovery-target parameter $\alpha(\omega)$, where $\omega$ stands for the scenario, with the model assuring that only quality products proceed to be remanufactured. $\alpha(\omega)$ therefore represents the fraction of the return products collected at customers that it is mandatory for the factories to re-admit into production; see equation (5) in reference [1]. Since location variables are first-stage variables, they do not depend on the different scenarios assumed for quality. All other variables (acquisition, production, distribution and storage volumes) will differ depending on the scenario. For instance, the flow variable defined above becomes, in the scenario model, $X_{ijpt\omega}$: the flow volume of product p between entities i and j occurring at micro time t in scenario $\omega$.
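A minimal sketch of this two-stage structure with the open-source PuLP library (an illustration only: the entity sets, costs, recovery targets α(ω) and scenario probabilities are invented, and the semi-continuous flows are emulated with the classic binary big-M pattern, since not every modeling layer exposes semi-continuous variables directly):

```python
import pulp

# Hypothetical sets and data (not the paper's case study)
FACT, CUST = ["f1", "f2"], ["c1", "c2", "c3"]
SCEN = {"cs1": 0.10, "cs2": 0.75, "cs3": 0.15}        # scenario probabilities
ALPHA = {"cs1": 0.3, "cs2": 0.6, "cs3": 0.9}          # recovery targets alpha(w)
OPEN_COST, FLOW_COST, LO, UP = 100.0, 1.0, 5.0, 50.0  # placeholder economics/bounds
RETURNS = {c: 20.0 for c in CUST}                     # collected returns per customer

m = pulp.LpProblem("clsc_two_stage_sketch", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", FACT, cat="Binary")              # 1st stage: locations
x = pulp.LpVariable.dicts("flow", (FACT, CUST, SCEN), lowBound=0)  # 2nd stage: flows
u = pulp.LpVariable.dicts("on", (FACT, CUST, SCEN), cat="Binary")  # flow on/off switch

# Expected cost: first-stage opening cost + probability-weighted flow cost
m += (pulp.lpSum(OPEN_COST * y[f] for f in FACT)
      + pulp.lpSum(SCEN[w] * FLOW_COST * x[f][c][w]
                   for f in FACT for c in CUST for w in SCEN))

for w in SCEN:
    for f in FACT:
        for c in CUST:
            # Big-M emulation of a semi-continuous flow: 0 or in [LO, UP]
            m += x[f][c][w] >= LO * u[f][c][w]
            m += x[f][c][w] <= UP * u[f][c][w]
            m += u[f][c][w] <= y[f]          # flows only from open factories
    for c in CUST:
        # Recovery target: at least alpha(w) of the collected returns re-enter production
        m += pulp.lpSum(x[f][c][w] for f in FACT) >= ALPHA[w] * RETURNS[c]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[m.status], pulp.value(m.objective))
```

Note how the location binaries y are shared by all scenarios while the flows x carry a scenario index, which is the essence of the two-stage formulation described above.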
4. Example and results analysis
As mentioned before, this example is based on the closed-loop supply chain of a Portuguese glass company [1]. The recovery targets are assigned the values of 0.30, 0.60 and 0.90, respectively, for the three separate deterministic cases L, M and H. The continuous uncertain quality of product returns is approximated by three discrete values, α(cs1)=0.3, α(cs2)=0.6 and α(cs3)=0.9, which have, respectively, the following occurrence probabilities: 0.10, 0.75 and 0.15. The model was implemented in GAMS 23.5 and solved with CPLEX 12.2.0.0, on an HP machine with an AMD Athlon 64 X2 Dual Core 2.2 GHz processor and 1 GB RAM, for a 1% gap solution. The results for the L, M, H and S (stochastic) cases are depicted in Figure 1 and Table 1, together with a reference case obtained from [1].
Figure 1 (clockwise): 1) Node locations for the CLSC super-structure; 2) Optimal network configurations for the cases with Low (L, α=0.30), Medium (M, α=0.60) and High (H, α=0.90) return quality, together with the stochastic approach (S); 3) Computational statistics.
Table 1: Main optimal network results
The solutions of cases L, M and H, by comparison with the reference case, are found to require much less CPU time (about 15 % of it), in spite of a higher number of constraints (41006 against 32366) and a different composition of variables (semi-continuous vs. binary). The proposed modification is therefore found to be efficient and justifies the quite reasonable CPU time observed for the higher-dimensional stochastic case. The CLSC under study is found to be very resilient in terms of its configuration. By comparison with the super-structure, it remains unchanged (differences indicated by gray squares) for cases L, M and S (3 factories, 3 warehouses, 3 sorting centres and 17 customers), similarly to the reference case (where one customer, at node Gu, is dropped). Given its particular economic setup, it is only when a high-quality return product is achieved (and hence a more stringent recovery target applied) that the network configuration is significantly modified, with the elimination of the customers at nodes Av, Ev, Lxa and Se.
As the return quality α increases (from M to H), the actual amounts of disposal and recovery decrease, which can be explained by a drop in the total return due to a smaller number of customers and/or a lesser degree of demand satisfaction. This suggests there is a minimum volume that the company has to collect to have the most efficient supply chain structure, which is likely to be achieved when the percentage of good quality returns is between 80 and 90%. When considering all the scenarios simultaneously, i.e. the two-stage model, the actual recovery target becomes 90.2 %, which suggests that this is the best value for this CLSC. The value of the objective function, i.e. the global supply chain cost, remains constant for all cases (within the 1% tolerance). It should be noted, however, that the intake of product returns at the factory is charged at the same unit cost, whether it is within the recovery target (good quality) or exceeds it (lower quality). The introduction of a stepwise linear cost function that would reflect the higher processing costs caused by lower quality returns would be instrumental in producing a more realistic cost evaluation. Due to the lack of space, a more detailed analysis regarding the second-stage variables had to be omitted.
5. Final Remarks
In this work a two-stage stochastic model is proposed for the design and planning of a closed-loop supply chain. Following previous work based on a glass company, taken as a reference, the present model allowed the determination of the most adequate network subject to uncertain quality of the returns. The results also show that the optimal network configuration is resilient since, for a constant global cost, it only changes when good quality products go above 90%. The present work is a first attempt to fill a gap in the handling of uncertainty in CLSCs. As future work, a multi-stage stochastic model is to be developed; this approach will enable a more accurate analysis of the effect of uncertainty in the quality of the returned products. In addition to the solution approach based on an alternative representation of the integer variables, which was presented and proved to be efficient, a specialized branch and bound algorithm is being investigated to further increase the efficiency of the exploration, by guiding the search and pruning portions of the solution space according to the problem structure.
Acknowledgement The authors gratefully acknowledge the support of the Portuguese National Science Foundation through the project PTDC/SEN-ENR/102869/2008.
References
[1] Salema, M.I.G., Barbosa-Povoa, A.P., and Novais, A.Q. (2010). "Simultaneous design and planning of supply chains with reverse flows: A generic modelling framework." European Journal of Operational Research, 203(2), 336-349.
[2] Melo, M.T., Nickel, S., and Saldanha-da-Gama, F. (2009). "Facility location and supply chain management - A review." European Journal of Operational Research, 196(2), 401-412.
[3] Papageorgiou, L.G. (2009). "Supply chain optimisation for the process industries: Advances and opportunities." Computers and Chemical Engineering, 33(12), 1931-1938.
[4] You, F., and Grossmann, I.E. (2009). "Optimal Design of Large-Scale Supply Chain with Multi-Echelon Inventory and Risk Pooling under Demand Uncertainty." In Jezowski, J. and Thullie, J. (Eds.), Computer Aided Chemical Engineering, Elsevier, 991-996.
[5] Fonseca, M., García-Sánchez, Á., Ortega-Mier, M., and Saldanha-da-Gama, F. (2010). "A stochastic bi-objective location model for strategic reverse logistics." TOP, 18(1), 158-184.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Integrated Refinery Planning under Product Demand Uncertainty
Edith Ejikeme-Ugwu,a Songsong Liub and Meihong Wanga*
a Process Systems Engineering Group, School of Engineering, Cranfield University, MK43 0AL, UK
b Centre for Process Systems Engineering, Department of Chemical Engineering, University College London, London WC1E 7JE, UK
Abstract
This paper deals with refinery planning under product demand uncertainty. Models of three main refinery subsystems (crude unloading, production and product blending, and product distribution) were integrated. The integrated model is used for refinery planning considering uncertainty in the final product demand. A probability-distribution-based two-stage stochastic linear programming (LP) approach with recourse is presented, with the sample average approximation (SAA) method as the solution approach. Case studies demonstrate the applicability of the proposed model and solution approach.
Keywords: Refinery planning; uncertainty; stochastic optimization; linear programming
1. Introduction
1.1. Refinery Planning
Refinery planning is a high-level decision-making process which helps to determine the right crude oil to purchase, the products to produce, and the optimal volumes, by making the best use of the existing resources. Planning is characterized by a time horizon of months or weeks [1]. A mixed-integer nonlinear programming (MINLP) model was developed for the production and product blending area for planning under deterministic conditions [2]. A simplified MINLP model was developed for integrated refinery subsystems for planning in [3]. However, their model is wholly deterministic.
1.2. Refinery Planning under Uncertainty
Uncertainty can be defined generally as something not known, or a lack of confidence or guarantee in an event or process. Uncertainty and its impact should not be underestimated; planning under uncertainty is considered one of the most important open problems in optimization [1]. [4] extended the work of [2] by improving the existing model for production and product blending to a corporate planning model that contains multiple refineries, using a scenario-based approach; they developed a multi-period MINLP model to deal with uncertainty in product price and crude price. In [5], the issue of uncertainty in product price and product demand was addressed when only the production and product blending area of a refinery was considered, using LP. A novel approach was presented in [6] for refinery planning under uncertainty in the production and product blending subsystem of the refinery, with LP.
1.3. Novelty of this paper
The aim of this paper is to use an integrated whole-refinery model for planning under deterministic conditions and under product demand uncertainty, respectively.
2. Modeling of the refinery subsystems and planning for the integrated refinery subsystems
2.1. Modeling of the refinery subsystems
The subsystems of a typical refinery are shown in Figure 1. The model in [7] was adopted for the crude oil unloading subsystem; it was originally developed for scheduling and cost minimization. The main change was to fix the integer variables, and the modified planning model was then used in this paper. The model from [8], which was for planning and cost maximization, was adopted for production and product blending; therefore, no change was required. The model presented in [9] was adopted for product distribution, and no change was required.
Figure 1 Block flow diagram for the three main subsystems in the refinery
2.2. Planning for the integrated refinery
The three individual subsystems of the refinery were integrated. On integration, the following changes were made for the models to be linked together: (a) the initial crude volume of the vessel in the unloading subsystem was scaled up; (b) the quantity of blended crude required by the production subsystem is fed to the unloading subsystem to supply at each time period; (c) new constraints were added to make the model feasible. The objective function for the integrated refinery subsystem is to maximize the expected profit:

$$\max \ \text{Profit} = \sum_{t \in T} REV_t - \sum_{t \in T} COP_t - \sum_{t \in T} IMPCST_t - \sum_{i=1}^{I} \sum_{t=1}^{T} CINVST_i \left(\frac{VS_{i,t} + VS_{i,t-1}}{2}\right) - \sum_{j=1}^{J} \sum_{t=1}^{T} CINVCT_j \left(\frac{VB_{j,t} + VB_{j,t-1}}{2}\right) - \sum_{t \in T} CINVPT_t - \sum_{t \in T} CTR_t \qquad (1)$$
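For illustration, Eq. (1) can be evaluated directly from planning time series. The sketch below uses made-up arrays, not refinery data; it averages consecutive tank volumes to approximate the inventory held over each period, as in Eq. (1).

```python
import numpy as np

def integrated_profit(rev, cop, impcst, cinvpt, ctr, vs, vb, cinvst, cinvct):
    """Evaluate the Eq. (1) objective for a T-period plan.

    rev, cop, impcst, cinvpt, ctr : arrays of length T (revenue and cost terms)
    vs : (I, T+1) storage-tank volumes, column 0 holding the initial volumes
    vb : (J, T+1) charging-tank volumes, column 0 holding the initial volumes
    cinvst, cinvct : per-tank inventory cost coefficients (length I and J)
    """
    hold_vs = cinvst @ ((vs[:, 1:] + vs[:, :-1]) / 2.0)  # avg storage inventory cost
    hold_vb = cinvct @ ((vb[:, 1:] + vb[:, :-1]) / 2.0)  # avg charging inventory cost
    return (rev - cop - impcst - cinvpt - ctr - hold_vs - hold_vb).sum()

T = 30
rng = np.random.default_rng(0)
print(integrated_profit(
    rev=rng.uniform(900, 1100, T), cop=rng.uniform(100, 200, T),
    impcst=rng.uniform(300, 400, T), cinvpt=rng.uniform(5, 10, T),
    ctr=rng.uniform(10, 20, T),
    vs=rng.uniform(0, 100, (2, T + 1)), vb=rng.uniform(0, 100, (3, T + 1)),
    cinvst=np.array([0.1, 0.1]), cinvct=np.array([0.05, 0.05, 0.05])))
```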
3. Solution strategies for the planning of the integrated refinery under uncertainty
3.1. Two-stage stochastic programming
The use of probability-distribution-based two-stage stochastic linear programming with recourse, as proposed in [10], is adopted:

$$\min \ g(x) = c^{T}x + E_{\xi}\left[Q(x,\xi)\right] \qquad \text{s.t.} \ Ax = b, \ x \ge 0 \qquad (2)$$

$$Q(x,\xi) = \min\left\{ q^{T}y \ \middle|\ Wy = h - Tx, \ y \ge 0 \right\}$$

Here W is assumed to be fixed (fixed recourse). It should be noted that the definitions of the first and second stages are only related to before and after the random experiment, and may in fact contain sequences of decisions and events [10]. The first
stage decision variable is the amount of crude oil required to be processed (i.e. the production plan), while the second-stage decision variable is the quantity of product y to be outsourced from a third party after the actual realization of the random demand for the products.
3.2. Use of the sample average approximation (SAA) method
The technique for solving a two-stage stochastic program with recourse via sampling was proposed in [11]. The total number of scenarios is reduced by using only a subset of the scenarios, randomly sampled according to a normal distribution over the scenarios, to represent the full scenario space. The main idea of the SAA approach to solving stochastic programs is as follows: a sample $\xi^{1}, \ldots, \xi^{N}$ of N realizations of the random vector $\xi(\omega)$ is generated, and consequently the expected value function $E[Q(x,\xi(\omega))]$ in Eq. (2) is approximated (estimated) by the sample average function. The obtained SAA of the stochastic program is then solved by a deterministic optimization algorithm:

$$\min_{x \in X} \ \hat{g}_{N}(x) = c^{T}x + N^{-1}\sum_{n=1}^{N} Q\left(x, \xi^{n}\right) \qquad (3)$$
With uncertainty in the product demand, a large number of scenarios is to be expected; Monte Carlo sampling can then be used to reduce the computation. The theoretical justification for this method is that, as the number of scenario samples increases, the solution of the approximate problem converges to an optimal solution of the true problem with probability approaching 1 exponentially fast.
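A minimal numeric sketch of the SAA idea (a toy outsourcing model with invented numbers, not the paper's refinery model): the first-stage decision x is the production level, and the recourse buys the unmet demand max(d − x, 0) from a third party at a penalty price. Each candidate x is scored against the same Monte Carlo sample of demands.

```python
import numpy as np

rng = np.random.default_rng(42)

MARGIN, PENALTY = 5.0, 12.0          # made-up unit profit and outsourcing cost
MU, SIGMA, N = 1000.0, 80.0, 10_000  # normal demand and SAA sample size

demand = rng.normal(MU, SIGMA, N)    # xi^1 ... xi^N

def saa_objective(x):
    # Recourse Q(x, xi): outsource the shortfall at the penalty price
    shortfall = np.maximum(demand - x, 0.0)
    sales = np.minimum(demand, x)
    return float(np.mean(MARGIN * sales - PENALTY * shortfall))

# Crude search over candidate first-stage decisions
candidates = np.linspace(800, 1300, 501)
best = max(candidates, key=saa_objective)
print(best, saa_objective(best))  # SAA-optimal production level and expected profit
```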
4. Case studies
4.1. Case study for deterministic planning
The following parameters are given: (a) physical properties of materials; (b) capacities of processing units; (c) costs of raw materials; (d) prices of final products; (e) transport costs of the final products to the depot. The planning horizon is 30 days for this case study, and each day is a time interval. The objective of the optimization problem is to maximize the expected profit over the entire planning horizon. The objective function and all constraints are linear. The variables for the flow rates of materials (crude and intermediates) to and from the refinery are used in the optimization problem formulation.
4.2. Case study for planning under uncertainty
The same deterministic base case of the planning model described in Section 4.1 is used here; only one change has been made, for easy comparison between the two cases. The planning horizon was divided into three equal periods, each representing one month. The product demand was randomly generated, independently and identically for each variable, by sampling from a normal distribution. The mean values for the final products at the different time periods, the penalty cost, and the mean and standard deviation values for the final product demand were taken from [5]. The objective is to maximize the expected profit by choosing the first-stage variables such that the difference between the first-stage costs and the expected value of the random recourse variable is maximized.
5. Results and analysis
5.1. Results for the deterministic case
Table 1 shows the results for the deterministic case. The profits from the integrated model are higher than those from the non-integrated models, which demonstrates the value of considering the three subsystems in an integrated way.
                    Subsystems                               Plan Period (days)  No. of Cons  No. of Var  Maximum Profit or Cost ($)
Separated Systems   Unloading                                30                  1513         827         -788
                    Production and product blending          30                  13591        2041        19,039
                    Distribution                             30                  5761         3211        -972
                    Sum of the above subsystems                                                           17,279
Integrated Systems  Unloading, production and distribution   30                  29353        6047        46,952
Table 1 Summary of results from integrated and non-integrated models (deterministic)
5.2. Results for the stochastic case
The model considered uncertainty in the final product demands and was solved for 6 scenarios. The deterministic case with the mean values of the product demand was run first and the value of the first-stage variable obtained. This value was then held fixed, and the model was run for different sample points to obtain different values of the optimal profit and of the second-stage variables, as shown in Table 2. The model was also run to obtain the expected value of the profit. The optimal profit obtained when the mean product demand was used equals $2,893,002. The expected value of the profit was $2,893,024 at a larger number of sample points (104). The standard uncertainty, which is equal to the standard error, was also determined to be $5,209 at 1 standard deviation.
Scenario   Optimal profit ($)   Unmet demand (bbl)
1          2,893,175            9,260
2          2,893,697            9,246
3          2,893,422            9,253
4          2,893,969            9,238
5          2,893,753            9,244
6          2,893,813            9,224

Table 2 Summary of results from stochastic case
6. Conclusions
In this paper, the integrated refinery planning model was reformulated into a two-stage stochastic linear program in which the uncertainty in product demand was accounted for using the SAA solution method. For the deterministic case, the profits from the integrated model are higher than those from the non-integrated models. For the uncertainty case, the model captured the effect of the uncertainty on the optimal objective value. The profit obtained
varies between scenarios, and this also affects the volume of unmet demand, which is the second-stage variable. However, the expected value of the profit is better than the optimal profit of the mean-demand case. Further work is to consider not only product demand uncertainty but also raw material uncertainty for planning with the integrated refinery planning model [12].
Nomenclature
REV_t: Total revenue from final sale of products at time t
COP_t: Total cost of operations at time t
IMPCST_t: Total raw material cost at time t
CINVST_i: Inventory cost of storage tank i per unit time interval
CINVCT_j: Inventory cost of charging tank j per unit time interval
CINVPT_t: Product tank inventory cost at time t
CTR_t: Total transport cost of final products at time t
VS: Storage tank capacity
VB: Charging tank capacity
ξ: The vector formed by the components of q^T, h^T and W
E_ξ: Mathematical expectation with respect to ξ
y: Penalty cost for unmet demand
N: Number of realizations of the random variable
References
1. Sahinidis, N.V. (2004), Optimisation under uncertainty: State-of-the-art and opportunities, Computers and Chemical Engineering, 28, 971-983.
2. Pinto, J.M., Joly, M. and Moro, L.F.L. (2000), Planning and scheduling models for refinery operations, Computers and Chemical Engineering, 24, 2259-2276.
3. Guyonnet, P., Grant, H.F. and Bagajewicz, J.M. (2009), Integrated Model for Refinery Planning, Oil Procuring, and Product Distribution, Ind. Eng. Chem. Res., 48, 463-482.
4. Neiro, S.M.S. and Pinto, J.M. (2004), A general modelling framework for the operational planning of petroleum supply chains, Computers and Chemical Engineering, 28, 871-896.
5. Pongsakdi, A., Rangsunvigit, P., Siemanond, K. and Bagajewicz, M.J. (2006), Financial risk management in the planning of refinery operations, International Journal of Production Economics, 103, 64-86.
6. Li, W. (2004), Modelling oil refinery for production planning, scheduling and economics analysis, PhD Thesis, Hong Kong University of Science and Technology.
7. Lee, H., Pinto, J.M., Grossmann, I.E. and Park, S. (1996), Mixed-integer linear programming model for refinery short-term scheduling of crude oil unloading with inventory management, Ind. Eng. Chem. Res., 35, 1630-1641.
8. Aronofsky, J.S., Dutton, J.M. and Tayyabkhan, M.T. (1978), Managerial planning with linear programming in process industry operations, New York: Wiley and Sons.
9. Alabi, A. and Castro, J. (2009), Dantzig-Wolfe and block coordinate-descent decomposition in large-scale integrated refinery-planning, Computers and Operations Research, 36, 2472-2483.
10. Birge, J.R. and Louveaux, F. (1997), Introduction to stochastic programming, New York: Springer.
11. Ahmed, S. and Shapiro, A. (2002), The sample average approximation method for stochastic programs with integer recourse, Optimization Online. http://www.optimization-online.org/
12. Ejikeme-Ugwu, E., Liu, S. and Wang, M. (2011), Planning for the integrated refinery subsystems under uncertainty, Chemical Engineering Research and Design (submitted).
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Modelling and dynamic optimisation for optimal operation of industrial tubular reactor for propane cracking
Mehdi Berreni and Meihong Wang
Process Systems Engineering Group, School of Engineering, Cranfield University, MK43 0AL, UK
Abstract
This paper addresses the thermal cracking of propane in tubular reactors inside a thermal cracking furnace. A model is developed based on first principles and validated dynamically by comparing the simulated results against pilot plant data. This work is implemented in gPROMS®. Process optimization was then applied to the operation of this tubular reactor, considering the effect of coking on the reduction of production time and the decoking cost. Steady-state and dynamic optimizations were implemented respectively. The case study indicates that dynamic optimization can improve the net profit by 10.6% compared with the base case. However, the computation required by dynamic optimization is much higher than for steady-state optimization.
Keywords: Modelling, dynamic optimisation, tubular reactor, ethylene, steam cracking
1. Introduction
Ethylene is one of the major building blocks of the petrochemical industry and has become one of the largest-volume petrochemicals produced worldwide. The thermal cracking furnace is the 'heart' of the ethylene manufacturing process. In the tubular reactors inside the cracking furnace, thermal cracking reactions, heat transfer and coke build-up inside the tube take place; these are closely coupled and interact with each other. It is therefore crucial to understand the process thoroughly and to optimise its operation. Gao et al. (2009a) applied steady-state optimization to naphtha cracking based on a steady-state simulation in HYSYS. The results were very encouraging, since the optimization improved the operating profit significantly. However, Gao et al. (2009a) assumed the process to be under steady-state conditions, which is actually not the case because of the coke building up with time; moreover, the impact of the coke thickness on the heat transfer coefficient was ignored. Gao et al. (2009b) applied steady-state optimization to naphtha cracking based on a detailed 1-dimensional (1D) dynamic model in gPROMS® and considered the impact of the coke thickness on the heat transfer coefficient. Sowers and Reed (2001) studied the dynamic optimization of propane cracking with the software SPYRO™; the model they used was very simple in comparison to the one developed in this paper. Berreni and Wang (2010) applied dynamic optimisation with the tube external wall temperature profile as one of the optimisation variables, which makes the optimal operation study more realistic. In this paper, a 1D pseudo-dynamic model is developed for the plug flow reactor (PFR). The paper aims to compare steady-state and dynamic optimisation based on this 1D model for the PFR in a thermal cracking furnace.
2. Mathematical modeling and model validation
The mathematical model consists of a continuity equation for each chemical component, and equations for the energy balance, heat transfer and pressure drop. Coke build-up on the internal tube wall is also included. The 1D pseudo-dynamic model is steady-state regarding the mass and energy balances, but dynamic regarding the coke build-up. In order to validate the model, data (i.e. frequency factors, activation energies, reactor design and operating parameters) from Sundaram and Froment (1979) is used. More details regarding the modelling and model validation are given in Berreni and Wang (2011).
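As a rough illustration of what such a 1D plug-flow model involves (a toy sketch with a single lumped propane-decomposition reaction, an isothermal tube and invented rate parameters — not the Sundaram–Froment scheme or the validated gPROMS model):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy 1D isothermal PFR for propane cracking, C3H8 -> products (lumped)
A_FREQ, E_ACT, R_GAS = 5.0e9, 210_000.0, 8.314  # made-up Arrhenius parameters
T_GAS = 1100.0                                  # K, fixed process gas temperature
TUBE_AREA, LENGTH = 1.0e-2, 90.0                # m^2 cross-section, m reactor length

def dF_dz(z, F):
    """Molar balance dF/dz = -k * C * A for the propane flow F [mol/s]."""
    k = A_FREQ * np.exp(-E_ACT / (R_GAS * T_GAS))  # first-order rate constant, 1/s
    v = 50.0                                       # m/s, assumed constant gas velocity
    conc = F[0] / (v * TUBE_AREA)                  # mol/m^3
    return [-k * conc * TUBE_AREA]

sol = solve_ivp(dF_dz, (0.0, LENGTH), [100.0])
conversion = 1.0 - sol.y[0, -1] / 100.0
print(f"toy propane conversion: {conversion:.2%}")
```

The real model couples such balances to the energy balance, pressure drop and a coke-growth equation along the tube, which is what makes it pseudo-dynamic.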
3. Optimal operation
Based on the case studies in Berreni and Wang (2011), there are some trade-offs in selecting the temperature profile and the steam-to-propane (S/P) ratio. A high S/P ratio slows down coke formation, but it also reduces the amount of valuable products and the profit.
3.1. Mathematical formulation
3.1.1. Objective function
The operating profit over a period of one year is taken as the objective function to maximise. For the optimization, the following assumptions were made: (a) no downstream product separation costs were considered; (b) the price factors are taken as constant during the whole year; (c) after a decoking operation, the tube is completely clean and the coke thickness is 0; (d) the coke thickness is updated every hour. A new objective function was developed based on Gao et al. (2009a). The principle is to calculate the profit that is made every hour and to accumulate those profits over the whole duration of a cycle. A cycle includes the time of operation of the reactor plus the time of the decoking operation; thus, the decoking frequency is also taken into account.
ሺͳሻ
The time between two decoking operations ݐ is calculated with Eq(3). To calculate the yearly profit, the calculated profit over a cycle is multiplied by the number of decoking operation (݊ௗ ) per year, which is calculated with Eq (2ሻ(Gao et al., 2009a). ݐ ݐௗ ݐ
ሺʹሻ
ߜ௫ ߮ ݎ ሺ ݖൌ ͻͷሻ
ሺ͵ሻ
݊ௗ ൌ ݐ ൌ
3.1.2. Constraints Realistic constraints should be included in the optimization. First of all, the run length must be at least 30 days, which is the minimum observed in industry. Secondly, constraints should be set for the control variables. Experiments by Van Damme et al. (1975) were carried out within certain operating range and the reaction scheme developed by Sundaram and Froment (1979) was based on Van Damme et al. (1975). Consequently to make sure that the reaction scheme is still valid, the same limits were applied for the control variables: (a) 973K < Coil Outlet Temperature (COT) for process gas < 1143K; (b) 0.2 kg/kg < steam-to-propane ratio < 1 kg/kg Moreover, a constraint was applied for the maximum temperature that can reach the tube metal. The material considered is HK40, and this limit was set at 1338.15K (Albright et al., 1983).
Modelling and dynamic optimisation for optimal operation of industrial tubular reactor
957
3.1.3. Setting of parameters Many parameters in the nomenclature were taken from Gao et al. (2009a) regarding optimization of thermal cracking of naphtha. As for chemical price factors, spot prices from May 2010 have been considered for propane, ethylene and propylene (Chemical Week Price Report, 2010). Since prices for propane are given per U.S gallon, a conversion per unit mass had to be applied. Finally, coke density was taken from Sundaram and Froment (1979). 3.1.4. Optimisation variables Process gas temperature profile and steam to propane ratio in the feed were used as optimisation variables. 3.1.5. Initial operating profit The initial operating profit for the base case is calculated with Eq (ͳሻǡ COT of 1111.15K and S/P ratio of 0.4 kg/kg are set as constant during the whole run length. The yearly profit amounts to $851,840. 3.2. Steady-state optimisation After steady-state optimization, the yearly profit is maximised up to $933,847. Thus, the optimization enabled a profit increase of 9.6% compared with the base case. From Table 1, the optimized conditions reduce the S/P ratio down to 0.29 kg/kg close to the lower boundary. The process gas temperature at the outlet is also lowered to 1100.58K. As for the run length, it is similar to the one observed in the base case. This is due to the fact that lowering the S/P ratio and lowering the COT have opposite effects on the run length. Figure 1 shows the tube outer wall temperature profile under clean tube condition (i.e. no coking) and at the end of the run length. Since the process gas temperature is maintained constant, the tube outer wall temperature profile increases significantly with time. In the optimal case, the tube outer wall temperature (COWT) is slightly lower than in the base case, the biggest gap is observed around 25% of the tube length. This gap reaches 10K at the beginning of operation and 20K at the end of the run length. Coil outer wall temperature [K]
Figure 1: Tube outer wall temperature profile at t = 0 h and at the end of the run length (base case and optimal case, as a function of tube length).
3.3. Dynamic optimisation
The new yearly profit is $941,940, an increase of 10.6% compared with the base case. As for the run length, it is 39 days (936 h) in the optimised case, roughly one day shorter than the 39.8 days observed for the base case. From Figure 2, the S/P ratio stays close to the lower bound (i.e. 0.2 kg/kg) without reaching it. The ratio remains roughly constant around 0.25 kg/kg until 330 hours (about 2 weeks), when it starts increasing steadily until the end of the run length. It reaches a maximum
value of 0.3 kg/kg at 936 h. Again from Figure 2, the COT remains constant at 1111.15 K for the base case, but the optimal case indicates that a lower COT increases the objective function. Thus, under clean tube conditions the optimal COT is 1095.5 K; it then gradually increases up to 1104 K at the end of the run length. This can be understood because the coke thickness increases as time passes and the residence time decreases; furthermore, the S/P ratio also increases. To compensate and maintain the product yields, the process gas temperature is increased.
Figure 2: Comparison of the steam-to-propane ratio and the process gas (coil outlet) temperature over the production time for the base case and the optimal case.

Table 1: Comparison between the base case, steady-state and dynamic optimization

Parameter/Variable       Unit     Base case         Dynamic optimization   Steady-state optimization
COWT at the outlet       K        1164.8-1224.4     1159-1224              1160.9-1222.8
COT                      K        1111.15           1095.5-1104            1100.58
Steam-to-propane ratio   kg/kg    0.4               0.25-0.31              0.29
Ethylene yield           wt%      35.1-32.2         30.8-30.2              32.1-28.5
Propylene yield          wt%      14.4-16.6         17.5-17.7              16.7-18
Propane conversion       %        90.2-86           82.7-83.2              85.6-81.5
Pressure drop            Pa       95,758-181,700    102,577-182,490        101,184-199,177
Objective function       $/year   851,840           941,940                933,847
Run length               days     39.8              39                     39.9

The results obtained from dynamic optimization can be compared with those obtained by Sowers and Reed (2001). They advised increasing the temperature until 50-60% of the run length, when the maximum outer wall temperature is reached, and then holding the temperature constant at this limit. However, the model developed here shows that such a policy would result in a very high coking rate and consequently many decoking operations per year. Moreover, Sowers and Reed (2001) did not provide any benchmark to validate their model, and the economic loss due to the decoking operation was not mentioned; their model was much simpler than the one presented in this paper. The optimization results found by Sowers and Reed (2001) seem to favour the product yields. The difference may come from the fact that a fixed S/P ratio is considered.
3.4. Comparison and discussions
Dynamic optimization enables a profit increase of 0.87% compared with steady-state optimization. Table 1 summarizes the values of the main operating variables during the production time. Where two values are given, they refer to the clean tube and to the tube at the end of the run length, respectively. For the base case, the COWT varies from 1164.8 K to 1224.4 K due to the increasing coke thickness. The COT for dynamic optimization starts
lower than the COT for steady-state optimization but ends higher. The same phenomenon is observed for the S/P ratio. Dynamic optimization keeps the product yields and the propane conversion nearly constant, whereas they change considerably with time when steady-state optimization is applied. The pressure drop and the run length are very similar for both optimizations. Regarding computation, a PC with a dual-core processor (1.9 GHz AMD Athlon, 64-bit) and 3 GB of RAM was used. Steady-state optimisation takes 1.3 hours to run, while dynamic optimisation takes 19 hours.
Nomenclature
δ_max   Maximum thickness of coke layer [0.0135 m]
φ       Coke density [1,600 kg/m3]
C_p     Price factor of propane [0.596 $/kg]
C_1     Price factor of ethylene [1.356 $/kg]
C_2     Price factor of propylene [1.356 $/kg]
C_Q     Heat price factor [1.26 x 10^-5 $/kJ]
C_s     Price factor of steam [0.0129 $/kg]
DCC     Cost of a decoking operation [$66,000]
F_p     Mass flowrate of propane [kg/h]
F_1     Mass flowrate of ethylene [kg/h]
F_2     Mass flowrate of propylene [kg/h]
F_s     Mass flowrate of steam [kg/h]
n_d     Number of decoking operations per year [-]
Q̇_t     Total heat transfer rate [kJ/h]
t_r     Time between two decoking operations, i.e. the run length [h]
t_d     Decoking time [48 hours]
t_p     Yearly production time [8,160 hours]
r_c     Coking rate at the tube outlet [kg/(m2.h)]
4. Conclusions
A 1D pseudo-dynamic model was developed from first principles for the PFRs inside a thermal cracking furnace. Steady-state and dynamic optimizations were then applied to the operation of this tubular reactor. The results indicate that dynamic optimization can improve the net profit by 10.6% compared with the base case and by 0.87% compared with steady-state optimisation. However, the computation required by dynamic optimization is much greater than that required by steady-state optimization.
References
Albright, L.F., Crynes, B.L. and Corcoran, W.H. (1983), Pyrolysis Theory and Industrial Practice, Academic Press Inc., New York, USA.
Berreni, M. and Wang, M. (2011), Modelling and dynamic optimisation of thermal cracking of propane for ethylene manufacturing, Computers & Chemical Engineering (submitted in 2010).
Gao, G.-Y., Wang, M., Ramshaw, C., Li, X.-G. and Yeung, H. (2009a), Optimal operation of tubular reactors for naphtha cracking by numerical simulation, Asia-Pacific Journal of Chemical Engineering, Vol. 4, No. 6, p885-892.
Gao, G.-Y., Wang, M., Pantelides, C.C., Li, X.-G. and Yeung, H. (2009b), Mathematical modeling and optimal operation of industrial tubular reactor for naphtha cracking, Computer Aided Chemical Engineering, Vol. 27, p501-506.
Sowers, G. and Reed, C. (2001), Dynamic Optimization of Ethylene Furnaces Cracking Propane, Proceedings of the AIChE Ethylene Producers Conference, Houston, TX, p366-405.
Sundaram, K.M. and Froment, G.F. (1979), Kinetics of Coke Deposition in the Thermal Cracking of Propane, Chemical Engineering Science, Vol. 34, No. 5, p635-644.
Van Damme, P.S., Narayanan, S. and Froment, G.F. (1975), Thermal Cracking of Propane and Propane-Propylene Mixtures: Pilot Plant versus Industrial Data, AIChE J., Vol. 21, p1065-1073.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
An Efficient Mathematical Framework for Detailed Production Scheduling in Food Industries: The Ice-cream Production Line
Georgios M. Kopanos,a Luis Puigjaner,a Michael C. Georgiadis,b Peter M. M. Bongersc
a Universitat Politècnica de Catalunya-ETSEIB, Diagonal 647, Barcelona 08028, Spain
b Aristotle University of Thessaloniki, Department of Chemical Engineering, Thessaloniki 54124, Greece
c Unilever R&D Vlaardingen, The Netherlands
Abstract
An efficient mixed integer programming (MIP) continuous-time model is developed to address production scheduling problems in multistage multiproduct food industries. The overall mathematical framework relies on an efficient modeling approach for the sequencing decisions, the integrated modeling of all production stages, and the inclusion of valid integer cuts in the formulation. The simultaneous optimization of all processing stages increases the plant production capacity, reduces the production cost for final products, and facilitates the interaction among the different departments of the plant. The proposed MIP model is well suited to ice-cream production; however, it could also be used, with minor modifications, for scheduling problems arising in other food industries with similar processing features. As the challenging case study reveals, our model delivers strong computational performance due to our modeling approach.
Keywords: production scheduling, mixed integer programming, food industry.
1. Introduction
Most production plants in the food industry sector combine continuous and batch operations in their overall production process, thus working in a semicontinuous production mode. The literature in the field of production scheduling and planning in the food processing industries is rather sparse. Entrup et al. (2005) presented three MIP models for scheduling and planning problems in the packing stage of stirred yogurt production. They accounted for shelf-life issues and fermentation capacity limitations; however, changeover times and production costs were ignored. Marinelli et al. (2007) addressed the planning problem in a packing line producing yogurt and proposed a two-stage heuristic for obtaining near-optimal solutions; changeover costs and times were not considered. Doganis and Sarimveis (2008) presented a MIP model for scheduling yogurt packing lines. Changeover times and costs were considered; however, fermentation stage limitations were ignored. Simultaneous packing of multiple products was not allowed, since the parallel machines shared the same feeding line, thus simplifying the problem. Kopanos et al. (2010) developed a MIP model for simultaneous production scheduling and lot-sizing in yogurt production lines sharing resources (e.g., fruit mixers). Although the problem was focused on the packing stage, timing and capacity constraints of the fermentation stage were also considered. The overall formulation takes into account changeover times and costs, production overtimes, as well as the typical daily production shutdown and setup times due to hygienic requirements.
2. The Ice-cream Production Facility The ice-cream production facility under study was first described by Bongers and Bakker (2006). The plant is based on a three-stage production process, as shown in Figure 1. Refer to Bongers and Bakker (2006) for the remaining set of data.
Figure 1. Ice-cream production facility.
3. Problem Statement
This study considers the production scheduling problem of industrial-scale multiproduct multistage semicontinuous processes with the following features. A set of products i ∈ I must be processed following a predefined sequence of processing stages s ∈ S, with processing units j ∈ J working in parallel. The total demand ζ_i for each product i, which is divided into a number of batches b ∈ B, must be met within the available scheduling horizon ω. Product i can be processed in a specific subset of units j ∈ J_i; similarly, processing stage s can be carried out in a specific subset of units j ∈ J_s. Every ageing vessel j ∈ J_s2 has a maximum capacity P_j^max. A product batch must remain in an ageing vessel for a minimum ageing time τ_i^age and no longer than its shelf life η_i^life. Parameter ρ_ij denotes the processing and packing rate for every product i. Sequence-dependent changeover times γ_ii'j between consecutive products are present in the processing (s1) and packing (s3) stages.
The key decision variables are: the allocation of batch b of product i to units j ∈ J_i per stage, Y_ibsj; the relative sequence of product batches (i,b) and (i',b') in the process line (s1), X_ibi'b'; the relative sequence of any pair of products i and i' in the ageing vessels (s2) and packing lines (s3) for j ∈ (J_i ∩ J_i' ∩ J_s), X_ii'; and the starting and completion times of product batch (i,b), L_ibs and C_ibs, respectively. The minimization of the makespan constitutes the optimization goal of this work.
In the problem under study: (i) the ageing vessels operate at maximum capacity, (ii) each product can be stored in a number of equal-capacity ageing vessels, and (iii) the ageing vessels are supplied by process lines that have the same processing rate. These process features allow us to predetermine the minimum number of batches β_i^min needed to fully meet the demand, as well as the ageing-vessel filling (τ_i^fill) and emptying (τ_i^empt) times, as sketched below.
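Since every batch fills one ageing vessel completely and the feeding/packing rates are fixed, these quantities can be computed before the MIP is built. A minimal pre-processing sketch, assuming demand and capacity share the same mass units and that filling/emptying proceed at the corresponding stage rates ρ_ij (the paper states only that the quantities are predetermined):

```cpp
#include <cmath>

struct ProductData {
    double demand;       // total demand zeta_i [kg]
    double vesselCap;    // ageing-vessel capacity P_max [kg]
    double processRate;  // process-line rate rho (stage s1) [kg/h]
    double packRate;     // packing-line rate rho (stage s3) [kg/h]
};

struct BatchPlan {
    int    betaMin;  // minimum number of (full-vessel) batches
    double tauFill;  // vessel filling time [h]
    double tauEmpt;  // vessel emptying (= packing) time [h]
};

// Pre-processing step: because every batch fills one vessel completely,
// the batch count and the fill/empty durations are fixed problem data.
BatchPlan predetermine(const ProductData& p) {
    BatchPlan b;
    b.betaMin = static_cast<int>(std::ceil(p.demand / p.vesselCap));
    b.tauFill = p.vesselCap / p.processRate;
    b.tauEmpt = p.vesselCap / p.packRate;
    return b;
}
```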
4. Mathematical Formulation The proposed MIP model is defined in terms of the following set of constraints:
$$\min \; C_{max}$$
$$C_{max} \ge C_{ibs} \quad \forall i,\; b \le \beta_i^{min},\; s = 3 \qquad (1)$$
$$\sum_{j \in (J_i \cap J_s)} Y_{ibsj} = 1 \quad \forall i,\; b \le \beta_i^{min},\; s \qquad (2)$$
$$C_{ibs} = L_{ibs} + \tau_i^{fill} \quad \forall i,\; b \le \beta_i^{min},\; s = 1 \qquad (3)$$
$$C_{ibs} = L_{ibs} + \tau_i^{fill} + \tau_i^{age} + W_{ibs} + \tau_i^{empt} \quad \forall i,\; b \le \beta_i^{min},\; s = 2 \qquad (4)$$
$$W_{ibs} \le \eta_i^{life} - \tau_i^{age} \quad \forall i,\; b \le \beta_i^{min},\; s = 2 \qquad (5)$$
$$C_{ibs} = L_{ibs} + \tau_i^{empt} \quad \forall i,\; b \le \beta_i^{min},\; s = 3 \qquad (6)$$
$$L_{ibs} = L_{ib(s-1)} \quad \forall i,\; b \le \beta_i^{min},\; s = 2 \qquad (7)$$
$$C_{ibs} = C_{ib(s-1)} \quad \forall i,\; b \le \beta_i^{min},\; s = 3 \qquad (8)$$
$$C_{ibs} = L_{i(b+1)s} \quad \forall i,\; b < \beta_i^{min},\; s = 3 \qquad (9)$$
$$L_{i'b's} \ge C_{ibs} + \gamma_{ii'j} - \omega(1 - X_{ibi'b'}) - \omega(2 - Y_{ibsj} - Y_{i'b'sj}) \quad \forall i,\, b \le \beta_i^{min},\, i',\, b' \le \beta_{i'}^{min},\, s,\, j \in (J_i \cap J_{i'} \cap J_s):\; i \ne i',\; s = 1 \qquad (10)$$
$$L_{ibs} \ge C_{i'b's} + \gamma_{i'ij} - \omega X_{ibi'b'} - \omega(2 - Y_{ibsj} - Y_{i'b'sj}) \quad \forall i,\, b \le \beta_i^{min},\, i',\, b' \le \beta_{i'}^{min},\, s,\, j \in (J_i \cap J_{i'} \cap J_s):\; i \ne i',\; s = 1 \qquad (11)$$
$$L_{i'b's} \ge C_{ibs} + \gamma_{ii'j} - \omega(1 - X_{ii'}) - \omega(2 - Y_{ibsj} - Y_{i'b'sj}) \quad \forall i,\, b \le \beta_i^{min},\, i',\, b' \le \beta_{i'}^{min},\, s,\, j \in (J_i \cap J_{i'} \cap J_s):\; i \ne i',\; s > 1 \qquad (12)$$
$$L_{ibs} \ge C_{i'b's} + \gamma_{i'ij} - \omega X_{ii'} - \omega(2 - Y_{ibsj} - Y_{i'b'sj}) \quad \forall i,\, b \le \beta_i^{min},\, i',\, b' \le \beta_{i'}^{min},\, s,\, j \in (J_i \cap J_{i'} \cap J_s):\; i \ne i',\; s > 1 \qquad (13)$$
$$L_{ib's} \ge C_{ibs} - \omega(2 - Y_{ibsj} - Y_{ib'sj}) \quad \forall i,\, b \le \beta_i^{min},\, b' \le \beta_i^{min},\, s,\, j \in (J_i \cap J_s):\; b < b' \qquad (14)$$
$$C_{max} \ge \varphi_j^{min} + (\alpha_j^{min} - 1)\,\gamma_j^{min} + \sum_{i \in I_j} \tau_i^{empt}\,\beta_i^{min} \quad \forall s,\; j \in J_s:\; s = 3 \qquad (15)$$
$$X_{ibi'b'} = X_{ii'} \quad \forall i,\, b \le \beta_i^{min},\, i',\, b' \le \beta_{i'}^{min},\, s,\, j \in (J_i \cap J_{i'} \cap J_s),\, j \in (J_i \cap J_{i'} \cap J_{s2}):\; i \ne i',\; s = 1 \qquad (16)$$
Eq. (1) defines the makespan, which is the objective function in this study. Eq. (2) states the unit allocation constraints, while Eqs. (3) to (6) define the timing of every product batch within each processing stage. Eqs. (7) and (8) give the timing of a product batch between consecutive stages, and Eq. (9) defines the timing of two batches of the same product in the packing stage. The sequencing between product batches in all processing stages is modelled with big-M constraints, according to Eqs. (10) to (14). Finally, Eqs. (15) and (16) are tightening constraints: Eq. (15) gives a tighter lower bound for the makespan, and Eq. (16) introduces valid integer cuts by correlating the relative sequence variables of the process and packing stages.
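To see what the big-M pairs of Eqs. (10)-(11) enforce once the binaries are fixed, the sketch below checks a candidate schedule against the underlying disjunction: for two batches of different products sharing a unit, one must follow the other plus the changeover. Task, changeover and respectsSequencing are illustrative names, not the paper's implementation.

```cpp
#include <string>
#include <vector>

// One scheduled batch-stage task on a unit, with its product identity.
struct Task {
    std::string product;
    std::string unit;
    double start, completion; // L_ibs, C_ibs
};

// Checks the disjunction behind Eqs. (10)-(11): for any two batches of
// different products assigned to the same unit, one must start only after
// the other completes plus the sequence-dependent changeover gamma(i,i',j).
bool respectsSequencing(const std::vector<Task>& tasks,
                        double (*changeover)(const std::string& from,
                                             const std::string& to,
                                             const std::string& unit)) {
    for (size_t a = 0; a < tasks.size(); ++a)
        for (size_t b = a + 1; b < tasks.size(); ++b) {
            const Task& x = tasks[a];
            const Task& y = tasks[b];
            if (x.unit != y.unit || x.product == y.product) continue;
            bool xFirst = y.start >= x.completion + changeover(x.product, y.product, x.unit);
            bool yFirst = x.start >= y.completion + changeover(y.product, x.product, y.unit);
            if (!xFirst && !yFirst) return false; // neither order holds: overlap
        }
    return true;
}
```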
5. Industrial Case Study
Bongers and Bakker (2006) made the first attempt to solve this scheduling problem by using the INFOR advanced scheduling software. As they reported, a feasible schedule on all stages could not be derived automatically by applying the available solvers; they finally obtained a feasible schedule through manual interventions. Note that some decisions at the beginning of the production week of interest had already been taken at the end of the previous production week (i.e., batches D.b1, G.b1, G.b2, and G.b3 had already passed through the process line and been assigned to ageing vessels V1, V3, V4, and V5, respectively). Subbiah and Engell (2010) studied the same ice-cream plant. They used timed automata and solved the optimization problem by reachability analysis, with a heuristic applied to reduce the model size. They did not mention whether they considered the overlapping decisions from the previous week's schedule. A feasible solution (Cmax = 119 h) was found in 13.13 CPU s, but it cannot be ruled out that the heuristic employed pruned the optimal solution.
Figure 2. Optimal production schedule (minimization of makespan).
Table 1. Process line and packing lines utilization breakdown.

Unit     Unit operation   Time (h)   Operation utilization   Total unit utilization
PROC     processing       76.48      63.73%                  80.89%
         cleaning         20.58      17.15%
         idle             22.94      19.11%
PACK 1   processing       115.05     95.88%                  98.79%
         cleaning         3.50       2.92%
         idle             1.45       1.21%
PACK 2   processing       106.00     88.33%                  92.08%
         cleaning         4.50       3.75%
         idle             9.50       7.92%
The resulting MIP model consists of 15,848 constraints, 491 continuous variables, and 2,024 binary variables. The optimal solution (118.55 h) was reached in 1.83 CPU s, despite the challenging (very high) total demand for final products. Figure 2 illustrates the optimal production schedule. Table 1 shows the breakdown of the utilization of the available scheduling time in the processing and packing lines. The process line is utilized for both processing and cleaning for 80.89% of the available time, compared with a food industry standard of 70%. Packing lines 1 and 2 operate at 98.79% and 92.08% of the total available time, respectively, including both packing and cleaning. The high total demand explains the high utilization of the process and packing lines. The packing lines show low total changeover times. As expected, the total cleaning times in the process line are higher, since changeovers between batches of different products are more frequent. In the feasible schedule reported by Bongers and Bakker (2006), the process line is utilized for 90% of the available time (9.11 percentage points higher than in the optimal schedule of this work), thus resulting in higher production costs (due to higher changeover costs) than the proposed production schedule, considering that changeover costs are proportional to changeover times.
6. Concluding Remarks
The proposed MIP model can easily become the core element of a computer-aided advanced scheduling and planning system to facilitate decision-making in relevant industrial environments. As the challenging case study reveals, our model delivers strong computational performance due to the efficient modeling approach for the sequencing decisions and the strong valid integer cuts introduced.
Acknowledgements Financial support from the Spanish Ministry of Education (FPU Grant) and project DPI2006-05673 is gratefully acknowledged.
References
Bongers, P.M.M., Bakker, B.H., Computer Aided Chemical Engineering 21, 1917-1922 (2006).
Doganis, P., Sarimveis, H., Annals of Operations Research 159, 315-331 (2008).
Entrup, M.L., Günther, H.O., Beek, P.V., Grunow, M., Seiler, T., International Journal of Production Research 43, 5071-5100 (2005).
Kopanos, G.M., Puigjaner, L., Georgiadis, M.C., Industrial and Engineering Chemistry Research 49, 701-718 (2010).
Marinelli, F., Nenni, M.E., Sforza, A., Annals of Operations Research 150, 177-192 (2007).
Subbiah, S., Engell, S., Computer Aided Chemical Engineering 28, 1201-1206 (2010).
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Corporate Production Planning for Industrial Gas Supply Chains under Low-Demand Conditions
Matteo D'Isanto,a Flavio Manenti,a Nadson M. N. Lima,b Lamia Zuniga Linanb
a Politecnico di Milano, CMIC Dept. "Giulio Natta", Piazza Leonardo da Vinci 32, 20133 Milano, Italy. Email: [email protected]
b University of Campinas (UNICAMP), Department of Chemical Processes, PO Box 6066, 13081-970, Campinas, São Paulo, Brazil. Email: [email protected]
Abstract
When the market demand changes significantly, as during the ongoing worldwide economic crisis, many plants are forced to operate far from their nominal conditions. In this case, the plantwide optimization of production sites is a myopic approach that can lead to plant inefficiencies and unconventional operating issues without in any way preventing economic losses. A way to tackle low-demand conditions is to raise the decision-making process from the plantwide level to the corporate level by assigning a Boolean variable to each production site to manage its on/off status. By doing so, certain additional (social) constraints may become relevant. This work proposes a novel approach to corporate production planning entirely based on MS Visual C++ and on specific algorithms belonging to the BzzMath library.
Keywords: MINLP; Corporate planning; Low-demand scenario; BzzMath library.
1. Introduction
Many production fields are undergoing a long period of reduced market demand. This is causing substantial economic losses, since many processes lose efficiency when they are forced to operate far from their nominal conditions. In fact, many plants were engineered and constructed years ago, when the main objective was to maximize production (a production-driven scenario) at the expense of plant flexibility and operability, features that would be particularly useful in the current market-driven scenario and ongoing economic crisis. Industrial gas supply chains are particularly exposed to the worldwide crisis and thus to the current low-demand scenario: facing a significant reduction in market demand (and hence in profits), there is no corresponding decrease in costs, since the raw material (air) is free, and the overall energy efficiency of energy-intensive plants such as air separation units (ASUs) drops dramatically when production is low (a nonlinear relationship). No optimization at the plantwide level can ensure a net operating margin for ASUs when
they have to operate below a certain competitive threshold. Nevertheless, some companies with decentralized production have the concrete possibility not only of limiting losses, but even of obtaining a positive margin: since the overall capacity of industrial gas supply chains is currently significantly larger than the final market demand, the low-demand scenario must be tackled from a corporate point of view, accounting for connections among production sites, logistics, and the geographical distribution of production and storage capacities. From this perspective, supply chain planning at the corporate level is an appealing approach to providing an economical solution, since it can ponder the temporary shutdown of some ASUs and, at the same time, force some others to operate close to their nominal conditions and thus with good plant efficiencies. The corporate optimum imposed on the air separation units may be significantly far from that dictated by the single plantwide optimizations.
2. Mathematical Programming
Corporate production planning is a complex optimization problem, since it involves the nonlinearities of chemical/industrial processes, discrete variables to support the decision-making process, large dimensions due to the detailed models of many plants, and uncertainty due to market/demand fluctuation and volatility. To be effective, corporate production planning should be based on detailed mathematical models that properly simulate each single ASU and their possible connections (pipelines, storages, trucks, railways, etc.). This unavoidably means accounting for certain nonlinearities typical of these processes, such as air fractionators (Bonami et al., 2008), polytropic compression, cryogenic storages (Rodriguez and Diaz, 2007), and so on. The intrinsic nonlinearity of ASUs makes the optimization problem not only constrained, but also nonlinear. Corporate production planning is a strategic optimization usually aimed at assigning the most profitable production load to each plant so as to obtain the largest net operating margin. The final objective changes when the demand is low, since the corporate production planning is then aimed at defining which plants should produce and which should not, by assigning a Boolean variable to each plant. Thus, the decision variables are of a discrete/Boolean nature, and they transform the process optimization into a mixed-integer nonlinear programming (MINLP) problem. Another problem is related to the number of variables and degrees of freedom involved in corporate production planning. The corporate level involves many plants, each with many production trains, many processes, many units per process, and so on. The number of state variables, decision variables, and degrees of freedom increases exponentially, leading to a large-scale MINLP problem. As with all optimization problems characterized by a prediction horizon (i.e., planning, dynamic optimization, nonlinear model predictive control), there is increasing uncertainty in the information about the future, since no one can know exactly a priori, for example, whether tomorrow's market demand will be the planned one or whether raw material costs will increase or decrease in the near future. A longer prediction horizon leads to higher market uncertainty. To reduce the effect of uncertainty, the so-called rolling horizon methodology can be adopted.
3. Solution Strategy
Given the numerical complexity of the corporate production planning described in Section 2, specific numerical solvers are needed. We adopted both existing and novel algorithms included in BzzMath, a comprehensive numerical library for scientific computing available on Professor Guido Buzzi-Ferraris's webpage (Buzzi-Ferraris, 2010), entirely developed in C++ and based on an object-oriented philosophy. Specifically, a set of very robust and efficient optimizers is adopted to solve every NLP encountered. They exploit openMP directives for parallel computing on shared-memory architectures (multiprocessor machines) to improve the computational speed according to the available processors, as discussed elsewhere (Buzzi-Ferraris and Manenti, 2010). The same set of optimizers is behind the nonlinear system solver of BzzMath, which is adopted to solve the nonlinear constraints of the mathematical models of the ASUs to which the optimization is subject. The mixed-integer nature of the problem is handled by brute force; the rolling horizon methodology to handle uncertainties is not yet implemented in this preliminary activity, but the development of a BzzMath-based branch & bound algorithm and the rolling horizon are planned as future developments, as testified by recent publications on the moving horizon methodology (Manenti et al., 2009).
4. Modeling, Mathematical Programming, and Numerical Results
An existing industrial gas corporation is studied. The following hypotheses are currently assumed: (I) three production sites (ASUs) are considered; (II) the production sites have the same production capacity and the same process flow diagram; (III) the sites can share their gas and liquid products free of logistic costs (i.e., via pipelines); (IV) a daily discretization and a monthly time horizon are assumed; (V) when a plant is operating, its efficiency decreases by 0.5% for each interval; (VI) when a plant is in maintenance, its efficiency increases by 0.5% for each interval; (VII) the optimization parameters are the vent lines (three vents for each plant, see Figure 1); (VIII) the daily energy cost varies between night and day; (IX) storage capacity and initial conditions are the same for each plant, except for the initial efficiency α: α1 = 0.973, α2 = 0.881, and α3 = 0.748; (X) to avoid too-short shutdown periods, a stop of at least 1 week is fixed whenever the corporate production planning forces a plant to stop; (XI) at the same time, one week is also the maximum stop period, to avoid social problems (i.e., strikes).
Figure 1. Corporate network, process flow diagram of each plant, and a C++ code sample behind the proposed approach.
5. Numerical Results
The whole optimization problem involves 6,022 equations and 360 decision variables (3 vents and a Boolean variable for the on/off condition, for every plant at each interval of the corporate production planning). For the sake of conciseness, we report the essential formulation of the economic objective function (1) without entering into the model and term details:

$$\min_{x,b} \; \Phi = -\sum_{i=1}^{NP=3} \sum_{j=1}^{NT=60} \left[ REV_{i,j}(x,b) - COSTS_{i,j}(x,b) \right] \quad \text{s.t.} \quad \begin{cases} f(x,b) = 0 \\ g(x,b) \ge 0 \end{cases} \qquad (1)$$

where x and b are the continuous and discrete variables, respectively; NP is the number of plants; NT is the number of time intervals (12 h each over 30 days); and f and g represent the nonlinear equations and the inequality constraints, respectively, of the models of the plants involved in this study. The BzzMath library is used to solve the resulting MINLP problem. Specifically, the class BzzNonLinearSystem is used to solve the resulting nonlinear systems; the class BzzFunctionRootRobust is used to evaluate the top temperature of the Linde fractionators; and the class BzzMinimizationRobust is used to solve the multidimensional nonlinear optimization problem. The discrete branch originated by the Boolean variables is solved by brute force, as sketched below. The numerical results of Figure 2 are obtained for a monthly corporate production planning.
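With three plants and, under hypotheses (X)-(XI), essentially one candidate stop-week per plant, the Boolean layer is small enough to enumerate exhaustively, solving one NLP per on/off configuration. A minimal sketch of that brute-force loop under these simplifying assumptions; solveNlpMargin is a hypothetical callback standing in for the BzzMath-based solution of the continuous (vent) sub-problem, not a BzzMath API.

```cpp
#include <array>
#include <functional>
#include <limits>

constexpr int NP = 3;      // production sites (hypothesis I)
constexpr int NWEEKS = 4;  // monthly horizon, simplified to weekly on/off decisions

using OnOff = std::array<std::array<bool, NWEEKS>, NP>;

// Enumerate every pattern in which each plant stops for zero or exactly one
// whole week (a simplification of hypotheses X-XI), keeping the configuration
// whose NLP sub-problem yields the largest net operating margin.
OnOff bruteForcePlan(const std::function<double(const OnOff&)>& solveNlpMargin) {
    OnOff best{};
    double bestMargin = -std::numeric_limits<double>::infinity();
    // stop[p] in 0..NWEEKS: index of the stopped week; NWEEKS means "no stop".
    for (int s0 = 0; s0 <= NWEEKS; ++s0)
      for (int s1 = 0; s1 <= NWEEKS; ++s1)
        for (int s2 = 0; s2 <= NWEEKS; ++s2) {
            OnOff cfg{};
            const int stop[NP] = {s0, s1, s2};
            for (int p = 0; p < NP; ++p)
                for (int w = 0; w < NWEEKS; ++w)
                    cfg[p][w] = (w != stop[p]);
            const double margin = solveNlpMargin(cfg); // NLP over the vent flows
            if (margin > bestMargin) { bestMargin = margin; best = cfg; }
        }
    return best;
}
```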
Figure 2. Monthly corporate production planning; liquid oxygen (LOX), nitrogen (LIN), and argon (LAR) storages and shutdown periods due to low-demand conditions.

Plant no. 1 is forced to be under maintenance during the fourth week; plant no. 2, during the second week; and plant no. 3 during the remaining weeks of the month. This is because the three plants behave differently: the most efficient plants (nos. 1 and 2) are generally operating, while plant no. 3 is used as a spare to the others because of its low production efficiency. Consequently, plant no. 3 is characterized by strong
oscillations in the liquid storages, whereas the other plants immediately start producing a certain amount of commodities at the beginning of the month. Figure 2 shows the liquid levels of liquid nitrogen (LIN), oxygen (LOX), and argon (LAR) during the monthly corporate planning. Note that plants no. 1 and 2 fill the storage tanks with liquid products during the first week and then empty the storages day by day to reduce the expensive storage costs. Specifically, liquid argon is always stored when produced, to match market requirements as closely as possible. Since it is very difficult to produce (it makes up a small fraction of air, less than 1%, and requires a complex side splitter to separate), its demand influences the ASU vents. Liquid oxygen is usually stored in the largest tanks for essentially two reasons: its storage cost is significantly lower than that of liquid nitrogen, and it amounts to only about one fourth of the nitrogen in the air, which unavoidably makes it the bottleneck among the liquid products of ASUs. Thus, when liquid oxygen is produced, it is generally stored to match possible demand peaks and market dynamics. On the other hand, liquid nitrogen is the most expensive item of ASU storage and is not stored in advance; in addition, its predominant share of the air composition allows just-in-time production even when the market demand presents strong oscillations. Under a low-demand scenario, the stored amount of liquid nitrogen is practically zero. The monthly corporate planning solved here practically assigns full load (high efficiency) to two of the three available production sites, keeping the third site under maintenance for relatively short periods; this production scenario is very different from the single plantwide optimization of each site, which would force the plants to operate at around 40-50% of their nominal capacity (low production efficiency).
6. Conclusions
The paper shows certain tangible benefits of corporate production planning in tackling a low-demand scenario. The proposed approach is fully developed in MS Visual C++ and based on BzzMath. It is an appealing solution for reducing losses and reasonably planning complex production networks. Possible social problems deriving from corporate decisions (temporary shutdown of certain production sites) are accounted for too.
References
Bonami, P., Biegler, L.T., Conn, A.R., Cornuejols, G., Grossmann, I.E., Laird, C.D., et al., 2008, An algorithmic framework for convex mixed integer nonlinear programs, Discrete Optimization 5(2), 186-204.
Buzzi-Ferraris, G., 2010, BzzMath: Numerical library in C++, Politecnico di Milano, http://chem.polimi.it/homes/gbuzzi.
Buzzi-Ferraris, G. and Manenti, F., 2010, A Combination of Parallel Computing and Object-Oriented Programming to Improve Optimizer Robustness and Efficiency, Computer Aided Chemical Engineering 28, 337-342.
Manenti, F., Buzzi-Ferraris, G., Dones, I. and Preisig, H.A., 2009, Generalized Class for Nonlinear Model Predictive Control Based on BzzMath Library, in S. Pierucci (Ed.), ICheaP-9: 9th International Conference on Chemical and Process Engineering, 17(1-3), 1209-1214.
Rodriguez, M. and Diaz, M.S., 2007, Dynamic Modelling and Optimization of Cryogenic Systems, Applied Thermal Engineering 27, 1182-1190.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Standards for Continual Scheduling of Batch Operations
Charles Siletti,a Demetri Petrides,b Dimitri Vardalisc
a Intelligen, Inc., 700 Walton Ave., Mt. Laurel, NJ 08054, USA
b Intelligen, Inc., 2326 Morse Ave., Scotch Plains, NJ 07076, USA
c Intelligen Europe, Thessaloniki, Greece
Abstract
A batch production schedule is essentially a description of future events that is sufficiently detailed to serve as a guide for the operations team. A process schedule changes constantly as a result of changes in production orders, receipts, shipments, equipment maintenance, or variable durations in the process itself. This paper focuses on issues surrounding the implementation, dissemination, and maintenance of the production schedule. The standards explored cover batch recipe representation. A recipe or master batch record is a description of the process and the various constraints to which it is subject. Ideally, the process should be represented in a fashion that is independent of the formulation or solution method. The ISA S88 standard provides a basis for describing batch processes, and it may be implemented in different ways. In an example, the SchedulePro (Intelligen, Inc.) user interface is used to create a recipe and represent it in a series of S88-inspired tables in a relational database; batches of the recipe are subsequently scheduled with an MILP solver. If the capabilities of a relational database are not required, an S88-based XML representation, such as BatchML, might also provide an application-independent representation. The production schedule itself may be represented in an application-independent way. A relational database is an ideal representation for two reasons. First, a database allows for user-specific reports and views based on the user's location or job function. Second, a database allows for multiple versions of a process schedule. An example illustrates how each update to the schedule is saved as a "snapshot." This allows for a detailed planned-versus-actual analysis that can pinpoint when and why a particular batch was late or early. The example also illustrates how different users might view only the parts of the schedule that they need.
Keywords: Batch scheduling, batch processing, manufacturing systems, XML
1. Introduction
Each year brings developments in batch scheduling, including advances in solution methodologies, problem formulations, and algorithms. These advances have led to faster solution times and have broadened the range of problems that can be solved. Scheduling approaches have been the subject of periodic reviews, including Pekny and Reklaitis [1] and Floudas and Lin [2],[3]. However, the solution methodology is only part of a scheduling system. A set of standards for representing process scheduling information would facilitate a modular approach to scheduling with interchangeable solution components.
2. Motivating Example
2.1. Process Description
A biotechnology process has numerous clean-in-place (CIP) steps that require both the equipment to be cleaned and a shared equipment item known as a CIP skid. These processes also require the preparation of buffers or media. For this example there are 3 blends to be prepared, 2 blend tanks, and a single CIP skid. Blending operations require 2 hours and cleaning requires 1 hour. Blend 1 requires Tank 1, while the remaining blends require Tank 2. The goal is to produce the blends and free the tanks as soon as possible.
2.2. Scheduling Models
The process was modeled in a particular commercial scheduling system, which organizes a batch process into unit-procedures, e.g., blending. A unit-procedure requires a process unit, e.g., a tank. A unit-procedure is subdivided into operations, e.g., CIP. The tool defines timing relations and resources at the operation level. The system employs a job-based scheduling algorithm, which assigns resources in priority order. This can lead to sub-optimal solutions, as shown in Figure 1.
Figure 1: Blending Schedule
While the user could optimize the solution by changing priorities, a simple mathematical programming formulation automatically finds the best solution. Since this case has dedicated process equipment and fairly simple constraints, a continuous-time model is used, with variables representing the start times of each operation. Constraints are introduced for the precedence of operations, and equipment constraints ensure that process units are not overbooked. The CIP skid limitation is managed with disjunctive constraints, as described by Jenerström and Westerlund [4]. Because the tool can represent the process model in a relational database, it is a simple matter to develop an external program (Visual Basic in this case) that queries the database and generates the appropriate input for an MILP solver (LPSolve); this round trip is sketched below. A similar program can read the solution and insert the appropriate database records to represent the resulting schedule. The resulting solution is shown in Figure 2.
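The paper's bridge program was written in Visual Basic against the scheduling database; the sketch below shows the same idea in C++ under assumed, hypothetical table/field names, emitting input in the textual LP format that solvers such as LPSolve read.

```cpp
#include <fstream>
#include <string>
#include <vector>

// Hypothetical in-memory image of the recipe tables (one row per operation),
// as they might be read back from the scheduling database.
struct OperationRow {
    std::string id;          // e.g. "Blend1_CIP"
    double duration;         // hours
    std::string predecessor; // id of the preceding operation, empty if none
};

static double durationOf(const std::vector<OperationRow>& ops, const std::string& id) {
    for (const OperationRow& op : ops)
        if (op.id == id) return op.duration;
    return 0.0;
}

// Emits a continuous-time start-time model in textual lp-format (readable by
// LPSolve): minimise the makespan subject to precedence constraints. The
// disjunctive CIP-skid constraints would add binaries and big-M rows in the
// same style.
void writeLp(const std::vector<OperationRow>& ops, const std::string& path) {
    std::ofstream lp(path);
    lp << "min: makespan;\n";
    for (const OperationRow& op : ops) {
        // Every operation finishes before the makespan.
        lp << "makespan - start_" << op.id << " >= " << op.duration << ";\n";
        // Precedence: an operation starts after its predecessor ends.
        if (!op.predecessor.empty())
            lp << "start_" << op.id << " - start_" << op.predecessor
               << " >= " << durationOf(ops, op.predecessor) << ";\n";
    }
}
```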
Figure 2: Improved Schedule
If the process required a different solution methodology, a new formulation routine would also be required, but the database representation would remain the same. A standardized process description would allow the formulation and solution to be independent of the interface tool used for problem set-up and display.
3. Scheduling Standard Requirements
3.1. General Requirements
Pekny and Reklaitis describe a scheduling system as having four basic components: a user interface, a representation layer, a formulation layer, and a computational layer. Separation of the layers into independent modules requires data interfaces that allow them to communicate. If the modules are to be interchangeable, the interfaces must be standardized. The interface between the formulation and computational layers depends on the solution technique. Standards exist for mathematical programming techniques, e.g., MPS files, but for other methods the formulation and solution may be a single module. The interface between the formulation and representation layers and between the representation and interface modules communicates information about the process and is independent of the solution methodology. A standard for sharing process scheduling information should meet the following requirements:
- The standard should be independent of the solution method.
- The standard should be independent of hardware and software platforms.
- The standard should adequately describe the scheduling problem and the solution.
- The standard should be extensible.
3.2. Level of Detail
Batch scheduling models are generally intended to schedule processes whose operating conditions are fixed. The details of reactions and separations are ignored, and the model is focused on the process tasks, their durations, relative timing, and resource requirements. In general the following information should be included:
- The processes to be scheduled and the tasks for each process
- The duration and precedence information for each task
- The equipment, labor, material inputs and material outputs for each task
- The (possibly time-variant) limits on resources and inventories
- Orders for each process with release dates and due dates
- Costs of resources and prices of products
This organization implies a partition of responsibilities among the different modules. The interface program translates user input and input from other systems into a standardized representation. The interface program also selects (or allows the user to select) an appropriate solver program. The solver program reads the standardized representation, formulates the scheduling problem, and formats the results in a standardized way. The solver program reports its success or failure in formulating and solving the problem to the interface program. If the solution is successful, the interface program is responsible for displaying and publishing the results.
3.3. Format
A relational database format offers the advantages of easy searching and reporting, and applications can easily share information stored in a database. A file-based format, e.g.
XML, may be more convenient for transferring data and allows a greater degree of autonomy among the system components. The two formats are not mutually exclusive; Bert Bos [5] describes how to convert a relational database format to XML.
3.4. ISA S88/S95
3.4.1. Process Recipe Information
The International Society of Automation has issued two data model standards that deal with the management of information associated with batch process scheduling. The ISA-S88 standard, issued in 1995 [6], describes the information needed for batch control systems. It includes a hierarchical description of the batch process steps, resources, and specifications, and provides a description of conditions and state requirements for processing steps. The S88 standard does not specifically describe limitations on resource availability, nor does it describe production plans, but the ISA-S95 standard covers such business planning and logistics information, often associated with enterprise resource planning (ERP) and manufacturing execution systems (MES). That standard organizes information related to production orders, resource limitations, business rules, and overall schedules [7]. In the S88 standard, a process description is represented by a master recipe. The master recipe allows a hierarchical process organization consisting of a series of unit procedures, each of which may consist of a series of operations, which may in turn be organized into phases. Each level may have sequencing or precedence logic, equipment requirements, and resource requirements.
3.4.2. Resource Limitation Information
Plant resource limitations in the S95 standard are described by production capabilities, which provide time-dependent resource availability information. The standard also provides the production request entity, which describes production orders and associated information.
3.4.3. Production Schedule Information
The result of scheduling is a series of batches for which the timing and resources are explicitly determined. For a given batch, this detailed information is given by a control recipe. A control recipe is similar to a master recipe except that it is associated with a specific batch, references specific resources, and has fixed start and end times for its steps.
3.5. XML Standards
Neither the S88 nor the S95 standard enforces any specific implementation of data organization. The standards are amenable to implementation as tables in a relational database, as classes in an object-oriented programming environment, or as XML. The World Batch Forum maintains XML schemas that implement both standards. The S88 standard is implemented as the BatchML (batch markup language) schema, and the S95 standard is implemented as the B2MML (business to manufacturing markup language) schema. The two XML schemas have been consolidated and may be used together.
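As a minimal in-memory mirror of the S88 master-recipe hierarchy described in Section 3.4.1 (master recipe, unit procedures, operations, phases), one could model the levels as nested records; the field names below are illustrative, not the standard's element names.

```cpp
#include <string>
#include <vector>

// Each S88 level nests the one below it; sequencing logic, equipment and
// resource requirements would hang off each level in a fuller model.
struct Phase         { std::string name; double duration; };
struct Operation     { std::string name; std::vector<Phase> phases; };
struct UnitProcedure { std::string name; std::string equipmentClass;
                       std::vector<Operation> operations; };
struct MasterRecipe  { std::string product; std::vector<UnitProcedure> procedures; };
```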
4. Updating, Tracking and Implementing a Schedule
4.1. Scheduling in Manufacturing Systems
In the hierarchy of process manufacturing systems, detailed process scheduling is part of the MES function, which links ERP systems and process control [8]. An MES is generally responsible for implementing production by communicating instructions to operators or to the control system and tracking their execution.
4.2. Schedule Updating
A schedule is a description of the future. Actual events may deviate in duration or in resource usage in ways that invalidate the schedule, so periodic rescheduling with current information is required. The S88 standard and its BatchML implementation are precisely aimed at communicating such information across systems; Hansen [9] describes one possible implementation.
4.3. Schedule Tracking
Each cycle of updating and rescheduling leads to a new version of the schedule, and there is value in retaining each version. Many organizations report key performance indicators (KPIs) based on planned-versus-actual production information. The version of the schedule that serves as the basis may vary: an annual planning report may include information about deviations from the initial schedule, while for an investigation of batch deviations, "planned" may mean the last schedule before the start of a batch. The S88 and S95 standards also provide a good format for saving schedule information in a relational database, for example as sketched below.
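One way to realise the snapshot idea in a relational store is to stamp every scheduled step with the version of the scheduling run that produced it, so nothing is overwritten; planned-versus-actual analysis then reduces to joining two versions. A sketch with a hypothetical two-table schema, not taken from the paper or from the S88/S95 standards:

```cpp
#include <string>

// Hypothetical DDL: each rescheduling cycle inserts a new schedule_version
// row and re-inserts its batch steps under that version. Past "planned"
// states remain queryable for comparison against actual execution.
const std::string kSnapshotSchema = R"sql(
CREATE TABLE schedule_version (
    version_id    INTEGER PRIMARY KEY,
    created_at    TIMESTAMP NOT NULL
);
CREATE TABLE scheduled_step (
    version_id    INTEGER REFERENCES schedule_version(version_id),
    batch_id      TEXT NOT NULL,
    step_name     TEXT NOT NULL,
    planned_start TIMESTAMP NOT NULL,
    planned_end   TIMESTAMP NOT NULL,
    PRIMARY KEY (version_id, batch_id, step_name)
);
)sql";
```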
5. Conclusion
A fully modular scheduling architecture requires a standard for communicating process recipe and scheduling information. The ISA-88 and ISA-95 standards organize the information for communicating process recipes and production schedules, respectively. These standards and their XML implementations may serve as a means of modularizing scheduling interfaces, solution technologies, and implementations.
References
[1] J. Pekny and G. Reklaitis, Towards the Convergence of Theory and Practice: A Technology Guide for Scheduling/Planning Methodology, Foundations of Computer Aided Process Operations, 3 (1998) 91-111
[2] C. Floudas and X. Lin, Mixed Integer Linear Programming in Process Scheduling: Modeling, Algorithms, and Applications, Annals of Operations Research, 139 (2005) 131-162
[3] C. Floudas and X. Lin, Continuous-Time versus Discrete-Time Approaches for Scheduling of Chemical Processes: a Review, Computers and Chemical Engineering, 28 (2004) 2109-2129
[4] P. Jenerström and T. Westerlund, A Comparison of Three Different Modeling Approaches for Solving Multi-Product, Multi-Purpose Plant Scheduling, Foundations of Computer Aided Process Operations, 4 (2003) 319-322
[5] B. Bos, XML Representation of a Relational Database, World Wide Web Consortium, http://www.w3.org/XML/RDB.html (1997)
[6] ISA, ANSI/ISA-88.01-1995, Batch Control Part 1: Models and Terminology (1995)
[7] R. Menendez and D. Tanner, Unified Manufacturing Control System MCS Architecture for Pharmaceutical and Biotech Manufacturing, Pharmaceutical Engineering, 27 (2007) 1-8
[8] F. Peignes and J. Trouchaud, Succeeding with Manufacturing Execution Systems (MES), Pharmaceutical Engineering, 18, No. 5 (2001) 36
[9] N. Hansen, Human Machine Interface for Process Scheduling Support, Ørsted Denmark Technical University Masters Thesis (2006)
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
New Scheduling Approach for Shared Resources and Mixed Storage Policies
Pedro M. Castro,a Luis J. Zeballos,a,b Carlos A. Méndezc
a UMOSE, Laboratório Nacional de Energia e Geologia, 1649-038 Lisboa, Portugal
b Universidad Nacional del Litoral, Facultad Ingeniería Química, Santa Fe, Argentina
c INTEC (Universidad Nacional del Litoral – CONICET), Santa Fe, Argentina
Abstract
We propose a new mixed-integer linear programming (MILP) model for scheduling automated wet-etching stations (AWS), which can be classified as multistage batch plants with both zero-wait and local storage policies, featuring a robot for moving the wafer lots from one stage to the next. The novelty is the use of a slot-based approach for handling the processing tasks together with general precedence sequencing variables for dealing with the transportation tasks. The performance of the new continuous-time model is shown to be significantly better than that of a pure sequencing-based formulation and of a closely related hybrid model taken from the literature.
Keywords: optimization, mixed-integer linear program, mathematical modeling.
1. Introduction
The key element of scheduling is ensuring that equipment resources do not handle competing tasks simultaneously. Task sequencing can be done explicitly, through the use of immediate or general precedence sequencing variables (SV), or implicitly, by assigning tasks to different slots of a time grid. The former option is typically associated with multistage plants, where a given unit belongs to a single stage and product identity is kept. In contrast, slot-based models are preferred for multipurpose plants, where there is also the need to keep track of material resources over time. They are also linked to problems featuring renewable shared resources other than equipment, such as manpower and utilities (Janak et al. 2004; Castro & co-workers 2004, 2009). Nevertheless, the SV approach of Méndez & Cerdá (2003) has been shown to be a better alternative for a single-stage batch plant with limited manpower. This article proposes a new hybrid slot/sequencing-based model for the optimal scheduling of AWS. We use general precedence variables, whereas in the earlier work by Bhushan and Karimi (2003) the sequencing variables were defined over a narrow neighborhood. The new model is shown to be more efficient and faster at finding good solutions, particularly as the number of baths increases.
2. Problem Definition
Given an automated wet-etching station consisting of m ∈ M units divided into |M|-1 baths and an output buffer, i ∈ I wafer lots, and a robot for moving lots across baths, the objective is to determine the sequencing of lots in the units and the sequence of transportation tasks by the robot so as to minimize the makespan. Fixed processing times, p_{i,m}, and transportation times to a given unit/buffer, S_m, are assumed. All lots follow the same recipe, going through chemical and water baths in sequence, see Figure 1. Overexposure
in the chemical baths can damage the wafers, so the contact time must be controlled strictly (zero-wait policy, ZW). On the other hand, overexposure to water is safe. Water baths can thus act as local storage (LS). There is no intermediate storage (NIS). The robot moves a single wafer lot at a time and cannot be used as a temporary buffer.
Figure 1. An automated wet-etching station (AWS)
3. New Model
In terms of plant topology, AWS feature a single unit per stage, meaning that we need to be concerned with task sequencing only. Because of this, multiple-time-grid approaches become particularly attractive, since there is no longer uncertainty in the number of slots to specify for each unit, which is equal to the number of lots, |T| = |I|. Starting from the model by Castro & Grossmann (2005), we derive the MILP model by adding new variables and constraints to capture the features of AWS linked to the NIS and ZW policies. Binary variables N_{i,t} assign lot i to slot t, with the latter index implicitly giving the position of lot i in the sequence. It is important to highlight that the initial sequence is kept throughout the processing stages due to the NIS and ZW policies. Continuous variables T_{t,m} indicate the starting time of slot t in unit m, while Te_{t,m} gives the time at which processing ends in water bath m (m ∈ LS). Equation (1) states that the duration of slot t in unit m must be greater than the processing time of the lot assigned to the slot plus its transfer to unit m+1 and the transfer of the subsequent lot, assigned to slot t+1, to unit m. While this is true for all units, the constraint is written for chemical baths only, since for the water baths the relation is with the end of processing in the previous slot, Eq. (2). The starting time in bath unit m+1 must be equal to the start of the same slot t in the previous unit plus processing and transfer times, Eq. (3). In contrast, the end of processing in water baths may go beyond the lot processing time, Eq. (4). The starting time of the first slot in the first unit must be greater than the transfer time to it, Eq. (5). Eqs. (6)-(7) enforce a one-to-one assignment between lots and slots. The makespan is defined through Eq. (8), while Eq. (9) makes the formulation more efficient. The unlimited robot model (URM) is completed with Eq. (10), where MS is the makespan.
$$T_{t+1,m} \ge T_{t,m} + \sum_{i \in I} N_{i,t}\, p_{i,m} + S_{m+1} + S_m \quad \forall m \in ZW,\; t \in T,\; t \ne |T| \qquad (1)$$
$$T_{t+1,m} \ge Te_{t,m} + S_{m+1} + S_m \quad \forall m \in LS,\; t \in T,\; t \ne |T| \qquad (2)$$
$$T_{t,m+1} \ge T_{t,m} + \sum_{i \in I} N_{i,t}\, p_{i,m} + S_{m+1} \quad \forall m \in ZW,\; t \in T \qquad (3)$$
$$Te_{t,m} \ge T_{t,m} + \sum_{i \in I} N_{i,t}\, p_{i,m} \quad \forall m \in LS,\; t \in T \qquad (4)$$
$$T_{1,1} \ge S_1 \qquad (5)$$
$$\sum_{t \in T} N_{i,t} = 1 \quad \forall i \in I \qquad (6)$$
$$\sum_{i \in I} N_{i,t} = 1 \quad \forall t \in T \qquad (7)$$
$$T_{|T|,|M|} \le MS \qquad (8)$$
$$T_{t,m} + \sum_{i \in I} N_{i,t} \sum_{\substack{m' \in M,\; m' \ge m \\ m' \ne |M|}} \left( p_{i,m'} + S_{m'+1} \right) \le MS \quad \forall m \in M,\; m \ne |M|,\; t \in T \qquad (9)$$
$$\min \; MS \qquad (10)$$
In order to model the transportation tasks of the robot resource, we use general precedence sequencing variables that relate different slot-unit pairs. In fact, not all combinations need to be considered: after a careful analysis, one finds that only a subset of the binary variables may be different from zero (set YD0) or one (set YD1). The exact definitions are given below, where LB_{m,t} and UB_{m,t} give the lowest and highest possible slot, belonging to a hypothetical robot grid, for transferring the lot in position t (with respect to the bath sequence) to unit m.

$$Y_{t,m,t',m'} = \begin{cases} 1 & \text{if slot } t \text{ in unit } m \text{ starts after slot } t' \text{ in } m' \quad (t' > t,\; m \ne m') \\ 0 & \text{otherwise} \end{cases}$$
$$YD0 = \left\{ (t,m,t',m') : t,t' \in T,\; t' > t,\; m,m' \in M,\; m \ne m',\; UB_{m,t} > LB_{m',t'} \right\}$$
$$YD1 = \left\{ (t,m,t',m') : t,t' \in T,\; t' > t,\; m,m' \in M,\; m \ne m',\; UB_{m',t'} > LB_{m,t} \right\}$$
$$LB_{m,t} = \sum_{\substack{t' \in T \\ t' \le t}} \sum_{\substack{m' \in M \\ m' \le m}} 1 \quad \forall m \in M,\; t \in T$$
$$UB_{m,t} = |I| \cdot |M| + 1 - \sum_{\substack{t' \in T \\ t' \ge t}} \sum_{\substack{m' \in M \\ m' \ge m}} 1 \quad \forall m \in M,\; t \in T$$
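The bound computation that prunes the Y variables is simple counting over the hypothetical robot grid. A minimal sketch consistent with the definitions above (lots and units indexed from 1; nLots = |I|, nUnits = |M|):

```cpp
#include <vector>

// LB[m][t]: earliest robot slot that can hold the transfer of the lot in
// bath position t to unit m = number of transfers (t',m') with t' <= t and
// m' <= m. UB[m][t]: latest such slot = |I|*|M| + 1 minus the number of
// transfers with t' >= t and m' >= m.
struct Bounds { std::vector<std::vector<int>> LB, UB; };

Bounds robotSlotBounds(int nLots, int nUnits) {
    Bounds b;
    b.LB.assign(nUnits + 1, std::vector<int>(nLots + 1, 0));
    b.UB = b.LB;
    for (int m = 1; m <= nUnits; ++m)
        for (int t = 1; t <= nLots; ++t) {
            b.LB[m][t] = t * m; // count of (t' <= t, m' <= m) pairs
            b.UB[m][t] = nLots * nUnits + 1
                       - (nLots - t + 1) * (nUnits - m + 1);
        }
    return b;
}
// A pair (t,m,t',m') with t' > t and m != m' enters YD0 only if
// UB[m][t] > LB[m'][t'], and YD1 only if UB[m'][t'] > LB[m][t],
// pruning most of the sequencing binaries before the model is built.
```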
The sequencing constraints that prevent simultaneous transfer of lots by the robot are given by the big-M constraints (11)-(12), where H is an upper bound on the makespan. This completes the one robot model (ORM).

$$T_{t,m} \ge T_{t',m'} + S_m - H(1 - Y_{t,m,t',m'}) \quad \forall (t,m,t',m') \in YD0 \qquad (11)$$
$$T_{t',m'} \ge T_{t,m} + S_{m'} - H\, Y_{t,m,t',m'} \quad \forall (t,m,t',m') \in YD1 \qquad (12)$$
4. Computational Results We now compare the computational performance of the new model to that of our implementation of the hybrid model by Bhushan & Karimi (2003) and also to the pure general precedence sequencing model (AM) by Aguirre & Méndez (2010). Thirteen test problems generated from the 25 lots in 12 baths problem first considered by Bhushan & Karimi (2003) are used for that purpose. These range from 18 lots in 4 baths to 15 lots in 12 baths. Parameter H in eqs 11-12 was set to 500, well above the optimal makespan for all example problems.
Table 1. Overview of computational statistics for the one-robot case

                       Makespan                  CPUs                     Sol. relaxed MILP
Problem   (I,M-1)      ORM     BK      AM        ORM    BK     AM         ORM/BK    AM
P1        (8,4)        95.6    95.6    95.6      0.82   1.31   55.3       85.4      54.3
P2        (10,4)       115.6   115.6   115.6     4.95   3.17   3600       100.5     61.7
P3        (12,4)       134.1   134.1   135       46.1   13.3   3600       116.5     69.17
P4        (15,4)       163.6   163.6   167.2     1682   104    3600       141.4     79.42
P5        (18,4)       194.7   195.7   202.8     3600   706a   3600       168.1     90.11
P6        (8,8)        131.6   132.6   131.6     28.7   3600   350        113.9     84.06
P7        (10,8)       153.5   -       155.8     3600   3600   3600       130.2     92.03
P8        (12,8)       172.5   193.6   182.2     3600   3600   3600       146.8     99.77
P9        (15,8)       214.6   -       218.7     3600   3600   3600       172.1     110.7
P10       (8,12)       170.6   -       170.6     92.7   3600   2298       146.1     116.5
P11       (10,12)      199.1   -       206.6     3600   3600   3600       164.1     125.62
P12       (12,12)      -       -       -         3600   3600   3600       180.4     133.19
P13       (15,12)      782.1   -       -         3600   3600   3600       207.0     145.15

a Out of memory termination.
Table 2. Overview of computational statistics for the unlimited-robot case

              URM                             AM without robot constraints
Problem   RMILP    Makespan   CPUs        RMILP    Makespan   CPUs
P1        85.4     95.1       0.39        54.3     95.1       2.40
P2        100.5    115.5      1.30        61.7     115.5      81
P3        116.5    133.7      7.72        69.2     133.7      3600
P4        141.4    163.1      65.4        79.4     163.1      3600
P5        168.1    194.2      2524        90.11    195.6      3600
P6        113.9    130        0.63        84.06    130        3.34
P7        130.2    149.4      5.67        92.03    149.4      148
P8        146.8    169.1      62.8        99.77    169.1      3600
P9        172.1    197.5      1501a       110.7    197.3      3600
P10       146.1    170.3      2.44        116.5    170.3      6.24
P11       164.1    192.2      12.7        125.6    192.2      259
P12       180.4    210.7      38.1        133.2    210.7      3600
P13       207.0    241.4      2959        145.2    242.4      3600
Suboptimal solutions are shown in italics in the original; a out of memory termination. The models have been implemented in GAMS 23.5 using CPLEX 12.2 as the MILP solver. The hardware consisted of an Intel Core2 Duo T9300 2.5 GHz laptop (two parallel threads) running Windows Vista Enterprise. The relative optimality gap was set to 10^-6 and the maximum computational time to 3600 CPUs.
The results in Table 1 clearly show that the new model is the best performer. ORM successfully proves optimality under the one-hour mark for 6 problems (P1-P4, P6 and P10) and always provides the best solution. The conceptually similar model BK can be considered the second best, being indeed better for P2-P4. This indicates that BK handles an increase in the number of lots better, provided that the number of baths remains low. Since the 4-index sequencing variables used in BK lie somewhere between the concepts of immediate and general precedence, the results suggest that it is preferable to rely on general precedence variables already for 8-bath problems. In fact, beyond this point, BK struggles even to find feasible solutions, while the pure sequencing model AM can still find reasonably good schedules for P7-P9 and P11 and prove optimality for P6 and P10, despite its significantly worse relaxation (see the last two columns). It is also interesting to note that the main contributor to the large difference in performance between ORM and AM is the unlimited robot part of the model. By using slot variables, the URM model becomes considerably tighter, since big-M constraints are avoided. In contrast, big-M constraints are required to sequence different lots processed in the same unit in the unlimited robot version of AM. As can be seen in Table 2, there is a single problem that cannot be solved to global optimality by URM, whereas this is true for 7 problems when solved by AM. Nevertheless, AM only failed to find the optimal solutions for P5 and P13, and found a better solution than URM for P9. Overall, removing the robot constraints makes the problem much easier to solve and normally leads to a lower makespan. From the comparison between Tables 1 and 2, one finds that the difference is nonexistent for P1 and almost negligible for P2, but then tends to increase with problem size.
5. Conclusions This article has addressed the optimal scheduling of automated wet-etching stations using a single robot for transporting wafer lots between baths. A new hybrid formulation has been proposed that relies on the concept of time slots to implicitly determine lot sequencing in the chemical and water baths, and on general precedence sequencing variables for the transportation tasks. Model performance has been evaluated through the solution of 13 test problems taken from the literature and compared to a closely related model and to another relying solely on sequencing variables. The results have shown that the new model was able to solve more problems to optimality and always found the best solution of the group.
Acknowledgments The authors gratefully acknowledge financial support from Fundação para a Ciência e Tecnologia and Ministerio de Ciencia, Tecnología e Innovación Productiva, under the Scientific Bilateral Cooperation Agreement between Argentina and Portugal (2010-2011), and from AECID under Grant PCI-D/024726/09.
References
A. Aguirre, C. Méndez, 2010, In Computer-Aided Chemical Engineering, Vol. 28, 883-888.
S. Bhushan, I. Karimi, 2003, Ind. Eng. Chem. Res. 42, 1391-1399.
P. Castro, A. Barbosa-Póvoa, H. Matos, A. Novais, 2004, Ind. Eng. Chem. Res. 43, 105-118.
P. Castro, I. Grossmann, 2005, Ind. Eng. Chem. Res. 44, 9175-9190.
P. Castro, I. Harjunkoski, I. Grossmann, 2009, Ind. Eng. Chem. Res. 48, 6701-6714.
S. Janak, X. Lin, C. Floudas, 2004, Ind. Eng. Chem. Res. 43, 2516-2533.
C. Méndez, J. Cerdá, 2003, In Computer-Aided Chemical Engineering, Vol. 10, 721-726.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Optimal Scheduling of Multi-Level Tree-Structure Pipeline Networks
Diego C. Cafaro,a Jaime Cerdáa
a INTEC (UNL-CONICET), Güemes 3450, (3000) Santa Fe, Argentina
Abstract Refined products pipelines move oil derivatives such as gasoline, jet fuel, and diesel fuel from refineries, trade centers, or seaports to markets. Most of them are multiproduct systems transporting batches of different products one after the other in the same duct. Tree-structure pipeline networks consist of a trunk line serving high-volume, long-haul transportation requirements, and secondary lines distributing smaller volumes over shorter distances. Secondary lines usually originate from the trunk line at an oil products terminal or a branching node. Moreover, some of them are branched into smaller ducts, resulting in a multi-level tree structure. This work introduces a rigorous mathematical formulation for the operational planning of multi-level tree pipeline networks. Lots of different refined products can be branched and re-branched to lower-level pipelines, and delivered to accessible demanding depots during a single pumping operation. The problem goal is to find the optimal transport schedule that satisfies all terminal requirements at minimum pumping, interface, idle transport capacity, and inventory carrying costs. Keywords: oil products pipelines, multi-level tree structure, MILP, optimal scheduling.
1. Introduction Refined products pipelines begin at refineries and end at distribution terminals close to large consumption regions. In a tree-structure pipeline network, product batches move down from the origin of a trunk line and are diverted to bulk terminals, while some portions are branched to nested secondary lines and transported to nearby clients. Batches moving through a trunk line can be directly transferred ("tightlined") to other pipelines or temporarily stored in tanks at branch points. Since different products are pumped back-to-back into the same duct, mixing inevitably occurs. Smaller batch sizes cause larger interface losses, but require less storage capacity at the depots. Planning the injection of new batches and the simultaneous branching of flows to secondary lines is a complex logistic task that needs efficient supporting tools. Most contributions on this subject consider a unidirectional pipeline connecting one or more refineries to several receiving terminals (Cafaro & Cerdá, 2008, 2009, 2010a). Some others tackle a whole pipeline network based on decomposition approaches (Moura et al., 2008; Boschetto et al., 2010). Recently, some works on the short-term distribution planning of multiproduct tree-structured pipeline networks have been published (Castro, 2010; Cafaro & Cerdá, 2010b; MirHassani & Jahromi, 2011). However, the proposed models focus on single-level tree pipeline systems, where all branches emerge from the mainline. To overcome this limitation, this work introduces an MILP (mixed-integer linear programming) formulation for the optimal scheduling of multi-level tree pipeline networks, where every secondary line can be directly connected to depots and/or to lower-level branches. The model aims to find the optimal schedule of pumping and delivery operations in order to satisfy all transport requirements at
minimum operating cost. To reduce interface reprocessing, model constraints strictly monitor the splitting of batches and the creation of new interfaces while transferring products to lower-level pipelines. This makes it possible to avoid forbidden product strings and to compute the additional costs associated with new interfaces. The continuous-time formulation is able to perform a rigorous tracking of lots and interfaces in trunk and secondary lines, including knowledge of the original batches from which split-streams were diverted.
2. Multi-Level Tree Pipeline Networks A tree pipeline network consists of a unidirectional trunk line (ℓo ∈ PL) and a set of delivering pipelines directly or indirectly connected to it. In a multi-level tree-structured pipeline system, branches of level one emerge from the mainline (PLℓo = {ℓo1, ℓo2, …}) and any of them (ℓoi) can be re-branched into delivering lines of level two (PLℓoi = {ℓoi1, ℓoi2, …}), and so on. Batches of refined products are injected at the origin of the trunk line and may be either delivered to depots j ∈ Jo that are accessible from ℓo (i.e. the mainline terminals) and/or branched to secondary lines of level one. Every first-level branch (ℓoi) has its own starting point along the mainline at volume coordinate Uoi from the origin. At that node, lots of products can be branched from the trunk line into line ℓoi. The coordinates of mainline terminals, given by Vj (j ∈ Jo), are referred to the system origin and represent the volume between the input station and depot j. In turn, the coordinates of depots connected to a secondary line ℓ (j ∈ Jℓ) are referred to the origin of ℓ, i.e. the branching node. Furthermore, every depot j ∈ J is connected to a single pipeline ℓ (Jℓ ∩ Jℓ' = ∅ for ℓ' ≠ ℓ). Receiving terminals are strict output nodes, and new product lots are only injected at the mainline origin. The proposed mathematical model considers the short-term scheduling of a multi-level tree pipeline network where every branch ℓ of level k starts from another line of level (k-1) and distributes products to depots j ∈ Jℓ and/or to minor pipelines of level (k+1). To simplify the nomenclature, it will be assumed that the pipeline network is composed of a trunk line ℓo and a set of branches ℓ ≠ ℓo. The group of secondary lines emerging from pipeline ℓ is represented by PLℓ.
Figure 1. A multi-level tree pipeline network and the proposed system representation (trunk line ℓo from the refinery input terminal, serving depots D1 (V1 = 320) and D4 (V4 = 1200); first-level branches ℓo1 at Uo1 = 430 and ℓo2 at Uo2 = 720, serving D3 (V3 = 490) and D5 (V5 = 130) / D6 (V6 = 260), respectively; second-level branch ℓo11 at Uo11 = 210, serving D2 (V2 = 160))
Figure 1 depicts a multi-level tree structure, with a single input station located close to a major refinery injecting product batches at the origin of the trunk line. They are destined to 6 receiving terminals demanding multiple products to meet accepted customer orders. Such terminals are connected to the input station through a set of 4 pipelines: PL = {ℓo, ℓo1, ℓo2, ℓo11}. The trunk line (ℓo) is directly connected to depots D1 and D4, i.e. Jo = {D1, D4}, while delivering lines ℓo1 and ℓo2 are first-level branches supplying products to terminals D3 and D5-D6, respectively. Hence, PLo = {ℓo1, ℓo2}, Jo1 = {D3}, and Jo2 =
{D5, D6}. Besides, a second-level branch ℓo11 emerges from the lateral pipeline ℓo1 (PLo1 = {ℓo11}) and supplies products to depot D2 (Jo11 = {D2}).
3. Mathematical Formulation Since not all the batches moving along pipeline ℓ* are partially or completely transferred to a lower-level branch ℓ ∈ PLℓ* emerging from ℓ*, tracking the lot sequence in secondary lines is not a trivial matter. Let us introduce a binary variable w_{i,ℓ*} that is equal to one only if a portion of (or the whole) batch i has been diverted to the secondary line ℓ* during the current horizon. In the same way, a portion of batch i travelling through line ℓ* can be diverted from ℓ* to the lower-level branch ℓ ∈ PLℓ* only if w_{i,ℓ*} = 1. Because only batches transported through the secondary line ℓ* can be re-branched into minor pipelines ℓ ∈ PLℓ*, then:

$$w_{i,\ell} \le w_{i,\ell^*} \qquad \forall\, i \in I,\ \ell^* \ne \ell_o,\ \ell \in PL_{\ell^*} \tag{1}$$
If $wl_{i,\ell}^{(i')}$ is a 0-1 variable denoting that a portion of batch i is transferred to pipeline ℓ during the injection of a new batch i' into the mainline, and $T_{i,\ell}^{(i')}$ is the associated volume inputted in line ℓ, the following conditions should be fulfilled:

$$t^{min}\, wl_{i,\ell}^{(i')} \le T_{i,\ell}^{(i')} \le t^{max}\, wl_{i,\ell}^{(i')}\ ;\quad wl_{i,\ell}^{(i')} \le w_{i,\ell} \qquad \forall\, i \in I,\ i' \in I_{new},\ i' \ge i,\ \ell \ne \ell_o \tag{2}$$

$$w_{i,\ell} \le \sum_{i' \in I_{new}:\, i' \ge i} wl_{i,\ell}^{(i')} \qquad \forall\, i \in I,\ \ell \ne \ell_o \tag{3}$$
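As an illustration of how Eqs. (2)-(3) tie the three groups of variables together, here is a minimal PuLP sketch (the original model is implemented in GAMS; the batch/branch sets and the volume bounds t^min/t^max below are hypothetical placeholders).

```python
# Sketch of the linking constraints (2)-(3) between the branching variables
# w[i,l], the run-specific transfers wl[i,l,i'] and the transferred volumes
# T[i,l,i'], written with Python/PuLP. Sets and bounds are hypothetical.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

batches = [1, 2, 3]               # batches i (taken here as the injections i' too)
branches = ['lo1', 'lo2']         # secondary lines l != lo
tmin, tmax = 100.0, 5000.0        # volume bounds per transfer operation

prob = LpProblem('branch_linking', LpMinimize)
prob += lpSum([])                 # placeholder objective; the paper uses Eq. (12)

w = {(i, l): LpVariable(f'w_{i}_{l}', cat=LpBinary)
     for i in batches for l in branches}
wl = {(i, l, ip): LpVariable(f'wl_{i}_{l}_{ip}', cat=LpBinary)
      for i in batches for l in branches for ip in batches if ip >= i}
Tv = {key: LpVariable('T_%s_%s_%s' % key, lowBound=0) for key in wl}

for (i, l, ip) in wl:
    # Eq. (2): volume is transferred in run i' only if wl = 1, within bounds,
    # and wl can be 1 only if batch i is ever diverted to line l (w = 1)
    prob += Tv[i, l, ip] >= tmin * wl[i, l, ip]
    prob += Tv[i, l, ip] <= tmax * wl[i, l, ip]
    prob += wl[i, l, ip] <= w[i, l]

for i in batches:
    for l in branches:
        # Eq. (3): if w = 1, at least one run must actually divert batch i to l
        prob += w[i, l] <= lpSum(wl[i, l, ip] for ip in batches if ip >= i)

prob.solve()
```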
Based on the information on the product p ∈ P contained in any batch i, given by the binary variable $y_{i,p}$, the MILP formulation is able to identify every pair of products [p', p] that travel back-to-back into a secondary line ℓ ≠ ℓo of any level, and the associated interface loss, given by $WIF_{i,p',p,\ell}$:

$$WIF_{i,p',p,\ell} \ge if_{p',p,\ell}\left( y_{i,p} + y_{i',p'} + w_{i,\ell} + w_{i',\ell} - \sum_{k=i'+1}^{i-1} w_{k,\ell} - 3 \right) \qquad \forall\, i',i \in I,\ i' < i,\ (p',p) \notin FS,\ \ell \ne \ell_o \tag{4}$$
By enforcing the expression on the RHS of Eq. (4) to be less than or equal to zero for any pair of incompatible products [p, p'] ∈ FS, the model can prevent the transfer of forbidden product sequences to secondary lines. On the other hand, the volume of a batch flowing along the mainline is traced by:

$$Q_i = W_{i,\ell_o}^{(i)} + \sum_{j \in J_o} D_{i,j}^{(i)} + \sum_{\ell \in PL_o} T_{i,\ell}^{(i)}\ ;\quad F_{i,\ell_o}^{(i)} = W_{i,\ell_o}^{(i)} \qquad \forall\, i \in I_{new} \tag{5}$$

$$W_{i,\ell_o}^{(i')} = W_{i,\ell_o}^{(i'-1)} - \sum_{j \in J_o} D_{i,j}^{(i')} - \sum_{\ell \in PL_o} T_{i,\ell}^{(i')} \qquad \forall\, i \in I,\ i' \in I_{new},\ i' > i \tag{6}$$
$Q_i$ is the total volume of injection i, $W_{i,\ell_o}^{(i')}$ is the size of batch i in the mainline at the end of run i', and $D_{i,j}^{(i')}$ and $T_{i,\ell}^{(i')}$ are the volumes of batch i diverted from ℓo to depots j ∈ Jo and to first-level branches ℓ ∈ PLo during run i', respectively. Moreover, a batch moving through a secondary line ℓ ≠ ℓo may change its size by: (a) receiving a further amount of product at the branching node, (b) delivering product to receiving depots j ∈ Jℓ, and/or (c) diverting a batch portion to lower-level branches ℓ' ∈ PLℓ. In that way, the batch volume can be monitored at the end of every run i' (time $C_{i'}$):

$$W_{i,\ell}^{(i')} = W_{i,\ell}^{(i'-1)} + T_{i,\ell}^{(i')} - \sum_{j \in J_\ell} D_{i,j}^{(i')} - \sum_{\ell' \in PL_\ell} T_{i,\ell'}^{(i')} \qquad \forall\, i \in I,\ i' \in I_{new},\ i' > i,\ \ell \ne \ell_o \tag{7}$$
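The bookkeeping behind Eq. (7) can be expressed in a few lines of plain Python; the sketch below, with hypothetical volumes, simply propagates the size of one batch in a secondary line over a single run.

```python
# Sketch of the batch-size bookkeeping in Eq. (7) for a secondary line l:
# starting size, plus what enters at the branching node, minus deliveries to
# the depots of l, minus what is re-branched to lower-level lines.
# The figures below are hypothetical, not taken from the example of Section 4.

def batch_size_after_run(w_prev, t_in, deliveries, rebranched):
    """Eq. (7): W(i') = W(i'-1) + T(i') - sum(D) - sum(T')."""
    return w_prev + t_in - sum(deliveries.values()) - sum(rebranched.values())

w = 10000.0                                  # W(i'-1): size at end of previous run
w = batch_size_after_run(
    w_prev=w,
    t_in=2000.0,                             # T(i'): volume received at the branching node
    deliveries={'D3': 3000.0},               # D(i'): deliveries to depots j in J_l
    rebranched={'lo11': 1500.0},             # T'(i'): volume sent to lines in PL_l
)
print(w)  # 7500.0
```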
If $F_{i,\ell}^{(i')}$ is the upper volumetric coordinate of the portion of batch i moving along pipeline ℓ, the feasibility conditions for diverting flows to a secondary pipeline ℓ' ∈ PLℓ emerging from line ℓ at coordinate $U_{\ell'}$ are given by constraints (8) and (9). All coordinates are referred to the origin of pipeline ℓ, whose total content is given by $pv_\ell$.

$$F_{i,\ell}^{(i')} - \sum_{k \in PL_\ell:\, U_k \le U_{\ell'}} T_{i,k}^{(i')} - \sum_{j \in J_\ell:\, V_j \le U_{\ell'}} D_{i,j}^{(i')} - \sum_{p \in P} \sum_{p' \ne p} WIF_{i,p',p,\ell} \ \ge\ U_{\ell'}\, wl_{i,\ell'}^{(i')} \qquad \forall\, i \in I,\ i' \in I_{new},\ i' \ge i,\ \ell \in PL,\ \ell' \in PL_\ell \tag{8}$$

$$F_{i,\ell}^{(i'-1)} - W_{i,\ell}^{(i'-1)} + T_{i,\ell}^{(i')} \ \le\ U_{\ell'} + (pv_\ell - U_{\ell'})\,(1 - wl_{i,\ell'}^{(i')}) \qquad \forall\, i \in I,\ i' \in I_{new},\ i' > i,\ \ell \in PL,\ \ell' \in PL_\ell \tag{9}$$
From the continuity of the batch flow through the pipelines and the incompressibility of the liquid refined products, a set of equations relating the volumetric coordinates of consecutive batches in line ℓ is derived. Moreover, an overall volume balance around the pipeline network leads to a relationship between the size of a new batch injection and the product deliveries to depots taking place during that injection:

$$F_{i,\ell}^{(i')} - W_{i,\ell}^{(i')} = F_{i-1,\ell}^{(i')} \quad \forall\, i \in I,\ i' \in I_{new},\ i' \ge i,\ \ell \in PL\ ;\qquad Q_{i'} = \sum_{i \le i'} \sum_{j \in J} D_{i,j}^{(i')} \quad \forall\, i' \in I_{new} \tag{10}$$
Besides, local balances around the mainline and the branches are also necessary:

$$\sum_{i \le i'} \Big( \sum_{j \in J_o} D_{i,j}^{(i')} + \sum_{\ell \in PL_o} T_{i,\ell}^{(i')} \Big) = Q_{i'} \quad \forall\, i' \in I_{new}\ ;\qquad \sum_{i \le i'} \Big( \sum_{j \in J_\ell} D_{i,j}^{(i')} + \sum_{\ell' \in PL_\ell} T_{i,\ell'}^{(i')} \Big) = \sum_{i \le i'} T_{i,\ell}^{(i')} \quad \forall\, i' \in I_{new},\ \ell \ne \ell_o \tag{11}$$
Further constraints standing for the feasibility conditions for delivering products from pipelines to depots, as well as for the inventory control at refinery and terminal tanks, are similar to those proposed by Cafaro & Cerdá (2010b). Finally, the objective function minimizes the sum of pumping, interface reprocessing, inventory carrying, and underutilization costs:

$$\min z = \sum_{p \in P} \sum_{j \in J} \sum_{i \in I} \sum_{i' \in I_{new}} cp_{p,j}\, DP_{i,p,j}^{(i')} + \sum_{\ell \in PL} \sum_{p \in P} \sum_{p' \ne p} \sum_{i > 1} cf_{p',p,\ell}\, WIF_{i,p',p,\ell}^{(i')} + \sum_{p \in P} \sum_{i' \in I_{new}} \Big( cir_p\, IRS_p^{(i')} + \sum_{j \in J} cid_{p,j}\, ID_{p,j}^{(i')} \Big) + cu \Big( hmax - \sum_{i \in I_{new}} L_i \Big) \tag{12}$$
4. Results and Discussion The approach was successfully applied to an illustrative example obtained by modifying a single-level tree pipeline network problem previously studied by Cafaro & Cerdá (2010b). A second-level branch supplying products P1 and P3 to a new destination D7 was added to the pipeline network. The optimal pipeline schedule comprises 4 consecutive injections (P3(8000)-P1(8000)-P4(27000)-P2(30000), with the values in parentheses indicating the injected volumes in m3) and presents a makespan of 77 h. Lateral pipelines of level one (namely ℓo1 and ℓo2) are operated during the time intervals [0 h; 47 h] and [47 h; 77 h], respectively. Pipeline ℓo1 receives two further split-streams with products P2 (3000 m3) and P3 (17000 m3) from the mainline, while ℓo2 receives three batch portions of products P3 (2000 m3), P1 (2000 m3), and P4 (6000 m3), in that order. The second-level branch (ℓo11) connected to D7 is activated from time 10 h to 47 h, and receives 10000 m3 of product P3 from the first-level branch ℓo1. The model includes 5184 constraints and 2331 variables, of which 410 are discrete. It was solved to optimality in 67 CPU s on an Intel Xeon 2.66 GHz PC using CPLEX 12.2 with 6 parallel threads as the MILP solver. The resulting inventory profiles at the depots are shown in Figure 2.
Figure 2. Optimal pipeline schedule and inventory profiles found for the illustrative example (pipeline content snapshots at t = 0, 10, 20, 47 and 77 h, and inventory profiles of products P1-P4, in m3, at depots D1-D7 over time, in h)
5. Conclusion An MILP continuous-time formulation for the operational planning of multi-level tree-structure pipeline networks was developed. Results show that a continuous approach is able to effectively manage all branching and delivery operations taking place during a single batch injection. This feature significantly reduces the model size with respect to discrete approaches (Herrán et al., 2010). Batches can be rigorously tracked all along the trunk and secondary lines, monitoring the creation of new interfaces when they are transferred to lower-level pipelines. An optimal schedule for the distribution of 4 products to 7 depots through 4 pipelines over 4 days was successfully found in less than 70 CPU s.
References
S.N. Boschetto, L. Magatão, W.M. Brondani, F. Neves-Jr, L.V.R. Arruda, A.P. Barbosa-Póvoa, S. Relvas, 2010, Ind. Eng. Chem. Res., 49, 12, 5661-5682.
D.C. Cafaro, J. Cerdá, 2008, Comput. Chem. Eng., 32, 4-5, 728-753.
D.C. Cafaro, J. Cerdá, 2009, Ind. Eng. Chem. Res., 48, 14, 6675-6689.
D.C. Cafaro, J. Cerdá, 2010a, Comput. Chem. Eng., 34, 10, 1687-1704.
D.C. Cafaro, J. Cerdá, 2010b, Ind. Eng. Chem. Res., In Press, doi:10.1021/ie101462k.
P.M. Castro, 2010, Ind. Eng. Chem. Res., 49, 22, 11491-11505.
A. Herrán, J.M. de la Cruz, B. de Andrés, 2010, Comput. Chem. Eng., 34, 3, 401-413.
S.A. MirHassani, H. Fani Jahromi, 2011, Comput. Chem. Eng., 35, 1, 165-176.
A.V. Moura, C.C. de Souza, A.A. Cire, T.M. Lopes, 2008, Lect. Notes in Comput. Sci., 5202, 3651.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
New Tools for the Detailed Scheduling of Refined Products Pipelines
Vanina G. Cafaro,a Diego C. Cafaro,a Carlos A. Méndez,a Jaime Cerdáa
a INTEC (UNL-CONICET), Güemes 3450, (3000) Santa Fe, Argentina
Abstract New tools for efficiently generating the detailed schedule of a pipeline system conveying multiple oil products from a single origin to several destinations are presented. They make it possible to refine the aggregate pipeline schedule provided by continuous-time optimization approaches. We propose a continuous-time mixed-integer linear programming formulation to find the detailed pipeline schedule that minimizes the total cost of shutting down and restarting flows in pipeline segments over the planning horizon. Besides, a novel heuristic algorithm running on a discrete-event simulation framework is also developed. The performances of the proposed rigorous and heuristic approaches are compared by applying them to a real-world pipeline system. Keywords: Multiproduct pipeline; detailed schedule; MILP; discrete-event simulation.
1. Introduction Liquid pipelines transport a wide range of petroleum products with the fewest releases and the lowest input energy requirement compared to other transportation modes. The short-term operational planning of pipeline systems is a very difficult task involving two major steps: (a) the generation of the pipeline input schedule at an aggregate level, and (b) the refinement of the aggregate plan into the detailed schedule performed by the pipeline operator. At the upper level (a), the sequence of batch injections and the batch features (product, batch size, mean pump rate) are all defined. In addition, the aggregate product deliveries to depots during every pumping run are also determined. Finding the best input schedule is a combinatorial problem aimed at satisfying customer demands on time while simultaneously minimizing interface, pumping, and inventory carrying costs. Several continuous-time approaches have been proposed to find the optimal input schedule for shipping oil product batches from a single refinery to several depots (Cafaro & Cerdá, 2008). Although they provide the set of aggregate batch stripping operations for every pumping run, the detailed sequence of individual "cuts" to be accomplished by the pipeline operator is not specified. Generally, there are many ways to distribute the inputted batches among the assigned destinations. At the lower level (b), the aggregate schedule is refined by the proposed tools to determine the detailed sequence of individual lots leaving the pipeline, their lot sizes, and the assigned depots. This stage provides the times at which pumps should be turned on/off and valves must be opened/closed to accomplish the detailed delivery plan. Its main goal is to reduce the number of pipeline stoppages and on/off pump switchings to obtain savings on the energy consumption for restarting flows in idle pipeline segments (Hane & Ratliff, 1995). Generating a feasible detailed schedule that efficiently accomplishes the aggregate delivery operations found through a continuous-time approach is a difficult problem that has not yet been solved in a rigorous way. Discrete-event simulation tools may be a good choice (Mori et al., 2007; García et al., 2008).
This work introduces new computational tools to perform this activity in a reliable and cost-effective manner.
2. Detailed Pipeline Scheduling Tools Two different kinds of tools for solving the detailed pipeline scheduling problem are presented. One is based on a novel MILP formulation that rigorously tracks the size and location of every batch in the pipeline system in a continuous manner. The other stands for a heuristic-based technique running on a recently published discrete-event pipeline simulation system (Gleizes et al., 2010; Cafaro et al., 2010). 2.1. An MILP Continuous-Time Formulation for the Detailed Scheduling Problem. The injection of a planned batch i' ∈ I_new at the origin is aimed at pushing batches i ≤ i' to destinations j ∈ J_{i,i'}. In other words, J_{i,i'} stands for the subset of terminals that should receive a portion of batch i during the injection of lot i', as stated by the input schedule. In the detailed output schedule, every injection i' is divided into a number of individual operations k ∈ K_{i'}. Each operation is characterized by: (a) the lot injection to which it belongs, (b) the active receiving terminal and the associated lot that will be delivered during its execution, (c) the pumped volume (Q_k), (d) the operation length (L_k), and (e) the completion time (C_k). One of the crucial decisions is related to item (b), as it determines the sequence of product deliveries from the pipeline to the distribution terminals. Furthermore, it establishes the schedule of shutdown and restart operations in every pipeline segment and the related operating costs. The binary variable $x_{i,j}^{(k)}$ indicates whether terminal j receives product from batch i during operation k. Because the set of input operations (K) is chronologically ordered, operation k should start after completing the previous one (k-1). Besides, the start and completion times for the injection of the new batch i' ($st_{i'}$ / $ft_{i'}$) given by the input schedule define the time interval during which the operations k ∈ K_{i'} must be performed:

$$C_k - L_k \ge st_{i'} \quad \forall\, i' \in I_{new},\ k = first(K_{i'})\ ;\quad C_k - L_k \ge C_{k-1} \quad \forall\, k > 1\ ;\quad C_k \le ft_{i'} \quad \forall\, i' \in I_{new},\ k = last(K_{i'}) \tag{1}$$
From the liquid incompressibility assumption, no terminal will receive product from the pipeline during run k if this operation is not executed (uk = 0). On the contrary, only one terminal can receive a single lot during its execution if the operation is active (uk = 1).
$$\sum_{i \le i'}\ \sum_{j \in J_{i,i'}} x_{i,j}^{(k)} = u_k\ ;\quad u_k\, l^{min}_{(i')} \le L_k \le u_k\, l^{max}_{(i')} \qquad \forall\, i' \in I_{new},\ k \in K_{i'} \tag{2}$$
Since the total number of required operations is not known beforehand, some elements of the set K_{i'} may be unnecessary. If so, Eq. (3) avoids redundant solutions:

$$u_k \le u_{k-1} \qquad \forall\, i' \in I_{new},\ k \in K_{i'},\ k > first(K_{i'}) \tag{3}$$
The size and position of every batch in the pipeline are controlled at the end of every operation k (time C_k) through the continuous variables $W_i^{(k)}$ and $F_i^{(k)}$, respectively:

$$F_{i-1}^{(k)} = F_i^{(k)} - W_i^{(k)} \qquad \forall\, i \in I,\ i' \in I_{new},\ i' \ge i,\ k \in K_{i'} \tag{4}$$
For the new batch i' being injected, its size at the end of run k ∈ K_{i'} is given by:

$$W_{i'}^{(k)} = W_{i'}^{(k-1)} + Q_k - \sum_{j \in J_{i',i'}} D_{i',j}^{(k)} \qquad \forall\, i' \in I_{new},\ k \in K_{i'} \tag{5}$$

For all the other batches (i < i') already inside the pipeline, their sizes are given by:
$$W_i^{(k)} = W_i^{(k-1)} - \sum_{j \in J_{i,i'}} D_{i,j}^{(k)} \qquad \forall\, i \in I,\ i' \in I_{new},\ i' > i,\ k \in K_{i'} \tag{6}$$
Eqs. (7)-(8) represent the feasibility conditions for diverting a portion of batch i to the accessible terminal j during the operation k. Besides, Eq. (8) gives an upper bound on the total volume that can be diverted to j (Di,j(k)) due to the unidirectional flow condition. ( i ')
$$d^{min}_{(i')}\, x_{i,j}^{(k)} \le D_{i,j}^{(k)} \le d^{max}_{(i')}\, x_{i,j}^{(k)}\ ;\quad F_i^{(k-1)} \ge V_j\, x_{i,j}^{(k)} \qquad \forall\, i \in I,\ i' \in I_{new},\ i' \ge i,\ j \in J_{i,i'},\ k \in K_{i'} \tag{7}$$

$$F_i^{(k-1)} - W_i^{(k-1)} + D_{i,j}^{(k)} \le V_j + (pv - V_j)\,(1 - x_{i,j}^{(k)}) \qquad \forall\, i \in I,\ i' \in I_{new},\ i' \ge i,\ j \in J_{i,i'},\ k \in K_{i'} \tag{8}$$
On the other hand, overall volume balances around the pipeline system are needed.
$$\sum_{i \le i'}\ \sum_{j \in J_{i,i'}} D_{i,j}^{(k)} = Q_k \qquad \forall\, i' \in I_{new},\ k \in K_{i'} \tag{9}$$
Moreover, batch injections and product deliveries planned by the input schedule at the aggregate level should also be fulfilled at the detailed level. Then,
$$\sum_{k \in K_{i'}} Q_k = qq_{i'} \quad \forall\, i' \in I_{new}\ ;\qquad \sum_{k \in K_{i'}} D_{i,j}^{(k)} = der_{i,j}^{(i')} \quad \forall\, i \in I,\ i' \in I_{new},\ i' \ge i,\ j \in J_{i,i'} \tag{10}$$
To calculate operational costs, the model should identify the overall volume that is stopped or set in motion when a new input operation is executed. In the former case (stopped flows), the receiving terminal j* during operation k will be closer to the pipeline origin than the previous active terminal j' (V_{j*} < V_{j'}). In the latter (restarted flows), the situation is reversed. Hence, the stopped and activated volumes (VD_k / VA_k) can be determined by knowing the receiving terminals j' and j* featuring $x_{i,j'}^{(k-1)} = x_{i,j^*}^{(k)} = 1$:

$$VD_k \ge \sum_{i \le i'} \sum_{j \in J_{i,i'}} V_j\, x_{i,j}^{(k-1)} - \sum_{i \le i'} \sum_{j \in J_{i,i'}} V_j\, x_{i,j}^{(k)}\ ;\quad VA_k \ge \sum_{i \le i'} \sum_{j \in J_{i,i'}} V_j\, x_{i,j}^{(k)} - \sum_{i \le i'} \sum_{j \in J_{i,i'}} V_j\, x_{i,j}^{(k-1)} \qquad \forall\, i' \in I_{new},\ k \in K_{i'} \tag{11}$$
Finally, the objective function minimizes the total operational cost, considering (a) pipeline segment stoppages, (b) pipeline activations, and (c) the number of individual input operations:

$$\min z = \sum_{k \in K} \big( cud \cdot VD_k + cua \cdot VA_k + cfo \cdot u_k \big) \tag{12}$$
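The cost logic of Eqs. (11)-(12) can be illustrated with a small plain-Python sketch that, for a given sequence of active terminals, computes the stopped and restarted volumes per operation. This is a single-batch simplification with hypothetical coordinates and cost coefficients; the MILP treats these quantities as variables bounded by inequalities rather than the max() shortcut used here.

```python
# Sketch of Eqs. (11)-(12): stopped volume VD_k, restarted volume VA_k, and the
# resulting operating cost, given the sequence of active terminals per operation.
# Terminal coordinates V and cost coefficients are hypothetical placeholders.

V = {'T1': 200, 'T2': 400, 'T3': 700, 'T4': 1100, 'T5': 1600}  # volumetric coords
cud, cua, cfo = 10.0, 10.0, 50.0   # unit costs: stopped vol., restarted vol., per operation

def operation_costs(active_sequence):
    """For consecutive operations k-1 and k with active terminals j' and j*:
    VD_k = max(V[j'] - V[j*], 0) and VA_k = max(V[j*] - V[j'], 0)."""
    total = 0.0
    for prev, curr in zip(active_sequence, active_sequence[1:]):
        vd = max(V[prev] - V[curr], 0)   # segment volume whose flow is stopped
        va = max(V[curr] - V[prev], 0)   # segment volume whose flow is restarted
        total += cud * vd + cua * va + cfo
    return total + cfo                    # count the first operation as well

print(operation_costs(['T3', 'T1', 'T4', 'T4', 'T2']))
```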
2.2. MILP-Based Scheduling Tools Based on the previous MILP formulation, we propose three different approaches for determining the detailed pipeline schedule for a whole month. Every methodology assumes that the pipeline planning at the aggregate level is given, and the goal is to refine the output schedule at minimum operating cost. To this end, the inputted batches are rigorously traced along the pipeline system for determining the sequence and timing of single-terminal delivery operations during every batch injection. 2.2.1. Full Decomposition Approach. It consists of sequentially solving isolated MILP models, one for each batch injection. In other words, |Inew| = 1 for every problem instance, and aggregate delivery operations related to a single injection are refined. Although the CPU time is considerably short, this scheme may lead to a non-optimal solution as it completely ignores the terminal requirements at later pumping runs. 2.2.2. Pair Decomposition. Similar to the previous approach, it consists of iteratively solving an MILP model now involving a pair of consecutive batch injections at every problem instance (Inew = {i1, i2}). The aim is to find the detailed schedule related to
both injections, but only the operations for the first run (k ∈ K_{i1}) are fixed. The remaining ones are revised at the next problem instance, when the second injection of the previous step becomes the first element of the new set I_new. The goal is to dynamically develop a more integrated solution with a typical rolling-horizon scheme. 2.2.3. No Decomposition. A single MILP model plans all output operations for the whole month. Although it achieves the global optimum, the CPU time may be too high. 2.3. Heuristic Rules for Developing the Pipeline Output Schedule Finally, alternative tools running on a discrete-event simulation framework generate detailed pipeline schedules based on different heuristic rules, assigning priorities to the receiving terminals. Such priorities directly affect the order in which single-terminal product deliveries are made, and therefore the value of the objective function. When a product batch entity reaches a demanding terminal, the entity may be transferred to an available tank. If so, every entity in the pipeline located between the input station and the selected terminal moves on, while the others remain idle. Three rules were tested, as sketched below: (a) the Nearest-First (NF) rule, which prioritizes the eligible terminal closest to the origin; (b) the Farthest-First (FF) rule, favoring the farthest eligible terminal; and (c) the Nearest-to-Current active terminal (NC) rule, which prioritizes product deliveries to the eligible terminal closest to the one currently being served. Obviously, the current active terminal has the highest priority to continue receiving product.
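A minimal sketch of the three priority rules follows, assuming a simplified notion of eligibility and hypothetical terminal coordinates (the actual tools run inside an Arena discrete-event model).

```python
# Sketch of the NF, FF and NC terminal-priority rules of Section 2.3. Given the
# set of eligible terminals for the next delivery, each rule picks one.
# Coordinates and the notion of "eligible" are hypothetical simplifications.

V = {'T1': 200, 'T2': 400, 'T3': 700, 'T4': 1100, 'T5': 1600}

def nearest_first(eligible, current=None):
    return min(eligible, key=lambda j: V[j])          # NF: closest to the origin

def farthest_first(eligible, current=None):
    return max(eligible, key=lambda j: V[j])          # FF: farthest from the origin

def nearest_to_current(eligible, current):
    if current in eligible:                           # the active terminal keeps
        return current                                # the highest priority
    return min(eligible, key=lambda j: abs(V[j] - V[current]))  # NC rule

eligible = ['T1', 'T3', 'T5']
print(nearest_first(eligible), farthest_first(eligible),
      nearest_to_current(eligible, current='T4'))     # -> T1 T5 T3
```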
3. Case Study: A Real-World Problem of the Oil Pipeline Industry The proposed methods were applied to a real-world problem introduced by Rejowski and Pinto (2003). It addresses the operation of a 20 in. pipeline that transports 4 refined products (gasoline, diesel, LPG, and jet fuel) from a major refinery to 5 distribution terminals along 955 km (see Figure 1). The aim is to develop an efficient detailed schedule starting from the monthly input plan reported by Cafaro and Cerdá (2008). The input schedule comprises the injection of 9 new batches (B6-B14) at pump rates ranging from 800 to 1200 m3/h (see Table 1).
Figure 1. Initial content of the pipeline system (terminals T1-T5 along the line; initial lineup of products P1-P4; volume scale in 10^2 m3)

Table 1. Pipeline input schedule for the next month

| Lot ID | Prod. | Vol. [10^2 m3] | Initial t [h] |
| B6 | P4 | 425 | 5.0 |
| B7 | P2 | 1356 | 55.0 |
| B8 | P4 | 120 | 173.0 |
| B9 | P1 | 1235 | 184.0 |
| B10 | P3 | 390 | 338.0 |
| B11 | P1 | 665 | 385.0 |
| B12 | P4 | 260 | 442.0 |
| B13 | P2 | 1963 | 466.0 |
| B14 | P4 | 290 | 635.0 |
All the algorithms were implemented on an Intel Core i7 3.33 GHz processor, using the Arena 12.0 simulator and Gurobi 3.0 with 4 parallel threads as the MILP solver. Results are summarized in Table 2. The pair-decomposition (PD) strategy provides the global optimal solution in much less CPU time than the full model (ND). On the other hand, the NC-Heuristic achieves a very good solution with the least possible stopped
volume (7450 [10^2 m3]) and only 2 individual delivery operations more than the optimal result. Figure 2 illustrates the detailed output schedule for the injection of lot B11. It shows the reduction of one operation when applying the PD-MILP model with respect to the NC-Heuristic. Favoring the current active terminal, product delivery to depot D4 is made in two nonconsecutive operations, whereas the MILP approach makes it in one. Table 2. Comparison of the detailed output schedules provided by the proposed tools
Pump Oper. |K|
Stopped Vol. [102m3]
Cost [$/month]
Opt. Deviation
CPU Time [s]
FD
Decomp.
MILP
53
8300
136000
6.67%
32.1
PD
Decomp.
MILP
53
7450
127500
0.00%
108.1
ND
Glob. Opt.
MILP
53
7450
127500
0.00%
2261.5
NF
Heuristic
D-E Simul.
65
13145
196450
54.08%
6.0
FF
Heuristic
D-E Simul.
63
13825
201250
57.84%
6.0
NC
Heuristic
D-E Simul.
55
7450
129500
1.57%
6.0
Figure 2. Alternative output schedules for the injection of lot B11 (output schedule for the injection of B11 using the MILP PD-tool and using the NC-Heuristic; batch positions in volume [10^2 m3] against time intervals [h], with the last active terminal indicated)
4. Conclusions Alternative tools for generating the output schedule of a multiproduct pipeline system with several destinations were developed. Two of them are computationally efficient and provide very good detailed schedules: (a) the Pair-Decomposition algorithm, and (b) the Nearest-to-Current terminal heuristic strategy. For a real-world case study, the former finds the global optimal solution in approximately 100 CPU s, while the latter is only 1.57% over the optimal cost but runs in 6 CPU s.
References
D.C. Cafaro, J. Cerdá, 2008, Comput. Chem. Eng., 32, 4-5, 728-753.
V.G. Cafaro, D.C. Cafaro, C.A. Méndez, J. Cerdá, 2010, Proceedings of the WSC 2010.
A. García-Sánchez, L.M. Arreche, M. Ortega-Mier, 2008, Stud. in Comput. Intell., 128, 301-325.
M.F. Gleizes, G. Herrero, D.C. Cafaro, C.A. Méndez, J. Cerdá, 2010, Comput. Aided Chem. Eng., 28, 1697-1702.
C.A. Hane, H.D. Ratliff, 1995, Annals of Oper. Res., 57, 1, 73-101.
F.M. Mori, R. Lüders, L.V.R. Arruda, L. Yamamoto, M.V. Bonacin, H.L. Polli, M.C. Aires, L.F.J. Bernardo, 2007, Comput. Aided Chem. Eng., 24, 691-696.
R. Rejowski, J.M. Pinto, 2003, Comput. Chem. Eng., 27, 8-9, 1229-1246.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A rigorous mathematical formulation to Automated Wet-Etch Station scheduling with multiple material-handling robots in Semiconductor Manufacturing Systems
Adrián M. Aguirre,a Carlos A. Méndez,a Pedro M. Castrob
a INTEC (UNL-CONICET), Güemes 3450, 3000 Santa Fe, Argentina
b Unidade Modelação e Optimização de Sistemas Energéticos, Laboratório Nacional de Energia e Geologia, Lisboa, Portugal
Abstract Track systems have been increasingly utilized in many different stages of the semiconductor industry. Most manufacturing processes, particularly those executed in wafer fabrication, require these transportation devices to transfer wafer lots through several process steps. Since each process step only processes one lot at a time in a single bath, the number of transfer operations between consecutive baths increases drastically with the number of steps and lots in the system. Therefore, the problem tackled in this work aims at finding an efficient structure and operation strategy for a particular track system working in an Automated Wet-Etch Station (AWS). A rigorous mathematical formulation (MILP) for the simultaneous scheduling and design of the AWS is developed in order to minimize the residence time of wafer lots in the system, providing at the same time a better utilization of the track's resources. Keywords: MILP, Scheduling and Optimization, Semiconductor Manufacturing System (SMS), Automated Wet-Etch Station (AWS).
1. Introduction In the modern semiconductor industry, most processes, such as photolithography, etching, deposition and testing, require automated transportation devices to move wafer lots or jobs (i = 1…N) across a predefined sequence of process steps (j = 0…M+1). Each process step is composed of a single bath j with finite capacity, in which wafer lots are processed one by one. The cluster of single-wafer processing modules with multiple material-handling devices is called a track system in Lee (2008). Track systems usually have an input buffer (j = 0) and an output buffer (j = M+1) at the beginning and the end of the sequence of process steps. The Automated Wet-Etch Station (AWS) represents a particular track system with a linear configuration of chemical baths (j = 1,3,5,…M-1) and water baths (j = 2,4,6,…M) arranged in an alternating way, with single or multiple wafer-handling devices (r = 1,2,3…R) moving on a rail (see Figure 1). For more detailed information regarding the semiconductor manufacturing process, see also Bhushan and Karimi (2003), Lee (2008), and Aguirre and Méndez (2010).
Figure 1. Linear configuration of the Automated Wet-Etch Station (AWS) with multiple resources

Transportation devices, also called rail-guided vehicles (RGV) (Lee, 2008), are usually single robot arms assigned to a specific zone (service zone) along the rail, in order to handle jobs processed in a subset of baths of the AWS. Each service zone involves one or more consecutive baths and a single material-handling robot. In the same way, each robot must serve a single service zone in order to avoid collisions or deadlocks between robots. Due to the lack of intermediate storage areas between consecutive baths, there are certain baths in which consecutive zones overlap. Under this situation, undesirable collisions or deadlocks may occur between robots of adjacent areas. In turn, stringent storage restrictions such as the Zero-Wait (ZW) and Local Storage (LS) policies have to be satisfied in the chemical and water baths, respectively. Also, a non-intermediate storage (NIS) policy must be strictly enforced by the robots, as illustrated in Figure 2.
Figure 2. Configuration example of service zones assigned to multiple robots

The problem faced here consists of finding the best configuration of multiple robots allocated to different service zones in the AWS, with the principal aim of minimizing the total residence time of every job in the system and, at the same time, generating an efficient integrated schedule of transportation and processing activities. Related works on the scheduling of the AWS with multiple material-handling devices are reported in Geiger et al. (1997), Bhushan and Karimi (2003), Karimi et al. (2004), Lee et al. (2007), Zeballos et al. (2010), and Aguirre and Méndez (2010). This work provides a rigorous mathematical formulation for the AWS scheduling problem with multiple wafer-handling devices. The main goal is to define the optimal job sequence and the timing of processing operations by determining the most efficient operative structure of multiple rail-guided vehicles within a specific number of service zones in the system. Furthermore, a detailed pick-up and delivery activity program of the shared wafer-handling robots is considered in order to minimize the time necessary for finishing all the lots in the system. Therefore, we propose an extended continuous-time MILP formulation based on a previous version developed by Aguirre and Méndez (2010). This new formulation adds
more complexity than the one presented for the original problem, allowing operative decisions about the logical configuration of the track equipment in the wet-etch station. Thus, the model developed for this problem provides a more detailed representation of real-life operations in the AWS. The efficiency and applicability of our solution approach are confirmed by solving relevant examples taken from the literature with modest computational effort.
2. The MILP Mathematical Formulation The proposed approach is based on a previous MILP model developed by Aguirre and Méndez (2010). The former model is able to represent the operative behavior of the AWS with single or multiple resources and is useful for finding optimal solutions of the system for short-term scheduling problems in reasonable computational time. In that model, Eqs. (1)-(8) represent the original AWS scheduling problem in which a single robot is considered. Likewise, Eqs. (9)-(11) are defined to generate the best solution of the AWS with multiple robots working simultaneously. All instances of this problem were solved driven by a makespan criterion (MK), represented in Eq. (12). That formulation assumes that no collisions or deadlocks can occur between multiple material-handling devices, so it does not consider the real-life restriction of multiple robots (RGVs) working together on the same rail. To overcome this practical limitation, equations (13)-(21) are added to the previous model. The new mathematical formulation is then able to represent the real-life operation of the AWS, avoiding the undesirable conditions produced when multiple robots work together. For this, two new binary variables, k(r,j) and v(r), are defined. The first one determines the service zone of a resource r along the single rail, representing the allocation of robot r to bath j, whereas the second one establishes the activation of robot r. A brief explanation of equations (13)-(21) is presented below.

2.1. Allocation of resources to a bath j. Equation (13) forces the condition that a resource r must be assigned to every bath j, which is represented by k(r,j) = 1. This condition means that each bath j will belong to at least one service zone. Note that if two zones overlap, the overlapping bath must be attended by two different resources in order to transfer the wafer lot from one zone to the other between these consecutive zones:

$$\sum_{r=1}^{R} k(r,j) \ge 1 \qquad j = 1 \ldots M+1 \tag{13}$$
2.2. Activation of a service zone for a resource r. As mentioned before, each service zone of a resource r should comprise at least one bath and at most all the baths (in the case that only a single robot is available). The activation of a robot is managed by the binary variable v(r): v(r) = 1 indicates the existence of the corresponding service zone. These conditions are shown in equations (14) and (15). Also, the activation of the robots in an ordered way is enforced by Eq. (16):

$$\sum_{j=1}^{M+1} k(r,j) \ge v(r) \qquad \forall\, r \in R \tag{14}$$

$$\sum_{j=1}^{M+1} k(r,j) \le v(r) \cdot (M+1) \qquad \forall\, r \in R \tag{15}$$

$$v(r+1) \le v(r) \qquad \forall\, r \in R \tag{16}$$
2.3. Sequencing of multiple resources on a rail. Multiple robots allocated on the same rail must be sequenced. In order to avoid deadlocks and collisions, equations (17)-(19) are proposed. Equation (17) forces that, if baths j and j+2 are assigned to robot r, then bath j+1 also has to be assigned to robot r. This restriction avoids the overlapping of more than one bath between consecutive zones. It also ensures that the sequence of baths assigned to a robot, which defines its service zone, cannot be interrupted anywhere by another robot:

$$k(r,j) + k(r,j+2) \le k(r,j+1) + 1 \qquad \forall\, r \in R;\ j = 1 \ldots M-1 \tag{17}$$

In the same way, equations (18)-(19) determine the beginning and the end of the set of baths assigned to each service zone. Also, equation (20) defines a direct precedence between the sets of baths assigned to resource r-1 and resource r:

$$k(r,j) - k(r,j+1) - \sum_{j'=1,\, j' > j+1}^{M+1} k(r,j') \ \ge\ 1 - (M+1)\,\big(1 - k(r,j) + k(r,j+1)\big) \qquad \forall\, r \in R;\ j = 1 \ldots M \tag{18}$$

$$-k(r,j) + k(r,j+1) - \sum_{j'=1,\, j' < j}^{M+1} k(r,j') \ \ge\ 1 - (M+1)\,\big(1 + k(r,j) - k(r,j+1)\big) \qquad \forall\, r \in R;\ j = 1 \ldots M \tag{19}$$

$$k(r-1,j) - k(r-1,j+1) \le k(r,j+1) \qquad \forall\, r \in R;\ j = 1 \ldots M \tag{20}$$

2.4. Assignment of transfers to a resource. When a resource r is assigned to a specific bath j, all the transfers (i,j) related to this bath have to be performed by the allocated resource. To handle this situation, equation (21) is explicitly defined. The decision variable W(i,j,r), originally proposed in Aguirre and Méndez (2010), indicates whether or not a transfer operation (i,j) is performed by a given handling resource r:

$$W(i,j,r) = k(r,j) \qquad \forall\, r \in R;\ j = 1 \ldots M+1;\ i = 1 \ldots N \tag{21}$$

All these equations are included in the original model in order to provide a more realistic solution strategy. In particular, the last equation links both models, as sketched below.
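For illustration, a compact PuLP sketch of the zone-design constraints (13)-(17) and (20) follows. The authors' model is written in GAMS; the bath/robot counts and the objective below are hypothetical, and the timing/transfer part of the model is omitted.

```python
# Sketch of the service-zone constraints (13)-(17) and (20) with Python/PuLP.
# M, R, and the objective are hypothetical simplifications of the full model.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

M, R = 5, 3                          # baths j = 1..M+1, robots r = 1..R
J = range(1, M + 2)
Rs = range(1, R + 1)

prob = LpProblem('service_zones', LpMinimize)
k = {(r, j): LpVariable(f'k_{r}_{j}', cat=LpBinary) for r in Rs for j in J}
v = {r: LpVariable(f'v_{r}', cat=LpBinary) for r in Rs}
prob += lpSum(v[r] for r in Rs)      # e.g., minimize the number of active robots

for j in J:                          # Eq. (13): every bath lies in some zone
    prob += lpSum(k[r, j] for r in Rs) >= 1
for r in Rs:                         # Eqs. (14)-(15): a zone exists iff v(r) = 1
    prob += lpSum(k[r, j] for j in J) >= v[r]
    prob += lpSum(k[r, j] for j in J) <= (M + 1) * v[r]
for r in Rs:                         # Eq. (16): robots are activated in order
    if r + 1 in Rs:
        prob += v[r + 1] <= v[r]
for r in Rs:                         # Eq. (17): zones are contiguous
    for j in J:
        if j + 2 in J:
            prob += k[r, j] + k[r, j + 2] <= k[r, j + 1] + 1
for r in Rs:                         # Eq. (20): zone r starts where zone r-1 ends
    if r - 1 in Rs:
        for j in J:
            if j + 1 in J:
                prob += k[r - 1, j] - k[r - 1, j + 1] <= k[r, j + 1]

prob.solve()
```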
3. Results and Discussion The case study presented below was previously addressed by Aguirre and Méndez (2010). This AWS example considers the scheduling problem of eight wafer lots in four baths (M×N = [4×8]). The processing times and transfer times for this problem are reported in Aguirre and Méndez (2010). Table 1 reports the results obtained for this problem when multiple resources are considered. The solution obtained by the original model of Aguirre and Méndez (2010) and the new approach, with the additional robot restrictions, are compared in order to analyze both the track configuration and the detailed activity program of the multiple robots in the system (see Figure 3). To do this, both multiple-robot models (MRM) are solved considering the application of two robots in the system (v(r) = 1; r = 1,2). The results in brackets represent the solution obtained by using the same job sequence provided by the unlimited robot model (URM), which gives the optimal solution without resource limitations. The results reported in Table 1 and Figure 3 show the applicability of our solution approach.
Table 1. Model statistics and computational cost (M×N = 4×8)

| Statistics | URM (without robot limitations) | MRM (Aguirre and Méndez, 2010) | MRM (with additional robot restrictions) |
| Binary variables | 28 | 668 (640) | 678 (650) |
| Continuous variables | 73 | 73 (101) | 73 (101) |
| Constraints | 392 | 2640 | 2699 |
| Makespan | 114.85 | 114.85 | 114.85 |
| CPU time [s]* (1) | 1.090 | 29.145 (0.307) | 9.565 (0.285) |
| CPU time [s]* (2) | 0.418 | 16.547 (0.162) | 10.417 (0.218) |

* Using GAMS with CPLEX 12 (1) and Gurobi (2) on an Intel Core 2 Quad PC with parallel processing in 4 threads; values in brackets correspond to fixing the job sequence provided by the URM.
Figure 3. Gantt chart solution structure for the MRM model without robot restrictions and for our solution approach, in which r1 is assigned to j1-j2-j3 and r2 to j4-j5
Our MRM model with the additional robot restrictions provides the same optimal solution (MK = 114.85 units) found by the URM model and by the original MRM model of Aguirre and Méndez (2010), with reasonable computational effort. Also, both MRM models reach the best solution when using the job sequence provided by the URM, in negligible CPU time. Despite the higher number of binary variables and constraints, due to the increased number of decisions that have to be made, our model provides a more realistic operative structure of the track system and a more reliable solution of the pick-up and delivery activity program for the robots in the AWS, even at less CPU time than the original MRM model.
4. Conclusion An MILP model for the scheduling of the AWS with multiple automated material-handling devices in semiconductor manufacturing systems has been presented. The proposed solution approach considers the real-life limitation of multiple rail-guided vehicles simultaneously working on a single rail. The results generated for a problem taken from the literature show the relevance and effectiveness of this approach, which is able to provide integrated design and operative decisions on flexible AWS scheduling problems. Future work will focus on using the proposed model to develop an efficient MILP-based method for solving industrial-scale problems with reasonable computational effort.
References
Aguirre, A.M. and Méndez, C.A. (2010). Computer Aided Process Engineering 28, 883-888.
Bhushan, S. and Karimi, I.A. (2003). IEC Research 42 (7), 1391-1399.
Bhushan, S. and Karimi, I.A. (2004). Comp and Chem Eng, 28(3), 363-379.
Geiger, C.D., Kemp, K.G., Uzsoy, R. (1997). Journal of Manufacturing Systems, 16(2), 102-116.
Karimi, I.A., Zerlinda, Y.L., Bhushan, S.T. (2004). Comp and Chem Eng, 29, 217-224.
Lee, T.-E. (2008). Proceedings of the 2008 Winter Simulation Conference, 2127-2135.
Lee, T.-E., Lee, H.-Y., Lee, S.-J. (2007). International Journal of Prod Res 45 (3), 487-507.
Zeballos, L.J., Castro, P.M., Méndez, C.A. (2010). 2nd Int Conf on Eng Opt, Lisbon, Portugal.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A MILP Planning Model for a Real-world Multiproduct Pipeline Network
Suelen N. Boschetto,a Leandro Magatão,a Flávio Neves-Jr,a Ana P.F.D. Barbosa-Póvoab
a CPGEI – Federal University of Technology – Paraná, Av. Sete de Setembro, n. 3165, 80230-901, Curitiba, Brazil
b CEG-IST – Instituto Superior Técnico – UTL, Av. Rovisco Pais, 1049-001, Lisboa, Portugal
Abstract This work proposes a generic MILP model for planning the activities of a multiproduct, multi-pipeline system. It is applied to a real-world pipeline network. The network includes 30 bidirectional multiproduct pipelines associated with 14 areas: 4 refineries, 2 harbours, 6 depots, and 2 final clients. The pipelines can be shared by 34 oil derivatives, and products can be sent from an origin area to various destination areas, as well as received in a destination area from different origin areas. The necessary requirements on the supply and inventory management of the producer and consumer areas are taken into account. As final results, the model defines the products and volumes to be transported in order to attain storage goals while respecting consumer demands. Optimal solutions for various scenarios are obtained with a high level of performance. Keywords: Planning, Multiproduct Pipeline Network, MILP, Oil Industry, Real-world scenarios.
1. Introduction One current challenge in the oil industry is the optimization of the transportation of products from producing to consumption areas. Such transportation is often complex and involves the management of a series of operational conditions dictated by the involved resources, such as pipelines and their associated valves and pumps. According to Moro (2000), the goal of this optimization is to achieve better operational conditions while guaranteeing criteria such as the use of better paths to transfer products without changes to the network's physical structure (e.g. valves, pumps, and pipes). Within the oil supply chain, several articles have been published on planning optimization; however, most of these are related to refinery planning involving blending processes and tankage allocation (Pinto et al., 2000).
The current work presents a planning model to address the allocation and transportation of products among different producing/consuming areas in a complex pipeline network. In this way, the pipeline logistics is performed while respecting a series of real-world operational constraints. In a subsequent step, the proposed model can be used in a collaborative way with the scheduling architecture proposed by Boschetto et al. (2010) to obtain the complete pipeline network schedule. The paper is organized as follows. Section 2 presents the problem description. The proposed MILP planning model and the pipeline operational constraints are characterized in Section 3. The results of the planning model are presented in Section 4, and concluding remarks are given in Section 5.
2. Problem Statement In the addressed real-world multiproduct pipeline network (Fig. 1), the transport of each product involves the management of a set of specific tank farms, according to the considered areas. Inventory management must respect capacity constraints and maximum/minimum inventory operational limits. Additionally, predefined storage goals must preferentially be observed. Refineries (N3, N4, N5, and N6) produce or consume products, while depots (N1, N8, N9, N11, N12, and N13) can receive, deliver or consume the stored products. Harbours (N7 and N10) can import or export the oil derivatives. Two areas are final clients (N2 and N14) and only receive products. The pipelines can be shared by 34 oil derivatives, and products can be sent from an origin area (e.g. refineries, harbours, depots) to various destination areas (e.g. final clients, depots, harbours, refineries), as well as received in a destination area from different origin areas. Also, the best route (involving areas and pipelines) for each transported product has to be selected by the model from a set of hundreds of possible routes.
Figure 1. Pipeline Network

Figure 2. Optimization Structure (Planning - MILP planning model: choice of origin/destination areas, choice of routes/pipelines, calculation of the total sent volume; Scheduling (Boschetto et al., 2010): resource allocation / sequencing / pre-analysis, MILP timing model; Final solution)
If no formal route is available, two or more existing routes can be connected in order to supply a product from production to consumption areas. The first route starts at an origin area and finishes at an intermediate area; the last route starts at an intermediate area and finishes at a destination area. The developed model is able to link these n routes to transport a product, observing the inventory resources in intermediate areas: a so-called "surge tank operation" is performed. Reversions of pipeline flows are also modelled, but this operation is time-consuming in real operation and should be avoided.
3. The MILP Planning Model The MILP objective function is presented in (1). It involves 5 terms. The first term optimizes the transfer of product p sent from area n to area n' by route r; this transfer tends to be made by the fastest route (vol_r / flow_{p,r}) that transports the volume defined by variable Q. The second term penalizes violations of capacity (VC) and of the minimum storage level (VID^min) per area and product. The storage goal is also penalized if violated, as indicated in term 3. Term 4 minimizes the number of pipelines that operate in both directions (reverse flow, rev) and the violation of the desired pipeline utilization rate (dif). Finally, term 5 minimizes the tankage utilization for surge tank operations (tnk). The dimensionless terms 1 to 5 are weighted by factors α1 to α5:

$$\min\ \alpha_1 \underbrace{\sum_{(n,n',p,r) \in R} \left( \frac{Q_{n,n',p,r} \cdot vol_r}{flow_{p,r}} \right)}_{\text{term 1}} + \alpha_2 \underbrace{\sum_{(n,p) \in NP} \left( VC_{n,p} + VID^{min}_{n,p} \right)}_{\text{term 2}} + \alpha_3 \underbrace{\sum_{(n,p) \in NP} VID^{goal}_{n,p}}_{\text{term 3}} + \alpha_4 \underbrace{\left( \sum_{(r,r',d) \in RD} rev_{r,r',d} + \sum_{d \in D} dif_d \right)}_{\text{term 4}} + \alpha_5 \underbrace{\sum_{(r,r',n,p) \in RP} tnk_{r,r',n,p}}_{\text{term 5}} \tag{1}$$
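A minimal sketch of how the weighted objective (1) can be assembled is given below, using Python/PuLP rather than the OPL/CPLEX implementation used by the authors; all sets, parameters and weights are hypothetical placeholders mirroring the paper's notation.

```python
# Sketch of the five-term weighted objective of Eq. (1) in PuLP. The variables
# Q, VC, VIDmin, VIDgoal, rev, dif, tnk mirror the paper's notation; all data
# below are hypothetical.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

R_set = [('N3', 'N1', 'p1', 'r1'), ('N3', 'N2', 'p1', 'r2')]   # (n, n', p, r)
NP = [('N1', 'p1'), ('N2', 'p1')]                              # (n, p)
vol = {'r1': 5000.0, 'r2': 8000.0}                             # route volumes
flow = {('p1', 'r1'): 500.0, ('p1', 'r2'): 400.0}              # flow rates
a1, a2, a3, a4, a5 = 1.0, 10.0, 5.0, 2.0, 1.0                  # weights alpha_1..5

prob = LpProblem('pipeline_planning', LpMinimize)
Q = {t: LpVariable('Q_%s_%s_%s_%s' % t, lowBound=0) for t in R_set}
VC = {t: LpVariable(f'VC_{t[0]}_{t[1]}', lowBound=0) for t in NP}
VIDmin = {t: LpVariable(f'VIDmin_{t[0]}_{t[1]}', lowBound=0) for t in NP}
VIDgoal = {t: LpVariable(f'VIDgoal_{t[0]}_{t[1]}', lowBound=0) for t in NP}
rev = LpVariable('rev_r1_r2_d1', cat=LpBinary)                 # one reversion var
dif = LpVariable('dif_d1', lowBound=0)
tnk = LpVariable('tnk_r1_r2_N1_p1', lowBound=0)

prob += (a1 * lpSum(Q[(n, n2, p, r)] * vol[r] / flow[(p, r)]
                    for (n, n2, p, r) in R_set)                # term 1
         + a2 * lpSum(VC[t] + VIDmin[t] for t in NP)           # term 2
         + a3 * lpSum(VIDgoal[t] for t in NP)                  # term 3
         + a4 * (rev + dif)                                    # term 4
         + a5 * tnk)                                           # term 5
# ... availability, demand, cycle and storage constraints would follow here
prob.solve()
```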
The MILP constraints take into account that the volume of product sent (Q) between n and n' should respect the availability in origin areas and that the quantity received at n' should meet the total demand. A minimum volume to be sent is also determined and respected, and a product sent from n to n' cannot be sent back from n' to n through another route (cycle constraints). The desired utilization rate of the pipelines should be observed over a pre-determined horizon. Finally, the capacity, minimum, and goal storage levels should be observed. Violations of these values can occur, but they are minimized in the objective function. The results supplied by the MILP planning model are used as parameters in a scheduling phase (Fig. 2). In this way, the volumes determined in the planning phase are split into smaller volumes (operational batches) during the scheduling phase, and the detailed temporal aspects are determined over the scheduling horizon. Additional details about the scheduling phase can be obtained in Boschetto et al. (2010).
4. Results
The developed planning model was applied to eight real-world scenarios (S1 to S8), each of one month, on the described real pipeline network. The objective function weighting parameters (α) and the desired pipeline usage rate were obtained after a series of tests and based on the know-how of the company's specialists. Data such as production, consumption, and storage limits vary according to the considered scenario.
4.1. Computational results
The model was run using the software ILOG OPL Studio 6.3, CPLEX 12, on an Intel Core 2 Duo 6400, 2.13 GHz, 3 GB RAM. No relative gap or time limit values were defined. The results are presented in Table 1, where it can be seen that optimal solutions of the MILP planning model were obtained in less than three seconds for all the observed scenarios. Sparse sets were used to generate the model and, thus, the number of decision variables (int. var.) could be significantly reduced. The scenarios were chosen considering different months in a year and are numbered in chronological order. The total quantity of moved product (#moved - in volumetric units - vu) indicates demand variations along the observed time period. The maximum variation of total moved volume (comparing scenarios S3 and S5) is 12.5%.

Table 1. Computational results

scenario        S1         S2         S3         S4         S5         S6         S7         S8
status          optimal    optimal    optimal    optimal    optimal    optimal    optimal    optimal
time (s)        2.9        2.221      2.189      2.143      2.196      2.35       2.327      2.239
obj. function   14782.65   2746.44    4664.56    0.5        19011.65   589.19     1771.03    7411.80
best node       14782.22   2746.24    4664.29    0.5        19011.52   589.17     1770.95    7411.65
gap (%)         0.0000     0.0001     0.0001     0.0001     0.0000     0.0000     0.0000     0.0000
iterations      272        278        435        717        417        681        408        442
variables       1303       1256       1247       1348       1356       1357       1232       1335
int. var.       588        541        539        633        635        631        532        626
constraints     2106       2057       2051       2310       2354       2528       2056       2339
#moved (vu)     2180599    2014910    2278359    2137422    1992999    2197381    2126734    2267052
4.2. Comparison of Results: MILP Planning Model versus Company's Approach
This section compares the results obtained by the MILP model for scenario 3 (S3) with the real planning approach adopted by the company's specialists (historical data). Table 2 summarizes these results. It is important to notice that the solution indicated by the MILP was afterwards analysed and validated by the company's specialists.

Table 2. Scenario S3: MILP Planning Model versus Company's Approach

Operational Characteristic Observed                               MILP Model           Company Solution
Flow reversion: (number of pipelines) and label of pipelines      (#3) 3, 22, 24       (#6) 1, 3, 21, 23, 24, 30
Capacity violation - origin areas: (quantity) and volume          -                    (#6) 130,249 vu
Min. storage violation - origin areas: (quantity) and volume      -                    (#8) 179,608 vu
Total moved volume (without flow reversion)                       2,034,133 vu         2,125,177 vu
Volume in surge tank operations                                   243,901 vu           823,028 vu
Results indicate that the proposed model suggests a smaller number of pipeline flow reversions. Also, when analysing the occurrence of violations of minimum storage and capacity levels in origin areas, it can be seen that the company solution has six violations of capacity and eight violations of minimum storage level in origin areas. The proposed model was able to attain an operational answer with no product degradation or tankage changes, and without violating tankage limits. No significant observations were made on destination areas. Both approaches were able to attain the main operational conditions. Finally, it is possible to notice that the MILP model suggested a smaller volume of product to meet the demand requirements. In addition, surge tank operations were minimized, causing fewer valve manoeuvres and, thus, a lower operating cost.
5. Conclusions
This work presents an MILP optimization model to aid the planning phase of a real-world network of multiproduct pipelines. The planning model determines the volume of products to be moved during a planning horizon (typically one month) in order to satisfy demand requirements, while minimizing the use of resources such as tanks, pipelines, and valves. This minimization considers procedures such as flow reversions and surge tank operations. The model, which was implemented and solved in ILOG OPL Studio 6.3, has been extensively tested in typical operational scenarios (e.g. Table 1). Operational solutions have been obtained in a few seconds of processing. The presented approach can be used together with the scheduling phase method of Boschetto et al. (2010), leading to complete operational solutions of the pipeline network.
6. Acknowledgements The authors acknowledge financial support from ANP and FINEP (PRH-ANP / MCT PRH10 UTFPR), CENPES (grant 0050.0017859.05.3) and CAPES / PDEE (grant 3262/08-1).
References
S.N. Boschetto, L. Magatão, W.M. Brondani, F. Neves-Jr, L.V.R. Arruda, A.P.F.D. Barbosa-Póvoa, S. Relvas, 2010, An operational scheduling model to product distribution through a pipeline network, Industrial & Engineering Chemistry Research, 49, 12, 5661-5682.
L.F.L. Moro, 2000, Técnicas de Otimização Mista Inteira para o Planejamento e Programação de Produção em Refinarias de Petróleo, Doctoral Thesis, USP, São Paulo, Brazil.
J.M. Pinto, M. Joly, L.F.L. Moro, 2000, Planning and scheduling models for refinery operations, Computers and Chemical Engineering, 24, 2259-2276.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Improving supply chain management in a competitive environment
M. Zamarripa,a A. M. Aguirre,b C. A. Méndez,b and A. Espuñaa
a Universitat Politècnica de Catalunya (UPC), Chem. Engng. Dpt., Barcelona, SPAIN
b INTEC (UNL-CONICET), Santa Fe, Argentina.
Abstract
This work addresses the development of a multi-objective MILP (Mixed Integer Linear Programming) model devised to optimize the planning of supply chains, introducing the use of game theory for decision making in cooperative and/or competitive scenarios. The developed model is tested in a real-world case study based on the operation of two different supply chains; three different optimization criteria are considered, and both cooperative and non-cooperative modes of interaction between the supply chains are analyzed.
Keywords: Supply chain planning, multi-objective, MILP-based model, game theory.
1. Introduction
The problem of decision making in the chemical process industry is becoming more complex as the scope covered by these decisions is extended. This increasing complexity is further compounded by the need to consider a greater degree of uncertainty in the models used to forecast the events relevant to this decision making. The decision-making problem associated with supply chain (SC) operational management (procurement of raw materials in different markets, allocation of products to different plants, and distribution to different customers), which has been attracting the attention of the scientific community in recent years, is, in this sense, at the top level of complexity. Moreover, in the case of the chemical process industry, the complexity associated with chemical operations and market globalization adds to the usual difficulties related to the integration of the various objectives to be considered in supply chain planning.
But in this scope it is not enough to study the problems that industry deals with and to apply optimization techniques to find isolated robust solutions. SCs are embedded in a competitive market, and managers have to take care of the decisions of others, since those decisions will impact the profit of their own SC. In such complex systems, some researchers have used Game Theory (GT) to make predictions that assist decision makers (Mahesh and Greys, 2006). Two important branches can be found in GT, cooperative games and non-cooperative games, and the second branch specifically deals with situations like the one described in the previous paragraph. The different concepts associated with Game Theory can be found in the literature, but so far only some aspects of this theory have been successfully applied to supply chain management: it is easy to find applications related to non-cooperative and zero-sum games, while the use of cooperative, dynamic or asymmetric games for decision making has not been exploited yet (Cachon and Netessine, 2003). In this sense, GT has not been extensively used to analyze the behavior between SCs, and only some works can be found which address specific situations: Leng and Parlar
(2009) use Nash and Stackelberg equilibria to determine production levels, playing different scenarios to fix the price between seller and buyer; this kind of game has also been used successfully by other authors (Cachon, 2004; Granot and Yin, 2008; Leng and Zhu, 2009; Wang, 2006), each one using different techniques from GT. This paper addresses the SC planning problem, where the decisions are inventory and production levels, for independent supply chains working in a common scenario. The cases of their eventual cooperative work, and also of their eventual competition, are analyzed using both traditional mathematical programming tools and tools from non-cooperative Game Theory.
2. Problem statement
2.1. Supply Chain planning
The scope of the SC planning problem is typically to determine the optimal production levels, inventories and product distribution in an organized network of production sites, distribution centers, consumers, etc., taking care of constraints associated with product and raw material availability, storage limits, etc. at the network nodes. The mathematical model associated with this problem usually leads to a mixed-integer linear program (MILP) whose solution determines the optimal values of the variables mentioned before.
In this paper the existence of a set of supply chains that may work in a cooperative or a competitive environment is assumed. In both cases, the mathematical constraints associated with the material balances and production/distribution capacities will be the same, as well as the cost structures, so the same basic model can be used. The model originally proposed by Liang (2008) has been adopted as a basis for the formulation presented in this paper, which is complemented with additional constraints and seeks to minimize different objective functions according to the considered scenario.
The logistic network considered in the model consists of several SCs that have multiple production sites and distribution centers (fixed locations and capacities) to produce several products covering a common market demand over a planning horizon H. The capacity of the process is given by the available labor levels and machine capacity at each production site; transport between nodes is modeled as if it were carried out by a set of trucks of fixed capacity whose costs and required transport times are related to the distance between nodes.
2.2. Supply Chain planning in a cooperative environment
On the basis of the previously cited model, this paper develops a multipurpose MILP-based model for the cooperative case of supply chains. The problem is equivalent to the one which would be formulated to solve the common SC formed by all the SCs in the original set. So, to determine the optimal production planning and distribution decisions, the original formulation (Liang, 2008) can be used. In order to better compare the different scenarios, the subcontracting service considered in the original work has not been included. Also, in order to better reproduce a real scenario, maximum and minimum distribution capacities for each source i at destination j have been considered, Eq. (1). In the same way, minimum and maximum production capacities at each source or production center have also been considered, Eq. (2).

$$MinD \cdot X_{i,j,h} \le T_{i,j,h} \le MaxD \cdot X_{i,j,h} \tag{1}$$
$$MinY \cdot Y_{i,h} \le Q_{i,h} \le MaxY \cdot Y_{i,h} \tag{2}$$
The SC management is then characterized by the quantities produced at each source, the inventory levels, the quantities arriving at each distribution center, and the undelivered orders.
2.3. Non-cooperative game theory
GT is based on the simulation of the results obtained by a set of players (i = 1, ..., I) following different strategies (Sn; n = 1, ..., N). These results are represented through a set of payments (Pi,n; i = 1...I; n = 1...N) received by each player. In simultaneous games, the feasible strategy for one player is independent of the strategies chosen by the other players. Optimum strategies depend on the risk aversion of the players, so different strategies can be foreseen, for example the max-min strategy (which maximizes the minimum gain that can be obtained). Depending on the knowledge about the strategies of the other players, other solutions resulting from the concept of Nash equilibrium can be devised.
The players (suppliers) can consider two types of games: zero-sum and nonzero-sum. This article uses the nonzero-sum game, since the SC of interest will not try to maintain the overall benefit of the system; the strategy is implemented through a payoff matrix, which is made up of the different potential strategies and shows the behavior of each action of the SC against the actions of its competitors. To play this game, each player should deal with the demand that customers really offer to him (out of the total demand), and this can be managed basically through the service policy: prices and delivery times. So, in addition to the cost of the supply chains, it is necessary to introduce as an objective the reduction of the buyers' expenses (cost for the distribution centers). This has been done through the price rates (Prate_g), thus playing with the prices associated with the source and the destination of the products, Eq. (3).

$$\min\, CST(g) = \sum_{i \in I\_G(g)} \sum_{j} \sum_{p} \sum_{h} Ps \cdot T_{i,j,p,h} \cdot Prate_g + z1 \tag{3}$$
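As an illustration of the max-min idea described above, the following sketch computes a max-min (most risk-averse) strategy from a payoff matrix; the 3x3 payoff values are invented for illustration and are not the case-study payoff matrix.

```python
import numpy as np

# Illustrative payoff matrix for one SC: rows = own strategies,
# columns = competitor strategies, entries = own benefit (assumed data).
payoff = np.array([
    [120.0,  80.0,  60.0],
    [150.0,  70.0,  40.0],
    [100.0,  90.0,  85.0],
])

# Max-min strategy: pick the row whose worst-case (minimum) payoff is largest.
worst_case = payoff.min(axis=1)        # worst outcome of each own strategy
best_row = int(worst_case.argmax())    # strategy maximizing the minimum gain
print(f"max-min strategy: S{best_row + 1}, guaranteed payoff {worst_case[best_row]}")
```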
3. Case study
These concepts have been applied to a supply chain case study adapted from Wang and Liang (2004, 2005) and Liang (2008). The factory's strategy is to maintain a constant workforce level over the planning horizon and to supply as much product as possible (demand), playing with inventories and backorders. Two products are considered, P1 and P2, with a market demand over a 3-month horizon from 4 distribution centers (Distr1 to Distr4). The information about the considered scenarios, production, etc., and the rest of the problem conditions (initial storage levels, transport capacities, etc.) can be found at http://cepima.upc.edu/papers/Competitive_SCs.pdf (Tables 3-6). The cooperative problem is solved as an MILP, minimizing the total cost z1 (sum of transportation, production and inventory costs). Also, the delivery time of the products to the distribution centers (z2) and the benefit (difference between sales and total cost) of each SC are estimated to highlight the results (cooperative/competitive). Figure 1 shows the basic configuration of the considered SCs, composed of 2 SCs (2+2 plants, Plant1/Plant2 and Plant3/Plant4) which collaborate or compete to fulfill the global demand from the 4 distribution centers.
Figure 1. Description of the problem. Plant1-4 serve Distr1-4.
4. Case study Results
To compare how the different supply chains interact in both cooperative and competitive frameworks, Tables 1 and 2 show the different results obtained:
• The optimal solution for SC1 (standalone) is driven by the geographical conditions (nearest delivery), although different solutions are obtained according to the specific objectives considered. Differences between the SC1 and SC2 standalone solutions are associated with the different distances from the production sites of SC2 to the markets. Detailed results can be found at http://cepima.upc.edu/papers/Competitive_SCs.pdf (Figures 2 and 3, and Table 7).
• The optimal solution for SC1 when coexisting with SC2 is driven by the kind of relation (cooperative/competitive) and by its margin/capacity to adapt the prices. Two cases have been analyzed: when the original demand is maintained (so both SCs are oversized), and when double demand is assumed (the global capacity is in line with the global demand, with additional budget and storage limits at the distribution centres). For the competitive case (Table 2b), the model should take into account the consumers' preferences. These preferences have been modelled as based only on service (due date maintenance) and customers' cost, so these elements have been introduced in the final objective function (overall cost CST) as previously indicated. A nominal selling price has also been introduced to maintain data integrity.

Table 1: Comparative results between supply chains (standalone cases)

                SC1           SC1             SC1           SC2
                Liang 2008    Original data   standalone    standalone
Obj. Funct.     min z1        min z1          min z1        min z1
z1 ($)          788,224       700,621         838,212       840,904
z2 (hours)      2,115         2,300           1,681         1,747
Benefit ($)     -             3,803,378       3,665,787     3,663,095
CST ($)         -             5,204,621       5,342,213     5,344,904

Table 2a: Comparative results (cooperative case)

                Coop. (original dmd)        Coop. (double dmd)
                SC1           SC2           SC1           SC2
Obj. Funct.     min z1 (SC1+SC2)            min z1 (SC1+SC2)
z1 ($)          515,516       286,997       1,051,348     592,487
z1 total ($)    802,513                     1,643,835
z2 (hours)      1,138                       2,295
Benefit ($)     2,319,483     1,382,002     4,618,651     2,745,512
CST ($)         3,350,516     1,955,997     6,721,348     3,930,487

Table 2b: Comparative results (non-cooperative case)

                Compet. (original dmd)      Compet. (double dmd)
                SC1           SC2           SC1           SC2
Obj. Funct.     min CST (SC1)               min CST (SC1)
z1 ($)          702,559       100,734       1,274,981     370,421
z1 total ($)    803,293                     1,645,402
z2 (hours)      1,117                       2,268
Benefit ($)     3,148,722     544,265       5,750,339     1,598,178
CST ($)         4,553,841     745,734       8,300,302     2,339,021
Obviously, in both cases the expected SC benefits (-z1) are reduced for both SCs with respect to the corresponding cooperative cases (Table 2a). If the original demand is maintained (both SCs are oversized), both SCs are able to play the game maintaining their respective geographical influence; but when demand approaches the SCs' global capacity, a proper pricing policy is essential to reduce the losses associated with competition, as can be seen in Tables 2a and 2b. In the competitive scenario, this corresponds to the Nash equilibrium point: the SC1 selling price is computed in such a way that further reductions in the selling price of SC2 will not modify the best choice for the buyers or, if they do, they will not increase SC2's benefits. Detailed results, including the corresponding payoff matrix, are reported in http://cepima.upc.edu/papers/Competitive_SCs.pdf.
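To make the equilibrium concept concrete, the sketch below enumerates the pure-strategy Nash equilibria of a two-player bimatrix game such as the SC1/SC2 pricing game; the payoff numbers are invented placeholders, not the values of the paper's payoff matrix.

```python
import numpy as np

# Assumed bimatrix: payoff1[i, j] / payoff2[i, j] are SC1's and SC2's
# benefits when SC1 plays strategy i and SC2 plays strategy j.
payoff1 = np.array([[2.0, 0.0], [3.0, 1.0]])
payoff2 = np.array([[2.0, 3.0], [0.0, 1.0]])

def pure_nash(p1: np.ndarray, p2: np.ndarray) -> list[tuple[int, int]]:
    """Return all (i, j) where neither player gains by deviating unilaterally."""
    equilibria = []
    for i in range(p1.shape[0]):
        for j in range(p1.shape[1]):
            if p1[i, j] >= p1[:, j].max() and p2[i, j] >= p2[i, :].max():
                equilibria.append((i, j))
    return equilibria

print(pure_nash(payoff1, payoff2))  # [(1, 1)] for these numbers
```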
5. Conclusions
This work introduces the use of GT as a decision technique to determine the optimal SC production, inventory and distribution levels in a competitive planning scenario, when there is a change in competition behaviour. The problem was modelled using a multi-objective MILP-based approach incorporating game theory, obtaining improved solutions in typical SC planning problems.
References
N. Mahesh and S. Greys, 2006, Game-theoretic analysis of cooperation among supply chain agents: review and extensions, European Journal of Operational Research, 37.
G. Cachon and S. Netessine, 2003, Game theory in supply chain analysis, Supply Chain Analysis in the eBusiness Era, 46.
M. Leng, M. Parlar, 2010, Game-theoretic analyses of decentralized assembly supply chains: non-cooperative equilibria vs coordination with cost-sharing contracts, European Journal of Operational Research, 204, 96-104.
G. Cachon, 2004, The allocation of inventory risk in a supply chain: push, pull and advance purchase discount contracts, Management Science, 50, 222-238.
D. Granot, S. Yin, 2008, Competition and cooperation in decentralized push and pull assembly systems, Management Science, 54, 733-747.
M. Leng, A. Zhu, 2009, Side payment contracts in two-person nonzero-sum supply chain games: review, discussion and applications, European Journal of Operational Research, 196, 600-618.
Y. Wang, 2006, Pricing production decisions in supply chains of complementary products with uncertain demand, Operations Research, 54, 1110-1127.
T. Liang, 2008, Fuzzy multi-objective production/distribution planning decisions with multi-product and multi-time period in a supply chain, Computers and Industrial Engineering, 55, 678-694.
Wang, Liang, 2004, Application of fuzzy multi-objective linear programming to aggregate production planning, Computers and Industrial Engineering, 46, 17-41.
Wang, Liang, 2005, Applying possibilistic linear programming to aggregate production planning, International Journal of Production Economics, 98, 328-341.
[1] Complementary material can be found at: http://cepima.upc.edu/papers/Competitive_SCs.pdf
Acknowledgements Financial support received from the “Agencia Española de Cooperación Internacional para el Desarrollo” (Acción AECID PCI D-024726/09), the Erasmus Mundus Program (“External Cooperation Window”, Lot 18 Mexico) and the Spanish “Ministerio de Ciencia e Innovación” and the European Regional Development Fund (both funding the research Project EHMAN, DPI2009-09386) is fully appreciated.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Optimal Scheduling of Biodiesel Plants through Property-based Integration with Oil Refineries
Vasiliki Kazantzi,a Stella Bezergianni,b Rene' Elms,c Fadwa Eljack,d and Mahmoud M. El-Halwagie
a Technological Educational Institute of Larissa, Department of Project Management, 41110 Larissa, Greece
b Center for Research & Technology Hellas (CERTH), Chemical Process Industry Research Institute (CPERI), Laboratory of Environmental Fuels and Hydrocarbons (LEFH), 6th klm Harilaou-Thermi Rd, Thermi, Thessaloniki 57001, Greece
c Bryan Research and Engineering, Bryan, TX 77805, USA
d Qatar University, Department of Chemical Engineering, P.O. Box 2713, Doha, Qatar
e Texas A&M University, Department of Chemical Engineering, College Station, TX 77843-3122, USA
Abstract - This paper addresses the design and scheduling problem of biodiesel plants in conjunction with typical oil refineries via blending of biodiesel and petro-diesel. The feedstocks are often seasonal and their availability and cost usually vary with time. A multi-period scheduling framework is formulated as an optimization problem to determine the optimal feedstock utilization and blending of biodiesel with petro-diesel using a property-integration framework. A case study is solved to illustrate the applicability of the devised approach.
1. Introduction
Biodiesel is emerging as one of the most attractive forms of biofuel. In addition to its role in reducing greenhouse gases and enhancing sustainability, biodiesel can also have a favorable impact on the properties of petro-diesel upon blending. Moreover, in blending diesel and biodiesel fractions, there is a need to estimate the resulting properties, and hence the quality, of the diesel/biodiesel mixtures. However, since the properties of biodiesel and of diesel/biodiesel mixtures are influenced by the characteristics, availability and seasonality of the feedstock used for biodiesel production, as well as by the processing conditions, diesel and biodiesel blends may differ considerably in terms of their resulting properties and quality features. These issues are addressed in this paper using a property-integration optimization-based framework.
2. Problem Statement
Given is a biodiesel production process of given design that may use a number of available feedstock types, f, whose availability and price depend on the time period, t. Given are also the following components of the process:
• A set of feedstock sources: FEED = {f | f = 1, 2, ..., Nf}, which are available to be allocated to sinks. Each feedstock has a given flowrate, F_f, and is characterized by a set of properties: PROP = {p | p = 1, 2, ..., Np}. The values of the properties of the feedstock are also given and designated by p_{f,p}.
• A set of process sinks (units): SINKS = {j | j = 1, 2, ..., Nj}. Sinks are process units that can accept the sources. Each sink requires a given flowrate, F_j, and property values, p^{in}_{j,p}, that satisfy the following constraints:

$$p^{min}_{j,p} \le p^{in}_{j,p} \le p^{max}_{j,p}, \qquad j \in SINKS,\; p \in PROP \tag{1}$$

where $p^{min}_{j,p}$ and $p^{max}_{j,p}$ are given lower and upper bounds on the properties acceptable to unit j.
• A set of product discharges for the process: PROD = {r | r = 1, 2, ..., Nr}
• A set of waste discharges for the process: WASTE = {w | w = 1, 2, ..., Nw}
• A set of intermediate streams INTER = {i | i = 1, 2, ..., Ni} that are redirected back to the process
• A decision-making time horizon t_h that is discretized into a number of time intervals defined as TIME = {t | t = 1, 2, ..., Nt}, over which the process is optimized.
• Also available is a petroleum diesel product of flowrate F_d with certain property values p_{d,p} that can be mixed with the biodiesel product to yield the final mixture.
It is desired to develop a scheduling scheme for the feedstocks and for the blending of biodiesel with petro-diesel.
3. Methodological Approach
3.1. Solution approach
In this problem, an attempt is made to determine, for each time interval, the optimal usage of the appropriate feedstock types that maximizes the overall profit of the process. In this regard, we assume that for each time interval a different solution may be obtained, which corresponds to the "optimum" feedstock type available at that time. In addition, prices of feedstock and fuels are subject to changes within the time horizon considered. Thus, optimization has to be carried out for individual time intervals. Constraints on the maximum availability (in quantity) of each feedstock type are imposed for each operation time interval, as well as additional constraints on the properties of the products (diesel/biodiesel mixtures) resulting from integrating biodiesel and oil refineries. A source-sink-interception representation is developed as an extension of the one developed by Elms and El-Halwagi (2009). The structural approach is shown by Fig. 1.
[Figure 1 residue: the diagram shows feedstock sources f = 1..Nf (F_{f,t}, p_{f,p,t}) allocated to process units/sinks j = 1..Nj (F_{j,t}, p^{in}_{j,p,t}; G_{j,t}, p^{out}_{j,p,t}), whose outputs split into products r (G_{r,t}, p_{r,p,t}), intermediates i (G_{i,t}, p_{i,p,t}) recycled back to the sinks, and wastes w (G_{w,t}, p_{w,p,t}); the biodiesel product F_{b,t} is mixed with conventional diesel F_{d,t} into the final mixture G_{m,t}, p_{m,p,t} with x = F_{b,t}/G_{m,t}.]
Fig. 1. Structural Representation of the Solution Approach

Time-dependent changes in feedstock and demand prices are taken into consideration for profit optimization in each operation interval. Moreover, the property values of a stream n, designated as p_{n,p}, can be expressed in terms of their related property operators ψ(p_{n,p}), or ψ_{n,p} for simplicity (the property operator of stream n on property p). Property operators are defined considering the following mixing rule for estimating the resulting property of a mixture (Shelley and El-Halwagi, 2000):

$$G_m \cdot \psi_{m,p} = \sum_n F_n \cdot \psi_{n,p} \tag{2}$$

where ψ_{n,p} is the property-mixing operator and G_m is the total flow rate of the mixture, which is given by $G_m = \sum_n F_n$. The property-mixing operators can be evaluated from first principles or estimated through empirical or semi-empirical methods. Thus, for the remainder of the paper, instead of the property values p_{n,p}, the corresponding property operators ψ_{n,p} are used in all following equations.
3.1.1. Splitting of sources (feedstock)
$$F_{f,t} = \sum_j f_{f,j,t} \qquad \forall t \in TIME,\; f \in FEED \tag{3}$$
3.1.2. Mixing of sources before entering process units
$$F_{j,t} = \sum_f f_{f,j,t} + \sum_i g_{i,j,t} \qquad \forall j \in SINKS,\; t \in TIME \tag{4}$$
$$F_{j,t} \cdot \psi^{in}_{j,p,t} = \sum_f f_{f,j,t} \cdot \psi_{f,p,t} + \sum_i g_{i,j,t} \cdot \psi^{out}_{i,p,t} \qquad \forall j \in SINKS,\; t \in TIME,\; p \in PROP \tag{5}$$
3.1.3. Process Units
$$G_{j,t} = F_{j,t} \qquad \forall j \in SINKS,\; t \in TIME \tag{6}$$
$$G_{j,t} \cdot \psi^{out}_{j,p,t} = F_{j,t} \cdot \psi^{in}_{j,p,t} \qquad \forall j \in SINKS,\; t \in TIME,\; p \in PROP \tag{7}$$
$$p^{out}_{j,p,t} = f\!\left(F_{j,t},\, p^{in}_{j,p,t},\, z_{j,t},\, o_{j,t}\right) \qquad \forall j \in SINKS,\; t \in TIME,\; p \in PROP \tag{8}$$
where z and o are design and operating variables, respectively.
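Before continuing with the balance constraints, a small numerical illustration of mixing rule (2) may help; the linear (identity) operator and the flow/property values below are assumptions for illustration only.

```python
def mix_property(flows, operators):
    """Apply mixing rule (2): G_m * psi_m = sum_n F_n * psi_n."""
    g_m = sum(flows)
    psi_m = sum(f * psi for f, psi in zip(flows, operators)) / g_m
    return g_m, psi_m

# Two streams with assumed flows [kg/h] and property-operator values
flows = [800.0, 200.0]
operators = [0.84, 0.88]   # e.g. identity operators on density [kg/L]

g_m, psi_m = mix_property(flows, operators)
print(f"mixture flow = {g_m} kg/h, mixture operator = {psi_m:.3f}")
# For a nonlinear property, psi would be a transform of p (e.g. an
# Arrhenius-type operator for viscosity) and the mixture property is
# recovered by inverting psi.
```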
Material and property balances for the splitting of streams leaving unit j are given below:
$$G_{j,t} = \sum_r g_{j,r,t} + \sum_i g_{j,i,t} + \sum_w g_{j,w,t} \qquad \forall j \in SINKS,\; t \in TIME \tag{9}$$
3.1.4. Flowrate and property balances for product, waste and intermediate streams
$$G_{r,t} = \sum_j g_{j,r,t} \qquad \forall r \in PROD,\; t \in TIME \tag{10}$$
$$G_{r,t} \cdot \psi_{r,p,t} = \sum_j g_{j,r,t} \cdot \psi^{out}_{j,p,t} \qquad \forall r \in PROD,\; t \in TIME \tag{11}$$
$$G_{w,t} = \sum_j g_{j,w,t} \qquad \forall w \in WASTE,\; t \in TIME \tag{12}$$
$$G_{w,t} \cdot \psi_{w,p,t} = \sum_j g_{j,w,t} \cdot \psi^{out}_{j,p,t} \qquad \forall w \in WASTE,\; t \in TIME \tag{13}$$
$$G_{i,t} = \sum_j g_{j,i,t} \qquad \forall i \in INTER,\; t \in TIME \tag{14}$$
$$G_{i,t} \cdot \psi_{i,p,t} = \sum_j g_{j,i,t} \cdot \psi^{out}_{j,p,t} \qquad \forall i \in INTER,\; t \in TIME \tag{15}$$
Splitting of sources leaving the intermediate block:
$$G_{i,t} = \sum_j g_{i,j,t} \qquad \forall i \in INTER,\; t \in TIME \tag{16}$$
3.1.5. Mixing of biodiesel and diesel products
$$G_{m,t} = F_{b,t} + F_{d,t} = F_{b,t} + (1-x) \cdot F_{b,t}/x \qquad \forall t \in TIME \tag{17}$$
$$G_{m,t} \cdot \psi_{m,p,t} = F_{b,t} \cdot \psi_{b,p,t} + (1-x) \cdot F_{b,t} \cdot \psi_{d,p,t}/x \qquad \forall t \in TIME \tag{18}$$
where x is the mixing ratio of biodiesel to the mixture, and G_{r,t} = F_{b,t} and ψ_{r,p,t} = ψ_{b,p,t}.
3.1.6. Constraints
Flowrate and property (operator) constraints for the input flows to the process units:
$$F^{min}_{j,t} \le F_{j,t} \le F^{max}_{j,t} \qquad \forall j \in SINKS,\; t \in TIME \tag{19}$$
$$\psi^{min}_{j,p,t} \le \psi^{in}_{j,p,t} \le \psi^{max}_{j,p,t} \qquad \forall j \in SINKS,\; t \in TIME,\; p \in PROPERTIES \tag{20}$$
Property constraints on product (diesel/biodiesel mixture) specifications:
$$\psi^{min}_{m,p,t} \le \psi_{m,p,t} \le \psi^{max}_{m,p,t} \qquad \forall t \in TIME,\; p \in PROPERTIES \tag{21}$$
Product demand:
$$G_{m,t} \le G^{demand}_{m,t} \qquad \forall t \in TIME \tag{22}$$
Constraints on the availability of feedstock resources (quantities) for each feedstock type and time interval:
$$F_{f,t} \le F^{max}_{f,t} \cdot I_f \qquad \forall t \in TIME,\; f \in FEEDSTOCK \tag{23}$$
where I_f is the binary variable representing the presence or absence of the f-th feedstock in the solution. Finally, design (z) and operating (o) constraints may be imposed for any process modifications needed for the usage of alternative feedstocks.
3.1.7. Objective Function
$$\max \left\{ C_{m,t} \cdot G_{m,t} - \sum_f C_{f,t} \cdot F_{f,t} - C_{d,t} \cdot (1-x) \cdot F_{b,t}/x - POC_t - TAC_t \right\} \qquad \forall t \in TIME \tag{24}$$
where C_m is the unit selling price of the product mixture over time t, C_f and C_d are the costs of feedstock and petroleum diesel over time t, respectively, POC_t is the process operating cost during period t and TAC_t is the total annualized cost during time t. Solutions are obtained for different time intervals related to different market conditions.
4. Case Study
A base-case design is considered for the processing of soybean oil and/or waste cooking oil "WCO" (50% free fatty acid) to produce 40 MMPGY of biodiesel via transesterification. The design, techno-economic data, monthly availability of feedstocks and simulations are based on the work of Myint and El-Halwagi (2009) and Elms and El-Halwagi (2009). The characteristics of biodiesel and petro-diesel and the property-mixing operators are given by Kalogeras et al. (2010). The case study was developed for a large city, such as Houston, Texas, having a population of 3,000,000. It was assumed that the monthly biodiesel production was the annual production divided by 12, or ~3.5 MM gallons/month. The availability of WCO was determined using the annual per capita WCO production of 10 gal WCO/person/year. It was assumed that no limit existed on the availability of refined soy oil. The WCO (50% free fatty acid content) cost remained stable at $0.034/lb ($0.249/gal) and the refined soy oil cost remained stable at $0.497/lb. It was also found that, to satisfy the property constraints (on density, viscosity, cloud point, pour point, volatility at 250 °C, cetane index, and sulfur content), a blend of 18% biodiesel to 82% petro-diesel is to be used. The resulting feedstock schedule for soybean oil and WCO is shown by Fig. 2.
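A blend fraction such as the 18%/82% ratio reported above can be checked against a property specification with mixing rule (2); in the sketch below, the operator values and specification limits are illustrative assumptions, not the data of Kalogeras et al. (2010).

```python
def blend_operator(x_bio: float, psi_bio: float, psi_diesel: float) -> float:
    """Mixture property operator for a biodiesel fraction x_bio, per rule (2)."""
    return x_bio * psi_bio + (1.0 - x_bio) * psi_diesel

# Assumed operator values and spec window for one property (illustrative only)
psi_bio, psi_diesel = 0.92, 0.61
psi_min, psi_max = 0.55, 0.70

x = 0.18  # 18% biodiesel / 82% petro-diesel blend
psi_mix = blend_operator(x, psi_bio, psi_diesel)
print(f"psi_mix = {psi_mix:.3f}, within spec: {psi_min <= psi_mix <= psi_max}")
```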
Fig. 2. Optimal Scheduling of Biomass Feedstocks
References
Elms, R. D. and M. M. El-Halwagi, "Optimal Scheduling and Operation of Biodiesel Plants with Multiple Feedstocks", Int. J. Process Sys. Eng., 1(1), 1-28 (2009)
Kalogeras, K., S. Bezergianni, V. Kazantzi, and P. A. Pilavachi, "On the Prediction of Properties for Diesel/Biodiesel Mixtures Featuring New Environmental Considerations", 20th European Symposium on Computer Aided Process Engineering - ESCAPE20, S. Pierucci and G. Buzzi Ferraris (Editors), Elsevier (2010)
Myint, L. L. and M. M. El-Halwagi, "Process Analysis and Optimization of Biodiesel Production from Soybean Oil", J. Clean Tech. and Env. Policies, 11(3), 263-276 (2009)
Shelley, M. D. and M. M. El-Halwagi, "Componentless Design of Recovery and Allocation Systems: A Functionality-Based Clustering Approach", Comp. Chem. Eng., 24, 2081-2091 (2000)
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Integration of financial statement analysis in the optimal design and operation of supply chain networks
Pantelis Longinidis,a Michael C. Georgiadis,a,b Panagiotis Tsiakisc
a Department of Engineering Informatics & Telecommunications, University of Western Macedonia, Karamanli & Lygeris Street, Kozani 50100, Greece
b Department of Chemical Engineering, Aristotle University of Thessaloniki, Thessaloniki 54124, Greece
c IBM UK Ltd, 76-78 Upper Ground, South Bank, London SE1 9PZ, U.K.
Abstract
This paper introduces a mathematical model that integrates financial considerations with supply chain network (SCN) design and operation decisions under demand uncertainty. The model is formulated as a Mixed-Integer Linear Programming (MILP) problem, which incorporates financial statement analysis through financial ratios and demand uncertainty through scenario analysis, and is solved to global optimality using standard branch-and-bound techniques. The applicability of the proposed model is illustrated through a case study, and a comparison with a non-financially constrained model supports the superiority of the proposed model and highlights the tradeoffs between the two models.
Keywords: Supply chain modeling; Supply chain optimization; Financial statement analysis; Demand uncertainty; Distribution networks.
1. Introduction
Since companies recognized the potential competitive advantages gained through a holistic management of their SCNs, the academic community has been developing several models that describe their design and operation. Although numerous successful models have been developed for the design and operation of SCNs, the vast majority ignores decisions involving revenues, marketing campaigns, hedging against uncertainties, investment planning and other corporate financial decisions (Shapiro, 2004). The integration of financial aspects in these models allows for the systematic assessment of the impact of production decisions on the financial operation and further selects their ideal combination, thus providing a competitive advantage to the company (Guillén, Badell, Espuña, & Puigjaner, 2006). Current SCN models with financial aspects can be divided into two groups: those where financial aspects are considered as endogenous variables which model the financial operation and are optimized along with the other SCN design and operation variables (Laínez, Guillén-Gosálbez, Badell, Espuña, & Puigjaner, 2007), and those where financial aspects are considered as known parameters used in constraints and in the objective function (see Melo, Nickel, & Saldanha Da Gama, 2006). In this work financial statement analysis is incorporated into an SCN design and operation model under demand uncertainty.
2. Mathematical formulation
2.1. Problem description
The proposed model considers the design of a multiproduct, four-echelon SCN. The SCN decisions to be determined by the proposed model are the following:
• The number, location and capacity of warehouses/distribution centers to be set up
• The transportation links that need to be established in the network
• The flows of materials in the network
• The production rates at plants
• The inventory levels at each warehouse/distribution center
The objective is to maximize the company's shareholder value, taking into account several design, operating, and financial constraints.
2.2. Mathematical model
The above problem is formulated as an MILP problem. The model is dynamic in terms of time-varying uncertain demands. Uncertainty is modeled by postulating a number of scenarios s = 1, ..., NS, each with a potentially different set of piecewise constant demand functions. The proposed model handles each of these scenarios by multiplying it with its probability of occurrence ψ_s. The sum of these probabilities equals unity. The objective of the MILP problem is to maximize, over the planning period, a financial figure that expresses the company's net created value. This figure is the Economic Value Added (EVA™), which is formulated as follows:

$$\max \sum_t \left( NOPAT_t - WACC_t \cdot IC_t \right) \tag{1}$$

The MILP model has the typical SCN constraints for: (1) network structure; (2) logical transportation flows; (3) material balances; (4) production resources; (5) capacities in warehouses and distribution centers; and (6) safety stocks. A detailed explanation of these constraints is provided in the work of Longinidis and Georgiadis (2010). Constraints (2) to (12) formulate the income statement and constraints (13) to (22) formulate the balance sheet of the company and the interactions between them.

$$NTS_t = \sum_{s=1}^{NS} \psi_s \Big( \sum_{i,p} PR_{p} \cdot DM^{[s]}_{i,p,t} \Big), \quad \forall t \tag{2}$$
$$COGS_t = PC_t + TC_t + HC_t + SC_t, \quad \forall t \tag{3}$$
$$PC_t = \sum_{s=1}^{NS} \psi_s \Big( \sum_{i,p} C^{P}_{i,p} \cdot P^{[s]}_{i,p,t} \Big), \quad \forall t \tag{4}$$
$$TC_t = \sum_{s=1}^{NS} \psi_s \Big( \sum_{i,j,p} C^{TR}_{i,j} Q^{[s]}_{i,j,p,t} + \sum_{j,k,p} C^{TR}_{j,k} Q^{[s]}_{j,k,p,t} + \sum_{k,l,p} C^{TR}_{k,l} Q^{[s]}_{k,l,p,t} \Big), \quad \forall t \tag{5}$$
$$HC_t = \sum_{s=1}^{NS} \psi_s \Big( \sum_{j,p} C^{WH}_{j,p} \big( \textstyle\sum_k Q^{[s]}_{j,k,p,t} \big) + \sum_{k,p} C^{DC}_{k,p} \big( \textstyle\sum_l Q^{[s]}_{k,l,p,t} \big) \Big), \quad \forall t \tag{6}$$
$$SC_t = \sum_{s=1}^{NS} \psi_s \Big( \sum_{i,p} C^{I}_{i,p} \tfrac{I^{[s]}_{i,p,t}+I^{[s]}_{i,p,t-1}}{2} + \sum_{j,p} C^{I}_{j,p} \tfrac{I^{[s]}_{j,p,t}+I^{[s]}_{j,p,t-1}}{2} + \sum_{k,p} C^{I}_{k,p} \tfrac{I^{[s]}_{k,p,t}+I^{[s]}_{k,p,t-1}}{2} \Big), \quad \forall t \tag{7}$$
$$DPR_t = DR_t \cdot FA_t, \quad \forall t \tag{8}$$
$$EBIT_t = NTS_t - COGS_t - DPR_t, \quad \forall t \tag{9}$$
$$IP_t = LTR_t \cdot LTL_t + STR_t \cdot STL_t, \quad \forall t \tag{10}$$
$$TI_t = EBIT_t - IP_t, \quad \forall t \tag{11}$$
$$NOPAT_t = (1 - TR_t) \cdot TI_t, \quad \forall t \tag{12}$$
$$NE_t = NOPAT_t, \quad \forall t \tag{13}$$
$$NC_t = CFP_t \cdot NOPAT_t, \quad \forall t \tag{14}$$
$$NAR_t = (1 - CFP_t) \cdot NOPAT_t, \quad \forall t \tag{15}$$
$$FA_t + CA_t = E_t + STL_t + LTL_t, \quad \forall t \tag{16}$$
$$CA_t = C_t + AR_t + INR_t, \quad \forall t \tag{17}$$
$$FA_t = FA_{t-1} + FAI_t, \quad \forall t \tag{18}$$
$$FAI_t = \sum_j C^{W}_j \cdot PW_j + \sum_k C^{DC}_k \cdot PDC_k, \quad \forall t \tag{19}$$
$$INR_t = \sum_{s=1}^{NS} \psi_s \Big( \sum_{i,j,k,p} C_p \big( I^{[s]}_{i,p,t} + I^{[s]}_{j,p,t} + I^{[s]}_{k,p,t} \big) \Big), \quad \forall t \tag{20}$$
$$E_t = E_{t-1} + NE_t + NSI_t, \quad \forall t \tag{21}$$
$$IC_t = E_t + STL_t + LTL_t, \quad \forall t \tag{22}$$
A detailed explanation of all the previous constraints is provided in the work of Longinidis and Georgiadis (2010). We then formulate the financial ratios. These ratios are grouped in categories according to their economic role:
• Liquidity ratios, which measure the ability of the company to pay its bills over the short run without undue stress: the current ratio (CUR), the quick ratio (QR), and the cash ratio (CR), defined by constraints (23), (24), and (25), respectively.
$$CUR_t \le \frac{CA_t}{STL_t}, \quad \forall t \tag{23}$$
$$QR_t \le \frac{CA_t - INR_t}{STL_t}, \quad \forall t \tag{24}$$
$$CR_t \le \frac{C_t}{STL_t}, \quad \forall t \tag{25}$$
• Assets management ratios, which measure how efficiently or intensively a firm uses its assets to generate sales: the fixed assets turnover (FATR) and the receivables turnover (RTR), expressed by constraints (26) and (27), respectively.
$$FATR_t \le \frac{NTS_t}{FA_t}, \quad \forall t \tag{26}$$
$$RTR_t \le \frac{NTS_t}{AR_t}, \quad \forall t \tag{27}$$
• Solvency ratios, which measure the firm's long-run ability to meet its obligations: the total debt ratio (TDR), the debt-equity ratio (DER), the long-term debt ratio (LTDR), and the cash coverage ratio (CCR), expressed by constraints (28), (29), (30), and (31), respectively.
$$TDR_t \ge \frac{STL_t + LTL_t}{FA_t + CA_t}, \quad \forall t \tag{28}$$
$$DER_t \ge \frac{STL_t + LTL_t}{E_t}, \quad \forall t \tag{29}$$
$$LTDR_t \ge \frac{LTL_t}{LTL_t + E_t}, \quad \forall t \tag{30}$$
$$CCR_t \le \frac{EBIT_t + DPR_t}{IP_t}, \quad \forall t \tag{31}$$
• Profitability ratios, which measure how efficiently the firm uses its assets and how efficiently it manages its operations: the profit margin (PMR), the return on assets (ROAR), and the return on equity (ROER), defined by constraints (32) to (34).
$$PMR_t \le \frac{NOPAT_t}{NTS_t}, \quad \forall t \tag{32}$$
$$ROAR_t \le \frac{NOPAT_t}{FA_t + CA_t}, \quad \forall t \tag{33}$$
$$ROER_t \le \frac{NOPAT_t}{E_t}, \quad \forall t \tag{34}$$
For each one of the previous ratios, appropriate upper or lower bounds are imposed.
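A minimal sketch of how such ratio constraints act as screens on a candidate balance sheet is given below; the balance-sheet figures and the bound values are invented for illustration.

```python
# Check a candidate solution's financial ratios against imposed bounds,
# mirroring constraints (23)-(34); all numbers are illustrative assumptions.
balance = {"CA": 450_000.0, "C": 90_000.0, "INR": 160_000.0,
           "STL": 200_000.0, "LTL": 300_000.0, "E": 500_000.0, "FA": 550_000.0}

ratios = {
    "CUR": balance["CA"] / balance["STL"],                       # (23)
    "QR": (balance["CA"] - balance["INR"]) / balance["STL"],     # (24)
    "CR": balance["C"] / balance["STL"],                         # (25)
    "TDR": (balance["STL"] + balance["LTL"])
           / (balance["FA"] + balance["CA"]),                    # (28)
}
# Lower bounds for liquidity ratios, upper bound for the debt ratio (assumed)
bounds = {"CUR": (">=", 1.5), "QR": (">=", 1.0), "CR": (">=", 0.3),
          "TDR": ("<=", 0.6)}

for name, value in ratios.items():
    sense, b = bounds[name]
    ok = value >= b if sense == ">=" else value <= b
    print(f"{name} = {value:.2f} ({sense} {b}): {'ok' if ok else 'violated'}")
```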
3. Case study
The applicability of the proposed model is illustrated through a real case study from a UK multinational company. The planning horizon of interest comprises four one-year time periods. The company has three plants in three different European countries and customers in eight different European countries. A set of candidate warehouses and distribution centers is considered for establishment in four and six European countries, respectively, in order to service the whole market.
3.1. Results
The proposed financial model (FM) and an analogous model (NFM) that ignores the financial ratio constraints were solved using the ILOG CPLEX 11.2.0 solver incorporated in the GAMS 22.9 software. Both models have the same objective function (EVA maximization), but the NFM ignores constraints (23) to (34). The optimal network structure is presented in Figure 1 for the FM and in Figure 2 for the NFM. The total created shareholder value is 1,756,627 relative money units (rmu) for the FM, whereas it is 534,988 rmu for the NFM. The superiority of the FM over the NFM is illustrated by the optimal values of the financial ratios. In liquidity ratios the FM performs considerably better than the NFM in all individual ratios. In assets management ratios the NFM performs slightly better than the FM. In solvency ratios, the main drawback of the NFM is evident: the NFM decides to finance its operations with external funds from the capital markets (almost 90% of its IC) and substantially increases its liabilities and its paid interest. In contrast, the FM, due to the financial constraints imposed on its financial operation, takes into account the cost of high liabilities, and only 10% of its IC comes from the capital markets. The tradeoff between the models lies in ROE, which is extremely high for the NFM in contrast to the FM's. Although ROE is a popular ratio for investment decisions, a holistic evaluation of the company's financial status is required for effective financial decisions.
4. Conclusion
This work presents a model that integrates financial statement analysis and product demand uncertainty in the optimal design and operation of SCNs. The modeling of financial statements enables SC managers to take holistic decisions without underestimating the basic objective of a for-profit company, which is the creation of value for the shareholder. This objective dictates a satisfactory financial status in order to guarantee new funds from shareholders and financial institutions that will allow the continuous and uninterrupted financing of the company's operations. The proposed model could be extended by introducing more detailed financial engineering aspects.
Figure 1. Optimal network configuration for the FM.
Figure 2. Optimal network configuration for the NFM.
References
Guillén, G., Badell, M., Espuña, A., & Puigjaner, L. 2006. Simultaneous optimization of process operations and financial decisions to enhance the integrated planning/scheduling of chemical supply chains. Computers & Chemical Engineering, 30(3): 421-436.
Laínez, J. M., Guillén-Gosálbez, G., Badell, M., Espuña, A., & Puigjaner, L. 2007. Enhancing corporate value in the optimal design of chemical supply chains. Industrial & Engineering Chemistry Research, 46(23): 7739-7757.
Longinidis, P., & Georgiadis, M. C. 2010. Integration of financial statement analysis in the optimal design of supply chain networks under demand uncertainty. International Journal of Production Economics, Corrected Proof, doi:10.1016/j.ijpe.2010.10.018.
Melo, M. T., Nickel, S., & Saldanha Da Gama, F. S. 2006. Dynamic multi-commodity capacitated facility location: A mathematical modeling framework for strategic supply chain planning. Computers & Operations Research, 33(1): 181-208.
Shapiro, J. F. 2004. Challenges of strategic supply chain planning and modeling. Computers & Chemical Engineering, 28(6-7): 855-861.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Integrated production planning and scheduling optimization of multi-site, multi-product process industry
Nikisha K. Shah, Marianthi G. Ierapetritou
Department of Chemical and Biochemical Engineering, Rutgers University, 98 Brett Road, Piscataway, New Jersey 08854, USA
Abstract
The current manufacturing environment has changed from traditional single-site plants to multi-site plants serving a global market. In this paper, the integrated planning and scheduling problem for multi-site batch plants is considered and solved using the augmented Lagrangian method. To resolve the issue of non-separable cross-product terms in the augmented Lagrangian relaxation, we apply a diagonal approximation method. Results show that the proposed method yields significant computational time savings.
Keywords: Planning and scheduling integration, Multi-site production, Continuous time, Decomposition method, Augmented Lagrangian relaxation
1. Introduction
Today's industrial enterprises operate as large integrated complexes that involve multi-product, multi-purpose, and multi-site production facilities serving a global market. Traditional process supply chain management planning decisions can be divided into three levels: strategic (long-term), tactical (medium-term), and operational (short-term). Long-term planning determines the infrastructure (e.g. facility location, transportation network). Medium-term planning covers a time horizon of between a few months and a year and is concerned with decisions such as production, inventory, and distribution. Finally, short-term planning deals with determining the detailed schedule and typically covers a time horizon of between days and a few weeks. Overviews of the advances and challenges associated with process industry supply chains are given in [1, 2]. The general multi-site production and distribution complex is shown in Figure 1, where individual process plants located in different geographical regions may produce multiple products and then transport these products to the distribution centers. These multi-site plants produce a number of products driven by market demand under operating conditions such as sequence-dependent switchovers and resource constraints. Demand forecasts of each product in each market are specified over the planning time horizon. Each plant within the network may have different production capacities and costs, different product recipes, and different transportation costs according to its location. To maintain economic competitiveness in a global market, interdependences between the different plants, including intermediate products and shared resources, need to be taken into account when making planning decisions. The simultaneous planning of multi-site facilities has been studied in [3-6]. Furthermore, since there is a significant overlap between the different decision levels, it is necessary to integrate planning and scheduling problems to achieve global
optimal solutions for the supply chain, and [7-10] have addressed the integration of planning and scheduling decision levels for single-site plants. Wassick [6] proposed a planning and scheduling model based on a resource task network for an integrated chemical complex and examined enterprise-wide optimization of a liquid waste treatment network with this model. The production planning and scheduling levels deal with different time scales and multiple production sites; thus the major challenge of integrating the planning and scheduling problems lies in addressing the resulting large-scale optimization model. When a typical planning horizon is considered, the integrated problem becomes intractable and mathematical decomposition solution approaches are necessary. In this work, we apply the augmented Lagrangian relaxation method to solve the multi-site production facility problem. The paper is organized as follows: the problem formulation is presented in section 2, the general augmented Lagrangian method and its application to the multi-site facility problem are given in section 3, results of an example are shown in section 4, and section 5 concludes with future work.
2. Multi-product Multi-site Model Formulation
The scheduling model for each batch facility is based on a continuous-time representation and the notion of event points, while the planning model is based on discrete time, where the planning horizon is decomposed into fixed-length periods. The two decision-level problems are inter-connected via production and inventory targets and inter-site material flows. To obtain a full-scale planning and scheduling model, these two decision-level models are integrated into one MILP model. We assume that each site contains only one production plant, that there are no shipping delays in the network, and that the length of the planning horizon is such that the effects of transportation delays can be neglected. The full-space formulation is built upon the model proposed by [9] for a single site. The multi-site model is as follows:

$$\min \sum_t \sum_p \sum_s h^{p}_{s}\, Inv^{t,p}_{s} + \sum_t \sum_m \sum_s u^{m}_{s}\, U^{t,m}_{s} + \sum_t \sum_m \sum_p \sum_s ds^{m,p}_{s}\, D^{t,m,p}_{s} + \sum_i \sum_j \sum_n \left( FixCost_i\, w_{ijn} + VarCost_i\, b_{ijn} \right) \tag{1a}$$

s.t.
$$Inv^{t,p}_{s} = Inv^{t-1,p}_{s} + P^{t,p}_{s} - \sum_m D^{t,m,p}_{s}, \qquad s \in S_f,\; p \in PS,\; t \tag{1b}$$
$$U^{t,m}_{s} = U^{t-1,m}_{s} + Dem^{t,m}_{s} - \sum_p D^{t,m,p}_{s}, \qquad s \in S_f,\; m \in M,\; t \tag{1c}$$
$$st^{t,p}_{s,n=N} - stin^{t,p}_{s} = P^{t,p}_{s}, \qquad s \in S_f,\; p \in PS,\; t \tag{1d}$$
$$stin^{t,p}_{s} = Inv^{t-1,p}_{s}, \qquad s \in S_f,\; p \in PS,\; t \tag{1e}$$

In addition to constraints (1a)-(1e), the model also includes detailed scheduling constraints for each production site (s ∈ S_f) and planning period (t ∈ T). These scheduling-level constraints are allocation, material balance, capacity, and sequence constraints, as presented in detail in [9]. The objective function (1a) of the integrated model includes the inventory cost, backorder cost, transportation cost, and fixed and variable production costs for all planning periods and sites. The model can be divided into planning and scheduling constraints, where, for each planning horizon time period, the detailed scheduling constraints of each production site are incorporated into the full-space model.
If we denote the planning decision variables ($Inv^{t,p}_{s}$, $P^{t,p}_{s}$, $D^{t,m,p}_{s}$, $U^{t,m}_{s}$) as X and the scheduling decision variables as Y, then the structure of the integrated model can be illustrated by the constraint matrix shown in Figure 2, where the top part of the matrix corresponds to the planning decision variables and the lower part is composed of the scheduling decision variables. The matrix has a block-angular structure, and these blocks are linked through the planning decision variables ($Inv^{t,p}_{s}$, $P^{t,p}_{s}$). These complicating variables can be handled using augmented Lagrangian relaxation to obtain a decomposable structure.
Figure 1. Multi-site production and distribution network
Figure 2. Constraint matrix structure of an integrated multi-site model.
3. Augmented Lagrangian Decomposition
Before we apply the augmented Lagrangian relaxation, the integrated planning and scheduling model is first reformulated by introducing duplicate variables ($II^{t,p}_{s}$, $PP^{t,p}_{s}$) and coupling constraints (2a)-(2b), and the scheduling constraints (1d)-(1e) are rewritten as (1d')-(1e'):

$$II^{t,p}_{s} = Inv^{t,p}_{s}, \qquad s \in S_f,\; p,\; t \tag{2a}$$
$$PP^{t,p}_{s} = P^{t,p}_{s}, \qquad s \in S_f,\; p,\; t \tag{2b}$$
$$st^{t,p}_{s,n=N} - stin^{t,p}_{s} = PP^{t,p}_{s}, \qquad s \in S_f,\; p \in PS,\; t \tag{1d'}$$
$$stin^{t,p}_{s} = II^{t-1,p}_{s}, \qquad s \in S_f,\; p \in PS,\; t \tag{1e'}$$

Thus, the reformulated integrated model includes equations (1a)-(1c), (1d')-(1e'), (2a)-(2b), and the detailed scheduling constraints. The constraint matrix of this reformulated model has the complicating equations (2a)-(2b) instead of complicating variables, and a decomposable structure is obtained by relaxing these two constraints. We apply the augmented Lagrangian method by dualizing the coupling constraints and adding them to the objective function, as shown in equation (3), where λ, μ are the Lagrangian multipliers and σ is the quadratic penalty parameter:

$$f(\lambda,\mu,\sigma) = \min \sum_t \sum_p \sum_s h^{p}_{s}\, Inv^{t,p}_{s} + \sum_t \sum_m \sum_s u^{m}_{s}\, U^{t,m}_{s} + \sum_t \sum_m \sum_p \sum_s ds^{m,p}_{s}\, D^{t,m,p}_{s} + \sum_i \sum_j \sum_n \left( FixCost_i\, w_{ijn} + VarCost_i\, b_{ijn} \right) + \sum_t \sum_p \sum_{s \in S_f} \lambda^{t,p}_{s} \left( P^{t,p}_{s} - PP^{t,p}_{s} \right) + \sum_t \sum_p \sum_{s \in S_f} \mu^{t,p}_{s} \left( Inv^{t,p}_{s} - II^{t,p}_{s} \right) + \sum_t \sum_p \sum_{s \in S_f} \sigma \left\{ \left( P^{t,p}_{s} - PP^{t,p}_{s} \right)^2 + \left( Inv^{t,p}_{s} - II^{t,p}_{s} \right)^2 \right\} \tag{3}$$

The quadratic penalty term in the objective function of the relaxation problem contains the non-separable cross-product terms $P^{t,p}_{s} PP^{t,p}_{s}$ and $Inv^{t,p}_{s} II^{t,p}_{s}$. To resolve the non-separability issue, we apply the diagonal quadratic approximation (DQA) method and linearize the cross-product terms around the tentative solution $(\bar{P}^{t,p}_{s}, \bar{PP}^{t,p}_{s}, \bar{Inv}^{t,p}_{s}, \bar{II}^{t,p}_{s})$, so that the objective function (3) can be rewritten in the decomposable form given by equation (3'):

$$f(\lambda,\mu,\sigma) = f_{pl} + \sum_t \sum_p f^{t,p}_{sc} \tag{3'}$$

where $f_{pl}$ represents the objective function of the planning problem (4a, 1b, 1c) and $f^{t,p}_{sc}$ represents the objective function of the scheduling subproblem (4b, 1d'-1e', and the allocation, material balance, capacity, and sequence constraints).

$$f_{pl}(\lambda,\mu,\sigma) = \min \sum_t \sum_p \sum_s h^{p}_{s}\, Inv^{t,p}_{s} + \sum_t \sum_m \sum_s u^{m}_{s}\, U^{t,m}_{s} + \sum_t \sum_m \sum_p \sum_s ds^{m,p}_{s}\, D^{t,m,p}_{s} + \sum_t \sum_p \sum_{s \in S_f} \lambda^{t,p}_{s}\, P^{t,p}_{s} + \sum_t \sum_p \sum_{s \in S_f} \mu^{t,p}_{s}\, Inv^{t,p}_{s} + \sum_t \sum_p \sum_{s \in S_f} \sigma \left\{ \left( P^{t,p}_{s} - \bar{PP}^{t,p}_{s} \right)^2 + \left( Inv^{t,p}_{s} - \bar{II}^{t,p}_{s} \right)^2 \right\} \tag{4a}$$

$$f^{t,p}_{sc}(\lambda,\mu,\sigma) = \min \sum_{s \in S_f} \sum_i \sum_j \sum_n \left( FixCost_i\, w_{ijn} + VarCost_i\, b_{ijn} \right) - \sum_{s \in S_f} \lambda^{t,p}_{s}\, PP^{t,p}_{s} - \sum_{s \in S_f} \mu^{t,p}_{s}\, II^{t,p}_{s} + \sum_{s \in S_f} \sigma \left\{ \left( \bar{P}^{t,p}_{s} - PP^{t,p}_{s} \right)^2 + \left( \bar{Inv}^{t,p}_{s} - II^{t,p}_{s} \right)^2 \right\} \tag{4b}$$

These quadratic problems are solved using the general augmented Lagrangian optimization and diagonal quadratic approximation (ALO-DQA) algorithm given in [11].
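A minimal sketch of the multiplier and penalty updates in such an augmented Lagrangian loop is given below; the update rule (first-order multiplier step, geometric penalty growth) is the standard scheme, and solve_planning/solve_scheduling are hypothetical stand-ins for the (4a) and (4b) subproblem solvers, so the tolerances and factors are assumptions rather than the settings of [11].

```python
import numpy as np

def alo_dqa(solve_planning, solve_scheduling, n_links: int,
            sigma: float = 1.0, growth: float = 2.0,
            tol: float = 1e-3, max_iter: int = 50):
    """Generic augmented Lagrangian loop over the coupling constraints
    P - PP = 0 and Inv - II = 0 (one entry per linked site/product/period)."""
    lam = np.zeros(n_links)   # multipliers for P - PP
    mu = np.zeros(n_links)    # multipliers for Inv - II
    for k in range(max_iter):
        # Solve the two subproblems around the current tentative solution
        P, Inv = solve_planning(lam, mu, sigma)       # planning block (4a)
        PP, II = solve_scheduling(lam, mu, sigma)     # scheduling blocks (4b)
        g = np.concatenate([P - PP, Inv - II])        # coupling residual
        if np.linalg.norm(g) <= tol:                  # ||g|| -> 0 => feasible
            return lam, mu, k
        # First-order multiplier update and penalty increase
        lam += 2.0 * sigma * (P - PP)
        mu += 2.0 * sigma * (Inv - II)
        sigma *= growth
    return lam, mu, max_iter
```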
4. Numerical Example
We studied a small example that has 3 production sites serving 3 markets. Each production site contains a batch process plant [12] that produces two products, P1 and P2. The continuous-time scheduling problem is solved for each planning period using 6 event points and an 8-hour time horizon. The problems are solved using GAMS, CPLEX 12.1.0, and a 3.19 GHz CPU, 2 GB RAM Dual Core machine. Model statistics of the example are shown in Table 1. The results for the limited product storage policy are shown in Table 2.

Table 1. Model statistics of full-space integrated problem.

Number of Periods (T)   Binary variables   Continuous variables   Constraints
5                       720                4636                   10675
10                      1440               9271                   21355
15                      2160               13906                  32035
30                      4320               27811                  64075
45                      6480               41716                  96115
90                      12960              83431                  192235
From the results in Table 2, it can be observed that the augmented Lagrangian algorithm converges to a feasible solution, as the norm of the error (||g||) converges to zero. Furthermore, a feasible solution is reached significantly faster using the ALO-DQA method than with the full-space model. The quality of the feasible solution (f*) obtained using the ALO-DQA method may be inferior to that of the full-space model, since the ALO-DQA strategy solves an approximate version of the relaxation problem.

Table 2. Computational results.

        Full-space model                   ALO-DQA method
T       CPU sec   f*          Gap (%)     Iter k   CPU sec   f*          λg + σ||g||²   ||g||
5       3600      127105.9    8.62        15       192       130644      84.6           0.89
10      3600      265188.7    10.23       16       291       270003.2    69.98          0.75
15      3600      405232.6    11.59       16       611       406427.3    264.1          0.97
30      7200      750699.1    10.86       16       1282      760225.6    232.8          0.97
45      7200      1154225.4   11.47       18       2219      1162049.6   212.21         0.89
90      7200      2398293     12.54       20       3750      2383921     680.3          0.97
5. Future Work
This work addresses the problem of integrated planning and scheduling for multi-site, multi-product and multi-purpose batch plants using the augmented Lagrangian method. The connectivity between different sites is only addressed through the interactions with the markets. Current work focuses on considering the interactions and interdependences between different production sites, including intermediate products and shared resources, and on taking into account the uncertainty related to product demands and prices.
6. Acknowledgements The authors gratefully acknowledge financial support from the National Science Foundation under Grants CBET 0966861 and GAANN.
References
[1] L. G. Papageorgiou, Computers & Chemical Engineering, 33, 12 (2009) 1931-1938.
[2] N. Shah, Process industry supply chains: Advances and challenges, Computers & Chemical Engineering, 29, 6 (2005) 1225-1236.
[3] S. J. Wilkinson, A. Cortier, N. Shah, and C. C. Pantelides, Computers & Chemical Engineering, 20, Supplement 2 (1996) S1275-S1280.
[4] Sukoyo, S. Matsuoka, and M. Muraki, Journal of the Japan Petroleum Institute, 47, 5 (2004) 318-325.
[5] J. R. Jackson and I. E. Grossmann, Industrial & Engineering Chemistry Research, 42, 13 (2003) 3045-3055.
[6] J. M. Wassick, Computers & Chemical Engineering, 33, 12 (2009) 1950-1963.
[7] C. T. Maravelias and C. Sung, Computers & Chemical Engineering, 33, 12 (2009) 1919.
[8] P. M. Verderame and C. A. Floudas, Industrial & Engineering Chemistry Research, 47, 14 (2008) 4845-4860.
[9] C. Sung and C. T. Maravelias, American Institute of Chemical Engineers Journal, 53, 5 (2007) 1298-1315.
[10] Z. Li and M. G. Ierapetritou, Computers & Chemical Engineering, 34, 6 (2010) 996-1006.
[11] Y. Li, Z. Lu, and J. J. Michalek, Journal of Mechanical Design, 130, 5 (2008) 051402-11.
[12] E. Kondili, C. C. Pantelides, and R. W. H. Sargent, Computers & Chemical Engineering, 17, 2 (1993) 211-227.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Simulation-based reactive scheduling in tomato processing plant with raw material uncertainty
Alexandros Koulouris, Ioanna Kotelida
Alexander Technological Educational Institute of Thessaloniki, 57400 Sindos, Greece
Abstract
Production scheduling in a real plant that processes tomatoes into various types of paste under raw material supply uncertainty is studied in this paper. Using simulation-based, finite-capacity scheduling software, a model of the production process is developed and used to generate feasible production plans that satisfy the tomato inventory constraint under an assumed supply profile. During actual production, real data on tomato supply are fed daily into the schedule; the plan is reassessed for feasibility and, when necessary, updated to satisfy the constraints imposed by the inventory and the limited shelf-life of the raw material. In this way, the production planner has a continuously updated view of the overall production plan and can make well-informed and timely decisions in order to meet the production objectives.
Keywords: supply uncertainty, reactive scheduling, tomato processing.
1. Introduction
When it comes to planning and scheduling production, food processing plants using seasonally available crops as raw materials face a series of unique problems: (1) the quality and quantity of raw materials available every day are uncertain, (2) the harvest season is time-limited, and (3) the raw materials have a limited shelf-life. These problems, along with all other sources of uncertainty (e.g. equipment breakdown, variable processing times, fluctuating demand, etc.) common to all production systems, pose serious challenges for demand and supply management, with no room for failure. A plant working under raw material uncertainty should be able both to plan for the long term and to react flexibly to near-term change by dynamically re-sequencing production campaigns based on day-to-day changes in supply. Reactive scheduling has been proposed in the literature as a way to address uncertainty in scheduling. Extensive reviews of the methods proposed for scheduling under uncertainty have been compiled by Li and Ierapetritou (2008) and Verderame et al. (2010). For food processes, the suggested approaches range from formulating scheduling as an optimization problem (e.g., Jensson, 1988) to implementing empirical dispatching rules (e.g., Parthanadee and Buddhakulsomsiri, 2010). In this paper, production scheduling in a real plant that processes tomatoes into various types of tomato paste is analyzed. The complexity introduced by the uncertain daily supply of tomatoes during the approximately 75-day harvest season makes the formulation and solution of production scheduling as an optimization problem
seem like an impossible task. A simulation-based approach is proposed as a way to cope with daily uncertainties without sacrificing the ability to plan for the long term.
2. Problem statement
2.1. Problem setup
A mid-size tomato processing plant in Central Greece is considered in this study. Tomato harvesting in the fields in the vicinity of the plant lasts from the end of July until mid-October (70-80 days each year). Tomato supply peaks around mid-to-late August at about 2000 tons per day. Despite any efforts to smooth the harvesting rate in the fields, the daily supply of tomatoes to the plant remains a highly uncertain variable, subject to weather and field conditions. The plant produces two families of pastes, "Cold Break" (CB) and "Hot Break" (HB), which differ in the conditions used for breaking the tomato. By varying the degree of solid content and the type of packaging containers, the plant produces in total 11 different products. For each product, the target amount produced each year can be calculated from customer orders and demand estimates and is assumed deterministic. From simple material balancing, the product amounts can easily be converted to raw material requirements for the entire seasonal campaign. Note that this conversion factor differs for each product depending on its final solid content.
2.2. Process description
Production of tomato paste includes the following steps: washing, sorting and breaking (crushing) of the tomatoes, preheating, finishing and evaporation of the juice, and pasteurization and aseptic packaging of the paste. The evaporation step is typically the critical step from both a processing and a scheduling point of view. Depending on the desired solid content, residence time in the evaporators can range from 2 to 4 hours, making the evaporation step the process bottleneck. The plant includes 4 evaporators that differ in capacity (ranging from 300 to 600 tons/day) as well as in the achievable solid content; therefore, each evaporator can process only certain products. When all four evaporators are used, the plant capacity is 1500 tons of tomato per day. For the highest-concentration products (which also happen to be those in highest demand), a preconcentration step is required, which means that three of the four evaporators must be engaged in their production. This results in a drop of plant capacity to about 1300 tons/day when high solid content paste is produced.
3. Method and approach
3.1. Process simulation
With the help of the finite-capacity scheduling tool SchedulePro (by Intelligen, Inc.), a model has been developed to simulate the production process of the actual plant. Unlike other general-purpose scheduling tools, SchedulePro is oriented towards the process industries, using a recipe-based 'language' that is familiar to process engineers. Since the intention was to represent the production process and not to make extensive use of automated scheduling, the process representation capabilities and the availability of
manual scheduling functionality were the only (but adequate) criteria for selecting SchedulePro for this study, even though no extensive search or comparison between tools was undertaken. The development of a process model in SchedulePro includes the declaration of:
- facility resources (mainly equipment) and their capacities,
- the recipe (sequence of process steps) for each product, and,
- production campaigns with target amounts for each product.
Actual production data from the years 2007 and 2008 were used to develop and fine-tune the model. The scheduling tool offers a wide range of possibilities for generating a process schedule: from fully automatic, with no restrictions on campaign release dates, to fully manual. By reproducing through the model the product campaigns (with actual release dates) executed in the plant in 2007 and 2008, it was possible to confirm the accuracy of the modeled recipe and facility information by comparing the actual with the simulated results. Even though the discussion in this paper is limited to the evaporators, all process steps have been included in the model to ascertain that the developed schedules are feasible with respect to all resources used.
3.2. Production scheduling
Having an accurate representation of the production process, the question is how it can be used to schedule future seasonal production for given demand in the face of daily feed uncertainty. For 2008, the requirements for tomato feed amounted to approximately 60000 tons. Based on historical data, a log-normal distribution of the daily tomato supply rate was developed, as shown by the vertical bars in Figure 1. The parameters of the log-normal equation were fitted to match the peak demand and the overall seasonal tomato supply. With the help of this hypothetical feed profile, a production schedule can be developed for the entire production horizon. Development of the schedule involves the simultaneous solution of the 'batching' problem (i.e., the number and size of product campaigns) and the 'sequencing' problem (i.e., how to order these campaigns). For a given daily tomato feed, an optimal schedule could be obtained by minimizing some norm of the daily tomato inventory, in an effort to minimize the residence time of tomatoes in storage and thus avoid their spoilage. This is under investigation and is not part of this work, which focuses solely on the effect of uncertainty. The initial production plan (shown in the form of a Gantt chart in Figure 2) was based on manual campaign ordering exploiting simple empirical rules (such as: products should be ordered from higher to lower solid content to avoid long evaporator dead times) and automated scheduling that satisfies equipment utilization and tomato inventory constraints.
Figure 1. Daily supply rates and inventory profile for initial production schedule.
Figure 2. Gantt chart of evaporator occupancy for initial production plan.
The long bars in Figure 2 represent the production of the high-demand, high solid content pastes that require the use of three evaporators. The demand for tomatoes is also the highest for these pastes, which is why their production is scheduled around peak harvest time. However, the tomato supply during that period exceeds the demand and, as a consequence, the inventory peaks (Figure 1). Still, the plan is feasible and constitutes a good starting point for scheduling the upcoming production. Currently, the planner's view on production under supply uncertainty is rather short-sighted: the decision on what products to produce and in what quantities is taken every two or three days based on the availability of tomatoes and what has already been produced. An approach that offers a more global perspective of the entire production season is lacking, and this is what the production simulation model is expected to accomplish. As real data on tomato supply from the fields become available daily, the plan can be reassessed for feasibility and, when necessary, updated to reflect the current situation. Figure 3 shows the updated production plan 20 days into the harvest season. A series of decisions were made during that period based on the actual supply data: overall production started earlier, since the initial supply was greater than anticipated, and certain campaigns were shifted earlier in time. At the 20th-day mark (indicated by the red vertical line in Figure 3), campaigns already executed or started have been frozen (indicated by the faded color pattern) and only the remaining campaigns are considered updatable. The updated tomato inventory profile shown in Figure 4 combines the actual supply data up to day 20 with future predictions based on the assumed distribution. Despite the earlier start of the harvest season, the feed amount acquired so far is less than expected and, as a result, the projected inventory turns negative towards the end of the season.
Figure 3. Gantt chart of evaporator occupancy for updated production plan after 20 days.
Figure 4. Daily supply rates and inventory profile for updated production plan after 20 days.
Even though it might still be early for decisive actions, this is a warning that such actions might be needed in the future if the trend persists. In the absence of fresh tomatoes, for example, the plant might be forced to produce by re-concentrating and re-packaging pre-processed tomato juice. With the use of the simulated production environment, the production planner has a continuously updated view of the overall production plan for the entire harvest horizon and can make well-informed and timely decisions in order to meet the production objectives.
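A minimal sketch of the inventory projection that drives these reassessments is given below; this is our illustration rather than the plant's actual software, and the log-normal shape parameters, the 75-day season and the 60000-ton total are assumptions taken from the numbers quoted above.

import numpy as np
from scipy.stats import lognorm

SEASON_DAYS, SEASON_TOTAL = 75, 60000.0     # days, tons (illustrative)

def lognormal_forecast(first_day, shape=0.5, scale=35.0):
    """Daily supply forecast shaped like a log-normal curve and scaled
    so that the whole season sums to SEASON_TOTAL."""
    days = np.arange(SEASON_DAYS)
    pdf = lognorm.pdf(days + 0.5, shape, scale=scale)
    daily = SEASON_TOTAL * pdf / pdf.sum()
    return daily[first_day:]

def projected_inventory(actual_supply, planned_consumption):
    """Splice actual supply (elapsed days) onto the forecast for the
    remaining days and return the running tomato inventory."""
    supply = np.concatenate([actual_supply,
                             lognormal_forecast(len(actual_supply))])
    return np.cumsum(supply - planned_consumption[:len(supply)])

# Days where the projected inventory is negative flag an infeasible plan:
# infeasible_days = np.where(projected_inventory(actuals, plan) < 0)[0]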
5. Conclusions
When faced with supply fluctuations, food-processing plants must adopt a flexible production scheduling strategy without losing sight of the long-term objectives. When the complexity of the uncertainty proves too difficult for optimization methods to handle, simulation-based, finite-capacity scheduling tools that allow both automated and manual schedule updating can provide a framework upon which feasible production plans can be developed for both the short and the long term.
References
P. Jensson, 1988, Daily production planning in fish processing firms, European Journal of Operational Research, 36, 410-415.
Z. Li and M. Ierapetritou, 2008, Process scheduling under uncertainty: review and challenges, Computers & Chemical Engineering, 32 (4-5), 715-727.
P. Parthanadee and J. Buddhakulsomsiri, 2010, Simulation modeling and analysis of production scheduling using real-time dispatching rules: a case study in canned food industry, Computers and Electronics in Agriculture, 70, 245-255.
P. M. Verderame, J. A. Elia, J. Li and C. A. Floudas, 2010, Planning and Scheduling under Uncertainty: A Review Across Multiple Sectors, Ind. Eng. Chem. Res., 49, 3993-4017.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Scenario-Based Strategic Supply Chain Design and Analysis for the Forest Biorefinery
Behrang Mansoornejad,a Efstratios N. Pistikopoulos,b Paul Stuarta
a NSERC Environmental Design Engineering Chair, Department of Chemical Engineering, École Polytechnique, 2920 Chemin de la Tour, Pavillon Aisenstadt, Montreal H3C 3A7, Canada
b Center for Process Systems Engineering, Department of Chemical Engineering, Imperial College, London SW7 2AZ, UK
Abstract
Supply chain (SC) design involves decisions for the long term, e.g. the number, location and capacity of different SC nodes, production rates, and the flow of material between SC nodes, as well as determining suppliers, markets and partners. The forest biorefinery (FBR) is emerging as a new possibility for improving forestry company business models, but it introduces significant technological, economic and financial challenges, which can be systematically addressed in strategic SC design. This paper presents a scenario-based approach to strategic SC design for the FBR, designing the SC based on the impacts of the design on tactical-operational SC activities. Two kinds of scenarios are used: market scenarios representing market volatility, and SC network scenarios (alternatives) representing different biorefinery options/strategies. The SC analysis evaluates the SC alternatives under different market scenarios.
Keywords: Forest Biorefinery, Supply Chain, Partnership, Scenario-Based Approach
1. Introduction
In the design of a SC, long-term decisions must be made, i.e. products, technologies, the number, location and capacity of each facility (e.g. plants, warehouses and distribution centers), and the target markets [1]. In a practical problem, it is difficult to address such decisions in an optimization problem, because they are linked to aspects that cannot be modelled, e.g. understanding the market and market strategies, emerging products and processes, the capabilities of the existing assets of the SC, and the potential partners. It is thus preferable to pursue a systematic methodology that addresses these factors in a step-wise manner. The methodology presented in this paper seeks a set of feasible biorefinery options, rather than a single best one, which a company can strategically pursue considering practical aspects. Many of these aspects can be addressed in different scenarios instead of being modelled into an optimization formulation. This scenario-based methodology results in a set of solutions. A multi-criteria decision making (MCDM) framework can subsequently be used to find the best option from the
company point of view. In order to execute this step-wise methodology, certain decisions must be made via integration with other methodologies, i.e. product portfolio definition to determine the set of products, and a techno-economic study to choose the technologies to produce the targeted products. What is determined by the scenario-based approach is the SC network design, including the number, location and capacity of warehouses and distribution centres, as well as the partners to collaborate with.
1.1. Problem definition
A forestry company wants to implement the biorefinery by examining the portfolio of products which secure profit, using processes which enable better response to volatile market conditions, and identifying companies with which a partnership can be made. On the one hand, market conditions must be taken into consideration; on the other hand, possible process/SC options to be implemented must be identified. Scenario generation is used to address both aspects. Market conditions are reflected into the problem via market scenarios. Also, possible biorefinery options, each implying specific implementation strategies, are expressed in terms of alternatives, each of which includes a product portfolio, a technology for the production of each product, and a SC network for each portfolio. In this paper, SC network alternatives are defined and combined with product/process portfolios. A margins-based SC optimization model calculates the profitability of each combined alternative, i.e. a biorefinery option, under market scenario realizations. The SC network must be designed in such a way that, by optimizing the tactical-operational SC activities, the SC profit is maximized. As a result, this approach evaluates the SC network based on the impacts of the design on tactical-operational activities. The margins-based optimization model takes advantage of the flexibility of the processes, and chooses orders and schedules production so that profit is maximized.
1.2. Margins-based optimization
The operating policy in the pulp and paper (P&P) industry is said to be "manufacturing-centric", i.e. the management focus is on capacity planning [2], assuming that minimizing production costs will result in the highest profitability [3]. Also, production planning assumes known orders and a fixed sequence of grades, no matter what the price and demand are. For the FBR, the operating policy would ideally shift to a margins-based approach, which maximizes profit over the entire SC [3]; a small illustrative sketch of this idea is given at the end of this introduction. In this approach, long-term contracts and short-term order selection are made with respect to not only process/production constraints, but also inventory and transportation constraints. Given the number and length of time intervals, price and demand data, capacity data, and direct cost parameters, the main decision variables to be determined for each time interval include: the contracts to make, the orders to fulfil, the amount of feedstock, the amounts of products to be produced, and the flows of material between SC nodes. The objective function is the SC profit, involving revenue as well as production, inventory, transportation and changeover costs.
1.3. Manufacturing flexibility
Today's market is subject to significant volatility in terms of price and demand. To mitigate the risks arising from such uncertainties, it is of crucial importance to enhance reactivity and proactivity [4], which implies flexibility. In the chemical engineering context, four flexibility types have been studied widely: recipe flexibility, process flexibility, product flexibility, and volume flexibility [5].
An FBR is able to produce several
products, i.e. P&P products, bio-products and energy. Given feedstock and product prices, supply and demand, product/volume flexibility can be exploited to maximize profit. The company should analyze its access to feedstock, product prices, as well as orders/demands, and find the alignment between demand and its capacity [5].
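As a concrete illustration of the margins-based idea, the following sketch (our own, not the authors' model) maximizes revenue minus production and inventory costs for a single product while selecting which orders to accept, using the open-source PuLP library; the order data, costs and capacity are hypothetical.

from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary

periods = [1, 2, 3]
orders = {"o1": {"due": 2, "volume": 40, "price": 12.0},   # hypothetical orders
          "o2": {"due": 3, "volume": 70, "price": 10.5}}
cap, prod_cost, inv_cost = 50, 6.0, 0.4                    # per-period data

m = LpProblem("margins_based_sc", LpMaximize)
y = {o: LpVariable(f"accept_{o}", cat=LpBinary) for o in orders}   # order selection
x = {t: LpVariable(f"prod_{t}", 0, cap) for t in periods}          # production
s = {t: LpVariable(f"inv_{t}", 0) for t in periods}                # inventory

# Objective: revenue from accepted orders minus production and inventory costs
m += (lpSum(d["price"] * d["volume"] * y[o] for o, d in orders.items())
      - lpSum(prod_cost * x[t] + inv_cost * s[t] for t in periods))

for t in periods:  # inventory balance: carry-over + production - shipments
    shipped = lpSum(d["volume"] * y[o] for o, d in orders.items() if d["due"] == t)
    prev = s[t - 1] if t > 1 else 0
    m += s[t] == prev + x[t] - shipped

m.solve()
print({o: y[o].value() for o in orders}, {t: x[t].value() for t in periods})

A full model would additionally carry multiple products, transportation and changeover costs, and contract decisions, as described above.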
2. Scenario-based approach for the strategic design of the SC network
The methodology proposed for scenario-based SC network design is shown in Figure 1.
Figure 1. Scenario-based methodology for the SC network design (inputs: product portfolio definition and techno-economic study; steps: identify the specifications of the new SC considering product options; define SC network alternatives; combine process alternatives and SC network alternatives; generate market scenarios for pessimistic, likely and optimistic cases; calculate the SC profit for each scenario/alternative; calculate the ROI for each scenario/alternative; compare alternatives based on their ROI and screen out the non-profitable ones).
Product/process portfolios are inputs. The methodology includes two parts: first, possible SC network alternatives are identified; then, after being combined with product/process portfolios, the product/process/SC network alternatives are evaluated based on their performance at the operational level. An illustrative example is presented to concretize the methodology. Two portfolios, A and B, are defined [6]. In A, Fischer-Tropsch liquids (FTL) are produced and separated into waxes and diesel; the diesel is then converted to jet fuel (JF). In B, butanol, succinic acid (SA) and lactic acid (LA) are produced. Process alternatives for each portfolio are shown in Figure 2. Each alternative represents a specific level of flexibility in terms of product and throughput. In A1 and A2, diesel is converted to JF completely and by half, respectively; A3 can be operated in both ways. In B1, SA and LA are produced in fixed volumes, while in B2 an extra recovery system for SA and LA enables doubling the production of one at a time.
Figure 2. Process alternatives for each portfolio (flowsheet panels A-1, A-2, A-3, B-1 and B-2, combining FTL separation, blend tanks, fermentation and reactors).
2.1. Identifying possible SC network alternatives
2.1.1. Identifying the specifications of the new SC considering product options
Forestry company SC networks are in place with their existing assets. Some processing steps/facilities are common among processes in the mill, and thus similar facilities and assets can be employed or redesigned when implementing the biorefinery. It should be investigated how facilities should be modified or added to enable the mill to process more biomass. Also, each product has specific properties which call for suitable transportation and storage facilities.
2.1.2. Defining SC network alternatives
Based on the problem specifications, several SC network alternatives can be defined which reflect the requirements of the new SC network. The issues to be addressed are:
- Partnership: partners can cooperate in providing technology, delivering the product, and buying and/or selling the product. In this way, some or all of the partner's SC assets are utilized and less capital is needed for the combined SC network.
- Location and capacity of distribution centers: based on the location of the plant, several target markets might surround the plant. Thus, different distribution centers with different capacities can be assigned to target markets.
- Transportation network: based on the characteristics of the products, different modes of transportation, operated either by the company or via contract with other companies, can be utilized for product delivery.
Examples of alternatives for the case study portfolios are shown in Table 1.
Table 1. SC network alternatives defined for each portfolio
Alternative A-1: Waxes: partnership; JF: on spot; contract for transportation.
Alternatives A-2 and A-3: Waxes: partnership; JF & diesel: on spot; contract for transportation.
Alternative B-1: BuOH, SA, LA: partnership; buy trucks.
Alternative B-2: BuOH: partnership; SA & LA: on spot; buy trucks.
2.1.3. Combining process alternatives and SC network alternatives
After defining the SC network alternatives, the capital investment required to redesign the SC network is calculated for each alternative and added to the capital investment needed for the process technologies. Each combined alternative involves a process configuration with a targeted flexibility level and a SC network related to the products.
2.2. Evaluating the process design/SC network alternatives
2.2.1. Generating price/supply/demand scenarios
In order to address market uncertainty, market scenarios, each representing a specific market condition with respect to feedstock availability, product demand, and feedstock and product prices, are generated for pessimistic, likely and optimistic market price cases. For strategic decisions, scenarios are generated for a period of one year.
2.2.2. Calculating the SC profit for each scenario/alternative
In this step the SC profit of each alternative is calculated under every scenario. To calculate the SC profit, a SC optimization model is used. The model optimizes the SC profit by determining the orders to fulfil and calculating the optimal production rate for each product and the flows of material between SC nodes.
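Putting steps 2.2.1 and 2.2.2 together, a hedged sketch of the scenario loop is shown below; the case multipliers are illustrative assumptions, and optimize_sc is a hypothetical hook standing in for the margins-based SC optimization model.

def market_scenarios(base_price, base_demand):
    """Price/demand levels for the three market cases (multipliers assumed)."""
    cases = {"pessimistic": 0.8, "likely": 1.0, "optimistic": 1.2}
    return {name: {"price": m * base_price, "demand": m * base_demand}
            for name, m in cases.items()}

def evaluate(alternatives, base_price, base_demand, optimize_sc):
    """optimize_sc(alt, scenario) -> yearly SC profit (hypothetical solver)."""
    scen = market_scenarios(base_price, base_demand)
    return {alt: {name: optimize_sc(alt, s) for name, s in scen.items()}
            for alt in alternatives}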
2.2.3. Calculating the profitability of each scenario/alternative
In order to evaluate each alternative, its profitability should be estimated. In this methodology, return on investment (ROI) is used as the measure of profitability. An example of the final result can be observed in Figure 3.
Figure 3. Normalized SC profit and ROI for each combined alternative (A1, A2, A3, B1, B2) under the optimistic, likely and pessimistic scenarios.
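The screening and comparison steps can be condensed to a few lines; the sketch below is our illustration, with the hurdle rate and the use of the likely-case ROI as the screening criterion being assumptions rather than the paper's rule (the profit and investment figures would come from the SC optimization model and the capital-cost estimates, respectively).

def screen_alternatives(sc_profit, investment, hurdle=0.10):
    """sc_profit[alt][scenario] is yearly SC profit; investment[alt] is the
    total capital of the combined process + SC network alternative."""
    roi = {alt: {sc: p / investment[alt] for sc, p in scen.items()}
           for alt, scen in sc_profit.items()}
    kept = [alt for alt in roi if roi[alt]["likely"] >= hurdle]
    return roi, kept

# Example call with hypothetical numbers:
# roi, kept = screen_alternatives(
#     {"A1": {"optimistic": 9e6, "likely": 5e6, "pessimistic": 1e6}},
#     {"A1": 40e6})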
3. Conclusions
Biorefinery options involving a product portfolio, a process configuration and a SC network, which may be considered by a company willing to implement the biorefinery, can be evaluated using the scenario-based methodology proposed in this paper. By comparing the profitability of the alternatives and screening out the non-profitable ones, a set of biorefinery options worth considering can be identified. Our current research focuses on designing and targeting SC flexibility, i.e. designing product/volume flexibility, to make the FBR more efficient. In future work, SC robustness will be studied as a key metric for ensuring the expected SC profitability in the presence of market uncertainty.
Acknowledgements
This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) Environmental Design Engineering Chair at École Polytechnique in Montréal.
References
[1] P. Tsiakis, N. Shah and C. Pantelides, 2001, Design of Multi-echelon Supply Chain Networks under Demand Uncertainty, Industrial and Engineering Chemistry Research, 40, 3585-3604.
[2] P. W. Lail, 2003, Supply Chain Best Practices for the Pulp and Paper Industry, Atlanta, GA: Tappi Press.
[3] L. P. Dansereau, M. M. El-Halwagi and P. Stuart, 2009, Sustainable Supply Chain Planning for the Forest Biorefinery, 7th International Conference on the Foundations of Computer-Aided Process Design, Breckenridge, Colorado, USA, 1101.
[4] P. Schiltknecht and M. Reimann, 2009, Studying the interdependence of contractual and operational flexibilities in the market of specialty chemicals, European Journal of Operational Research, 198 (3), 760-772.
[5] B. Mansoornejad, V. Chambost and P. Stuart, 2010, Integrating product portfolio design and supply chain design for the forest biorefinery, Computers & Chemical Engineering, 34, 9, 1497-1506.
[6] V. Chambost, B. Mansoornejad and P. Stuart, 2010, The Role of Supply Chain Analysis in Market-Driven Product Portfolio Selection for the Forest Biorefinery, ESCAPE 21.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
The Role of Supply Chain Analysis in Market-Driven Product Portfolio Selection for the Forest Biorefinery
Virginie Chambost, Behrang Mansoornejad and Paul Stuart
NSERC Environmental Design Engineering Chair, Department of Chemical Engineering, École Polytechnique, 2920 Chemin de la Tour, Pavillon Aisenstadt, Montreal H3C 3A7, Canada
Abstract
The implementation of the forest biorefinery in retrofit to an existing forestry company requires a strategic shift in the core business from a commodity-driven manufacturing-centric culture to a margins-driven supply chain culture. In order to diversify the set of traditional forest products to include biorefinery products, it is critical to define the associated market and competitive strategies as well as new business models. The penetration of existing mature value chains by replacement and/or substitution biorefinery products requires that supply chain strategies be implemented that create and retain value over the longer term, and secure a unique competitive position. As part of the new product portfolio definition, key supply chain criteria must be identified and considered in the value chain assessment. The role of the supply chain in defining the product portfolio and in mitigating risks against price volatility is examined in this paper.
Keywords: Forest biorefinery, product portfolio, value chain, competitive assessment, supply chain
1. Introduction
The forest biorefinery (FBR) is increasingly being considered by forestry companies as a viable business option for diversifying and growing revenues. However, designing a biorefinery that serves a promising business model is not obvious. Many possible biorefinery routes can be targeted, but only certain of these will bring sustainable competitive advantages and substantial financial reward. The company's biorefinery product portfolio must be systematically identified, and the technical, techno-economic and commercial risks associated with the different options should be determined. For many companies, process design drives the development of the biorefinery, and the question of integrating new products into an existing product portfolio is considered through the technology strategy. However, for better ensuring the successful implementation of the biorefinery and attracting the interest of strategic investors, a
robust business model accompanied by a technology strategy that mitigates technical risks is critical. Leading market analyst Roberts [1] identifies four strategic business model elements for attracting strategic investors, being (1) minimal to no technology risks, (2) security and a long-term plan for fibre through off-take agreements based on volume and price, (3) a credible business case with specific market strategies, and (4) credible financial metrics. The forestry industry, vested in a commodity and manufacturing-centric culture, must be prepared to transform in order to retain value and create margins over the longer term. The unique set of competitive advantages a company may create and maximize through market strategies, including optimization of existing delivery systems and identification of strategic partnerships [2], will be the cornerstone for successful biorefinery transformation strategies. Penetration of existing and mature value chains with replacement and substitution products, such as bio-fuels and added-value biochemicals, should be supported by adequate risk mitigation strategies against market uncertainties, especially those related to price fluctuation. Supply chain management strategies can provide control over product price volatility as well as the internalisation of market risks [3]. This paper examines the role of the supply chain in the decision-making process for FBR business model definition, based on the competitive assessment of a new product portfolio.
2. Value chain approach for product portfolio definition
Key drivers have been identified for the development of winning business models for the FBR [4], such as (1) the need to improve the variable and insufficient margins of the existing core business, (2) the need to transform the business model in place via the definition of a new product portfolio, (3) the need to secure future access to raw materials at a competitive price, and (4) the need to benefit from governmental support. The new biorefinery product portfolio definition should respond to these drivers and lead to sustainable business profitability and growth [2]. In this regard, Porter [5] emphasizes that both operational effectiveness and strategic positioning are necessary but not sufficient for improving a company's performance and creating profits. Taking these elements into account, a systematic approach has been defined (Fig. 1) considering market-driven as well as preliminary techno-economic-driven factors for product portfolio definition. A classical "design funnel" approach is used to define a set of promising product options, and then eliminate those that are less promising through market analyses, termed "evaluation of entry point" into existing or new value chains.
Figure 1. Value Approach for Preliminary Business Model Definition
Market Potential Assessment – Bio-based replacement and/or substitution of existing products on the market requires a fundamental understanding of market dynamics, the potential for penetrating existing and mature value chains, and the related potential value proposals. Each product within a portfolio must be screened using a systematic assessment of its market potential, taking into account a set of market, technology and techno-economic criteria such as market growth potential, product revenue potential, product yield potential to match market volume, margin creation, etc. The definition of the value chain point of entry is closely linked with the potential for partnering with a 'quality' third party [2]. As Hobbs observed [6], the value chain is a strategic network between a number of independent business organizations within a supply chain that share the goal of satisfying customers while sharing the risks and rewards of the chain.
Competitive Assessment – A major effort should be placed on the competitive analysis of the overall portfolio in order to identify a unique value proposition for product delivery to a value chain, involving trade-offs that are unique relative to those of the competition [6]. For highly competitive markets such as the commodity market, product manufacturing and delivery cost-competitiveness are critical. On the other hand, for specialty products, differentiation and first-to-market strategies will drive the competitive advantage. From a procurement perspective, access to economically viable biomass constitutes a competitive advantage in the context of rising biomass costs and the significant biomass cost component of overall manufacturing costs. Fibre security via supply agreements on volume and price must support the market strategy and ensure a high level of EBITDA/t, i.e. the potential of high margins for every tonne of purchased biomass. From a process perspective, a unique technology strategy, e.g. including process flexibility potential, should enable the adaptation of the product portfolio under changing market conditions. From a marketing perspective, the potential for product differentiation in price, quality or functionality and the potential for market penetration, i.e. relative market shares, drive the product portfolio's positioning on the market. From a sustainability perspective, product portfolio competitiveness under market fluctuations in price and volume reflects the company's potential to control price volatility risks and maintain the EBITDA/t. From a product delivery perspective, supply chain effectiveness and responsiveness represent a barrier to entry for the competition in terms of the costs and uniqueness of the supply chain network [6]. All these competitive factors must positively impact the profitability of the product portfolio [5].
3. Implications of supply chain factors for product portfolio design
3.1. Product portfolio alternatives
Based on this approach, two example biorefinery product portfolios have been defined. In the first portfolio, Fischer-Tropsch liquids (FTL) are produced and separated into waxes and diesel; the diesel is then converted to jet fuel (JF). In the second portfolio,
butanol, succinic acid (SA) and lactic acid (LA) are produced. The portfolios are summarized in Figure 2.
Figure 2. Example of Biorefinery Product Portfolios
3.2. Supply chain-driven factors for price volatility mitigation
Market price volatility is the result of supply issues as well as changes in global consumption and demand due to economic factors and overcapacity [7]. For companies considering transformation to the biorefinery, this should be considered a powerful opportunity for business model definition, and its impacts should be managed to gain a potential competitive edge in both commodity and specialty markets (Fig. 3). The responsiveness and effectiveness of the supply chain should be designed to respond to price movements systematically [8]. Key supply chain factors can help design the product portfolio so that the market volatility risk is internalized.
Spot versus contracts – Considering that biorefinery product portfolios will comprise mixtures of commodity and specialty products, the drivers for mitigating market volatility risks will differ from one product to the other. Commodity products are characterized by high-volume, low-margin markets (Fig. 3), i.e. leading to low and often variable margin potential. Positioned upstream on the value chain and impacted by the variation of raw material costs, commodity prices tend to rise or fall with business cycles due to the lack of price control potential and the highly competitive environment [9]. Generally, specialty products exhibit less volatility [9]. Specialty products, i.e. those characterized by low volume and higher margins, are typically less exposed to changes in demand and prices, and lead to a more stable and higher EBITDA. Contract and spot sales strategies for both commodity and specialty products should take into account the potential for price control on the market, and the benefits of price volatility mitigation on portfolio EBITDA.
Figure 3. Price volatility for different types of products
Manufacturing Flexibility and SC – Based on the example of product families A3 and B2 in Fig. 2, the potential offered by the process to switch from one manufacturing regime to another, as defined at the process design stage, is an important element of product portfolio competitiveness. Reactivity of the product portfolio offering enables the mitigation of market volatility risks. The added capital and operating costs of this strategy may be compared to the gain in EBITDA on the product portfolio under different scenarios. Adapting the product portfolio offering towards more specialty products creates the potential to benefit from price control and to maximize the overall EBITDA/t across the portfolio.
Supply Chain Operating Strategy – The definition of the supply chain policy, whose objective is supply chain profit maximization, should strategically consider both the choice of contract and spot strategy and the manufacturing flexibility strategy, i.e. the potential for exploiting manufacturing flexibility [10]. As a critical element for designing and managing a competitive product portfolio, the supply chain policy should be part of the value proposal assessment of each product portfolio, representing the potential for margin improvement for the overall product portfolio.
4. Conclusions
The unique supply chain strategy, combining procurement and product delivery strategies, is a critical decision-making factor for designing biorefinery product portfolios, and a necessary tool for internalizing market volatility factors. A key indicator of the potential of the supply chain strategy at the product portfolio level is the EBITDA/t. This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) Environmental Design Engineering Chair at École Polytechnique in Montréal.
References
[1] D. Roberts, 2010, Presentation at the Forest Products Association of Canada Biopathways Workshop, Montreal, QC.
[2] V. Chambost, J. McNutt, P. R. Stuart, 2009, Partnerships for Successful Enterprise Transformation of Forest Industry Companies, 110, 5/6, p. 19-24.
[3] B. Mansoornejad, V. Chambost, P. R. Stuart, 2010, Integrating product portfolio design and supply chain design, Computers & Chemical Engineering, 34, 9, 1497-1506.
[4] M. Janssen, P. R. Stuart, 2010, Drivers and Barriers for Implementation of the Biorefinery, Pulp & Paper Canada, 111, 5-6, p. 13-17.
[5] M. Porter, 2008, On Competition, Harvard Business School.
[6] J. Hobbs, A. Conney, M. Fulton, 2000, Value Chains in the Agri-food Sector: What Are They? How Do They Work?, Department of Agricultural Economics, University of Saskatchewan.
[7] E. Regnier, 2007, Oil and energy price volatility, Energy Economics, 29, pp. 405-427.
[8] C. H. Kline, 1976, Maximizing profits in chemicals, Chemtech, 6, pp. 110-117.
[9] PricewaterhouseCoopers (PwC), 2009, Managing commodity risk through market uncertainty.
[10] L. P. Dansereau, M. M. El-Halwagi and P. R. Stuart, 2009, Sustainable Supply Chain Planning for the Forest Biorefinery, 7th FOCAPD Conference, Breckenridge, Colorado, USA.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Real-time Process Management in Particulate and Pharmaceutical Systems
Arun Giridhar, Intan Hamdan, Girish Joglekar, Venkat Venkatasubramanian, Gintaras V. Reklaitis∗
School of Chemical Engineering, Purdue University, West Lafayette, Indiana
∗ [email protected]
Abstract
A real-time process management (RTPM) system is the equivalent of an automatic control system for running a production plant continuously while maintaining product quality. In pharmaceutical manufacturing, such a system provides comprehensive control capabilities for maintaining setpoints, deciding and changing setpoints, and for fault detection, diagnosis, and remediation. As pharmaceutical manufacturing moves from batch methods towards continuous manufacturing, an RTPM system will be a critical component in achieving product quality specifications. In this work, an RTPM system is presented. The RTPM system also makes use of an ontological knowledge management system called TOPS (The Ontologies for Particulate Systems). We show the uses of our integrated TOPS-RTPM system for a pharmaceutical manufacturing pilot plant that makes tablets.
Keywords: pharmaceutical manufacturing, process management, ontological informatics, ontology, TOPS, RTPM, ERC-SOPS
1. Introduction
Particulate systems, especially in pharmaceutical manufacturing, pose unique operations and control challenges not normally encountered in conventional fluids-processing systems. As the pharmaceutical manufacturing field moves from batch operations to continuous operations, closed-loop control systems become vital and must be able to handle changes in both raw material properties and environmental factors. Maintaining the quality and quantity specifications of products and intermediates (often referred to as critical process parameters or CPPs) is made harder by the sensitivity of powder flow to external factors, such as humidity and particle size distribution. Process faults (or exceptional events) must not only be detected quickly, but also have their root cause diagnosed and, where possible, corrections carried out automatically. Additionally, setpoints will also need to be changed dynamically, both for safe and efficient operations (particularly at startup and shutdown), and as a potential correction mechanism for parameter changes or to respond to process faults that have been diagnosed. Finally, the entire control system must be designed and implemented flexibly to accommodate a wide variety of equipment vendors, sizes, and production rates, without being subject to vendor lock-in. Such a flexible system requires novel knowledge management techniques. In this work, we present our key achievements in designing and implementing a real-time process management (RTPM) system and a knowledge management system for pharmaceutical manufacturing. We give a brief overview of pharmaceutical manufacturing
techniques, describe a combined RTPM and knowledge management system, and summarize our results to date.
2. Pharmaceutical Manufacturing Background
Drug products, such as tablets, capsules, film strips, soft gels, skin patches, injectables, etc., are chosen based on the desired dissolution profiles within the human body as well as the mechanical constraints of the drug under consideration. Tablets are ultimately manufactured by compressing powders; if the powders flow well and have no segregation issues within hoppers, they may be directly compressed after blending. Powders that flow poorly or segregate are generally formed into granules, either by precompression into a ribbon which is then milled (dry granulation) or by wetting with a liquid binder and extruding through a sieve (wet granulation). Film strips of various forms are generally made by suspending the active drug in a viscous polymer which is then formed into a film and baked dry. The main control issue is to ensure that the final drug product (tablet, film strip, etc.) is of consistent weight and composition, and dissolves as desired within the body. Dissolution is often correlated with the density of the drug product for oral dosage forms; where intermediates are used (such as ribbons in dry granulation), ensuring their density and composition is an additional control task.
3. TOPS-RTPM System
RTPM (Real-time Process Management) is an integrated control system that facilitates smart manufacturing (Christofides et al. (2007)) and uses the knowledge stored in TOPS. The RTPM system has three main layers. The lowest is the regulatory layer, which achieves setpoints and rejects external disturbances. This layer uses standard feedback and feedforward control logic, including PID and MPC (Garcia and Morari (1982)), with underlying models ranging from mechanistic to data-driven. The highest layer, real-time optimization (RTO), decides setpoints; it may decide optimal setpoints for a given time, or an optimal trajectory of setpoints to use in transitioning to a new steady state. The third layer, active in parallel with the first two, is the exceptional events management (EEM) layer, which detects, diagnoses and mitigates process faults as they arise. While other workers have examined fault-handling (Ohran et al. (2010)) and particulate process control (Liu et al. (2008)), this work is unique in integrating RTPM with knowledge management and applying it to the pharmaceutical domain. TOPS (The Ontologies for Particulate Systems) is the ontological knowledge management framework that serves as a storehouse and retrieval system for process knowledge, facilitating continuous and flexible manufacturing. TOPS is written in the Web Ontology Language (OWL) and includes several ontologies describing material properties, equipment parameters, process models, control models, and exceptional events. TOPS is a knowledge (as opposed to data) management system, and hence can store complex entities like mathematical models and fault signatures, rather than merely coefficients and data. TOPS builds on the prior work of Hailemariam and Venkatasubramanian (2010), Suresh et al. (2010) and Akkisetty et al. (2010). The TOPS ontologies are designed to be completely independent of the specific software and hardware used by the RTPM system, which enables the TOPS framework to be used with any specific RTPM realization. The RTPM used in our work is based on the DeltaV distributed control system (DCS) from Emerson, with advanced control routines written in Matlab. Data is exchanged between
DeltaV and Matlab using the standard OPC protocol. However, the TOPS-RTPM linkage is designed to accommodate a wide range of RTPM implementations. Next, the use of TOPS in the RTPM system is examined for each of its three layers.
3.1. Exceptional Events Management
Exceptional events management is split into two subcomponents. One is the EEM ontology, which resides as part of TOPS and serves as a fault library. The other is the EEM application, which resides as part of RTPM and performs the actual fault detection, diagnosis and remediation. The EEM ontology is therefore like a medical encyclopedia, listing various symptoms, causes, and remedies in general, but not knowing the specific details of any given patient. The EEM application, on the other hand, is like a doctor, who applies the knowledge of the medical encyclopedia to one or more specific patients with known symptoms. The EEM ontology stores fault signatures, which describe the dynamic changes in several variables taken together. Thus a fault signature such as "feed screw speed increase, ribbon thickness decrease, hydraulic pressure constant" describes the direction of change in three different variables. The ontology also stores the magnitude and curvature information of such changes. Likely causes of the fault are stored as strings, as are remedies that guide the plant operator in rectifying the fault. The EEM ontology is populated by a human expert; hence the annotations about fault causes and remedies are intended to be human-readable. The EEM ontology can be updated whenever needed, as new fault-handling knowledge becomes available. The design of the EEM ontology allows it to be used on a wide variety of processes beyond particulate or pharmaceutical systems.
Figure 1. EEM ontology and application
The EEM application (Figure 1), written in Matlab, receives operating data from the plant in real time (via the DeltaV DCS), and calculates trends in each variable using a variety of algorithms, including polynomial-fitting, Fourier analysis, and statistical tests. It then compares the observed trends with the set of all known fault signatures from the EEM ontology, and uses the results to diagnose and mitigate any faults that may be in progress.
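As an illustration of the trend-detection and matching logic just described (our sketch in Python, not the actual Matlab application), the snippet below estimates each variable's recent trend direction with a first-order polynomial fit and looks the resulting pattern up in a small signature library; the variable names mirror the roller-compaction example above, and the window and tolerance values are assumptions.

import numpy as np

SIGNATURES = {  # +1 increase, -1 decrease, 0 roughly constant
    "roll_gap_fault": {"feed_screw_speed": +1,
                       "ribbon_thickness": -1,
                       "hydraulic_pressure": 0},
}

def trend_sign(series, window=60, tol=1e-3):
    """Sign of the least-squares slope over the most recent window."""
    y = np.asarray(series[-window:], dtype=float)
    slope = np.polyfit(np.arange(len(y)), y, 1)[0]
    return 0 if abs(slope) < tol else int(np.sign(slope))

def diagnose(live_data):
    """live_data maps variable name -> recent measurement history;
    returns the faults whose signatures match every observed trend."""
    observed = {var: trend_sign(vals) for var, vals in live_data.items()}
    return [fault for fault, sig in SIGNATURES.items()
            if all(observed.get(var) == direction
                   for var, direction in sig.items())]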
Our specific EEM application reads the contents of the EEM ontology exported as a text file, to accommodate different control system implementations. Currently all remediation is displayed to the human operator on a console, and will eventually be automated where possible. Automated remediation will depend on the availability of suitable actuators (motors, pumps, etc.) to take control action. Further details of the EEM system can be found in Hamdan (2010). EEM provides a much-needed complement to traditional alarming systems. Alarms by themselves alert the operator about what is wrong, such as a process variable being out of bounds. However there is a need, especially in pharmaceutical manufacturing, for systems to inform the operator about why something went wrong, such as powder properties having changed, or powder bridging inside a hopper, and so on. Further, there is also a need for operators to be guided on how to remedy the situation, especially if the solution is non-obvious, or if the remedy must be applied in a different part of the system. The EEM application seeks to automate such arcane production knowledge.
3.2. Regulatory Control
RTPM also encompasses regulatory control, either in the form of PID or MPC, of the unit operations in the processing train. One such unit operation is roller compaction, in which powders are compressed into a compact ribbon before being granulated. An MPC model for roller compaction was developed in Matlab by Hsu et al. (2010) and was tested with simulations. Step tests on roller compaction indicate significant longitudinal and lateral variation in both ribbon density and composition, requiring the control system to take detailed geometric parameters of the equipment's design into account while achieving the target ribbon density and composition. TOPS provides a convenient mechanism to store mathematical models for both process simulation and control system design. A further consequence is that TOPS enables a process engineer to develop a model without necessarily having to learn the control context, and a control engineer to define the control context without having to know details of how a process model was derived and how it is to be solved.
3.3. Optimal Control
Similar to the EEM case, knowledge stored in the TOPS ontologies is exported for use in the RTPM system for changing process setpoints. The underlying model is linear, is based chiefly on mass balances, and has been implemented entirely as a DeltaV control module. The system cleanly decouples the setpoint-updating algorithms from the physical hardware being used in the process. Should material parameters change, the changes can be propagated online by exporting the new parameters from TOPS to RTPM, leading new setpoints to be computed without any parameters hard-coded into the control code. Additionally, the same setpoint-calculating routines can be used at multiple sites without changing any code: differences in equipment vendors, sizes or material properties are treated as parameter changes rather than code changes, which is not possible without a knowledge management framework independent of the process management framework.
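A minimal sketch of such a mass-balance setpoint calculation is shown below; it is our illustration, and the JSON export format and parameter names are assumptions. The point is that a material change alters only the exported parameters, never the control code.

import json

def feeder_setpoints(params_path, line_rate_kg_h):
    """Compute per-ingredient feeder setpoints from exported parameters.

    The file is assumed to hold mass fractions, e.g.
    {"api": 0.10, "excipient": 0.88, "lubricant": 0.02}."""
    with open(params_path) as f:
        fractions = json.load(f)
    assert abs(sum(fractions.values()) - 1.0) < 1e-6, "fractions must sum to 1"
    return {name: line_rate_kg_h * frac for name, frac in fractions.items()}

# e.g. feeder_setpoints("tops_export.json", 20.0)
#  -> {"api": 2.0, "excipient": 17.6, "lubricant": 0.4}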
4. Discussion
The TOPS-RTPM framework achieves separation of knowledge capture and knowledge use across multiple layers of process management. Consequently, knowledge storage can be centralized within a company on an internal web server. Such centralization enables
single-point validation of all process knowledge entered into the ontologies, and also its multi-point use across all the company’s production facilities. Further, the framework enables different control system architectures to use the same knowledge across multiple production facilities within the company. The TOPS-RTPM framework is therefore not subject to vendor lock-in, and greatly facilitates both internal and external technology transfer.
5. Summary and Future Work
In this work we presented a system to achieve both real-time process management and knowledge management for pharmaceutical production. Both process and knowledge management are critically required by the pharmaceutical industry in transitioning from batch to continuous production. We showed how the architecture of the ontological knowledge management system is deliberately kept independent of the RTPM system, enabling operational flexibility and preventing vendor lock-in. Practical consequences of our system lie in facilitating technology transfer within and between corporations. Current and future work involves demonstrating RTPM on several alternative process configurations for making tablets and film strips. A large part of this work is also applicable to other process systems outside the pharmaceutical and particulate domains.
6. Acknowledgements
This work was entirely funded by the National Science Foundation (NSF) as part of the Engineering Research Center for Structured Organic Particulate Systems (ERC-SOPS).
References
Akkisetty, P. K., Lee, U., Reklaitis, G., Venkatasubramanian, V., 2010. Population balance model-based hybrid neural network for a pharmaceutical milling process. J. Pharm. Innov. 5 (4), 161–168.
Christofides, P. D., Davis, J. F., El-Farra, N. H., Clark, D., Harris, K. R. D., Gipson, J. N., 2007. Smart plant operations: Vision, progress and challenges. AIChE J. 53 (11), 2734–2741.
Garcia, C. E., Morari, M., 1982. Internal model control: A unifying review and some new results. Industrial & Engineering Chemistry Process Design and Development 21 (2), 308–323.
Hailemariam, L., Venkatasubramanian, V., 2010. Purdue ontology for pharmaceutical engineering: Part 1. Conceptual framework; Part 2. Applications. J. Pharm. Innov. 5 (3; 4), 88–99; 139–146.
Hamdan, I. M., 2010. Exceptional events management applied to continuous pharmaceutical manufacturing. Ph.D. thesis, Purdue University.
Hsu, S.-H., Reklaitis, G. V., Venkatasubramanian, V., 2010. Modeling and control of roller compaction for pharmaceutical manufacturing. Part 1: Process dynamics and control framework; Part 2: Control system design. J. Pharm. Innov. 5 (1), 14–23; 24–36.
Liu, J., de la Pena, D. M., Christofides, P. D., Davis, J. F., 2008. Lyapunov-based predictive control of particulate processes using asynchronous measurement sampling. Particle & Particle Systems Characterization 25, 360–375.
Ohran, B., Liu, J., de la Pena, D. M., Christofides, P. D., Davis, J. F., 2010. Monitoring and handling of actuator faults in two-tier control systems for nonlinear processes. Chem. Eng. Sci. 65, 3179–3190.
Suresh, P., Hsu, S.-H., Akkisetty, P., Reklaitis, G. V., Venkatasubramanian, V., 2010. Ontomodel: Ontological mathematical modeling knowledge management in pharmaceutical product development. Part 1: Conceptual framework; Part 2: Applications. Industrial and Engineering Chemistry Research 49 (17), 7758–7767; 7768–7781.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Modeling Next Generation Feedstock Development for Chemical Process Industry Selen Cremaschi Department of Chemical Engineering, The University of Tulsa, 800 South Tucker Drive, Tulsa, Oklahoma 74104, USA
Abstract A new network representation to study the impact of capital and research & development (R&D) investment decisions on the evolution of biomass to commodity chemicals technologies is presented. The corresponding mathematical programming formulation is developed. The model is solved for a simplified ethylene production scenario to demonstrate its ability to predict the capacity expansion and R&D investment decisions. Keywords: biomass to commodity chemicals, optimization, technology evolution.
1. Introduction
Biomass, as a renewable and locally available resource, has great potential for weaning the chemical process industry (CPI) from fossil-based feedstocks. There are many different routes to transform biomass into feedstocks for bulk chemicals production. As with the use of biomass for fuel production, the technologies for bulk chemicals production can be classified under two main categories: thermo-chemical and bio-chemical conversions. Thermo-chemical conversions are gasification, pyrolysis and liquefaction/hydro-thermal upgrading of the biomass, whereas bio-chemical conversions are fermentation and anaerobic digestion. Figure 1(a) gives a simplified overview of the biomass to commodity chemicals (BTCC) routes that are currently under consideration. Figure 1(a) is not comprehensive; rather, the purpose here is to highlight the complexity of the decision space and its interconnections. Reviews of chemicals via bio- and thermo-chemical conversions can be found in (Werpy and Petersen 2004; Kamm, Gruber et al. 2006; Corma, Iborra et al. 2007; Holladay, Bozell et al. 2007; Kamm and Kamm 2007; Haveren, Scott et al. 2008). The switch from our current fossil-fuel-based CPI to a future CPI that utilizes biomass feedstock requires substantial R&D and capital investments for technology development. As such, there is a great need to investigate how these investments will impact the evolution of the biomass feedstock system. Given the vast number of routes that can be utilized to convert biomass to commodity chemicals, deciding how much to invest in which technologies for the short, medium and long term in the resource-constrained environment of the CPI is a challenging task. Furthermore, a suitable framework which is amenable to supporting the investment decisions in this field, and which can be used to represent and compare different technology options with their maturation levels and possible evolution paths, is not available in the literature. In this paper, a new framework based on graph theory is proposed to fill this gap. In the following sections, the proposed framework to study the BTCC investment problem is defined in detail, followed by the resulting nonlinear programming (NLP) formulation
of the investment problem. A simplified case study is presented to demonstrate the application of the proposed framework. Finally, the last section provides conclusions and future directions.
[Figure 1(a) is a diagram: biomass fractions (sugar/starch, carbohydrates, cellulose, hemicellulose, lignin, lipids/oil, protein) are converted via gasification, pyrolysis, liquefaction/hydrothermal upgrading, fermentation, anaerobic digestion and separation into intermediates and commodity chemicals such as syngas, methanol, ethanol, ethylene, propylene, aromatics, organic acids (e.g. acetic, lactic, succinic), dimethyl ether, bio oil, biodiesel, glycerin and biogas. Figure 1(b) is the corresponding directed network.]
Figure 1. The technologies to transform biomass to commodity chemicals (a) and the corresponding network representation (b)
2. A New Framework to Study the BTCC Investment Problem
Drawing an analogy to graph theory, a network representation is developed as a suitable framework to support the investment decisions for the BTCC system. In the network representation, nodes correspond to the materials, i.e., biomass, intermediate chemicals, and commodity chemicals, and directed arcs correspond to the technologies. This yields a directed network G = (V, E) with node set V and arc set E. For example, Figure 1(b) shows the network representation of the BTCC technologies presented in Figure 1(a). Using index v to denote a node and index e to denote an arc, the following variables are defined for each arc: cumulative capacity (CXe, the cumulative installed capacity of technology e), transportation cost (CCe, the unit capital cost for technology e), and efficiency (Ke, the production efficiency of technology e, Ke ≤ 1); αe and βe are the learning-by-doing and learning-by-searching elasticities of the corresponding technology, respectively. The transportation cost of an arc can be reduced by expending R&D and capital investments. While the R&D investments do not directly impact the cumulative capacity, the capital investments result in capacity expansions. The relationship between the R&D and capital investments and the transportation cost and capacity is defined using a two-factor learning curve expression (Figure 2). Depending on the R&D and capital investment decisions of the CPI players and the government, the BTCC network will evolve differently. The learning curve concept was first introduced by Wright (Wright 1936), who observed that the number of direct labor hours it takes to manufacture one unit of a product decreased at a uniform rate as the quantity of units manufactured doubled. In their most general form, learning curve models link the cumulative capacity, the output, or the labor to the technology's cost using the main phenomenon observed by Wright: cost decreases uniformly as the cumulative learning source doubles. The original
learning curves included only the impact of a single factor (such as capacity) on the cost of the technology. These models are used to represent learning-by-doing. However, the unit cost of a technology also changes with R&D expenditures, especially in the infancy of a technology. This impact is called learning-by-searching, and resulted in two-factor learning curves (Kouvaritakis, Soria et al. 2000).
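As an aside, the cost mechanics of the two-factor learning curve are easy to state in code. The following minimal Python sketch is not from the paper and uses hypothetical parameter values; it simply evaluates the unit cost after capacity and R&D growth:

```python
# Two-factor learning curve: CC_t = CC_0 * (CX_t/CX_0)**alpha * (CRD_t/CRD_0)**beta
# alpha < 0: learning-by-doing elasticity; beta < 0: learning-by-searching elasticity.

def unit_cost(cc0, cx0, crd0, cx_t, crd_t, alpha, beta):
    """Unit capital cost after cumulative capacity grows from cx0 to cx_t
    and cumulative R&D spending grows from crd0 to crd_t."""
    return cc0 * (cx_t / cx0) ** alpha * (crd_t / crd0) ** beta

# One capacity doubling with alpha = -0.20 gives a progress ratio of
# 2**-0.20 ~ 0.87, i.e. roughly a 13% cost reduction per doubling.
print(unit_cost(cc0=10.0, cx0=0.01, crd0=1.0, cx_t=0.02, crd_t=1.0,
                alpha=-0.20, beta=-0.07))
```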
3. Modeling the Evolution of BTCC Technologies
Using the framework described in this paper, the evolution of BTCC technologies is modeled with an NLP. Three subsets of nodes are used in the formulation: raw materials, VR = {v | v ∈ V, v is a source node}; biomass, VRR = {biomass}; and products, VP = {v | v ∈ V, v is a sink node}. The connectivity of the graph is represented by a weighted incidence matrix B, a |V| × |E| matrix B = (bv,e) (see Figure 2 for the elements of B). It is assumed that the demand for the products increases over time at an annual rate, the cost of biomass increases according to the inflation rate, and the cost of nonrenewable raw materials increases linearly with the total resource depletion. With these assumptions, the resulting NLP formulation is given in Figure 2.
4. Case Study
A simplified case study, the evolution of ethylene production from biomass (corn grain + corn stover) through two different technologies compared to conventional ethylene production from naphtha, is presented to illustrate the capabilities of the proposed approach. The network representation of the problem can be seen in Figure 3(a), and the evolution of the resulting production landscape is given in Figure 3(b). The evolution of the ethylene production technologies and production capacities was modeled for a 50-year period. The technology parameters are given in Table 1. Initial raw material costs for biomass (grain + stover) and naphtha are assumed to be $262/dry ton and $685/ton, respectively. The model was solved using GAMS 23.4 - CONOPT in 0.15 CPU seconds. With the model parameters used, the production shifts to utilizing biomass as the technology capacities become available, and the only new capacity expansions are in biomass-utilizing technologies (Figure 3(b)).
Table 1. Technology parameters
Tec, e   K (wt%)   α       β       Initial Cost   Initial Capacity (10^6 tons)
(1)      0.25      -0.20   -0.07   $0.20/kg       45
(2)      0.80      -0.28   -0.05   $10.0/kg       0.01
(3)      0.30      -0.20   -0.07   $10.0/kg       0.01
(4)      0.55      -0.20   -0.07   $1.0/kg        0.01
(5)      0.25       0.00    0.00   $1.2/kg        28.3
5. Conclusions and Future Directions
5.1. Conclusions
In this work, the BTCC investment problem is described and, drawing an analogy to graph theory, a new network representation for this problem is proposed. Using the proposed representation, the NLP formulation of the investment problem is presented. A simplified case study, the production of ethylene from conventional naphtha cracking and from biomass via two routes, is modeled to demonstrate the application of the framework. The results suggest that the evolution of BTCC systems can be modeled by the proposed framework.
Objective Function:
min TC

subject to

Cost Function:
TC = Σe Σt CCe,t (CXe,t − CXe,t−1) + Σv∈VR Σt CRv,t Rv,t + Σe Σt CRDe,t

Technology Costs (two-factor learning curve):
CCe,t = CCe,0 (CXe,t / CXe,0)^αe (CRDe,t / CRDe,0)^βe    ∀t, e

Raw Material Costs:
CRv,t = CRv,0 + kv Σj=1..t Rv,j    ∀t, v ∈ {v | v ∈ VR, v ∉ VRR}
CRv,t = CRv,0 (1 + IR)^t    ∀t, v ∈ VRR

Product Demands:
Dv,t = Dv,0 (1 + γv)^t    ∀t, v ∈ VP

Meet Product Demands:
Rv,t ≥ Dv,t    ∀t, v ∈ VP
Rv,t = Σe bv,e Pe,t    ∀t, v ∈ VP

No Accumulation of Intermediates:
Σe bv,e Pe,t = 0    ∀t, v ∈ {v | v ∉ VP, v ∉ VR}

Raw Material Requirements:
Rv,t = −Σe bv,e Pe,t    ∀t, v ∈ VR

Capacity Constraints:
Pe,t ≤ CXe,t    ∀t, e

Capacity and R&D Stock Bounds:
CXe,t−1 ≤ CXe,t    ∀t, e
CRDe,t−1 ≤ CRDe,t    ∀t, e

Nomenclature: TC: total cost; CCe,t: unit capital cost for technology e at time t; CXe,t: cumulative installed capacity of technology e at time t; Pe,t: amount of production with technology e at time t; CRv,t: unit cost of material v at time t (only defined for raw materials, i.e., source nodes); Rv,t: amount of material v produced or consumed at time t; CRDe,t: total R&D expenditure for technology e at time t; kv: constant cost-increase coefficient for material v (defined for nonrenewable raw materials); IR: inflation rate; Dv,t: demand for material v at time t (only defined for products, i.e., sink nodes); γv: annual rate of demand increase for material v;
bv,e = −1/Ke if material v is a raw material for technology e; 1 if material v is produced by technology e; 0 otherwise.

Figure 2. The NLP formulation of BTCC technologies evolution
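To make the structure of Figure 2 concrete, the sketch below encodes a stripped-down version of the formulation in Pyomo (the paper itself used GAMS 23.4 with CONOPT). The two-technology network ('bio' vs. 'fos'), all numerical data, and the simplified flat feed-cost term are hypothetical; only the constraint pattern follows the figure.

```python
import pyomo.environ as pyo

E, T = ['bio', 'fos'], list(range(1, 11))      # technologies and years
alpha = {'bio': -0.20, 'fos': 0.0}             # learning-by-doing elasticities
beta  = {'bio': -0.07, 'fos': 0.0}             # learning-by-searching elasticities
CC0   = {'bio': 10.0, 'fos': 1.2}              # initial unit capital costs ($/kg)
CX0   = {'bio': 0.01, 'fos': 28.3}             # initial capacities (10^6 tons)
CRD0  = {'bio': 1.0,  'fos': 1.0}              # initial R&D stocks
CR    = {'bio': 1.0,  'fos': 0.7}              # flat feed-cost stand-in ($/kg feed)
K     = {'bio': 0.25, 'fos': 0.25}             # production efficiencies
D0, g = 30.0, 0.02                             # initial demand and annual growth

m = pyo.ConcreteModel()
m.P   = pyo.Var(E, T, domain=pyo.NonNegativeReals)   # production
m.CX  = pyo.Var(E, T, domain=pyo.NonNegativeReals)   # cumulative capacity
m.CRD = pyo.Var(E, T, domain=pyo.NonNegativeReals)   # cumulative R&D

def cc(m, e, t):   # two-factor learning curve for the unit capital cost
    return (CC0[e] * (m.CX[e, t] / CX0[e]) ** alpha[e]
                   * (m.CRD[e, t] / CRD0[e]) ** beta[e])

m.demand = pyo.Constraint(T, rule=lambda m, t:
    sum(m.P[e, t] for e in E) >= D0 * (1 + g) ** t)
m.capacity = pyo.Constraint(E, T, rule=lambda m, e, t: m.P[e, t] <= m.CX[e, t])
m.cx_mono = pyo.Constraint(E, T, rule=lambda m, e, t:
    m.CX[e, t] >= (CX0[e] if t == 1 else m.CX[e, t - 1]))
m.crd_mono = pyo.Constraint(E, T, rule=lambda m, e, t:
    m.CRD[e, t] >= (CRD0[e] if t == 1 else m.CRD[e, t - 1]))

m.obj = pyo.Objective(sense=pyo.minimize, expr=sum(
    cc(m, e, t) * (m.CX[e, t] - (CX0[e] if t == 1 else m.CX[e, t - 1]))
    + CR[e] * m.P[e, t] / K[e]     # feed consumed = production / efficiency
    + m.CRD[e, t]                  # R&D term as written in Figure 2
    for e in E for t in T))

# pyo.SolverFactory('ipopt').solve(m)   # any NLP solver
```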
[Figure 3(a) is a network diagram with nodes 1: Biomass, 2: Naphtha, 3: Ethanol, 4: Syngas, 5: Ethylene, and edges (1): Fermentation, (2): Gasification, (3): Catalytic conversion, (4): Catalytic dehydration, (5): Cracking. Figure 3(b) plots the cumulative capacity (10^6 tons, 0-200) of Technologies 1-5 over years 0-50.]
Figure 3. The network representation of the case study (a) and the resulting cumulative capacity (b)
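The weighted incidence matrix B for this small network can be written down directly. Below is a numpy illustration; the arc endpoints follow our reading of Figure 3(a) and the efficiencies are taken from Table 1, so treat both as indicative only.

```python
import numpy as np

nodes = ['biomass', 'naphtha', 'ethanol', 'syngas', 'ethylene']
# arc: (feed node, product node, efficiency K_e); topology as read from Fig. 3(a)
arcs = {'(1) fermentation':          ('biomass', 'ethanol',  0.25),
        '(2) gasification':          ('biomass', 'syngas',   0.80),
        '(3) catalytic conversion':  ('syngas',  'ethanol',  0.30),
        '(4) catalytic dehydration': ('ethanol', 'ethylene', 0.55),
        '(5) cracking':              ('naphtha', 'ethylene', 0.25)}

B = np.zeros((len(nodes), len(arcs)))
for j, (name, (src, dst, K)) in enumerate(arcs.items()):
    B[nodes.index(src), j] = -1.0 / K   # consumption, scaled by efficiency
    B[nodes.index(dst), j] = 1.0        # production

print(np.round(B, 2))   # rows: materials v, columns: technologies e
```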
5.2. Future Directions
The solutions of the model are sensitive to the data used to construct it. Therefore, a systematic sensitivity analysis will be performed to study the impact of the model parameters on the evolution of the BTCC system and on the emergence of the "winner" technologies. The elasticities in the learning-curve equation are usually determined from historical data by regression; hence, the elasticities are normally distributed uncertain variables (Gritsevskyi and Nakicenovic 2000). For new technologies the uncertainty in the elasticity estimates will be higher due to the limited amount of data, i.e., the mean of the normal distribution might shift, and the variance of the distribution will shrink as more data becomes available. Furthermore, the possible evolution paths of the technologies depend on the investment decisions of the individual CPI players as well as the decisions of the government. This is a stochastic optimization problem with endogenous and exogenous uncertainty, because the decisions may impact the distribution parameters and/or the observations of the uncertainties. We will investigate simulation-based optimization approaches to the stochastic BTCC investment problem.
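To illustrate the kind of uncertainty treatment envisaged here, the short sketch below samples a normally distributed learning-by-doing elasticity and propagates it through one capacity doubling; the mean, the spread, and the shrinkage rule are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
for n_obs in (5, 20, 80):                      # growing historical data pool
    sigma = 0.05 / np.sqrt(n_obs / 5)          # illustrative variance shrinkage
    alpha = rng.normal(-0.20, sigma, size=10_000)
    cost_ratio = 2.0 ** alpha                  # unit cost after one doubling
    print(n_obs, round(cost_ratio.mean(), 3), round(cost_ratio.std(), 3))
```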
References
Corma, A., S. Iborra, et al. (2007). "Chemical Routes for the Transformation of Biomass into Chemicals." Chemical Reviews 107(6): 2411-2502.
Gritsevskyi, A. and N. Nakicenovic (2000). "Modeling uncertainty of induced technological change." Energy Policy 28(13): 907-921.
Haveren, J. v., E. L. Scott, et al. (2008). "Bulk chemicals from biomass." Biofuels, Bioproducts and Biorefining 2(1): 41-57.
Holladay, J., J. Bozell, et al. (2007). Top Value-Added Chemicals from Biomass, Volume 2: Results of Screening for Potential Candidates from Biorefinery Lignin. U.S. Department of Energy.
Kamm, B., P. R. Gruber, et al. (2006). Biorefineries - Industrial Processes and Products, Status Quo and Future Directions, Vol. 1 and 2. Weinheim, Germany, Wiley-VCH.
Kamm, B. and M. Kamm (2007). Biorefineries - Multi Product Processes: 175-204.
Kouvaritakis, N., A. Soria, et al. (2000). "Modelling energy technology dynamics: methodology for adaptive expectations models with learning by doing and learning by searching." International Journal of Global Energy Issues 14(1-4): 104-115.
Werpy, T. and G. Petersen (2004). Top Value Added Chemicals from Biomass, Volume 1: Results of Screening for Potential Candidates from Sugars and Synthesis Gas. U.S. Department of Energy.
Wright, T. P. (1936). "Factors Affecting the Cost of Airplanes." Journal of Aeronautical Sciences 3: 122-128.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Prediction of the Permeability and Filtration Performance of Packed Beds Mishal Islam, Xiaodong Jia, Michael Fairweather, Richard Williams School of Process, Environmental and Materials Engineering, University of Leeds, Leeds LS2 9JT, UK
Abstract This paper considers the development of a virtual permeameter where x-ray microtomography (XMT) is used to capture microstructural details which are subsequently used as input to a lattice Boltzmann method (LBM) for simulating the flow through porous media. The creation of such a permeameter that would enable the structure-flow relationships of bulk porous media to be assessed based on the microstructural details of small samples is of value in the performance assessment of filters used in a wide range of chemical and process engineering applications. The paper highlights a sensitivity analysis on the use of LBM for predicting, in conjunction with XMT, the permeability of packed beds, as well as its ability to predict permeability and filtration performance with validation against data gathered on the flow through beds of spherical particles. The work demonstrates that, with sufficient care, it is possible to use small representative samples of porous media for use as input to LBM to permit the accurate prediction of the bulk media permeability and filtration performance. Keywords: Permeability, filtration, packed beds
1. Introduction This study extends earlier work (Caulkin et al., 2006) that considered the packing of, and fluid flow through, large scale packed columns. Here, we describe work aimed at the development of understanding regarding use of the lattice Boltzmann method, in conjunction with x-ray microtomography, as a method that enables the structure-flow relationships of bulk porous media to be assessed based on the microstructural details of small samples. The study determines whether such an approach is feasible, establishes a routine for its implementation, and validates the approach against data gathered as part of the project. Potential applications for such a permeameter include its use in oil exploration where core plugs from costly test drills are usually too small to be measured accurately. It also has the potential for use as a design aid in the development of new filters for use throughout the chemical industry. Previous work, e.g. Vidal et al. (2009), assumes that packed beds of polymorphous structures can be represented by spherical particles for the purposes of LBM simulation. These authors considered packing compression of a bed, and the digitisation errors associated with the representation of particles in the bed, but related errors in predicted permeability to the ratio between the mean sphere diameter and the lattice spacing, rather than considering the resolution of individual particles. Additionally, this work considered errors arising from the size of the sample from a bed in terms of its volume rather than its specific dimensions. It was concluded that there was a 30-40% error between LBM predictions of permeability and data, with the predictions in accord with the Carman-Kozeny equation (McCabe et al., 2005). Another comparison with the latter
equation was reported by Inamuro et al. (1999) for a small and ordered packing of 9 equal-sphere systems using a D3Q15 LBM, but the match with the Carman-Kozeny equation was only achieved by retro-fitting the equivalent sphere diameter. Maier et al. (1998) also used a D3Q15 LBM for small packed columns, with a column-to-sphere size ratio of 7-10, reporting excellent agreement with the Carman-Kozeny equation. However, given the relatively low size ratio, wall effects would have been significant, with comparisons with NMR results only described as "qualitative". This paper considers a more systematic approach to investigating the errors associated with simulating fluid flow through larger packed beds using a D3Q19 LBM, and starts with the simple case of a bed packed with spherical particles. The work considers the minimum digitisation requirements for individual spherical particles within a bed, and the dimensions of the sample from a bed required to give reliable predictions of permeability. The paper also compares predictions based on the developed methodology with experimental data gathered on the flow through packed beds using a filtration rig.
2. Experimental and Simulation Approach LBM uses a gas-kinetic approach to simulate fluid systems. This work employed the Bhatnagar-Gross-Krook (BGK) approximation with a D3Q19 formulation, i.e. it represents a three dimensional, 19 velocity lattice (Succi, 2001). Solid boundaries are accommodated using a bounce-back scheme, with periodic boundaries used for the inlet and outlet. The mono-disperse glass spheres used in this work, being well-defined, were considered suitable for giving clear XMT images for use as input to the LBM calculations in order to specify the location of the particles in the sample of the bed. The body-force, fb, was used in the present simulations to enforce a given velocity through a bed. It is equal to the change in pressure over the length of the simulated bed, with:
k = Us μ / fb    (1)
with Us the superficial velocity, μ the dynamic viscosity, and fb = ΔP/L, where L is the length of the system. Permeability is the most appropriate way to describe flow through a packed bed as it quantifies porosity with respect to particle diameter. The permeability of spherical particle beds can be determined using the Carman-Kozeny equation (McCabe et al., 2005):

k = Sd² ε³ / (180 (1 − ε)²)    (2)
where Sd is the sphere diameter and ε is the porosity. Modifying the formula slightly, the particle diameter can be removed to ensure that k is solely affected by porosity, and this modified relationship was used herein to compare with the results obtained from LBM. The geometry information needed as input to the simulations was obtained using a Phoenix Nanotom XMT machine to take three-dimensional scans of small samples of a packed bed. To simplify relating the XMT data to LBM dimensionless units, voxel lengths or lattice units (LU) were utilised, where one lattice unit equals one voxel. At the resolution (1.98 μm/voxel) used for reconstruction of the x-ray scanned images, the average size of 116 μm for the glass beads used in the beds corresponds to 58.5 voxels. Permeability was obtained experimentally using a single-run, pressure-regulating filtration rig, with a Malvern Mastersizer 2000 used to measure individual particle size. The beds were formed by mixing washed and dried glass beads in a saline solution, then pouring the mixture into a filtration rig which comprised a steel tube with the bottom end screwed into a sieve filtration attachment permitting flow-through of water but not particles. The upper end of the tube was attached to a pressure regulating unit, which in
turn was connected to an external pump. After adding the particle mixture the pressure regulating unit was connected and set to a given pressure. The mass flow rate was measured using the weight of water outflow as a function of time. To account both for gravity and water resistance from the sieve, an experimental run without a bed was performed under gravity, and the flow rate subtracted from measurements obtained using a given drive pressure to ensure that the rate recorded was purely due to the pressure employed. Converting mass to volume flow rate, k was then calculated using:
k = U μ / (ΔP / H)    (3)
where U is the velocity, ΔP is the change in pressure, and H is the height of the bed (≈ 4 cm).
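The permeability bookkeeping of Eqs. (2) and (3) is straightforward to script. The following Python sketch uses illustrative numbers; the assumed porosity and operating point are not the measured values from the rig.

```python
def carman_kozeny(sd, eps):
    """Eq. (2): permeability of a bed of spheres, diameter sd (m), porosity eps."""
    return sd ** 2 * eps ** 3 / (180.0 * (1.0 - eps) ** 2)

def rig_permeability(u, mu, dp, h):
    """Eq. (3): k from superficial velocity u (m/s), dynamic viscosity mu (Pa s),
    pressure drop dp (Pa) and bed height h (m)."""
    return u * mu / (dp / h)

sd, eps = 116e-6, 0.38                       # 116 um beads, assumed porosity
print(carman_kozeny(sd, eps))                # m^2
print(rig_permeability(u=1e-3, mu=1e-3, dp=1.5e5, h=0.04))  # 1.5 bar, 4 cm bed
```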
3. Results and Discussion
It is apparent that there are two major areas where errors can arise in simulating flow through particle beds using the method proposed: those associated with the accuracy of representation of the particles themselves, and those arising from the representativeness, or otherwise, of the specific bed sample used as the basis of the simulations.
Figure 1: Predicted permeability with varying sphere digitisation accuracy.
Figure 2: Predicted permeability with varying volume of simulated sample.
Errors associated with the accuracy of representation of the particles in a bed were assessed by defining a square duct with a 160×160 voxel cross-section and a length of 384 voxels. Seven differently shaped spheres were then individually embedded into the centre of the duct and flow through the duct simulated. The shape of each sphere was created using increasing numbers of voxels, and each sphere was then stretched to a 128 voxel diameter to ensure a constant particle size and identical wall effects. Figure 1 shows the influence of varying the accuracy of representation on the permeability, with the calculated k clearly asymptoting to a constant value with increasing numbers of voxels. The sudden rise in k from 2 to 4 voxels is due to the significant change in shape that occurs with the addition of extra voxels: from cuboid to cruciform. The subsequent drop is due to additional pixels being placed in a way that increases frictional effects. However, the error is significantly reduced with a resolution of 16 voxels across the sphere diameter, and is less than 10% of the asymptotic value when using 32 voxels. Another method of assessing this error is to keep constant the ratio of the sphere to the duct diameter, and alter the size of the sphere by increasing the number of voxels used to represent it. Hence, as the size of the sphere increases, and the sphericity improves, so does the duct volume. Simulations were again performed, but in this case the wall
effect reduced, and the accuracy of representation of the sphere improved, with increasing numbers of voxels. This method was found to correlate well with the results of Fig. 1, with 32 voxels across the diameter again giving errors of less than 10%. For beds of spherical particles there is no theoretical method of determining the minimum size of sample, taken from a larger bed, required for it to be representative of the latter's permeability. Simulations were therefore performed based on XMT scans of varying sizes of sections of bed (at a fixed spatial resolution of 1.98 μm/voxel). Figure 2 shows how increasing the sample volume increases the permeability, and distinguishes between volumes roughly below and above 10⁷ LU³, with larger volumes displaying a fairly constant permeability. However, these results are misleading, being based on volume, since clearly the length and cross-sectional area of the sample are important. LBM predictions were compared with data obtained using the filtration rig at 1.5 bar, and with results from the Carman-Kozeny (C-K) equation. LBM was applied using samples of 100-600 LU in length, and cross-sections of 100²-400² LU², although only results obtained using a 600 LU length and varying cross-sections, and a cross-section of 300² LU² and varying lengths, are discussed. Samples for measurement by XMT were taken from the bottom middle, central middle and the top side of the larger bed. The central sample is considered since results from this location were the most variable, due to the random nature of the packing at such locations, with results shown in Fig. 3. Predictions for a constant cross-section show a rise in permeability with sample length, with the increase in predictions at L=300 LU coinciding with a rise in the C-K results. Similarly, the constant-length results indicate a rise in predictions that is parallel with that of C-K. Notably, the C-K equation fails to predict the data, and for all samples LBM is more accurate, with the level of agreement increasing with sample size. The data are also largely independent of sample size, indicating their representativeness of the bed as a whole, boundary effects notwithstanding. It may be concluded that a sample of dimensions of 300³ LU³ is acceptable in terms of its accuracy.
Figure 3: Permeability variation with fixed cross-section (left) and fixed length (right).

The filtration rig was also used to measure flow rates at three different pressures: 0.5, 1.0 and 1.5 bar. Each experiment was run at least eight times, with averaged data used below. In line with the sensitivity studies reported, simulations were performed with bed sample sizes of 600×400×400 LU³. The results are shown in Fig. 4, where a linear relationship between pressure and superficial velocity is observed, confirming that pressure can be translated into a body force, with the latter varying linearly with pressure. Differences between samples from various parts of the bed are small, with the largest variation from the average being 12.9%. Averaging the predicted
permeability across the samples results in the comparisons of Table 1. The error range between the predictions and data is shown, with an average error over all pressures of 17.8%. Also, the differences in permeability between the three pressures are low, indicating the uniform structure and homogeneity of beds made up of spherical particles.
Table 1: Comparison between measured and predicted permeability.

ΔP (bar)   Expt. k/Sd² ×10⁴   Sim. k/Sd² ×10⁴   Error (%)
0.5        4.23               3.50              17.2
1.0        4.28               3.50              18.3
1.5        4.31               3.54              17.9
Figure 4: Predicted superficial velocity variation with pressure for bottom middle (BM), central middle (CM) and top side (TS) samples, compared with overall average.
4. Conclusions
LBM is capable of predicting the permeability of packed beds of spheres over a range of drive pressures when using input from XMT to define the bed structure. In doing this, LBM should use at least 32 voxels across the particle diameter to negate digitisation errors, with samples from the bed needing to be of dimensions 300³ LU³ or greater to give representative results. Compared to our data, the Carman-Kozeny equation is less accurate than the present approach, with LBM predicting the permeability of the beds with an average error of 17.8%. Due to the beds' homogeneity, the location from which the sample is taken appears not to significantly influence the predictions.
References
R. Caulkin, M. Fairweather, X. Jia, R.A. Williams, 2006, Validation of a Digital Packing Algorithm for the Packing and Subsequent Fluid Flow Through Packed Columns, 16th European Symposium on Computer Aided Process Engineering, W. Marquardt, C. Pantelides (Eds.), Elsevier, Amsterdam, 395-400.
T. Inamuro, M. Yoshino, F. Ogino, 1999, Lattice Boltzmann Simulation of Flows in a Three-Dimensional Porous Structure, International Journal for Numerical Methods in Fluids, 29, 737-748.
R.S. Maier, D.M. Kroll, Y.E. Kutsovsky, H.T. Davis, R.S. Bernard, 1998, Simulation of Flow Through Bead Packs Using the Lattice Boltzmann Method, Physics of Fluids, 10, 60-75.
W.L. McCabe, J.C. Smith, P. Harriot, 2005, Unit Operations of Chemical Engineering, McGraw-Hill, New York.
S. Succi, 2001, The Lattice Boltzmann Equation for Fluid Dynamics and Beyond, Clarendon Press, Oxford.
D. Vidal, C. Ridgway, G. Pianet, J. Schoelkopf, R. Roy, F. Bertrand, 2009, Effect of Particle Size Distribution and Packing Compression on Fluid Permeability as Predicted by Lattice-Boltzmann Simulations, Computers and Chemical Engineering, 33, 256-266.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Study of Closed Operation Modes of Batch Distillation Columns Laszlo Hegely, Peter Lang Budapest University of Technology and Economics, Dept. of Building Services and Process Engineering, H-1521, Budapest, Muegyetem rkp. 3-5.
Abstract
Open and non-conventional closed batch column operation modes were compared by rigorous simulation. The configurations studied were the batch rectifier ("two-vessel column") and the middle-vessel column ("three-vessel column"). We simulated the separation of test mixtures (n-hexane-n-heptane and n-hexane-n-heptane-n-octane), and compared the recoveries of the different methods under constant process duration. We proposed six different variants of the closed operation mode, which differ in the operation of the top (and, in the case of the middle-vessel column, bottom) vessel(s). The effects of the operational parameters were also studied. In the case of negligible liquid hold-up, closed operation of the batch rectifier provided better recoveries by 1.7-6%. However, the open operation mode of the middle-vessel column proved to be better than the closed ones in every case.
Keywords: batch distillation, open and closed modes, operational policies.
1. Introduction
The advantages of batch distillation over continuous distillation are well known. The batch rectifier (BR) is the only configuration widespread in industry, operated as an open system (with continuous product withdrawal). The middle-vessel column (MVC; Bortolini and Guarise, 1970) consists of two column sections connected through a middle vessel, and can produce three products simultaneously. The generalisation of the middle-vessel column is the multi-vessel column (Wittgens et al., 1996), which is built up from several column sections and intermediate vessels, and is generally operated without product withdrawal. These devices can also be operated in closed mode (without product withdrawal), which may reduce the energy consumption (Skouras and Skogestad, 2004). The closed operation mode of the BR can be considered the simplest case of the multi-vessel column. The aim of this work is to study the competitiveness of the closed operation modes of the BR and MVC against the open ones. The rigorous simulation calculations are made for a binary (n-hexane (A) - n-heptane (B)) and a ternary (n-hexane (A) - n-heptane (B) - n-octane (C)) mixture with the dynamic module (CC-DCOLUMN) of the ChemCAD flow-sheet simulator. The recoveries obtained with the different methods are compared under constant process duration.
2. The configurations and operation modes Simplifying assumptions: theoretical plates, constant liquid hold-up on the plates, negligible vapour hold-up. For the VLE and enthalpy calculations, the SRK equation of state is applied.
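For reference, the pure-component SRK parameters used in such VLE calculations follow the standard expressions below. The sketch is illustrative, with the n-hexane critical constants quoted from the literature from memory (verify before use):

```python
R = 8.314  # J/(mol K)

def srk_ab(T, Tc, Pc, omega):
    """Standard SRK attraction and co-volume parameters for a pure component."""
    m = 0.480 + 1.574 * omega - 0.176 * omega ** 2
    alpha = (1.0 + m * (1.0 - (T / Tc) ** 0.5)) ** 2
    a = 0.42748 * R ** 2 * Tc ** 2 / Pc * alpha   # Pa m^6 / mol^2
    b = 0.08664 * R * Tc / Pc                     # m^3 / mol
    return a, b

# n-hexane near its normal boiling point (Tc ~ 507.6 K, Pc ~ 30.25 bar, w ~ 0.30)
print(srk_ab(T=342.0, Tc=507.6, Pc=30.25e5, omega=0.30))
```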
2.1. Batch rectifier
The open and different closed operation modes of the BR are compared. The charge is an equimolar mixture of A and B; its volume is 10 dm3. The prescribed purity of A is 99 mol%. The column operates with a heat duty of 500 W.
2.1.1. Open operation mode
The reflux ratio is constant during the process. The operation is stopped when the A content of the accumulated product decreases to 99 mol%. The duration (Δttotal) obtained in this way is prescribed for the closed operation modes.
Figure 1. ChemCAD models of closed operation modes with level control (a: BR, b: MVC)
2.1.2. Closed operation modes
The models of the closed operation modes (Fig. 1a) are very similar; the only difference is the presence or absence of the control equipment. The A-rich product is accumulated in the upper vessel. Six different closed modes are presented for the BR (Table 1), which differ in the operation of the upper vessel, that is, in the method of varying the liquid flow rate from this vessel. The closed modes have at least one additional degree of freedom (DoF) compared to the open one. For the closed modes, we define R as the ratio of the flow rate of reflux (L) to the rate of product accumulation (V − L).
2.2. Middle-vessel column
The open and closed modes of the MVC are investigated by comparing the recoveries of A and C for the separation of an equimolar mixture (20 dm3) of A, B and C. The prescribed purities for A and C are 98 mol%. Both column sections have 10 theoretical plates.
2.2.1. Open operation mode
The charge is filled into the middle vessel. The operation begins with total reflux and reboil. The flow rate of the liquid stream leaving the middle vessel is 40 dm3/h. The reflux and reboil ratios are chosen so that not only the purities of the top and bottom products but also that of the B-rich product reach 98 mol% at the same time. The duration obtained in this way is prescribed for the closed operation modes.

Table 1. The closed operation modes of the batch rectifier.

Mode 1 (constant volumetric flow rate): additional DoF: liquid flow rate leaving the vessel; advantage: existence of initial reflux (contrary to Mode 2a); disadvantage: R is always finite.
Mode 2a (constant level, vessel empty at start): additional DoF: liquid level; advantage: R = ∞ after an initial period; disadvantage: R = 0 initially (accumulation of liquid).
Mode 2b (constant level, charge is distributed between the vessels): additional DoF: liquid level (same as the initial level); advantage: R = ∞ during the whole operation; disadvantage: slower dynamics, low purity of initial reflux.
Mode 3 (constant flow rate, then constant level; combination of Modes 1 and 2a): additional DoFs: flow rate (1st part), liquid level (2nd part); advantage: faster dynamics at the beginning; disadvantage: decreased duration of R = ∞.
Mode 4a (temperature control, empty vessel at start): additional DoF: temperature of the 4th plate (from the top); advantage: R ≈ ∞ in the 2nd half of operation; disadvantage: R is finite in the 1st half of operation.
Mode 4b (temperature control, charge is distributed): additional DoFs: temperature of the 4th plate, initial level; the results are virtually the same as those of Mode 4a.
2.2.2. Closed operation modes
The model of the closed modes of the MVC can be seen in Fig. 1b. The upper and lower vessels are operated according to one of the modes presented for the closed modes of the batch rectifier. The advantages and drawbacks of the different modes (Table 1) hold for the MVC as well, substituting R with the reboil ratio for the bottom vessel. Compared to the BR, the number of additional DoFs is doubled. The flow rate of the liquid stream leaving the middle vessel is 40 dm3/h. In Modes 4a and 4b, the temperatures of the third plate of the upper and the eighth plate of the lower column are controlled, respectively. The operation mode applied by Skouras and Skogestad (2004) is also compared to the above ones. This mode is similar to Mode 4b, but the control is applied to the upper and middle vessels. The temperatures of the middle (fifth) stage of each column section are controlled; the set-points are the averages of the boiling points of the two components separated in the column section in question. At the start, 94%, 5% and 1% of the charge are filled into the bottom, middle and top vessel, respectively.
3. Results
For both configurations, the recoveries obtained with the open and the different closed operation modes are compared for the same product qualities. In the case of the closed operation modes, the values of at least one (BR) or two (MVC) operational parameters have to be adjusted in order to maximise the recovery while satisfying the quality requirement. Additionally, the PI controllers for the closed operation modes (except Mode 1) are tuned by the Ziegler-Nichols method; in a few cases, exponential filters also have to be applied. The results of Mode 4b are not presented, as the division of the charge does not have a significant effect on the recoveries in our experience.
3.1. Batch rectifier
The effects of three operational parameters on the recovery are studied. The reflux ratio of the open mode (Ropen) and the number of theoretical plates (N) have only slight effects, while the influence of the plate hold-up is very significant (Table 2).
Table 2. The effect of N and the hold-up on the recoveries of A.
                     N=8      N=10     N=10     N=10     N=12
Hold-up (cm3/p.)     0        0        50       100      0
Open                 92.2%    93.3%    95.2%    94.4%    93.6%
Mode 1               88.7%    92.0%    91.7%    89.8%    92.5%
Mode 2a              93.0%    97.1%    92.8%    88.4%    97.9%
Mode 2b              93.9%    97.7%    93.3%    88.9%    98.5%
Mode 3               93.3%    97.1%    92.8%    88.4%    98.0%
Mode 4a              94.1%    97.4%    93.2%    88.8%    98.0%
Figure 2. The recoveries at different levels of hold-up (a: BR, N=10, Ropen=9; b: MVC)
The advantage of the closed operation modes over the open one is also affected by Ropen (that is, Δttotal) and N, though to a smaller extent. The difference increases with increasing N and decreasing Ropen. Mode 1 proved to be always worse than the open mode, and generally worse than the other closed modes. The order of the other closed operation modes (with decreasing recoveries) is: 2b, 4a, 3, 2a. Mode 2b gave the best recovery with the exception of one case (Ropen=7), in which it was preceded by Mode 4a.
3.2. Middle-vessel column
We performed the calculations for negligible and for 40 ml/plate hold-up (4% of the charge) as well. The recoveries for 40 ml/plate hold-up are presented in Table 3, along with the recovery and purity of B. Unlike for the BR, the open mode proved to be better
than the closed ones (Fig. 2b) even at negligible hold-up. The order of the operation modes with respect to the recoveries is different not only for zero and non-zero hold-ups, but also for A and C.

Table 3. The calculated results of MVC (hold-up: 40 cm3/plate).

           Recovery (%):                          Purity of
           Hexane (A)  Heptane (B)  Octane (C)    heptane (mol%)
Open       99.2        73.14        96.94         98.03
Mode 1     96.34       85.63        90.48         94.37
Mode 2a    96.55       84.03        94.15         96.63
Mode 2b    95.25       85.39        92.88         96.66
Mode 3     95.56       86.09        92.96         96.75
Mode 4a    93.61       80.94        91.83         96.69
Skouras    93.7        83.4         92.2          96.62
The recovery of A is greater than that of C (with one exception). The recovery of B is highest with Mode 3. Mode 1 is very good at recovering A, but its C-recovery is the lowest. While Mode 2b is the best closed operation mode of the BR, it is not very favourable in the case of the MVC. In the case of negligible hold-up, Modes 3 or 4a can be suggested; in the case of 40 ml/plate hold-up, Modes 2a and 3. The operation mode of Skouras and Skogestad (2004) gives very similar results to those of Mode 4a, though the purity of A is slightly higher (98.4 mol%).
4. Conclusion
The open and six different closed operation modes of the batch rectifier and the middle-vessel column were studied. The closed modes differ from each other in the operation of the upper (and, in the case of the middle-vessel column, lower) vessel(s). In the case of negligible liquid hold-up, closed operation of the batch rectifier provided better recoveries. Modes 2b (level control with an initially filled-up top vessel) and 4a (composition control with an initially empty top vessel) proved to be the best closed modes. Decreasing the operation time and increasing the number of stages increased the advantage of the closed modes. For higher hold-ups, the open operation mode gave the highest recovery. The open operation mode of the middle-vessel column proved to be better than the closed ones in every case. Among the closed operation modes it is not possible to choose a single best one, as their order with respect to the recoveries is not the same for the two products or for the two hold-ups. For negligible liquid hold-up, Mode 4a or Mode 3 (constant flow rate then level control) can be recommended; for higher hold-ups, Mode 2a (level control with initially empty top and bottom vessels).
Acknowledgement This work was supported by the Hungarian Scientific Research Foundation (OTKA, project No.: T-049184) and by the New Hungary Development Plan (project number: KMOP-1.1.1-07/1-2008-0031).
References
P. Bortolini, G.B. Guarise, 1970, Quad. Ing. Chim. Ital., 6(9), 150
S. Skouras, S. Skogestad, 2004, Comput. Chem. Eng., 28, 829-837
B. Wittgens, R. Litto, E. Sørensen, S. Skogestad, 1996, Comput. Chem. Eng., 20, S1041
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Dynamic failure assessment of incidents reported in the Greek Petrochemical Industry Eftychia C. Marcoulaki, Myrto Konstandinidou, Ioannis A. Papazoglou Laboratory of System Reliability and Industrial Safety, National Centre for Scientific Research “Demokritos”, PO Box 60228, Aghia Paraskevi, Athens 15310, Greece
Abstract This paper presents a Bayesian statistical analysis on real incident data collected from the Greek Petrochemical Industry for a period of 6 years (1997-2003). The analysis provides an assessment of the database to support predictions of dynamically updated incident occurrence frequencies. Results are reported for two different categories of incidents, namely the industrial and the occupational ones. Keywords: Bayesian failure assessment, incidents database, petrochemical industry
1. Introduction
Abnormal events occurring in the chemical industries may have significant economic and safety impact. These events can be divided into incipient faults, near-misses, incidents and, finally, accidents. Near-misses and incidents are referred to as accident sequence precursors, since they may propagate to accidents. Meel and Seider [1] developed a Bayesian approach to estimate the dynamic probabilities of accident sequences, tailored to the chemical industries, and applied it to the analysis of incident databases [2]. The present work applies the above statistical analysis methodology to accidents and near misses in various process plants in Greece. An extended database is used, which comprises data from all accidents and incidents of the Greek Petrochemical Industry [3]. The collected data are based on 1,115 reported incidents (including near misses, occupational and industrial accidents) over a 6-year period, from 1997 to 2003, from the entire Greek Petrochemical Industry, which includes refineries, onshore and offshore facilities, storage locations and extraction sites. The data were acquired directly from the different establishments, and the participating companies gave access to their archives and to the initial reports of the incidents. During the database development, special care was taken to include data from near misses as well, and to include as much detail as possible concerning the causes and the conditions of the incidents. The resulting database assumes a user-friendly format with additional possibilities for its further use, such as statistical analysis of the data, calculation of safety indicators, and generation of accident reports. The database allows the various stakeholders to compare the analysis of indicators in their own installations with the national average, as the database comprises data from the entire Greek petrochemical industry [4].
2. Modeling the frequency of incidents
This work applies Bayesian inference methods for the analysis of failure data in five petrochemical companies in Greece over a six-year period. In the Bayesian approach, prior statistics of the system's behavior are combined with new data to derive posterior statistics of improved confidence level. Consider a set DN = {y1, y2, ..., yN} of incident observations, y, over a period of N years. Let y be sampled from a statistical distribution F, thus y ~ F(k1, k2, ..., kM), and let the hyperparameters km follow the prior distributions km ~ Gm(am, bm). According to Bayes' theorem, the posterior probability of the parameters is quantified as:

p(k1, k2, ..., kM | DN) ∝ l(DN | k1, k2, ..., kM) p(k1) p(k2) ... p(kM)

where l(DN | k1, k2, ..., kM) denotes the likelihood. For certain combinations of priors and likelihood, the posterior and the priors are conjugate distributions, meaning that they belong to the same family of probability distributions. The distribution p(yN+1 | DN) of incidents over the next year N+1, conditioned on the data in DN, is predicted by integrating over all possible values of k1, k2, ..., kM. The present work assumes that the data are sampled from a Poisson distribution whose mean, λ, follows a Gamma distribution. Note that the expected value and the variance of a Poisson distribution are both equal to λ. Since the likelihood is Poisson, the conjugate prior and posterior distributions are in the Gamma family and the predictive distribution is a mixed Gamma-Poisson. The choice of a Gamma-Poisson Bayesian model permits analytical evaluation of the predictive distribution parameters [1]; otherwise, Monte Carlo simulations are performed [2].
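As an illustration of the conjugate update described above, the sketch below carries a Gamma prior through some made-up annual counts and forms the mixed Gamma-Poisson (negative binomial) predictive distribution. The prior parameters and counts are hypothetical.

```python
from scipy import stats

a, b = 0.5, 0.0001               # near non-informative Gamma(a, b) prior (rate b)
counts = [12, 9, 15, 11]         # hypothetical annual incident counts

a_post = a + sum(counts)         # Gamma-Poisson conjugate update
b_post = b + len(counts)

# Predictive for next year's count: negative binomial with r = a_post,
# p = b_post / (b_post + 1); its mean equals the posterior mean a_post/b_post.
predictive = stats.nbinom(a_post, b_post / (b_post + 1.0))
print(predictive.mean(), predictive.std())
```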
3. Analysis of the database and model-checking
The entries of the database [3-4] are divided into releases, fires, explosions and occupational incidents. In this work, data for releases, fires and explosions are aggregated into the general category of industrial incidents. The occupational entries involve incidents threatening the health or the life of company staff. The statistical analysis is carried out for each company, and consists of predicting annual failure frequencies and testing the accuracy of the applied prediction models. Due to insufficient knowledge of the years before 1997, a non-informative Gamma prior is used. Figures 1-2 report the annual observed and predicted values for the industrial and occupational incidents from 1997 (year 1), respectively. Dynamic predictions for the 5 companies and their total are performed at each year n using as data the observations collected during the previous years (i.e., the sets Dn−1, with D0 ≡ ∅). The square markers indicate expected values and the error bars give standard-deviation intervals. The figures show that the Bayesian models tend to misestimate the actual failures, and their performance is quite poor during the first couple of years. As n increases, the size of the failure data sample increases and the predicted distributions should provide a more reliable approximation of the actual failures. Looking at the standard deviations, the predictive distributions tend to become narrower as the number of observed annual failures increases. These trends agree with the analysis of Meel et al. [2], which covered a period of 11 years for USA plants.
[Six panels: Companies A-E and Total; annual industrial incidents, observed values vs. Bayesian predictions with standard-deviation error bars.]
Figure 1: Data and Bayesian predictions on industrial incidents

Using the model-checking procedure in [1-2], the accuracy of the failure models is assessed via a hypothesis test that their annual predictive scores {zn} come from a standard Gaussian distribution. The z-scores are calculated as:

zn = ( yn − E[p(yn | DN \ {yn})] ) / ( V[p(yn | DN \ {yn})] )^0.5

where p(yn | DN \ {yn}) is the distribution based on the data collected for all years excluding year n, while E and V denote the expected value and the variance, respectively. Assuming constant failure rates, as n increases, the distribution p(yn | Dn−1) moves closer to p(yn | DN \ {yn}). The z-scores can be assessed graphically (using Q-Q plots) or quantitatively using tests like the Shapiro-Wilk test [5], a well-established test for small to medium sized samples [6].
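A compact sketch of this model-checking loop, with hypothetical counts and the same Gamma-Poisson predictive as in Section 2, might look as follows (scipy's shapiro returns the W statistic and p-value used in Table 1):

```python
import numpy as np
from scipy import stats

counts = np.array([12, 9, 15, 11, 14, 8])   # hypothetical annual counts
z = []
for n in range(len(counts)):
    rest = np.delete(counts, n)              # leave year n out
    a_post, b_post = 0.5 + rest.sum(), 0.0001 + len(rest)
    pred = stats.nbinom(a_post, b_post / (b_post + 1.0))
    z.append((counts[n] - pred.mean()) / pred.std())

W, p = stats.shapiro(z)                      # Shapiro-Wilk normality test
print(round(W, 3), round(p, 3))
```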
[Six panels: Companies A-E and Total; annual occupational incidents, observed values vs. Bayesian predictions with standard-deviation error bars.]
Figure 2: Data and Bayesian predictions on occupational incidents

Table 1 reports the test statistics calculated using the R software [7]. The W statistic quantifies the goodness-of-fit and ranges between 1 (perfect fit) and 0 [5], while the p-value gives the confidence level (this work assumes a lower threshold of 0.05 for p). Companies D, A and B appear to give the best fit for industrial incidents. These companies are complexes of significantly larger size compared to C and E, and with far better incident monitoring systems. The poor fit for the industrial incidents of company E can also be attributed to the scarcity of data, since E is a storage and single-process production-line facility. Company C is similar to A, B and D but has smaller capacity and staff size. According to Table 1, the Bayesian models provide reliable predictions. In effect, with the exception of the occupational incidents of company D, where the p-value drops below 0.05, the model statistics remain far from the non-normality region.
Table 1: Shapiro-Wilk statistics for non-normality of the Bayesian model z-scores

     Industrial incidents                        Occupational incidents
     A      B      C      D      E      Total   A      B      C      D      E      Total
W    0.898  0.886  0.836  0.938  0.836  0.911   0.864  0.823  0.831  0.697  0.922  0.898
p    0.319  0.252  0.092  0.619  0.091  0.401   0.163  0.069  0.082  0.003  0.485  0.317

In Figures 1 and 2, the occupational incident models seem better at predicting the expected number of incidents. However, the predicted variances of the industrial distributions are higher; therefore, looking at Table 1, the models for industrial incidents appear better at capturing the uncertainties and predicting the failure-distribution trends. Additionally, the analysis indicates that company-specific distributions are better suited, provided that the company has a sufficiently large data pool. The data could also be used to inform predictions for similar companies not present in the database, provided a reliable metric is established to dissociate the data from the size of the company. Such metrics could be the production capacity and the number of staff for industrial and occupational incidents, respectively. Similarly, information from aggregated samples (over all companies) could be used as prior input to improve the prediction quality when the data pool is small.
4. Conclusions
This work has demonstrated a successful application of recently developed advanced statistical methods for the dynamic assessment of failure rates to an extended database of all the incidents reported in the Greek Petrochemical Industry during the period 1997-2003. Future work considers the testing of Bayesian models other than the Gamma-Poisson presented here, using Monte Carlo simulations.
References
[1] A. Meel, W.D. Seider, 2006, Plant-specific dynamic failure assessment using Bayesian theory, Chemical Engineering Science, 61, 7036-7056
[2] A. Meel, L.M. O'Neill, J.H. Levin, W.D. Seider, U. Oktem, N. Keren, 2007, Operational risk assessment of chemical industries by exploiting accident databases, Journal of Loss Prevention, 20, 113-127
[3] Z. Nivolianitou, M. Konstandinidou, C. Kiranoudis, N. Markatos, 2006, Development of a database for accidents and incidents in the Greek petrochemical industry, Journal of Loss Prevention, 19 (6), 630-638
[4] M. Konstandinidou, Z. Nivolianitou, N. Markatos, C. Kiranoudis, 2006, Statistical analysis of incidents reported in the Greek Petrochemical Industry for the period 1997-2003, Journal of Hazardous Materials, A135, 1-9
[5] S.S. Shapiro, M.B. Wilk, 1965, An Analysis of Variance Test for Normality (Complete Samples), Biometrika, 52 (3/4), 591-611
[6] D.K. Srivastava, G.S. Mudholkar, 2003, Goodness-of-fit Tests for Univariate and Multivariate Normal Models, in R. Khattree & C.R. Rao (Eds.), Handbook of Statistics, 22, 869-906
[7] R. Gentleman, R. Ihaka, D. Bates, J. Chambers, J. Dalgaard, K. Hornik, 2005, The R Project for Statistical Computing, available at http://www.r-project.org
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A continuous-time MILP to compute schedules with minimum changeover times for a make-and-pack production Philipp Baumann, Norbert Trautmann Department of Business Administration, University of Bern, Switzerland
Abstract In this paper we present a MILP formulation for the short-term scheduling of a real-world make-and-pack production facility of Procter & Gamble. The model accounts for sequence-dependent changeover times, multipurpose storage units with finite capacities, quarantine times, batch splitting, partial equipment connectivity, and transfer times. The planning problem consists in minimizing the total time required for changeovers while meeting given end-product demands within a prescribed planning horizon. Our computational results show that moderate-sized instances of the case study can be solved to optimality within short CPU times. Keywords: Make-and-pack production, mixed-integer linear programming, batch splitting, quarantine times, sequence-dependent changeovers
1. INTRODUCTION A make-and-pack plant comprises a make stage, during which some products are manufactured in a make-to-stock mode; a pack stage, during which the products are packed in various formats in a make-to-order mode; and some finite storage facilities to decouple these two stages. Such plants are used for the production of a large variety of products, e.g. food and beverages, detergent, hair dyes, or toothpaste. If different products are produced and packed in various formats, then processing units, storage tanks, and packing lines have to be changed over. A changeover task involves the cleaning and setup of a processing unit or tank. The duration of a changeover task is usually dependent on the product sequence. Minimizing sequence-dependent changeovers in production planning reduces the loss of capacity and favours an environmentally conscious production management, as the cleaning of processing units consumes considerable energy and requires polluting materials. Here, we consider a case study (Honkomp et al., 2000) of a real-world make-and-pack production plant from the consumer goods industry producing more than 200 different combinations of products and package types. Besides sequence-dependent changeovers, issues like partial equipment connectivity, material transfers, quarantine times and batch splitting are included in the case study and contribute to the complexity of the planning problem. The models from the literature on short-term scheduling deal with individual characteristics of this case study like sequence-dependent changeovers (e.g. Castro and Novais
(2009)) or partial equipment connectivity and time-consuming material transfers (e.g. Giménez et al. (2009)). To the best of our knowledge, Baumann and Trautmann (2010) is the only paper presenting a model which covers all constraints contained in this case study. However, this model is tailored to makespan minimization and cannot consider changeover times in the objective function. Therefore, we develop a mixed-integer linear programming approach to the problem of computing a feasible production schedule such that a given customer demand is met within a prescribed planning horizon and the total time required for changeovers is minimized. Due to the great variety of products and the broad set of technological constraints, it is not possible to calculate weekly production schedules with exact methods in reasonable CPU times. We propose to use the model for calculating optimal schedules for moderate demands and for evaluating heuristic solutions. We have applied our model to the test set of Baumann and Trautmann (2010), which was generated using the Procter & Gamble case study data. The results show that our model performs well in computing optimal solutions to moderate-sized instances. The remainder of this paper is structured as follows. Section 2 describes the production process of the case study. In Section 3, we develop our MILP formulation, and in Section 4, we report on our computational results.

Figure 1. Production and packing process (schematic of premix units PM1–PM3, final-mix units FM1–FM6, storage tanks T1–T80, and packing lines PL1–PL7; figure not reproduced)
2. PLANNING PROBLEM
The production and packing process is illustrated in Fig. 1. The make stage (M) consists of three premix (PM) units $j \in J^{PM}$, each connected to two final-mix (FM) units $j' \in J_j^{FM}$. Products are produced in make-batches $i \in I^M$ of size $\beta^M = 10$. Make-batches $i \in I^{PM}$ which require a premix operation are first processed in a suitable premix unit $j \in J_i^{PM}$. The processing time $\alpha_i^{PM}$ of the premix operation of batch $i$ depends on the product. The resulting intermediate is immediately transferred into a final-mix unit, where the final-mix operation with processing time $\alpha_i^{FM}$ is performed. During the transfer with duration $\tau_i$, both units involved are occupied. Between the processing of make-batches belonging to specific pairs of wash-out families, the premix and the final-mix unit have to be cleaned. We denote the time required for cleaning the premix and the final-mix unit between make-batches $i \in I^M$ and $i' \in I^M$ by $\omega_{ii'}^{PM}$ and $\omega_{ii'}^{FM}$, respectively. After production a make-batch may be split, and must be transferred (with pump-out time $\rho_i$) into one or two storage tanks $t \in T$ and held for a product-specific quarantine time $\alpha_i^T$. Six storage tanks with capacity $\kappa_t = 10$ and 74 storage tanks with capacity $\kappa_t = 5$ can be used to connect any final-mix unit at the make stage with one of seven packing lines $j \in J^P$ at the pack stage (P). It is not allowed to store different batches concurrently in the same tank. Packing
lines usually pack in pack-batches $i \in I^P$ of size $\beta^P = 5$ due to the intermittent supply of intermediates. The time needed to pack batch $i \in I^P$ on a suitable packing line $j \in J_i^P$ is $\alpha_{ij}^P$. Similar to premix and final-mix units, the packing lines have to be cleaned. Additionally, each package type requires a specific packing-line setup. The time needed to clean and change over a packing line $j \in J^P$ between pack-batches $i$ and $i'$ is denoted by $\omega_{ii'j}^P$. The planning problem consists in assigning a processing unit $j \in J_i^l$ and a start time $S_i^l$ to each batch $i \in I^l$ on all stages $l \in \{PM, FM, P\}$ such that a given demand of packed final products is met within a given planning horizon $H$, all technological constraints are satisfied, and the total time for wash-outs and package-type changeovers is minimized.
3. THE MILP MODEL
In order to model the allocation and sequencing decisions we apply the concept of unit-specific immediate precedence (cf. Cerdá et al. (1997)) for the premix, the final-mix and the pack stage. The binary variable $X_{ii'j}^l$ equals 1 whenever batch $i \in I^l$ is processed immediately before batch $i' \in I^l$ on unit $j \in J^l$, with $l \in \{PM, FM, P\}$. The objective function minimizes the total time for wash-outs and package-type setups on all stages of the production process. The first constraint ensures for all stages $l \in L$ that every batch $i \in I^l$ is processed either in the first place ($XS_{ij}^l = 1$) or right after another batch $i' \in I^l$. Furthermore, every batch $i$ is processed either in the last place ($XE_{ij}^l = 1$) or right before another batch $i' \in I^l$, which is expressed through constraint (2). On each processing unit $j \in J^l$ at most one batch $i$ can be in the first or in the last place of the processing sequence. This condition is enforced through constraint (3). Constraint (4) guarantees that the immediate predecessor and successor batches of batch $i$ are assigned to the same processing unit $j$. Constraint (5) ensures that if batch $i$ is assigned to premix unit $j$, the same batch is assigned to a final-mix unit $j' \in J_j^{FM}$ which can be connected to premix unit $j$. Constraint (6) requires that every make-batch $i \in I^M$ is allocated either to a tank $t \in T$ with capacity $\beta^M$ or to two tanks with capacity $\beta^M/2$ each. The binary variable $W_{it}$ equals 1 if batch $i$ is assigned to tank $t$. Constraint (7) allocates every pack-batch $i \in I^P$ to a single tank $t \in T$. This is sufficient since every tank $t \in T$ has a capacity greater than or equal to the size of a pack-batch $\beta^P$. Constraints (8)–(10) define the finish times of the premix, the final-mix and the packing operation of batch $i \in I^l$, with $S_i^l$ denoting the corresponding start time, $\alpha_i^l$ the duration of the operation on stage $l$, $\tau_i$ the transfer time between the premix and the final-mix stage, and $\rho_i$ the pump-out time. In contrast to the duration of the premix or the final-mix operation, the duration of a packing operation depends on the assigned packing line $j \in J^P$. Whenever batch $i \in I^l$ is the immediate predecessor of batch $i' \in I^l$ on unit $j \in J_i^l \cap J_{i'}^l$, constraints (11) and (12) guarantee that batch $i'$ begins after completing both the operation of batch $i$ and the subsequent wash-out and changeover task in unit $j$. For each batch $i \in I^{PM}$, constraint (13) ensures that the final-mix operation starts when the intermediate has been transferred into the final-mix unit. Constraint (14) enforces that a pack-batch $i' \in I^P$ cannot start before the supplying make-batch $i \in I_{i'}^M$ has been stored for its quarantine time. By introducing the variable $X_{ii'}^T$, which defines the storing precedence between two make-batches ($i$ before $i'$ if $X_{ii'}^T = 1$ and $i'$ before $i$ otherwise), we can prevent with constraints (15) and (16) that different batches are stored simultaneously in the same tank. The continuous variable $F_{ii't}$ denotes the amount of material provided by make-batch $i \in I^M$ to pack-batch $i' \in I_i^P$ through tank $t \in T$. Constraints (17)–(19) ensure that every make-batch feeds two pack-batches. Constraint (20) enforces that
whenever material flows from make-batch $i$ to pack-batch $i'$ through tank $t$, both batches are allocated to tank $t$. Constraint (21) guarantees that the amount stored in a tank $t \in T$ never exceeds its capacity $\kappa_t$. Constraint (22) enforces that the processing of any pack-batch $i \in I^P$ is completed within the prescribed planning horizon $H$. In order to reduce CPU time, we additionally introduced preordering constraints for identical batches and symmetry-breaking constraints, both WLOG.
Min. $\sum_{l\in\{PM,FM\}} \sum_{i\in I^l} \sum_{i'\in I^l: i'\neq i} \sum_{j\in J_i^l \cap J_{i'}^l} X_{ii'j}^l\, \omega_{ii'}^l + \sum_{i\in I^P} \sum_{i'\in I^P: i'\neq i} \sum_{j\in J_i^P \cap J_{i'}^P} X_{ii'j}^P\, \omega_{ii'j}^P$

s.t.
$\sum_{j\in J_i^l} XS_{ij}^l + \sum_{i'\in I^l: i'\neq i} \sum_{j\in J_i^l\cap J_{i'}^l} X_{i'ij}^l = 1$  $(l \in L,\ i \in I^l)$  (1)
$\sum_{j\in J_i^l} XE_{ij}^l + \sum_{i'\in I^l: i'\neq i} \sum_{j\in J_i^l\cap J_{i'}^l} X_{ii'j}^l = 1$  $(l \in L,\ i \in I^l)$  (2)
$\sum_{i\in I_j^l} XS_{ij}^l = \sum_{i\in I_j^l} XE_{ij}^l \le 1$  $(l \in L,\ j \in J^l)$  (3)
$XS_{ij}^l + \sum_{i'\in I_j^l: i'\neq i} X_{i'ij}^l = XE_{ij}^l + \sum_{i'\in I_j^l: i'\neq i} X_{ii'j}^l$  $(l \in L,\ i \in I^l,\ j \in J_i^l)$  (4)
$XE_{ij}^{PM} + \sum_{i'\in I^{PM}: i'\neq i} X_{ii'j}^{PM} = \sum_{j'\in J_j^{FM}} \Big( XE_{ij'}^{FM} + \sum_{i'\in I^{FM}: i'\neq i} X_{ii'j'}^{FM} \Big)$  $(i \in I^{PM},\ j \in J_i^{PM})$  (5)
$\sum_{t\in T} W_{it}\,\kappa_t = \beta^M$  $(i \in I^M)$  (6)
$\sum_{t\in T} W_{it} = 1$  $(i \in I^P)$  (7)
$C_i^{PM} = S_i^{PM} + \alpha_i^{PM} + \tau_i$  $(i \in I^{PM})$  (8)
$C_i^{FM} = S_i^{FM} + \tau_i + \alpha_i^{FM} + \rho_i$  $(i \in I^M)$  (9)
$C_i^{P} = S_i^{P} + \sum_{j\in J_i^P} \alpha_{ij}^P \Big( XE_{ij}^P + \sum_{i'\in I^P: i'\neq i} X_{ii'j}^P \Big)$  $(i \in I^P)$  (10)
$S_{i'}^l \ge C_i^l + \omega_{ii'}^l - M\big(1 - \sum_{j\in J_i^l\cap J_{i'}^l} X_{ii'j}^l\big)$  $(l \in \{PM,FM\},\ i, i' \in I^l: i \neq i')$  (11)
$S_{i'}^P \ge C_i^P + \omega_{ii'j}^P - M\,(1 - X_{ii'j}^P)$  $(i, i' \in I^P: i \neq i',\ j \in J_i^P \cap J_{i'}^P)$  (12)
$S_i^{FM} = S_i^{PM} + \alpha_i^{PM}$  $(i \in I^{PM})$  (13)
$S_{i'}^P \ge C_i^{FM} + \alpha_i^T - M\,(1 - U_{ii'})$  $(i' \in I^P,\ i \in I_{i'}^M)$  (14)
$S_{i'}^{FM} + \tau_{i'} + \alpha_{i'}^{FM} \ge C_{i''}^P + \omega_{i''i't} - M\,(4 - W_{it} - W_{i't} - U_{ii''} - X_{ii'}^T)$  $(i, i' \in I^M: i < i',\ i'' \in I_i^P,\ t \in T)$  (15)
$S_i^{FM} + \tau_i + \alpha_i^{FM} \ge C_{i''}^P + \omega_{i''it} - M\,(3 - W_{i't} - W_{it} - U_{i'i''} + X_{ii'}^T)$  $(i, i' \in I^M: i < i',\ i'' \in I_{i'}^P,\ t \in T)$  (16)
$\sum_{i'\in I_i^P} \sum_{t\in T} F_{ii't} \le \beta^M$  $(i \in I^M)$  (17)
$\sum_{i\in I_{i'}^M} \sum_{t\in T} F_{ii't} = \beta^P$  $(i' \in I^P)$  (18)
$\sum_{t\in T} F_{ii't} = \beta^P\, U_{ii'}$  $(i \in I^M,\ i' \in I_i^P)$  (19)
$F_{ii't} \le \beta^P W_{it}$, $F_{ii't} \le \beta^P W_{i't}$  $(i \in I^M,\ i' \in I_i^P,\ t \in T)$  (20)
$\sum_{i'\in I_i^P} F_{ii't} \le \kappa_t$  $(i \in I^M,\ t \in T)$  (21)
$C_i^P \le H$  $(i \in I^P)$  (22)
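To make the precedence structure concrete, the following sketch implements a stripped-down, single-line version of constraints (1)–(3) together with the big-M timing logic of (12), on invented toy data. It is only an illustration using the open-source PuLP/CBC stack, not the authors' AMPL/Gurobi implementation; the batch names, durations and changeover times are assumptions.

```python
# Minimal sketch (not the authors' implementation): unit-specific immediate
# precedence on one packing line with sequence-dependent changeovers.
import pulp

batches = ["A1", "A2", "B1"]            # two batches of family A, one of family B (toy data)
alpha = {"A1": 60, "A2": 60, "B1": 90}  # packing durations [min]
# changeover time omega[i, j] when j directly follows i (0 within a family)
omega = {(i, j): (0 if i[0] == j[0] else 30)
         for i in batches for j in batches if i != j}
M = sum(alpha.values()) + sum(omega.values())            # big-M constant

m = pulp.LpProblem("min_changeover", pulp.LpMinimize)
X = pulp.LpVariable.dicts("X", omega.keys(), cat="Binary")   # i immediately before j
XS = pulp.LpVariable.dicts("XS", batches, cat="Binary")      # i is first in sequence
XE = pulp.LpVariable.dicts("XE", batches, cat="Binary")      # i is last in sequence
S = pulp.LpVariable.dicts("S", batches, lowBound=0)          # start times

m += pulp.lpSum(omega[i, j] * X[i, j] for (i, j) in omega)   # total changeover time
for i in batches:
    # each batch is first or has an immediate predecessor, cf. (1)
    m += XS[i] + pulp.lpSum(X[j, i] for j in batches if j != i) == 1
    # each batch is last or has an immediate successor, cf. (2)
    m += XE[i] + pulp.lpSum(X[i, j] for j in batches if j != i) == 1
m += pulp.lpSum(XS[i] for i in batches) == 1   # exactly one first batch, cf. (3)
m += pulp.lpSum(XE[i] for i in batches) == 1   # exactly one last batch, cf. (3)
for (i, j) in omega:
    # big-M timing disjunction, cf. (12); this also rules out precedence cycles
    m += S[j] >= S[i] + alpha[i] + omega[i, j] - M * (1 - X[i, j])

m.solve(pulp.PULP_CBC_CMD(msg=False))
order = sorted(batches, key=lambda i: S[i].value())
print(order, "total changeover:", pulp.value(m.objective))  # keeps the A batches adjacent
```

Note how the timing constraints do double duty: without them, the pure precedence balances (1)–(3) would admit disconnected cycles of X variables with zero changeover cost.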
4. COMPUTATIONAL RESULTS
We have applied the proposed model to an illustrative example (IE) and 8 problem instances developed in Baumann and Trautmann (2010). We computed upper bounds (UB) on the minimal makespan of these instances with the model of Baumann and Trautmann (2010) (cf. Table 1). For some instances the stated makespan was proven to be minimal, which is indicated by ∗ in Table 1. We multiplied these upper-bound values by ν = 1, 1.1, 1.2, 1.3 in order to simulate different planning horizons H for each instance. We applied the model to all instances for all four values of H on a 3 GHz Intel Core 2 Quad PC with 8 GB RAM using AMPL and Gurobi 4.0. Model sizes and computational requirements are shown in Tables 1 and 2, respectively. We set a CPU time limit (lim.) of 5 hours, which was reached five times, sometimes without finding a feasible solution (no).
Figure 2: Optimal schedule of illustrative example with ν = 1.1, OF = 339 min (Gantt chart over units PM1, FM1, FM2, external storage E-S1–E-S3, and packing lines PL1–PL2 on a 0–975 min time axis; figure not reproduced)
Table 1: Model sizes

Inst.  UB      Bin. vars  Cont. vars  Constr.
IE     943.0∗     478        140        1037
1      489.0∗    1440        856        5552
2      536.5∗    2066       1194        9988
3      684.0∗    3511       2204       26742
4      684.0∗    4757       3678       55265
5      786.0     5638       5616       95439
6      664.0     3237       3312       36945
7      715.5∗    3454       5548       61815
8      616.0     3508       6830       75916

Table 2: Computational results (OF in min, CPU in s, GAP in %)

       ν = 1                ν = 1.1              ν = 1.2              ν = 1.3
Inst.  OF    CPU    GAP     OF    CPU    GAP     OF    CPU    GAP     OF    CPU    GAP
IE     410   1881   0.0     339   2979   0.0     195     45   0.0     148      7   0.0
1      152     13   0.0      74      9   0.0      74     12   0.0      74      5   0.0
2      321     22   0.0     179     16   0.0     179     17   0.0     179     15   0.0
3      179    189   0.0     179     97   0.0     179     87   0.0     179     77   0.0
4      368   lim.   0.5     179    916   0.0     179    333   0.0     179    284   0.0
5      no    lim.   –       226   lim.   0.2     179   3064   0.0     179   2970   0.0
6      199   2359   0.0      74    112   0.0      74    136   0.0      74    106   0.0
7      105   2228   0.0       0    182   0.0       0    151   0.0       0    150   0.0
8      no    lim.   –       no    lim.   –         0   1856   0.0       0   1160   0.0
The results show that moderate instances can be solved to optimality in short CPU times, and that a substantial reduction in changeover time can be achieved by enlarging the planning horizon by 10%. Figure 2 shows the optimal schedule for the illustrative example with ν = 1.1. This work was supported by the Swiss National Science Foundation, grant 205121-125106.
References Baumann, P., Trautmann, N., 2010. An MILP approach to short-term scheduling of an industrial make-and-pack production facility with batch splitting and quality release times. In: Lian, Z., Wu, Z., Xie, M., Jiao, R. (Eds.), Proc. IEEE Intl. Conf. on Industrial Engineering and Engineering Management. Macao, pp. 1230–1234. Castro, P., Novais, A., 2009. Scheduling multistage batch plants with sequence-dependent changeovers. AIChE J. 55, 2122–2137. Cerdá, J., Henning, G., Grossmann, I., 1997. A mixed-integer linear programming model for shortterm scheduling of single-stage multiproduct batch plants with parallel lines. Ind. Eng. Chem. Res. 36, 1695–1707. Giménez, D., Henning, G., Maravelias, C., 2009. A novel network-based continuous-time representation for process scheduling: Part I. Main concepts and mathematical formulation. Comp. Chem. Eng. 33, 1511–1528. Honkomp, S., Lombardo, S., Rosen, O., Pekny, J., 2000. The curse of reality - why process scheduling optimization problems are difficult in practice. Comp. Chem. Eng. 24, 323–328.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
An Evaluation Method for Plant Alarm System Based on a Two-Layer Cause-Effect Model
Naoki Kimura,a Kazuhiro Takeda,b Masaru Noda,c Takashi Hamaguchid
a Faculty of Engineering, Kyushu University, 744 Motooka, Fukuoka 819-0395, Japan
b Faculty of Engineering, Shizuoka University, 3-5-1 Johoku, Hamamatsu, Japan
c Graduate School of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma 630-0192, Japan
d Graduate School of Engineering, Nagoya Institute of Technology, Gokiso, Showa-ku, Nagoya 466-8555, Japan
Abstract
Industrial plant alarm systems form the core element of almost all modern operator interfaces; they automatically monitor plant conditions and alert plant operators to any significant changes that require diagnosis and/or countermeasures. In this paper, we propose a method for quantitatively evaluating the diagnostic and timely characteristics of an alarm system that uses a two-layer cause-effect model to measure three rates used as indices: the effective, recall, and timeliness rates. The effective and recall rates are used to evaluate the diagnostic ability of the alarm system in identifying root causes of assumed malfunctions. The timeliness rate is used to evaluate the plant alarm system's ability to generate diagnostic alarms quickly enough for operators to respond in a timely manner and correct the problem. A case study demonstrated the feasibility of the proposed method.
Keywords: Plant alarm system, Effective rate, Recall rate, Timeliness rate, Two-layer cause-effect model.
1. Introduction
A plant alarm system is important for safe and reliable operation. When process variables become abnormal, alarms notify operators by sound, visual indication, messages, etc. A poorly designed alarm system causes nuisance alarms, standing alarms, and alarm flooding, and can even result in incidents or accidents (ISA, 2009). The Engineering Equipment and Materials Users' Association (EEMUA, 2007) issued a comprehensive guideline for designing, implementing, evaluating, improving, and buying an alarm system. This guideline summarizes some of the characteristics that each alarm should have; namely, that it be relevant, unique, timely, prioritized, understandable, diagnostic, advisory, and focused. The diagnostic and timely elements are the most important characteristics of alarms. Izadi et al. (2009) proposed using the receiver operating characteristic (ROC) curve to illustrate the false alarm rate and missed alarm rate trade-offs in alarm design, but did not mention a quantitative evaluation method from the viewpoints of the diagnostic and timeliness characteristics. In this paper, we propose a method for quantitatively evaluating the diagnostic and timely characteristics of an alarm system that uses a two-layer cause-effect model to measure three rates used as indices: the effective, recall, and timeliness rates. A case study demonstrates the feasibility of the proposed method.
2. Evaluation Method for Plant Alarm System
2.1. Diagnostic Alarm Variables Derived by Two-Layer Cause-Effect Model
Takeda et al. (2010) proposed an alarm variable selection method based on a two-layer cause-effect model. The model represents the cause and effect relationships between the deviations of state variables, such as process variables and manipulated variables, from normal fluctuation ranges. It is represented by a directed graph, where two types of nodes are defined:
i+: upward deviation of state variable i from the normal fluctuation range
i−: downward deviation of state variable i from the normal fluctuation range
In the two-layer cause-effect model shown in Figure 1, a single-direction arrow links the deviation of a state variable and its affected state variable. The letters F and L indicate flow rate sensors and valve positions, respectively.
Fig. 1 Example of two-layer cause-effect model

The alarm variable selection method derives sets of state variables together with the direction of their deviation from the normal fluctuation range. The derived sets are theoretically guaranteed to be able to qualitatively distinguish all assumed malfunctions in a plant when alarm limits are adequately set on those state variables. In this study, the derived sets are referred to as the sets of diagnostic alarm variables.
2.2. New Indices for Evaluating Alarm System
In a previous study, we introduced two indices, the effective and recall rates, used to evaluate the diagnostic characteristic of a plant alarm system (Kimura et al., 2010). Alarms are classified by diagnostic characteristic and generation. As shown in Table 1, w is the number of diagnostic alarms generated, x is the number of diagnostic alarms not generated, and y is the number of non-diagnostic alarms generated. The effective rate, that is, the percentage of diagnostic alarms among all generated alarms, is calculated using Eq. (1). The recall rate, that is, the percentage of generated diagnostic alarms among all diagnostic alarms, is calculated using Eq. (2). High effective and recall rates indicate that the alarm system possesses a strong enough diagnostic characteristic to identify the root causes of assumed malfunctions. In this study, we propose also using a timeliness rate, calculated using Eq. (3), for evaluating the timeliness characteristic of a plant alarm system. In Eq. (3), te is the elapsed time from the beginning of the malfunction to when all diagnostic alarms have been generated, and ta is the longest available time, considering the time it takes for operators to respond and correct the problem generating the alarms after the malfunction occurs, which is determined in accordance with plant dynamics. A low timeliness rate indicates that the plant alarm system generates diagnostic alarms too late for operators to respond and correct the problem in a timely manner.
The alarm system must then be modified, e.g. by adjusting alarm limit settings. The effective, recall, and timeliness rates for each malfunction are calculated from simulation results.

Effective rate [%] = w / (w + y) × 100   (1)

Recall rate [%] = w / (w + x) × 100   (2)
Timeliness rate [%] = 100 if 0 ≤ te ≤ ta; 100 × (1 − (te − ta)/(0.5 ta)) if ta < te ≤ 1.5 ta; 0 if te > 1.5 ta   (3)
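As a concrete reading of Eqs. (1)–(3), the following minimal Python helper (our illustration, not from the paper) computes the three indices from a simulated alarm log; the alarm names and values in the usage line are invented.

```python
# Computes effective, recall and timeliness rates, cf. Eqs. (1)-(3) and Table 1.
def alarm_rates(generated, diagnostic, times, t_a):
    """generated: set of alarms raised in the simulation
    diagnostic: set of diagnostic alarms for the malfunction
    times: dict alarm -> generation time [min] for generated alarms
    t_a: longest available response time [min]"""
    w = len(generated & diagnostic)   # diagnostic alarms generated
    x = len(diagnostic - generated)   # diagnostic alarms not generated
    y = len(generated - diagnostic)   # non-diagnostic alarms generated
    effective = 100.0 * w / (w + y) if w + y else 0.0   # Eq. (1)
    recall = 100.0 * w / (w + x) if w + x else 0.0      # Eq. (2)
    if x > 0:
        timeliness = None   # not all diagnostic alarms generated: malfunction not distinguished
    else:
        t_e = max(times[a] for a in diagnostic)         # time when all diagnostic alarms are in
        if t_e <= t_a:
            timeliness = 100.0
        elif t_e <= 1.5 * t_a:
            timeliness = 100.0 * (1 - (t_e - t_a) / (0.5 * t_a))   # Eq. (3), middle branch
        else:
            timeliness = 0.0
    return effective, recall, timeliness

# invented example: both diagnostic alarms raised, the last one at 40 min with ta = 30 min
print(alarm_rates({"A.PH", "B.PL"}, {"A.PH", "B.PL"}, {"A.PH": 10, "B.PL": 40}, 30))
# -> (100.0, 100.0, 33.3...)
```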
Table 1 Criteria of diagnostic alarm system

                        Generated   Not generated
Diagnostic alarms       w           x
Non-diagnostic alarms   y           −
2.3. Procedure for Conducting Evaluation
First, the sets of diagnostic alarm variables that can be used to identify all assumed malfunctions in a plant are derived using the two-layer cause-effect model. The assumed malfunctions are then simulated using a model of the plant, and all alarms generated after each assumed malfunction occurs are recorded.
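The idea of a set of alarm variables that distinguishes all assumed malfunctions can be illustrated with a small brute-force sketch. This is not the selection algorithm of Takeda et al. (2010), and the qualitative alarm signatures used here are invented; in the paper the candidate patterns come from the two-layer cause-effect model.

```python
# Brute-force search for a minimum set of alarm variables whose generation
# patterns distinguish all assumed malfunctions (toy data).
from itertools import combinations

# alarm pattern each malfunction would produce (1 = alarm raised)
signatures = {
    "Mal-A": {"F1.PH": 1, "L1.PH": 1, "V4.PH": 0},
    "Mal-B": {"F1.PH": 0, "L1.PH": 1, "V4.PH": 0},
    "Mal-C": {"F1.PH": 0, "L1.PH": 1, "V4.PH": 1},
}
alarms = sorted(next(iter(signatures.values())))

def distinguishes(subset):
    patterns = [tuple(sig[a] for a in subset) for sig in signatures.values()]
    return len(set(patterns)) == len(patterns)  # every malfunction maps to a distinct pattern

for k in range(1, len(alarms) + 1):
    hits = [s for s in combinations(alarms, k) if distinguishes(s)]
    if hits:
        print("minimum diagnostic alarm sets of size", k, ":", hits)
        break
# -> minimum diagnostic alarm sets of size 2 : [('F1.PH', 'V4.PH')]
```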
3. Case Study
3.1. Example Plant and Plant Alarm System
The proposed indices are demonstrated through a case study that uses the two-tank system in Fig. 2 as an example plant. Product is fed to Tank 1 and transferred to Tank 2. A certain amount of the product is recycled to Tank 1 from Tank 2. The letters P, F, L, and V in Fig. 2 indicate pressure, flow rate, and liquid level sensors, and valve positions, respectively. In this example plant, five types of malfunctions are assumed, which should be distinguishable by the plant alarm system:
Mal-1: High feed pressure (ta = 120 min)
Mal-2: Low feed pressure (ta = 120 min)
Mal-3: Blockage in recycle pipe (ta = 30 min)
Mal-4: Wrong valve operation, V4 open (ta = 80 min)
Mal-5: Wrong valve operation, V4 close (ta = 80 min)
Figure 3 shows the two-layer cause-effect model of the example plant. To distinguish the above 5 malfunctions, 2 types of alarm limits, a high limit (PH) and a low limit (PL), were set for 12 measured process variables, as shown in Table 2. If the value of a state variable exceeds the corresponding alarm limit, the corresponding alarm is generated. The alarm settings in Table 2 were determined by taking account of plant dynamics.
Fig. 2 Example plant of two-tank system
Fig. 3 Two-layer cause-effect model
Table 2 Alarm system and their PH and PL limits

Type            Alarm variable  Normal value  PH/PL settings  Units
Flow rate       F1              5603          5883/5323       kg/hr
                F2              16806         17647/15966     kg/hr
                F3              22409         22656/22083     kg/hr
                F4              5603          7128/4505       kg/hr
                F5              22409         1328/1183       kg/hr
Liquid level    L1              2.20          2.31/2.09       m
                L2              50.0          52.5/47.5       %
Valve position  V1              0.714         0.750/0.678     -
                V2              0.876         0.919/0.832     -
                V3              0.815         0.856/0.774     -
                V4              0.777         0.816/0.738     -
3.2. Results of Diagnostic Alarm Selection
All the sets of diagnostic alarms for the example plant, which can theoretically be used to distinguish all assumed malfunctions, were derived from the two-layer cause-effect model by using our previously reported diagnostic alarm selection method (Takeda et al., 2010). The minimum number of diagnostic alarms was three. Table 3 shows an example of the sets of the minimum number of diagnostic alarms and the alarm generation patterns used to distinguish each assumed malfunction.

Table 3 Example of sets of diagnostic alarms and alarm generation patterns (○ = alarm generated)

Alarm variables   F1.PH  F1.PL  L1.PH  L1.PL  V4.PH  V4.PL
Mal-1             ○             ○
Mal-2                    ○             ○
Mal-3                                  ○
Mal-4                                  ○      ○
Mal-5                           ○                    ○

3.3. Evaluation Results for Each Assumed Malfunction
Table 4 shows the generated alarms and their generation times after each assumed malfunction, which were obtained using a dynamic simulator (Visual Modeler, Omega Simulation Co., Ltd.).
Table 4 Simulation results for each assumed malfunction

Malfunction   Generated alarms   Alarm generation times [min]
Mal-1         L1.PH              32
Mal-2         F1.PL              10
              L1.PL              27
              L2.PL              118
Mal-3         L2.PH              19
              L1.PL              36
Mal-4         V4.PH              10
              F4.PH              10
              L2.PL              16
              L1.PL              120
Mal-5         F4.PH              10
              L2.PH              23
              L1.PH              175
Table 5 summarizes the evaluation results for each assumed malfunction. The effective rates were 100% for all assumed malfunctions, meaning that all generated alarms were diagnostic alarms. The recall rates for Mal-1 and Mal-5 were less than 100%, meaning that some diagnostic alarms were not generated and that distinction of these two malfunctions failed. The operators could not distinguish Mal-3 and Mal-4 within the available time using the alarm system, because their timeliness rates were less than 100%.

Table 5 Evaluation results for each malfunction

Index             Mal-1               Mal-2   Mal-3   Mal-4   Mal-5
Effective rate    100%                100%    100%    100%    100%
Recall rate       50%                 100%    100%    100%    75%
Timeliness rate   Not distinguished   100%    60%     0%      Not distinguished
4. Conclusion
We proposed a method based on a two-layer cause-effect model for evaluating the characteristics of an alarm system. Simulation results using an example plant demonstrated its feasibility. We plan to develop a method for rationalizing plant alarm systems in accordance with the proposed indices, namely the effective, recall, and timeliness rates.
References EEMUA, 2007, Alarm Systems - A Guide to Design, Management and Procurement, London ISA, 2009, Management of Alarm Systems for the Process Industries, North Carolina Izadi et al., 2009, Proc. SAFEPROCESS 2009, 651–656, Barcelona Kimura et al., 2010, Human Factors (in Japanese), 15(1), 28–35 Takeda et al., 2010, Use of Two-Layer Cause-Effect Model to Select Source of Signal in Plant Alarm System, KES 2010, Part II, LNAI 6277, 381–388
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Generating cause-implication graphs for process systems via blended hazard identification methods
Erzsébet Németh,a Benjamin J. Seligmann,a Kim Hockings,b Jim Oakley,b Con O'Brien,c Katalin M. Hangos,d Ian T. Camerona
a School of Chemical Engineering, The University of Queensland, Brisbane, Australia 4072
b BlueScope Steel Ltd, Port Kembla, Australia 2500
c BP Refinery (Bulwer Island), Brisbane, Australia 4008
d Process Control Research Group, HAS Computer and Automation Research Institute, Budapest 1111, Hungary
Abstract
Causal knowledge in complex process systems is a powerful representational model that permits a range of important applications related to process risk management. These include the development of operator training systems, diagnosis tools, and emergency response planning, as well as implications for process and control system retrofit and design. Using a blended hazard identification approach, we show how causal knowledge can be generated from design documentation and represented in a structured language, which then makes it possible to display cause-implication graphs that explicitly show the links between failures, causes, and implications. A case study illustrates the application of the methodology to a safety system in an industrial coke making plant.
Keywords: hazard identification, diagnosis, causal analysis, risk management, operator training
1. Introduction
Complex industrial failures continue to occur with regular frequency. As part of the design and operational strategies towards resilient systems (Hollnagel et al. 2006), the particular concepts of 'monitoring', 'anticipating', 'learning' and 'responding' play vital roles in improving risk management. This work addresses aspects of 'monitoring' and 'anticipating' through improved representation and understanding of failure and causality in process systems. The effective and systematic generation of causal knowledge to aid diagnosis, operator training and process design is a vital aspect of overall process risk management. Despite much software development around classical hazard and operability studies, much remains to be done to generate and capture knowledge for intelligent use across the process life cycle (Németh et al. 2009a). This work addresses these issues through the development and application of a blended hazard identification (BLHAZID) approach, based on a functional systems framework (Seligmann et al. 2010). This structured approach generates knowledge that is easily accessible to inference engines that can elucidate root causes and implications related to failures. The paper briefly describes the BLHAZID methodology, its functional systems foundation, the development of causal graphs, and then a case study to show the application of the methodology to a current industrial process safety system.
2. Blended hazard identification as a knowledge generation mechanism for advanced process diagnosis
2.1. Functional systems framework that supports HAZID and advanced diagnosis
The functional systems framework (FSF) (Cameron et al., 2008) is a formalism that represents the way function is generated in complex processes through the interaction of plant, people and procedural components. This formal framework provides the basis for considering hazard identification methods that utilize both a component-driven analysis and a function-driven analysis. The FSF is seen in Figure 1.
Figure 1: Functional Systems Framework (FSF)
The systems view shows important internal features used by the current developments. In particular, the "intended function" is seen as the result of the interaction of capabilities provided by the major system components of people, plant and procedures. The function-driven analysis considers the intended function and then looks at functional failures, their causes and implications. These techniques drive from right to left in the FSF diagram, since the functional failures are often traced to component failures. The component-driven analysis identifies failures in the components of the system, which affect the delivered functions. It drives from left to right in terms of the FSF.
2.2. Blended hazard identification method
In the blended hazard identification (BLHAZID) approach (Seligmann et al., 2010), the workflow of the method considers both functional failures and component failures. BLHAZID can be advantageous since the strengths of each method can be combined and the effect of the weaknesses diminished. This blended approach provides a complementary focus in generating knowledge for use in a range of applications. BLHAZID analysis of process systems is performed within subsystems. The decomposition of a system into subsystems is done using a decomposition strategy described in Németh et al. (2009a). The two main sections of the BLHAZID analysis are shown in Figure 2, where the streams and components of each subsystem are analyzed to identify hazards.
2.3. Formal language for representing BLHAZID knowledge and outcomes
Formally represented knowledge is based on a conceptualization that involves the entities and the relationships that exist amongst the entities defined by the FSF. An ontology – an explicit formal specification of terms in the domain and the relationships among them – defines a common vocabulary, shares a common understanding of
Figure 2: Blended hazard identification technique steps. Execution modes: (M) – manual; (SA) – semi-automatic, needs engineer input and review; (A) – automatic.

System selection and decomposition
1. Select the system to be analysed (M)
2. Decompose the system into subsystems (SA)

Hazard identification
3. For each subsystem:
i. Identify initial set of characterising variables (A)
ii. For each characterising variable: generate deviations (A)
iii. For each deviation: elicit possible causes (SA or M); for each possible cause: elicit its implications (SA or M)
iv. Component failure analysis – for each component: elicit failure modes (A); for each failure mode: elicit implications (SA)
v. Collate consequence list (A)
vi. Collect new characterising variables from possible causes and implications and add them to the initial c-var set; go back to substep ii (A)
structured information among people or software systems, and enables reuse of domain knowledge (Gruber, 1993). For BLHAZID analysis, three different knowledge types have been distinguished:
1. General process system (a priori) knowledge: contains the generic knowledge about different component types and variable types with their relevant guide words;
2. Process-specific knowledge: describes the components, their connections, and the subsystem decomposition related to the process system being analyzed;
3. BLHAZID-generated knowledge: all types of failures (functional, component, environment, part), and the causal relationships between them.
2.4. Cause-implication graphs
The adopted structured language, including the causal relationships captured as semantic triplets during the BLHAZID workflow, facilitates the determination of failure propagation through the system, and the determination of potential root causes and possible consequences of a deviation using backward and forward reasoning. As a graphical representation of the causality information, we introduce the cause-implication graph to visualize it. The nodes of the graph are the failures, and each edge represents a causal relationship between nodes. Beyond the graphical illustration of the causality information, applicable graph results can be used to trace consistency-checking issues such as missing causality relations, orphaned failures or sub-graphs within the blended method, and provide formal means of auditing hazard identification across the life cycle. Cause-implication graphs have great utility in operator training, on-line diagnosis and application to design decisions.
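A minimal sketch of the backward and forward reasoning described above: a cause-implication graph stored as an adjacency map and traversed breadth-first in both directions. The failure labels are invented, and the sketch deliberately ignores the ontology layer and failure typing.

```python
# Cause-implication graph as adjacency map; edge u -> v means "failure u causes v".
from collections import deque

edges = {
    "power loss": ["pump P1 stops"],
    "pump P1 stops": ["no seal-liquid flow"],
    "valve BV1 stuck closed": ["no seal-liquid flow"],
    "no seal-liquid flow": ["valve seal lost"],
}
reverse = {}
for u, vs in edges.items():
    for v in vs:
        reverse.setdefault(v, []).append(u)

def reachable(graph, start):
    """All failures reachable from `start` via the given adjacency map (BFS)."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(reachable(reverse, "no seal-liquid flow"))   # backward reasoning: possible root causes
print(reachable(edges, "valve BV1 stuck closed"))  # forward reasoning: possible implications
```

Source nodes of the backward traversal correspond to root causes, and sink nodes of the forward traversal to ultimate consequences, exactly as in Figures 4 and 5 of the case study.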
3. Case study
A case study of an industrial coke ovens gas bleeder system is used to highlight some advantages of the representation form of the BLHAZID results and the generated cause-implication graphs. The bleeder (or flare) releases coke ovens gas (COG) when the pressure in the gas collector main rises above a certain value. The released gas is ignited by the pilot flames at the flare tip. There are two operational modes:
• No flaring: under normal operating pressure conditions the Pullman valve at the base of the flare is shut, with flushing liquor (FLIQ) providing a continuous valve seal as well as reducing the temperature of the COG in the flare feed main through sprays.
• Flaring: when the pressure in the collector main rises to a critical value, the Pullman valve opens allowing COG to enter the flare stack, while the FLIQ flow is stopped, and steam is injected into the flare.
The simplified P&ID of the bleeder system is shown in Figure 3, with the subsystem decomposition highlighted in 3 different colours.
Figure 3: Simplified P&ID of the bleeder system with decomposition
Under normal operating pressure, FLIQ is supplied as a seal liquid for the Pullman valve. If, in this operational mode, the control valve FLIQ-BV1 is in a 'closed' state when it should be 'open', there is no FLIQ supply to the Pullman valve. Reasoning over the BLHAZID result of the flare system under the normal pressure situation, the possible causes of the FLIQ-BV1 'closed' state are mined out by building the causality graph. The resultant cause-implication graph, seen in Figure 4, illustrates how each possible cause, shown as a source node of the graph, leads to the 'closed FLIQ-BV1 position'. The graph shows how the failures travel through the subsystems, where ovals represent functional failures and shaded boxes are component failures.
Figure 4: Cause-implication graph for representing the possible causes of the failure ‘closed FLIQ-BV1 valve position’.
If we look for the implications/consequences of the 'closed' FLIQ-BV1 state in the BLHAZID result, the resultant cause-implication graph is shown in Figure 5. The sink nodes of the graph show the effects on the system output, or on the environment.
Figure 5: Cause-implication graph for representing the possible implications of the failure ‘closed FLIQ-BV1 valve position’
4. Conclusions
This work has been prompted by industry challenges in capturing process knowledge from design reviews and operational experience in a structured and re-usable form that is far more powerful than current practices. It provides the basis for a wide range of applications, as well as aiding in auditing requirements under major hazard facility regulations. The immediate utility of the BLHAZID approach and the resultant cause-implication graphs is in the building of operator guidance systems and enhancing operator training regimes. The industrial example, although relatively simple, illustrates well the outcomes of these new, structured hazard identification methods and the power of visualizing the causality structures.
Acknowledgement The authors acknowledge support from the Australian Research Council under Linkage Grant LP0776636. We acknowledge the financial and technical support of BlueScope Steel (BSL), Australia and BP Refinery (Bulwer Island), Australia.
References
I.T. Cameron, B. Seligmann, K.M. Hangos, E. Németh, R. Lakner, 2008, A functional systems approach to the development of improved hazard identification for advanced diagnostic systems, In 18th European Symposium on Computer Aided Process Engineering – ESCAPE18, on CD, paper ID FP-00463
T.R. Gruber, 1993, A Translation Approach to Portable Ontology Specification, Knowledge Acquisition, vol. 5, pp. 199-220
E. Hollnagel, D. Woods, N. Leveson, 2006, Resilience Engineering: Concepts and Precepts, Ashgate Publishing, UK.
E. Németh, R. Lakner, I.T. Cameron, K.M. Hangos, 2009a, Fault diagnosis based on hazard identification results, In Proceedings of the 7th IFAC Symposium on Fault Detection, Supervision and Safety of Technical Processes (SAFEPROCESS 2009), pp. 1515-1520, on CD
E. Németh, K. Hockings, C. O'Brien, I.T. Cameron, 2009b, Knowledge representation, extraction and generation for supporting a semi-automatic blended hazard identification method, CHEMECA 2009, on CD, paper ID 227
B. Seligmann, E. Németh, K. Hockings, I. McDonald, J. Lee, K.M. Hangos, I.T. Cameron, 2010, A structured, blended hazard identification framework for advanced process diagnosis, In 13th International Symposium on Loss Prevention and Safety Promotion in the Process Industry (Loss Prevention 2010), pp. 193–200
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Integrated Supply Chain Planning for Multinational Pharmaceutical Enterprises
Naresh Susarla, I. A. Karimi
Department of Chemical & Biomolecular Engineering, National University of Singapore, Singapore 117576
Abstract
The management of global supply chains is nowadays one of the most vital research topics for multinational enterprises. In order to deal with fierce market competition, industry professionals are in urgent need of fast and intelligent models that support long-term and sustainable decision making. With a special focus on multinational pharmaceutical enterprises, we develop a simple and powerful LP model to assist in long-term production planning by maximizing the net profit after taxes of the whole enterprise. We structure our model to consider many real-life scenarios such as variable transfer prices, import duties, taxes, and different stock keeping policies. To demonstrate the performance of our model, we study the effect of different operational policies on the bottom line using a small example. We then solve an industrial-scale planning problem for a supply chain network consisting of 49 different entities, distributed across 14 different locations, and producing 23 different products, for a period of 5 years.
Keywords: Integrated planning, pharmaceutical supply chains, transfer prices, regulatory matters.
1. Introduction
Supply chain functions of large pharmaceutical groups are commonly performed across a number of geographical regions by a number of entities in the group. These functions include raw material sourcing, API manufacturing, secondary manufacturing, warehousing, and distribution. Operations in such supply chains typically involve huge inventories for high customer satisfaction and several cross-border transactions. High inventories result in high working capital for the company, i.e. a huge sum of cash is locked up in material inventory. Also, cross-border transactions involve a plethora of regulatory matters, which have a direct impact on the bottom line of the company. Our interaction with such a company revealed that the decision makers are under immense pressure to reduce the working capital by efficient manufacturing, thereby decreasing the inventory levels, and to improve the overall performance of the company by intelligently handling the cross-border regulatory issues. While company officials are still extensively dependent on ad-hoc decision making, there is increasing interest among researchers in developing efficient models to resolve these issues. McDonald and Reklaitis (2004) presented a comprehensive review of the existing works and highlighted the need for further research. Papageorgiou et al. (2001) developed an MILP formulation for the capacity planning problem considering tax differentials and transfer prices using a scenario-based approach. Levis and Papageorgiou (2004) extended the previous model by considering uncertainties in product development. Oh and Karimi (2004, 2006) studied in detail the implications of regulatory affairs and duty drawbacks in a capacity planning problem. The results
highlighted the effects of considering regulatory matters on new facility selection, investment profiles, sourcing decisions, and the overall NPV of the company. Sundaramoorthy et al. (2006) presented an LP model for a global chemical company considering production details and constant transfer prices. It is evident that an operational planning model that considers operational details along with strategic decisions of inventory management and the financial impacts of cross-border transactions does not exist in the literature. In this paper, we modify and enhance the model of Sundaramoorthy and Karimi (2006) to model disparate entities and their activities in a global supply chain in a seamless fashion, with a granularity at the level of production lines. We develop a multi-period Linear Programming (LP) model with a special focus on the pharmaceutical industry to address the integrated problem of production planning, sourcing, distribution, inventory management, and the effects of regulatory affairs (including transfer prices and taxes). We extend our model to consider variable transfer prices and different perspectives of an enterprise on inventory keeping policy. We consider two case studies of global supply chain networks of multinational companies with multiple API manufacturing facilities, several secondary manufacturing facilities, and distribution facilities located around the world. We further evaluate our model by studying different scenarios for the effect of a change in company policy on the overall profit of the enterprise.
2. Problem Statement
The global supply chain of a multinational pharmaceutical company consists of several individual but geographically distributed entities. Each of them operates at one or more different sites s (s = 1, 2, …, S). These are sub-grouped into suppliers (sups), API manufacturing plants (APIs), secondary manufacturing plants (SMs), distributors (dists), and customers (cuss). The supply chain operation of the company involves the production and distribution of one or more products. This requires the purchase and sale of several raw or intermediate materials, often among subsidiaries of the same company situated in different countries. Let the supply chain operations involve M materials m (m = 1, 2, …, M). The same material can be a product at one site but a raw material at another site. For this, we define PMs = {m | m is a product at site s} and RMs = {m | m is a raw material at site s}. We consider the transfer price for the selling/buying of intermediate or final products among different entities of the same company as a variable. However, the choice of such a price is highly governed by several legal authorities and government policies, which provide a specific range for the selection of a suitable transfer price. Also, the various entities in the global supply chain are often located in many different countries with varying taxes ($tax_s$) and import duties ($cd_{mss'}$). This offers several opportunities for the company to maximize its after-tax profits by carefully using its resources under different tax jurisdictions. Of all the individual sites of a supply chain operation, value addition is mostly done at the API and secondary manufacturing plants. The production process is expressed in detail with a recipe diagram for each such plant. Each of these sites comprises several production lines l (l = 1, 2, …, L) that perform campaigns of tasks i (i = 1, 2, …, I) according to the recipe diagram of each product and convert the raw materials into intermediate or final products. To denote the suitability of various tasks on production lines and the sites that house these production lines, we define Il = {i | i is processed on production line l} and Ls = {l | l is housed at site s}. With this, the planning problem can be stated as follows. Given the (1) supply chain configuration and number of entities involved, (2) products and production recipes, (3) demands, due dates, and costs of products and raw materials, (4) initial, safety, and
maximum inventory limits, (5) applicable transfer price ranges for the inter-site transfers, and (6) applicable import duties and taxes for each state/country, we determine the (1) campaign lengths and schedules on each line, (2) inventories at each site, (3) amounts of inter-site material transfer, (4) transfer prices, and (5) cost of production, assuming a deterministic scenario and aiming to maximize the net profit after taxes of the entire supply chain.
3. Mathematical Formulation
Unless otherwise indicated, an index takes all its legitimate values in all the expressions or constraints in our formulation. We use (Karimi & McDonald, 1997; Sundaramoorthy & Karimi, 2004) the known order delivery dates ($DD_0 (= 0) < DD_1 < DD_2 < DD_3 < \dots$) to segment the planning horizon [0, H] into T non-uniform intervals (t = 1, 2, …, T) of length $h_t = DD_t - DD_{t-1}$. Let $CL_{ilst}$ denote the campaign length of task i in line l at site s during interval t. We demand that the total length of all suitable campaigns in line l at site s must not exceed the available time during an interval. So, we write

$\sum_{i \in I_l} CL_{ilst} \le h_t$  $(l \in L_s,\ s \in \mathrm{APIs} \cup \mathrm{SMs})$  (1)

The parameters $R_{ils}^U$ and $R_{ils}^L$ give the maximum and minimum rates of production during a campaign of task i in line l at site s. Let $dq_{ilst}$ denote the differential quantity of material produced from $CL_{ilst}$, over and above the minimum possible quantity based on the minimum rate of production. The total amount produced from $CL_{ilst}$ must be less than or equal to the quantity based on the maximum rate of production. So, we have

$R_{ils}^L CL_{ilst} + dq_{ilst} \le R_{ils}^U CL_{ilst}$  $(i \in I_l,\ l \in L_s,\ s \in \mathrm{APIs} \cup \mathrm{SMs})$  (2)

We assume that every material m has a dedicated but limited storage at each site s where m is either produced or consumed. Let $Inv_{mst}$ denote the available inventory of material m, after adjustment for the amounts of material transferred into and out of site s, at the end of interval t. We define $mt_{mss't}$ as the amount of material m transferred from site s to s' during interval t. We write the following balance on the inventory:

$Inv_{mst} = In0_{ms} + \sum_{l \in L_s} \sum_{i \in I_l} \rho_{mi} (R_{ils}^L CL_{ilst} + dq_{ilst}) + \sum_{s': m \in PM_{s'}} mt_{ms'st} - \sum_{s': m \in RM_{s'}} mt_{mss't}$  $(m \in M_s,\ s \notin \mathrm{sups} \cup \mathrm{cuss},\ t = 1)$  (3a)

$Inv_{mst} = Inv_{ms(t-1)} + \sum_{l \in L_s} \sum_{i \in I_l} \rho_{mi} (R_{ils}^L CL_{ilst} + dq_{ilst}) + \sum_{s': m \in PM_{s'}} mt_{ms'st} - \sum_{s': m \in RM_{s'}} mt_{mss't}$  $(m \in M_s,\ s \notin \mathrm{sups} \cup \mathrm{cuss},\ t > 1)$  (3b)

where $In0_{ms}$ is the initial inventory of material m before the beginning of the horizon (i.e. before t = 0), and $\rho_{mi}$ is the mass ratio of material m to the total quantity of all materials produced by task i. We consider that the demands for all products are always satisfied. Thus, the total amount of products delivered is at least the given demand:

$\sum_{s \in \mathrm{dists}} mt_{mss't} \ge D_{ms't}$  $(s' \in \mathrm{cuss},\ m \in PM_s)$  (4)
s 'RM s
(3b) m Ms, s sups, s cuss, t > 1 where, In0ms is the initial inventory of material m before the beginning of the horizon (i.e. before t = 0), and ȡmi is the mass ratio of the material m to the total quantity of all materials produced by task i. We consider that demands for all products are always satisfied. Thus, the total amount of products delivered is more than the given demand. So, s' cuss, m PMs (4) ¦ mtmss 't t Dms 't sdists
where, Dms't is the demand of material m at customer site s' during interval t. To satisfy the safety stock keeping policy of a company, we penalize any violation of such inventory levels. Now, the safety stock policy is usually monitored at two different perspectives, overall safety stock and site specific safety stock. The overall safety stock
1078
N. Susarla et al.
(OSSmt) of a product is for the total inventory (raw material, intermediates, and finished products) at different stages of a product throughout the supply chain. The site specific safety stock (SSmst) is for each finished product of the site. Now, we compute these YLRODWLRQVǻOSSmt IRUYLRODWLRQRIRYHUDOOVDIHW\VWRFNDQGǻSSmst, for the violation of safety stock at site s) by the following. 'OSS mt t OSS mt ( ¦ Invmst ¦ (5) ¦ Invm ' s ' t ) sSM s dists
s ' API s SM s m ' AIRm
m Ms (6) 'SS mst t SS mst Invmst where, AIRm is a set of all intermediate products or APIs at different sites that are used for producing the final product m'. The overall profit of the enterprise and each entity thus, involved is highly sensitive to the choice of an appropriate transfer price (TP). Usually, the governing authorities provide a range of transfer price following strict guidelines for each material. We U L consider this range for TP ( TPmss ' t and TPmss ' t ) and allow our model to choose the best suitable price for each material sold by site s to s' during interval tZHGHILQHǻTPmss't as the differential selling price for a given amount of material that is above the minimum possible price based on the least possible TP. Also, such selling price for the amount of material sold must be below the price based on the highest possible TP. Thus, we write, L U m Ms, PMs (7) TPmss ' t mt mss ' t 'TPmss ' t d TPmss ' t mt mss ' t We compute the overall operating cost for each site as the sum of costs for processing, material procurement, inventory holding, safety stock violation, and duties for all material imports. L CLilst dqilst )) ¦ ¦ hcs Invmst ¦ ¦ ¦ cost s ¦ ¦ E sm ( ¦ ¦ U mi ( Rilst t
iI l lLs
mSm
t
Umi ! 0
(TPmsL ' st mtms ' st 'TPms ' st ) ¦ t
¦ ¦
mRM sm s 'SM s
mM s
t
mRM sm s 'SM s
dutyms ' st (TPmsL ' st mtms ' st 'TPms ' st ) ¦¦ (v1m m
t
'OSSmt ) ¦¦¦ v 2ms 'SS mst
(8)
The net profit before taxes paid for each site is then given by the following. PBTs ¦ ¦ ¦ (TPmsL ' st mtms ' st 'TPms ' st ) cost s
(9)
m
t
s
t
mPM sm s 'SM s
Finally, the planning objective is to maximize the total net profit after deducting applicable taxes (taxs) of all the sites. NetP ¦ PBTs (1 taxs ) (10) s
Eqs. 1 – 10 complete our model for the integrated supply chain planning for multinational enterprises.
4. Results We present two case studies of multinational pharmaceutical companies, to demonstrate the performance of our model. We consider a planning horizon of up to 5 years. For our evaluation, we used CPLEX 12/GAMS 23.2 on a DELL Precision T5500 workstation with 2 Intel Xeon Processors of 2 GHz each, 4 GB RAM, running Windows 7 Professional.
4.1. Example 1
This is a small example with 1 API plant and 2 secondary manufacturing plants housing 8 production lines, 3 distributors, and 10 materials. We solve this example for a planning horizon of 4 months. We solve 3 different scenarios for this example to evaluate the effect of changing the overall and local storage policies of the company on the overall profit. We observe that the inventories at each site change even due to a change in the overall storage policy of the company, in order to maximize the overall profit by utilizing the tax differentials and different transfer prices. Table 1 consolidates the model and solution statistics for all the scenarios.
4.2. Example 2
This is an industrial-scale example consisting of 4 API plants and 7 secondary manufacturing plants housing 42 production lines, 12 distributors, and 63 materials. We solve this example for a planning horizon of 5 years (i.e. 60 months). A few of the major model outputs include the optimal transfer prices for various materials, targets for each production line, and optimal inventory profiles at all locations.

Table 1 Model and solution statistics

Example  Scenario  H (months)  Variables  Equations  Objective ($1,000)  CPU (s)  Notes
1        a         4           711        732        70632               0.017    Initial case
1        b         4           711        732        67144               0.017    Changed overall storage policies
1        c         4           711        732        63595               0.017    Changed overall and local storage policies
2        –         60          29534      21752      6847380             0.325    –
5. Conclusion
We successfully modify the model of Sundaramoorthy et al. (2006) and extend it to develop a simpler LP model for integrated production planning in multinational pharmaceutical companies. We also demonstrate the usefulness of our model by evaluating 3 scenarios for an illustrative example and an industrial case study for a planning horizon of 5 years. Our model successfully captures the effects of tax differentials across various geographical locations, optimal transfer pricing, inventory holding, and other real-life factors on the overall profit of a company. We are currently working closely with a local company to include other features, such as transportation selection and contract selection, to improve the utility and acceptability of our model in industry. Also, we are developing a decision-support tool that can readily generate several scenarios and adapt to dynamic business and market requirements.
References
McDonald, C. M., Reklaitis, G. V. Design and Operation of Supply Chains in Support of Business and Financial Strategies. Presented at the Foundations of Computer-Aided Process Design, FOCAPD, Princeton, NJ, July 11-16, 2004.
Papageorgiou, L. G., Rotstein, G. E., Shah, N., 2001. Strategic supply chain optimization for the pharmaceutical industries. Industrial and Engineering Chemistry Research, 40, 275-286.
Levis, A. A., Papageorgiou, L. G., 2004. A hierarchical approach for multi-site capacity planning under uncertainty in the pharmaceutical industry. Computers and Chemical Engineering, 28, 707-725.
Oh, H. C., Karimi, I. A., 2004. Regulatory Factors and Capacity-expansion Planning in Global Chemical Supply Chains. Industrial and Engineering Chemistry Research, 43, 3364-3380.
Oh, H. C., Karimi, I. A., 2006. Global multiproduct production-distribution planning with duty drawbacks. AIChE Journal, 52 (2), 595-610.
Sundaramoorthy, A., Xianming, S., Karimi, I. A., Srinivasan, R. An integrated model for planning in global chemical supply chains. Presented at PSE-2006, July 09-13.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Data Mining and Decision Making Tool Development for an Industrial Dual Sequential Batch Reactor
Soledad Gutiérrez, Adrián Ferrari, Alejandra Benítez
IIQ - Department of Reactors Engineering, Engineering School - University of the Oriental Republic of Uruguay, Herrera y Reissig 565, CP 11300, Montevideo, Uruguay.
Abstract
A full-scale dairy wastewater treatment plant for carbon and nitrogen removal, fed with a seasonally variable flow rate and composition, is considered in this paper. It consists of two anaerobic ponds followed by a 4600 m3 dual sequential batch reactor (DSBR). The main purpose of this contribution is to develop a friendly off-line model-based decision making tool (DMT) for the DSBR in order to aid its aeration management. To develop the DMT, a kinetic and hydraulic DSBR model was proposed, and in-situ parameter estimation (data mining) was previously conducted with reasonable accuracy. The DMT achieved acceptable numerical precision and computational burden (1 min/run). Despite the current off-line operating mode of this tool, real-time optimization and model predictive control are the challenges to be addressed in future works.
Keywords: SBR, dynamic optimization, particle swarm optimization, data mining, decision making tool.
1. Introduction
For many years, sequential batch reactor (SBR) technology has been an attractive biological solution for carbon, nitrogen and phosphorous removal from industrial wastewaters (WW) (Artan and Orhon, 2005). The SBR is an activated sludge process designed to operate under non-steady-state conditions, allowing reaction and sludge settlement to occur in the same tank. The SBR is a useful conception due to its versatility for carbon and nitrogen removal owing to alternating anoxic and oxic stages. It presents an important degree of flexibility associated with working on a time rather than a space basis (Chambers, 1993). The high decision-support level presented by this equipment encourages the use of decision making tools (DMT) in order to achieve optimal performance. Although real-time optimization (RTO/DRTO) is not frequent in these systems (Cristea et al., 2010), off-line nonlinear model-based decision support systems nowadays play an important role. Previous work aiming to optimize the aeration management in SBRs has been carried out in which at least the initial values
of the state variables (oxygen, substrate, biomass, nitrogen) should be known (Souza et al., 2008). The development of an off-line tool which does not require all initial values is one of the main challenges posed in this paper. A full-scale dairy wastewater treatment plant (WWTP) for carbon and nitrogen removal is considered in this paper, with a variable flow rate (400–1700 m3·d−1) and composition (0.38–2.1 kgCOD·m−3). It consists of two anaerobic ponds followed by a 4600 m3 dual sequential batch reactor (DSBR) which does not have any sensors (DO, redox potential). The main purpose of this contribution is to develop an off-line model-based DMT for the DSBR in order to aid its aeration management. A kinetic and hydraulic DSBR model was proposed and previously calibrated through an in-situ parameter estimation procedure, by means of sensors placed occasionally and sampling for chemical analysis during calibration.
2. Mathematical Modeling and Applied Methods 2.1. DSBR Model and Kinetic Description A kinetic-hydraulic model was developed taking into consideration aerobic carbon removal, nitrification (ammonia removal), denitrification (nitrate removal), oxygen transfer, convective mass exchange between reactors, and a dynamic hydraulic behavior forced by the discontinuity in the effluent discharge. Figure 1 shows the proposed flow model for the DSBR system. Figure 1: DSBR Flow Diagram
The dimensionless transient mass balances applied to this system are presented below:

CSTR 1:
$$\frac{dc_{i,1}(t)}{dt} = Fe\,\frac{q(t)}{v_1(t)}\left[c_{f,i}(t) - c_{i,1}(t)\right] - Sh\left[c_{i,1}(t) - c_{i,2}(t)\right] - Da_i\, r_i\!\left[\mathbf{c}_1(t)\right]$$

CSTR 2:
$$\frac{dc_{i,2}(t)}{dt} = Fe\,\frac{q'(t)}{v_2(t)}\left[c_{i,1}(t) - c_{i,2}(t)\right] + Sh\left[c_{i,1}(t) - c_{i,2}(t)\right] - Da_i\, r_i\!\left[\mathbf{c}_2(t)\right]$$

Hydraulic model:
$$\frac{dv_1(t)}{dt} = q(t) - q'(t), \qquad \frac{dv_2(t)}{dt} = q'(t) - q_d(t)$$

where the flow rates $q'(t)$ and $q_d(t)$ are related algebraically to $q(t)$ and to the liquor volumes $v_1(t)$ and $v_2(t)$.
Where ci,1 and ci,2 are the concentrations of component i in CSTR 1 and CSTR 2; cf,i, the feed concentration of component i; t, the time variable; q, q' and qd, the influent, intermediate and effluent flow rates; v1 and v2, the volumes of CSTR 1 and CSTR 2; ri, the consumption rate of component i; c1 and c2, the column vectors of the concentrations of all components in CSTR 1 and CSTR 2; Dai, the Damköhler number for component i; Fe, the Feeding number; and Sh, the Sherwood number. More details about these dimensionless numbers are presented in Ferrari et al. (2010). The components considered are heterotrophic and autotrophic biomass (XH, XA), chemical oxygen demand (COD), ammonia and nitrate concentrations (NH4+, NO3-), and dissolved oxygen (DO). The
biomass growth kinetics adopted in this work are based on the ASM1 model (Artan and Orhon, 2005). The complete kinetic and oxygen transfer description is presented in Ferrari et al. (2011).
2.2. Parameter Estimation (PE) Problem
In order to calibrate the entire model using plant information [detailed in Ferrari et al. (2011)], the following constrained dynamic NLP data mining problem is solved:

$$\min_{x}\; J(x) = \sum_{k=1}^{2}\sum_{i=1}^{i_{max}}\sum_{j=1}^{m}\left[c_{j,i,k}(x) - c^{exp}_{j,i,k}\right]^2$$
$$\text{s.t.}\quad \dot{c} = f(c,x),\qquad c(0) = c_0,\qquad x > 0,\qquad c_{j,i,k}(x) = c_{i,k}(x,t_j)$$

Where J is the objective function; c, a 15-dimensional vector of states (components in both reactors, v1, v2 and q') with known initial conditions c0; j, i and k, indexes that identify the experimental point for each state [m = max(j)], the state (only COD, NH4+, NO3- and DO are involved in the objective function), and the reactor, respectively; x, a 10-dimensional vector of decision variables (autotrophic/heterotrophic biomass distribution, maximum specific growth rates, saturation and inhibition constants, and mass transfer coefficients); cexpj,i,k, the experimental data; and f, the vector function that defines the ODE/DAE system. This problem was solved through a hybrid stochastic [particle swarm optimization (PSO) (Kennedy and Eberhart, 1995)] to deterministic [sequential shooting/conjugate gradient (Srinivasan et al., 2003)] algorithm in order to overcome non-convexity issues.
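To make the hybrid global/local strategy concrete, the following is a minimal sketch of a PSO search refined by a gradient method on a least-squares ODE-fitting objective. The two-state reactor model, the dimensionless parameter trio, the bounds and the synthetic data are illustrative placeholders, not the authors' 15-state DSBR model or calibrated values.

```python
# Minimal sketch of the hybrid PSO -> gradient-based parameter estimation.
# The two-CSTR model below is a simplified stand-in for the full DSBR model;
# data, bounds and dimensionless groups are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def dsbr_rhs(t, c, x):
    """Dimensionless balances for one component in CSTR 1 and CSTR 2."""
    fe, sh, da = x                      # Feeding, Sherwood, Damkohler numbers
    c1, c2 = c
    cf, q_v1, q_v2 = 1.0, 0.5, 0.5      # assumed feed conc. and flow/volume ratios
    r1, r2 = c1, c2                     # assumed first-order consumption rates
    dc1 = fe * q_v1 * (cf - c1) - sh * (c1 - c2) - da * r1
    dc2 = fe * q_v2 * (c1 - c2) + sh * (c1 - c2) - da * r2
    return [dc1, dc2]

def objective(x, t_exp, c_exp):
    """Sum of squared deviations between model and experimental points."""
    sol = solve_ivp(dsbr_rhs, (0.0, t_exp[-1]), [1.0, 0.0],
                    t_eval=t_exp, args=(x,))
    return np.sum((sol.y - c_exp) ** 2)

def pso(obj, bounds, n_part=30, n_iter=50, w=0.7, c1=1.5, c2=1.5):
    """Plain PSO (Kennedy and Eberhart, 1995) for the global search phase."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    pos = lo + np.random.rand(n_part, len(lo)) * (hi - lo)
    vel = np.zeros_like(pos)
    pbest, pval = pos.copy(), np.array([obj(p) for p in pos])
    for _ in range(n_iter):
        gbest = pbest[pval.argmin()]
        r1, r2 = np.random.rand(2, n_part, len(lo))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = np.array([obj(p) for p in pos])
        better = val < pval
        pbest[better], pval[better] = pos[better], val[better]
    return pbest[pval.argmin()]

# Synthetic "plant data" standing in for the sampled COD/NH4+/NO3-/DO points.
t_exp = np.linspace(0.1, 5.0, 10)
c_exp = solve_ivp(dsbr_rhs, (0, 5), [1.0, 0.0], t_eval=t_exp,
                  args=([0.8, 0.3, 0.5],)).y
bounds = np.array([[0.0, 2.0]] * 3)
obj = lambda x: objective(x, t_exp, c_exp)
x_pso = pso(obj, bounds)                               # stochastic phase
x_opt = minimize(obj, x_pso, method="CG").x            # deterministic refinement
print("estimated dimensionless parameters:", x_opt)
```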
2.3. DMT Problem
The problem to be solved consists of finding the optimal aeration time in a DSBR steady-state cycle (oxic/anoxic/idle/settling/draw sequence), for a given WW input quality and flow rate, in order to meet the desired standards in the discharged effluent. The solution of a steady-state cycle problem was considered adequate, as the DSBR input varies seasonally depending on the milk production in the factory. As some of the initial values for the states are unknown, the DMT problem was formulated to include them as decision variables. The constrained dynamic NLP problem to be solved for the DMT is presented below:

$$\min_{z}\; S(z) = \sum_{k=1}^{2}\sum_{i=1}^{i_{max}} w_i \left[c_{end,i,k}(z) - c_{0,i,k}(z)\right]^2$$
$$\text{s.t.}\quad \dot{c} = f(c,z),\qquad c(0) = c_0,\qquad z > 0,\qquad T[c(t_f)] \le 0$$
$$c_{end,i,k}(z) = c_{i,k}(z,t_f),\qquad c_{0,i,k} = c_{i,k}(t = 0),\qquad c_{0,i,k} \in c_0$$

Where S is the objective function; z, a 6-dimensional vector of decision variables (oxic time, NH4+ and NO3- initial concentrations in both reactors, and COD initial concentration in CSTR 1); w, a weight function vector; i and k, indexes that identify the state (only COD, NH4+ and NO3- are involved in the objective function) and the reactor, respectively; c0,i,k and cend,i,k, initial and final concentration vectors; T, a vector of terminal constraints in accordance with
environmental regulations (only the COD concentration in CSTR 2 is considered); and tf, the cycle time. This problem was solved through a sequential shooting (adaptive finite differences)/conjugate gradient algorithm. Due to the low-investment requirement and the need to deploy the tool on more than one computer, the solution was implemented in a Microsoft Excel interface.
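To illustrate the cyclic steady-state formulation, the sketch below poses the DMT problem with a generic constrained solver. The one-component cycle model, weights and discharge limit are invented assumptions, and SLSQP stands in for the authors' sequential shooting/conjugate gradient scheme.

```python
# Hedged sketch of the DMT problem: choose the oxic time and the unknown
# initial concentrations so the cycle is (approximately) periodic while the
# final effluent meets a COD limit. The cycle model is a crude stand-in.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

T_CYCLE = 1.0          # cycle time tf (d), assumed
COD_LIMIT = 0.15       # terminal constraint T[c(tf)] <= 0, assumed limit

def cycle(z):
    """Integrate one oxic/anoxic cycle; z = (oxic_time, c0_cod, c0_nh4)."""
    t_ox, c0_cod, c0_nh4 = z
    def rhs(t, c):
        aer = 1.0 if t < t_ox else 0.0        # aeration on during oxic phase
        cod, nh4 = c
        return [-(0.5 + 2.0 * aer) * cod,     # faster COD removal when aerated
                -3.0 * aer * nh4]             # nitrification only when aerated
    sol = solve_ivp(rhs, (0.0, T_CYCLE), [c0_cod, c0_nh4])
    return sol.y[:, -1]                        # c_end

def S(z):
    """Periodicity objective: weighted gap between end and initial states."""
    c_end = cycle(z)
    c0 = np.array(z[1:])
    w = np.array([1.0, 1.0])
    return float(np.sum(w * (c_end - c0) ** 2))

cons = [{"type": "ineq", "fun": lambda z: COD_LIMIT - cycle(z)[0]}]
res = minimize(S, x0=[0.5, 0.3, 0.2], constraints=cons,
               bounds=[(0.05, T_CYCLE), (0.0, 2.0), (0.0, 2.0)],
               method="SLSQP")
print("oxic time and initial concentrations:", res.x)
```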
3. Results and Discussion
3.1. Data Mining
Predicted and experimental data after PE, and the optimal results for the decision variables, are presented in Figure 2 and Table 1, respectively. As can be seen, the PE procedure performed reasonably well, even though data reconciliation (DR) techniques were not applicable because no reference values were available for the estimated decision variables. Points affected by gross errors (data not shown) represented 16% of the total sample.

Figure 2: Predicted vs. experimental component values (o); predicted = experimental curve (...). PSO with 500 particles, deadening constant = 2, and 20 iterations; conjugate gradient with steepest descent/line-search algorithm.

Table 1: Optimal Parameters
Symbol | Name & Dimension | Value
XA0/XH0 | Autotrophic to Heterotrophic Initial Biomass Ratio (%) | 2.98
Sh | Sherwood Number | 20.6
KS | Saturation Constant for Organic Matter (gCOD.m-3) | 0
KSN | Saturation Constant for Nitrate (gN.m-3) | 1.99E-5
KIO | Inhibition DO Constant in Denitrification (gO2.m-3) | 3.46
KSn | Saturation Constant for Ammonia (gN.m-3) | 0
muH,max | Maximum Specific Aerobic Growth Rate for XH (d-1) | 0.0160
muA,max | Maximum Specific Growth Rate for XA (d-1) | 0.0655
kla1 | Oxygen Mass Transfer Coefficient for CSTR 1 (d-1) | 0.655
kla2 | Oxygen Mass Transfer Coefficient for CSTR 2 (d-1) | 24.1
3.2. Decision Making Tool The software takes into account the available aeration power, the weather conditions and the number of desired reactor cycles. Table 2 presents the software input and output data. The user can graphically follow the evolution of most of the states of the system (components and total volume); as an example, Figure 3 shows the COD and DO profiles obtained during an optimal cycle for a known reactor inlet flow rate and composition.
4. Conclusion A simple model for the DSBR was proposed and calibrated with reasonable accuracy. A user-friendly off-line model-based DMT was built to aid the aeration management of the reactor. The achieved numerical precision and computational burden (1 minute per run) were acceptable. Despite the good results obtained at this stage, the decision support system can be improved by increasing the precision of the ODE/DAE solution,
by getting feedback for gross error detection, data reconciliation and parameter estimation, and by improving the global optimization search to enable more complex decision strategies. Real-time optimization and model predictive control will be addressed in future work.

Table 2. DMT inputs and outputs
Input data: Flow rate (m3/d); COD feed conc. (mg/L); Nitrogen feed conc. (mg/L); Aeration power CSTR 1 (Hp); Aeration power CSTR 2 (Hp); Minimum liquor level (m); Settling time (h); Sludge concentration (mgTSS.l-1); Number of cycles per day; Desired COD conc. in final effluent (mg/L); Season.
Output data: Oxic time (h); Drain time (h); Cycle time; Idle time (h); Maximum level (m); Drain flow rate (m3/h); Initial ammonia conc. in CSTR 1 and 2; Initial nitrate conc. in CSTR 1 and 2; Initial COD conc. in CSTR 1.
Figure 3. COD and DO evolution
References
Artan, N.; Orhon, D. (2005). Mechanism and design of sequencing batch reactors for nutrient removal. Scientific and Technical Report No. 19, IWA Publishing.
Chambers, B. (1993). Batch operated activated sludge plant for production of high effluent quality at small works. Water Science and Technology, 28(10), 251-258.
Cristea, S.; De Prada, C.; Sarabia, D.; Gutiérrez, G. (2010). Aeration control of a wastewater treatment plant using hybrid NMPC. Computers and Chemical Engineering, in press.
Ferrari, A.; Gutiérrez, S.; Biscaia Jr., E.C. (2010). Development of an optimal operation strategy in a sequential batch reactor (SBR) through mixed-integer particle swarm dynamic optimization (PSO). Computers and Chemical Engineering, 34(12), 1994-1998.
Ferrari, A.; Gutiérrez, S.; Benítez, A. (2011). Kinetic and hydraulic dynamic model calibration for a real dairy cheese activated sludge process. (Manuscript in preparation.)
Kennedy, J.; Eberhart, R. (1995). Particle swarm optimization. Proceedings of the IEEE International Conference on Neural Networks (Perth, Australia), 1942-1948.
Souza, S.M.; Araújo, O.; Coelho, M. (2008). Model-based optimization of a sequencing batch reactor for biological nitrogen removal. Bioresource Technology, 99(8), 3213-3223.
Srinivasan, B.; Palanki, S.; Bonvin, D. (2003). Dynamic optimization of batch processes. I. Characterization of the nominal solution. Computers and Chemical Engineering, 27, 1-26.
Acknowledgement. The decision making tool described in this paper is the property of the Conaprole Company.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A Novel CP Approach for Scheduling an Automated Wet-Etch Station Juan M. Novas, Gabriela P. Henning INTEC (UNL-CONICET), Güemes 3450, Santa Fe, CP 3000, Argentina
Abstract This paper presents a novel, efficient, and expressive Constraint Programming (CP) model to address the Automated Wet-Etch Station (AWS) scheduling problem. This CP model overcomes some of the shortcomings of previous approaches, since it can explicitly take into account the time required by an empty robot to move from the bath where it dropped a wafer lot to another bath where it must pick up a different lot. Such transfer times, which depend on the travel distance, need to be considered to avoid overexposure of wafers in chemical baths. The formulation has been extensively evaluated in productivity maximization scenarios. Keywords: Automated wet-etch station, Resource-constrained scheduling, Constraint programming, Multiproduct batch plants
1. Introduction Wet-etching processes are used to chemically remove layers from the wafers' surfaces during their manufacturing. An automated wet-etch station (AWS) includes a series of successive chemical and water/de-ionizing baths and a shared material handling system, having one or more devices/robots in charge of moving wafer lots from one bath to the next. It poses a tough scheduling problem in which processing tasks must be properly synchronized with wafer transfer activities between baths, while productivity is maximized by minimizing the makespan (the objective function pursued in industry). This problem has attracted the attention of several researchers over the last two decades. Various efficient MILP models (Karimi et al., 2004; Aguirre and Méndez, 2010) and heuristic algorithms (Bhushan and Karimi, 2004) have been proposed. This paper presents a novel and efficient Constraint Programming (CP) approach that addresses the problem and takes into account an important feature that has so far been neglected by other authors.
2. Problem Main Characteristics An AWS is a non-intermediate storage (NIS) flowshop consisting of a sequence of alternating bath stages, through which wafer lots pass for etching and rinsing. The first is a chemical stage, followed by a water/de-ionizing rinsing one; then, both continue to alternate. If a wafer lot stays in a chemical bath longer than the prescribed etching time, it is damaged. In consequence, chemical stages (EtchSt) must adhere to a zero-wait (ZW) storage policy. Conversely, rinsing stages (RinseSt) can hold wafers longer without damaging them; hence, they follow an unlimited-wait (UW) storage policy. Thus, the entire process is a NIS flowshop with a mix of ZW and UW operating policies. Other authors have neglected the movement of unloaded robots, leading to agendas that may be infeasible or can result in damaged wafer lots due to overexposure in chemical baths. This limitation is overcome in the proposed model.
3. Constraint Programming Formulation
The CP model has been implemented in the OPL language supported by ILOG Solver (ILOG, 2000a), employing specific scheduling constructs available in ILOG Scheduler (ILOG, 2000b). Three types of activities are explicitly taken into account in the formulation: (i) The processing of a wafer lot j, belonging to the set of Jobs, to be treated at stage st in a chemical or water/de-ionizing bath b belonging to the set of Baths. This activity is captured by Taskj,st and is described by means of duration, start and end time variables (i.e. Task.duration, Task.start and Task.end), which are related among themselves, but only two are independent variables. (ii) The transport of a wafer lot j to the chemical or rinsing bath of stage st from the previous stage bath, or from the input buffer if st = 1. This activity is executed by a transfer device/robot r belonging to the set of Robots. It is captured by Transfj,st and is also described in terms of start, end, and duration variables. (iii) The movement of unloaded transfer devices/robots between baths. This activity is characterized by the time employed by the device to move from the place where it has just left a wafer lot to another bath/input buffer to pick up a different lot to be transported. Thus, it can be modeled as a bath-dependent changeover of each transfer device/robot.
3.1 Assignment and precedence constraints for etching and rinsing activities.
Constraint (1) is an assignment relation prescribing that each processing or rinsing operation of wafer lot j must be assigned to a bath belonging to the set of baths (Baths). This constraint works as described if, in addition, this set has been declared as unary in the ILOG environment. Constraint (1) is complemented by constraint (2), which negates the ActivityHasSelectedResource predicate to forbid the assignment of any bath that does not match the type of operation of stage st required by the Taskj,st activity. Constraint (3) enforces a proper sequencing of all the operations corresponding to any two consecutive stages (st and st') that need to be performed on each wafer lot j.

Taskj,st requires Baths, ∀ j ∈ Jobs, ∀ st ∈ Stages  (1)

not ActivityHasSelectedResource(Taskj,st, Baths, b), ∀ j ∈ Jobs, ∀ st ∈ Stages, ∀ b ∉ Bathsst  (2)

Taskj,st precedes Taskj,st', ∀ j ∈ Jobs, ∀ st, st' ∈ Stages, st ≠ last(Stages), Ord(st') = Ord(st) + 1  (3)
3.2 Assignment and precedence constraints for wafer lot transfer activities.
Constraint (4) is an assignment relation prescribing that each transfer activity Transfj,st of a wafer lot j demands a robot/transfer device. Constraint (5) enforces a proper sequencing of all the transfer operations demanded by a wafer lot j corresponding to consecutive stages (st and st').

Transfj,st requires Robots, ∀ j ∈ Jobs, ∀ st ∈ Stages  (4)

Transfj,st precedes Transfj,st', ∀ j ∈ Jobs, ∀ st, st' ∈ Stages, st ≠ last(Stages), Ord(st') = Ord(st) + 1  (5)
3.3 Precedence constraints for manufacturing and transfer activities.
Constraint (6) prescribes that the transfer of a wafer lot j to a stage st precedes the manufacturing operation (either etching or rinsing) carried out in that stage.

Transfj,st precedes Taskj,st, ∀ j ∈ Jobs, ∀ st ∈ Stages  (6)
3.4 Timing constraints for etching and rinsing activities.
The residence time of a wafer lot j at a bath b belonging to an etching stage should be exactly equal to the processing time ptjb to avoid wafer damage. On the other hand, the residence time of a wafer lot j at a bath b belonging to a rinsing stage can be greater than the rinsing time rtjb. These two conditions are captured by constraints (7) and (8), respectively.

ActivityHasSelectedResource(Taskj,st, Bath, b) ⇒ Taskj,st.duration = ptjb, ∀ j ∈ Jobs, ∀ st ∈ EtchSt, ∀ b ∈ Bathst  (7)

ActivityHasSelectedResource(Taskj,st, Bath, b) ⇒ Taskj,st.duration ≥ rtjb, ∀ j ∈ Jobs, ∀ st ∈ RinseSt, ∀ b ∈ Bathst  (8)
3.5 Timing of transfer activities and coordination with the manufacturing ones.
The processing of a wafer lot j at a bath belonging to the first stage (st = 1) starts ttst time units after the movement of the lot from the input buffer (ib) has started. The transfer activity is allowed to have a duration greater than ttst in case the robot remains idle after leaving the wafer lot. These conditions are captured by constraint (9). Also, when a wafer lot is transferred between two consecutive stages st and st', the movement starts as soon as the processing in the predecessor bath finishes, and the processing in the successor bath begins ttst' time units after the lot transport has started. Once again, the transfer activity is allowed to have a duration greater than ttst' in case the robot remains idle after dropping the wafer lot. These conditions are captured by constraint (10). Likewise, when the processing in the last water/de-ionizing stage ends, the wafer lot can start its movement towards the output buffer (ob), which for simplicity has been modeled as a final stage. This transport is also allowed to have a duration greater than ttst' in case the robot remains idle after leaving the wafer lot in the output buffer. These conditions are modeled through constraint (11). Finally, constraint (12) captures a synchronization condition prescribing that the transfer of a wafer lot j from a bath b belonging to stage st to the following stage has to take place before another wafer lot i arrives at that bath.

ActivityHasSelectedResource(Taskj,st, Bath, b) ⇒ Taskj,st.start = Transfj,st.start + ttst ∧ Transfj,st.duration ≥ ttst, ∀ j ∈ Jobs, st = First(Stages), ∀ b ∈ Bathst  (9)

ActivityHasSelectedResource(Taskj,st, Bath, b) ∧ ActivityHasSelectedResource(Taskj,st', Bath, bb) ⇒ Transfj,st'.start = Taskj,st.end ∧ Taskj,st'.start = Transfj,st'.start + ttst' ∧ Transfj,st'.duration ≥ ttst', ∀ j ∈ Jobs, ∀ st, st' ∈ Stages, st' ≠ last(Stages), Ord(st') = Ord(st) + 1, ∀ b ∈ Bathst, ∀ bb ∈ Bathst'  (10)

ActivityHasSelectedResource(Taskj,st, Bath, b) ⇒ Taskj,st.end ≤ Transfj,st'.start ∧ Transfj,st'.duration ≥ ttst', ∀ j ∈ Jobs, st' = Last(Stages), Ord(st') = Ord(st) + 1, ∀ b ∈ Bathst  (11)

ActivityHasSelectedResource(Taskj,st, Bath, b) ∧ ActivityHasSelectedResource(Taski,st, Bath, b) ⇒ Taski,st.start ≥ Taskj,st.end + ttst' + ttst ∨ Taskj,st.start ≥ Taski,st.end + ttst + ttst', ∀ j ∈ Jobs, ∀ i ∈ Jobs, j ≠ i, ∀ st, st' ∈ Stages, st ≠ last(Stages), Ord(st') = Ord(st) + 1, ∀ b ∈ Bathst  (12)
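To see what the disjunction in constraint (12) enforces, the minimal sketch below checks whether two lots can share the same bath given the required separation. The task records and times are illustrative, not taken from the paper's case studies.

```python
# Minimal sketch of the bath-reuse condition of constraint (12): two lots
# assigned to the same bath must be separated by the loaded transfer times
# of the bath and of the next stage, in one order or the other.
from dataclasses import dataclass

@dataclass
class Task:
    start: float
    end: float

def bath_reuse_ok(task_j: Task, task_i: Task, tt_st: float, tt_next: float) -> bool:
    """True if lots i and j can share the bath without overexposure risk."""
    sep = tt_st + tt_next
    return (task_i.start >= task_j.end + sep) or (task_j.start >= task_i.end + sep)

# Example: lot j occupies the bath during [0, 10]; lot i may enter at 13.5
# only if the two transfer times (1.5 + 1.8 time units) fit in between.
print(bath_reuse_ok(Task(0.0, 10.0), Task(13.5, 20.0), 1.5, 1.8))  # True
```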
3.6 Movement of the unloaded transfer device/robot between baths. To incorporate the transition time associated with this movement, every stage st is assigned a state[st], as indicated in (13). In addition, transfer tasks and devices need to be declared in a
special way by using the state and transition type constructs. The declaration shown in (14) associates each transfer activity required by wafer lot j with the state of stage st that is its destination. Furthermore, transition times need to be declared: expression (15) defines them as a matrix having as many rows and columns as the number of stages. Additionally, the transfer devices/robots are declared so as to include the transition times, as shown in expression (16).

Stages state[Stages]  (13)

Transfj,st TransitionType state[st], ∀ j ∈ Jobs, ∀ st ∈ Stages  (14)

TransitionTime[Stages, Stages]  (15)

UnaryResource robot[Robots](TransitionTime)  (16)
3.7 Objective function related constraint. Since the makespan is the performance measure to be minimized, constraint (17) should also be included in the model.

Transfj,st.end precedes Mk, ∀ j ∈ Jobs, st = last(Stages)  (17)
4. Computational Results and Discussion
The CP model was tested with various case studies having different dimensionalities. Data were generated following the approach proposed by Karimi et al. (2004). Two scenarios have been considered: (i) ignoring (as previous authors did) the movements of the unloaded transfer device, and (ii) taking them into account. Table 1 presents the loaded transfer times, as well as the distance-dependent times associated with empty trips. Table 2 shows the computational results for various examples of a one-robot AWS.

Table 1. Loaded transfer times (first row) and times for empty transfer trips
  to:      ib    b1    b2    b3    b4    b5    b6    b7    b8    ob
loaded    0.0   1.0   2.0   1.5   1.8   2.5   1.5   1.2   1.3   1.4
from b1   1.0   0.0   2.0   3.5   5.3   7.8   9.3  10.5  11.8    --
from b2   3.0   2.0   0.0   1.5   3.3   5.8   7.3   8.5   9.8    --
from b3   4.5   3.5   1.5   0.0   1.8   4.3   5.8   7.0   8.3    --
from b4   6.3   5.3   3.3   1.8   0.0   2.5   4.0   5.2   6.5    --
from b5   8.8   7.8   5.8   4.3   2.5   0.0   1.5   2.7   4.0    --
from b6  10.3   9.3   7.3   5.8   4.0   1.5   0.0   1.2   2.5    --
from b7  11.5  10.5   8.5   7.0   5.2   2.7   1.2   0.0   1.3    --
from b8  12.8  11.8   9.8   8.3   6.5   4.0   2.5   1.3   0.0    --
from ob  14.3  13.3  11.3   9.8   7.9   5.4   3.9   2.7   1.4    --
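Since the empty-trip times in Table 1 grow with the travel distance along the bath line, a transition-time matrix of this kind can be generated from one-dimensional location positions, as the sketch below shows. The positions and unit travel time are illustrative assumptions fitted by eye, not data from the paper.

```python
# Sketch of a distance-dependent transition-time matrix for the empty robot,
# in the spirit of Table 1: locations lie on a line, and the empty-trip time
# between two locations is proportional to the distance between them.
import numpy as np

locations = ["ib", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "ob"]
position = np.array([0.0, 1.0, 3.0, 4.5, 6.3, 8.8, 10.3, 11.5, 12.8, 14.2])
time_per_unit = 1.0  # assumed robot travel time per unit distance

# transition[a][b]: time for the unloaded robot to move from a to b
transition = np.abs(position[:, None] - position[None, :]) * time_per_unit

i, j = locations.index("b2"), locations.index("ib")
print(f"empty trip b2 -> ib: {transition[i, j]:.1f}")  # 3.0, matching Table 1
```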
Table 2. Computational results for a one-robot AWS
Stages x Jobs | Without empty robot transition (Makespan / CPU time, s*) | With empty robot transition (Makespan / CPU time, s*)
4 x 4  |  58.84 / <1    |  60.00 / <1
4 x 6  |  74.03 / <1    |  75.70 / <1
4 x 8  |  84.13 / 8.3   |  86.86 / 20.5
4 x 10 |  99.50 / 356.5 | 102.31 / 610.5
6 x 4  |  72.73 / <1    |  73.70 / <1
6 x 6  |  88.04 / <1    |  90.51 / 2.2
6 x 8  | 101.25 / 39.1  | 104.43 / 67.6
8 x 4  |  96.29 / <1    |  97.11 / 3.9
8 x 6  | 111.58 / 4.2   | 114.01 / 586.5
8 x 8  | 123.30 / 18.6  | 127.57 / 850.7
* On a notebook with AMD Turion 64x2 Mobile Technology TL-60 (2.00 GHz, 2.00 GB RAM)
The model has rendered optimal results for small and medium size problems in low CPU times. When empty transfer trips of the robot/s are neglected, the model finds the same solutions described by other authors and, in some cases, it renders solutions exhibiting a lower makespan value. Moreover, when the transfer times of the empty
robot are included in the model, the approach yields solutions exhibiting greater values of makespan, thus demonstrating the importance of this problem element. The impact of these times can be seen when comparing the Gantt diagrams of Figures 1.a and 1.b.
(a)
(b) Figure 1. Optimal schedules for the 4x4 problem. (a) Without considering the times of empty transfer trips. (b) Taking into account the times of the unloaded transfer device.
To assess the impact of the times associated with the movement of the empty robot, let's consider the partial view of the Gantt diagram shown in Fig. 2. The A interval represents the time taken by the robot to go from b2, where it dropped wafer lot j1, to the input buffer ib to pick up wafer lot j3. Similarly, interval B represents the time required by the robot to move from b1 to b4 in order to pick up wafer lot j2, which then needs to be moved to the output buffer. It can be seen that these times cannot be neglected if a proper synchronization of the chemical baths is pursued. This can only be done if the transfer tasks Transfj,st, their associated constraints, and their transition times (representing empty movements) are included in the model. In this way we overcome the limitation of previous approaches that have neglected the movement of unloaded robots.
Figure 2. Detailed view of the times taken by the transfer device to make all the required movements.
References
A. Aguirre, C. Méndez, 2010, A novel optimization method to automated wet-etch station scheduling in semiconductor manufacturing systems, Proc. ESCAPE20, 883-888.
S. Bhushan, I.A. Karimi, 2004, Heuristic algorithms for scheduling an automated wet-etch station, Computers and Chemical Engineering, 28, 363-379.
ILOG, 2000a, ILOG Solver 5.0 User's Manual. France: ILOG.
ILOG, 2000b, ILOG Scheduler 5.0 User's Manual. France: ILOG.
I.A. Karimi, Z.Y.L. Tan, S. Bhushan, 2004, An improved formulation for scheduling an automated wet-etch station, Computers and Chemical Engineering, 29, 217-224.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Agent-based coordination framework for disruption management in a chemical supply chain
Behzad Behdani a, Zofia Lukszo a, Arief Adhitya b, Rajagopalan Srinivasan b,c
a Faculty of Technology, Policy and Management, TU Delft, the Netherlands
b Institute of Chemical and Engineering Sciences, A*STAR, Singapore
c National University of Singapore, Dept. of Chemical and Biomolecular Eng., Singapore
Abstract In most situations, managing disruptions in a chemical supply chain requires more than one actor to be involved and more than one activity to be managed. Consequently, proper coordination is a key requirement for successful disruption management in a supply chain. This, in particular, calls for a clear understanding of what coordination means and how coordination mechanisms can be designed, which is the focus of this work. The conceptual framework for coordination presented here is implemented in an agent-based model that allows decision makers to evaluate and compare the effect of different coordination mechanisms on the performance of the system. This enables an improved understanding of coordination in disruption management before implementation in the real-life system. Keywords: Agent-based modeling, coordination, disruption management, supply chain.
1. Introduction A chemical supply chain faces many types of disruptions in its daily operation. Although in most cases the disruption may happen at one specific entity in the network, disruption management generally necessitates the involvement of more than one actor and the management of more than one activity. As an example, when a disruption affects one of the suppliers, the procurement department in a production plant may start negotiating with alternative suppliers for emergency procurement of raw material. However, some other solutions for managing this disruption might require involving other departments (e.g., the scheduling department, to reschedule current orders considering the new constraints imposed on the inventory management process, or the sales department, to consider the new raw material constraint when accepting orders from customers). Moreover, if the production plant is part of a multi-site enterprise, there are more options for managing the disruption that call for more than one plant to be involved, e.g., by exchanging the orders that must be processed by each plant (Behdani et al., 2010b). On the other hand, disruptions in a supply chain may lead to a shortage of resources, and efficient use of available resources is crucial for disruption management. It might even be essential that the existing resources of each actor be known and shared with others, and this requires coordination across a network of actors. In addition, time pressure and urgency are key challenges in emergency response, and in most situations the time available for decision-making and coordination is severely constrained (Chen et al., 2008). Given these points, proper coordination is a key requirement for managing disruptions in a supply chain. However, the main challenge for an organization is how to design the coordination structures and evaluate their effectiveness in coping with disruptions. Simulation and modeling address this by providing an experimental setting for comparing the benefits of coordination mechanisms with respect to each other. In particular, agent-based modeling and simulation is a suitable way to represent
heterogeneous, autonomous actors in the disruption management process, enabling the simulation of aggregate behavior emerging from local agent behaviors. With such a model, it is possible to operationalize different coordination mechanisms in order to compare and evaluate them in an artificial setting, before moving on to training, planning and actual use during a real disruption. In this paper, we present a generic framework describing the main concepts that must be considered in defining coordination mechanisms. Based on this conceptualization, coordination structures for disruption management can be designed and implemented in an agent-based model of a multi-plant enterprise. The remainder of the paper is structured as follows. In Section 2, an overview of the literature on coordination in supply chains is presented, followed by a framework for designing coordination mechanisms for disruption management in Section 3. Section 4 discusses the application of this framework with an illustrative example. Finally, Section 5 gives some concluding remarks.
2. Coordination in supply chains In simple and general terms, coordination can be considered as "the act of managing interdependencies between activities performed to achieve a goal" (Malone and Crowston, 1994), and the methods used to manage the interdependencies are called coordination mechanisms (Malone et al., 1999). Malone et al. (1999) classify these interdependencies into three basic groups: flow, sharing, and fit. These three types of dependencies arise from resources that are related to multiple activities. Flow dependencies arise whenever one activity produces a resource that is used by another activity (e.g., sequential operations in a production line). Sharing dependencies occur whenever multiple activities all use the same resource (e.g., two activities need to be done by the same person or with the same machine). Finally, fit dependencies arise when multiple activities collectively produce a single resource (e.g., when several manufacturing lines or plants produce different parts of a specific product). This way of describing coordination problems is similar to the Resource-Task Network (RTN) representation for scheduling processes originally discussed by Pantelides (1994). Besides activities and resources, the other concept that is mostly emphasized by researchers is the "actor" (Crowston and Osborn, 2003); the activities must be done by different actors, and these actors may have several intra- and inter-organizational dependencies. These dependencies stem from the inability of one actor to control all the conditions (and resources) necessary to perform an activity or make a specific decision. As an example, for a sales department to make a commitment on a customer's order due date, it might need the information (as a resource) of the current production schedule determined by the operations department. The role of "actors" in coordination is also discussed under the "decision-making pattern" in the literature. For instance, Anand and Mendelson (1997) describe a firm's coordination structure by two main dimensions: (1) its decision-rights structure, and (2) its information structure. Based on this conceptualization, they model the case of a manufacturing firm that sells its product in several horizontal markets subject to demand uncertainty. The firm has to design its coordination structure, which determines (i) who makes the quantity decisions for each market, and (ii) what information is available to each decision maker. Consequently, they compare the performance of three coordination structures: (i) centralized, where the center makes all the decisions using all the data (e.g., past sales of all branches); (ii) decentralized, where each branch makes its own sales decision based on its own local knowledge and data; and (iii) fully
distributed, where all data are shared and hence each branch makes its decisions based on both its own local knowledge and all the data.
3. Conceptual framework for coordination in disruption management
Considering the literature on coordination in supply chains, coordination mechanisms for managing disruption can be differentiated based on three elements:
- Activity Pattern: the list and sequence of activities that must be performed and coordinated to manage a disruptive event. The resources needed by each activity and the resources created by each activity must be described.
- Actor Pattern: the list of all actors involved in the disruption management process. The activities to be performed and the decisions that must/can be made by each actor are also part of the "actor pattern" of a coordination mechanism.
- Resource Pattern: the list of resources existing after a disruption occurs. To design a coordination mechanism, it is necessary to know whether the resources can be substituted for one another (Carley, 2003) and, if so, how these resources are/must be shared between several actors (Xu and Beamon, 2006).
Most previous works on coordination limit their consideration to "information sharing". Although information can be considered an important resource for each actor, there might be several other resources to be shared and communicated within the context of coordination. For example, the production plants of an enterprise might share their production facilities or raw material with each other when there is a disruption in the supply chain of one of them.
Figure 1: Coordination framework for disruption management in supply chains
Clearly, these dimensions are not isolated. As an illustration, the information on the first possible time for fulfilling an order (a resource) sent by the scheduling department of a production plant (an actor) will influence the order negotiation with customers (an activity) performed by the global sales department (an actor). Indeed, if just one element were different in this setting (for example, if the information from the scheduling department were the cost of fulfilling that order), then the order negotiation might be organized in a different way and the production/sales integration would be different. In conclusion, the presented framework for coordination can be useful to gain a clear understanding of what coordination means and how it can be achieved for managing disruptions in the supply chain domain. Accordingly, different disruption response plans can be designed by determining the activity pattern (which activities must be done, and in which sequence), the actor pattern (who must decide about what and who must do which activities) and the resource pattern (who must send which resources to whom
and which resources must be shared). Moreover, this framework provides a conceptual basis for developing an agent-based model to experiment with different coordination configurations coping with several disruptions, as discussed in the next section.
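As a concrete illustration of the three patterns, the sketch below encodes a coordination mechanism as plain data structures. The class and field names are our own illustrative choices, not part of the authors' framework or of their Repast implementation.

```python
# Illustrative encoding of a coordination mechanism as the triple
# (activity pattern, actor pattern, resource pattern) described above.
# All names are hypothetical; they merely make the framework concrete.
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    consumes: list[str]        # resources needed by the activity
    produces: list[str]        # resources created by the activity

@dataclass
class Actor:
    name: str
    performs: list[str]        # activities this actor carries out
    decides_on: list[str]      # decision rights of this actor

@dataclass
class CoordinationMechanism:
    activities: list[Activity]                 # activity pattern (with sequence)
    actors: list[Actor]                        # actor pattern
    resource_sharing: dict[str, list[str]]     # resource pattern: resource -> actors sharing it

# Example: centralized order acceptance during a high-demand disruption.
mechanism = CoordinationMechanism(
    activities=[Activity("order_acceptance", ["schedule_info"], ["accepted_order"]),
                Activity("order_scheduling", ["accepted_order"], ["schedule_info"])],
    actors=[Actor("GSD", ["order_acceptance"], ["which plant gets each order"]),
            Actor("SchedulingDept", ["order_scheduling"], ["processing sequence"])],
    resource_sharing={"schedule_info": ["GSD", "SchedulingDept"]},
)
```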
4. Illustrative Case Study The case study described in this section is a chemical enterprise consisting of a global sales department (GSD) and 3 production plants in different locations. Each plant has several functional departments with particular roles and tasks. The internal operation of the plants and their departments is discussed by Adhitya and Srinivasan (2010) and Behdani et al. (2010a). The goal is to fulfill a set of customer orders by assigning them to different plants and coordinating the behavior of the different departments in each plant. The plants work on a make-to-order basis. The GSD receives orders from customers and passes the order information to the scheduling department of each plant. Based on its current schedule, each scheduling department sends the earliest date for fulfilling the order, and the GSD then assigns the order to the plant with the first possible fulfillment time. Within each plant, the scheduling department determines the schedule of the orders to be processed, following the "Processing Earliest Due Date" (PEDD) policy. The scheduling department of each plant can also accept orders from the local customers of that plant. Based on this description, an agent-based model of this multi-plant enterprise supply chain with 10 agent types (Customer, GSD, Production Plant, Scheduling Dep., Operations Dep., Storage Dep., Packaging Dep., Procurement Dep., Logistics Dep., and Supplier) has been developed in Java using the Repast simulation platform. This model is used here to experiment with different coordination configurations to cope with disruptions that the enterprise might face during its daily operation. The disruption considered is a seasonal high-order pattern, in which the orders from customers increase abruptly. For the numerical simulation, we assume that there is a high-order period from day 180 to day 240 in which the average order rate is 2.5 orders per day; for the rest of the simulation time, it is 1 order per day (Figure 2). The challenge for the enterprise is to define the organizational structure that best manages its activities in coping with this abnormal pattern. Our main attention in this paper is on the downstream side of the enterprise and the process of order assignment to the different plants. Accordingly, there are two main activities (order acceptance and order processing/scheduling) that must be coordinated, and two main actors (the GSD and the scheduling department of each plant) to perform these activities. Two different structures are considered here: (1) in the first setting, called the Base Case, both the GSD and the scheduling department of each plant are responsible for order acceptance (as mentioned above); (2) in the second case (Coordinated), although during low-demand periods the scheduling department of each plant can accept orders from local customers, during the high-demand period the GSD is the only actor with the right to accept orders from customers. Therefore, in this disruption period, the scheduling department of each plant just processes the orders sent by the GSD and does not accept orders from its local customers. Consequently, the main determinant of the coordination structure in these cases is the division of responsibilities between the two actors (the decision-making pattern for order acceptance during the disruption period). Figure 3 shows the enterprise profit for the two cases.
Although the enterprise's profit is approximately the same for the low-demand periods in the two cases, the final annual profit is 10 percent higher in the coordinated case. However, the impact of this new organizational setting on the performance of the enterprise is not immediate; it takes time for the
disruption management practices to show their full effect on system performance. Consequently, if the seasonality in the demand is predictable, it might be preferable to switch to the new organizational setting before the start of the high season.
Figure 2. Seasonal order pattern
Figure 3. Profit with and without coordination
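To make the comparison between the two structures concrete, the sketch below reproduces their order-acceptance logic in a toy discrete-time simulation. The capacities, margins and order stream are invented placeholders, and the sketch stands in for (and greatly simplifies) the authors' Java/Repast model.

```python
# Toy simulation of the two order-acceptance structures. In the Base Case the
# plants also accept local orders during the high season; in the Coordinated
# case only the GSD accepts orders between days 180 and 240 and routes each
# one to the least-loaded plant. All numbers are illustrative placeholders.
import random

HIGH_SEASON = range(180, 240)
CAPACITY = 1.0          # assumed orders a plant can process per day
MARGIN, LATE_PENALTY = 10.0, 4.0

def simulate(coordinated: bool, days: int = 360, n_plants: int = 3) -> float:
    random.seed(0)
    backlog = [0.0] * n_plants
    profit = 0.0
    for day in range(days):
        rate = 2.5 if day in HIGH_SEASON else 1.0
        n_orders = int(rate) + int(random.random() < rate % 1)  # crude Poisson-like draw
        for _ in range(n_orders):
            if coordinated and day in HIGH_SEASON:
                plant = min(range(n_plants), key=lambda p: backlog[p])  # GSD routing
            else:
                plant = random.randrange(n_plants)  # local acceptance scatters orders
            backlog[plant] += 1.0
            profit += MARGIN - LATE_PENALTY * max(0.0, backlog[plant] - 5.0)
        backlog = [max(0.0, b - CAPACITY) for b in backlog]  # daily processing
    return profit

print("base case profit:  ", round(simulate(False)))
print("coordinated profit:", round(simulate(True)))
```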
The coordination structure presented in this section is just one of the possible settings; many other disruptions can be defined, and the methods to cope with them can be studied with the developed model.
5. Concluding remarks This paper discusses the concept of coordination to manage disruptions in a chemical supply chain. Based on the literature, a conceptual framework for describing the coordination mechanisms to manage disruptions is presented. Moreover, this framework is used as a conceptual basis for developing an agent-based model to experiment with different coordination configurations for coping with disruptions in the operation of a chemical multi-plant enterprise. Of course, the application of this framework is not limited to managing disruptions; it can also be used for modeling different organizational structures for the normal operation of a supply chain. For example, it can be a basis for developing models for the integration of sales/marketing and production in a plant, or for the division of responsibilities between different departments in the daily operation of a multi-site enterprise, which are two important issues in supply chain management.
References
Adhitya, A., Srinivasan, R. (2010). Ind. & Eng. Chem. Res., 49(20), 9917-9931.
Anand, K., Mendelson, H. (1997). Management Science, 43(12), 1609-1627.
Behdani, B., Lukszo, Z., Adhitya, A., Srinivasan, R. (2010a). Comp. & Chem. Eng., 34(5), 793-801.
Behdani, B., Lukszo, Z., Adhitya, A., Srinivasan, R. (2010b). ESCAPE 20, Italy, June 2010.
Carley, K.M. (2003). In R. Breiger, K.M. Carley, P. Pattison (Eds.), Dynamic social network modeling and analysis: Workshop summary and papers, 134-145. Washington, DC: The National Academies Press.
Chen, R., Sharman, R., Rao, H.R., Upadhyaya, S.J. (2008). Communications of the ACM, 51(5), 66-73.
Crowston, K., Osborn, C.S. (2003). In T.W. Malone, K. Crowston, G. Herman (Eds.), Organizing Business Knowledge: The MIT Process Handbook. Cambridge, MA: MIT Press.
Malone, T.W., Crowston, K. (1994). ACM Computing Surveys, 26(1), 87-119.
Malone, T.W., Crowston, K., Lee, J., Pentland, B., Dellarocas, C., Wyner, G., et al. (1999). Management Science, 45(3), 425-443.
Pantelides, C.C. (1994). In: Proceedings 2nd FOCAPO, pp. 253-274. CACHE Corp., Sydney.
Xu, L., Beamon, B.M. (2006). The J. SCM, 42(1), 4-12.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Recipe-driven dynamic hybrid simulation of batch processes: a combined optimization-simulation approach Gilles Hétreux, Anthony Ramaroson, Jean-Marc Le Lann Laboratoire de Génie Chimique (UMR CNRS), INPT-ENSIACET, allée Emile Monso, Toulouse Cedex, France
Abstract PrODHyS is a dynamic hybrid simulation environment dedicated to the modeling of devices and operations found in chemical processes. Unlike continuous processes, the dynamic simulation of batch processes requires the execution of control recipes in order to achieve a set of production orders. In this framework, the simulator is coupled to a scheduling module (ProSched) in order to initialize various parameters and to ensure a proper completion of the simulation. This paper focuses on the procedure for building the simulation model corresponding to the realization of a particular schedule. Keywords: dynamic hybrid simulation, batch processes, scheduling, Petri nets
1. Introduction Among the available CAPE tools, dynamic simulation arouses a growing interest for its ability to carry out various analyses (configurations, operating policies, etc.) on a virtual plant, which is extremely useful to process engineers in their daily work to improve system performance (productivity, energy efficiency, waste reduction, etc.). However, batch processes are generally classified as dynamic hybrid systems. This kind of system requires specific simulators able to handle rigorously both the continuous evolution of state variables (temperature, chemical kinetics, etc.) and the discontinuous changes of configuration (due to activation/deactivation of actuators, etc.). In this context, we have developed over many years the dynamic hybrid simulation environment PrODHyS, dedicated to chemical processes [3]. Based on object concepts, this environment offers extensible and reusable software components allowing a rigorous and systematic modeling of the topology and the behavior of processes. The hybrid feature is managed with the Object Differential Petri Nets (ODPN) formalism. It combines in the same structure a set of differential and algebraic equation systems (which describe the continuous evolution of the system, primarily based on thermodynamic and physico-chemical laws) and high-level Petri nets, which define the legal commutation sequences between states (i.e. the possible configurations of the DAE systems). Nevertheless, in contrast to continuous processes, studies on batch units often necessitate taking into account both the physico-chemical phenomena that take place in each device (local vision) and the management of batches (nature, size, number and starting date) passing through the unit (global vision). Obviously, these two features have a significant impact on the performance, and they imply that the system has to be tackled as a whole to establish a consistent analysis. Nevertheless, the management of batches by simulation alone does not always give satisfactory results and may even lead to aborting an execution. So, in order to tackle each part of the problem rigorously, the strategy adopted in PrODHyS consists in driving the simulation by following a production scenario obtained from a scheduling module based on optimization
techniques. The rest of the paper focuses on the interface between this scheduling module and the simulation model, and is organized as follows: Section 2 presents the principle of the approach; Section 3 briefly describes each module with an example.
2. Main steps of the recipe-driven dynamic hybrid simulation
Figure 1 summarizes the procedure implemented in PrODHyS to run a recipe-driven dynamic simulation of a complete process for a given production campaign.
Figure 1: General procedure of a dynamic simulation in PrODHyS (ERTN modeling of the considered process topology and product recipes; ProSched computes a schedule from the production plan; PrODHyS builds the topological model and the control recipe, then runs the simulation driven by the schedule).
A recipe is an entity that describes the formulation (set of chemical substances and proportions), the procedure (set of physical steps required to make the product) and the required equipment. To tackle complex processes, the standard ISA SP88 (www.isa.org) has specified a hierarchical model including 4 levels (generic, site, master and control recipe), each one providing information at an appropriate granularity. Given the generic recipe of the manufactured products and the topology of the unit, the procedure of the site recipe is modeled in our tool using the ERTN (Extended Resource Task Network) graphical formalism. A "simplified" but structurally generic scheduling model based on an MILP formulation is set up and instantiated with data provided through the ERTN view (a set of estimated parameters for durations, capacities of devices according to the stored material, etc.) to manage the overall flows passing through the unit. Thus, given a time horizon and a production plan (obtained by an MRP procedure, for example), the package ProSched calculates a schedule by calling the commercial solver XPRESS-MP. The resulting list of tasks gives rise to the master recipe and can be depicted on a Gantt chart. Data characterizing each task are transmitted via a file to the dynamic simulator PrODHyS in order to parameterize the command level of the simulation model (i.e. the control recipe), previously constructed in accordance with the ERTN view by assembling predefined operation objects. The process level of the simulation model is built according to the topology of the unit, with device or composite-device objects. The simulation of this "detailed" model is then executed until the completion of the production plan. In summary, the main idea of this combined approach is to take advantage of the strengths of dynamic simulation and mathematical programming to achieve a consistent batch management in the workshop, and thus to enhance the achievement of the dynamic simulation.
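The hand-off from ProSched to PrODHyS described above amounts to serializing, for every scheduled task, the triplet that parameterizes the control recipe. The sketch below shows one way such a parameter file could be written; the field names and CSV layout are hypothetical, as the actual PrODHyS file format is not given in the paper.

```python
# Hedged sketch of the ProSched -> PrODHyS hand-off: each scheduled task is a
# triplet <Operation, EquipmentUnit, BatchSize> plus a starting date, dumped
# to a parameter file that the simulator reads to instantiate task tokens.
# Field names and the CSV layout are hypothetical, not the tool's real format.
import csv
from dataclasses import dataclass, asdict

@dataclass
class ScheduledTask:
    name: str
    operation: str
    equipment_unit: str
    batch_size_mol: float
    starting_date_h: float

schedule = [
    ScheduledTask("T2", "Reaction1", "REACTOR1", 700.0, 0.0),   # from Figure 4
    ScheduledTask("T5", "Separation", "COLUMN", 700.0, 4.5),    # hypothetical entry
]

with open("schedule_params.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(schedule[0]).keys()))
    writer.writeheader()
    for task in schedule:
        writer.writerow(asdict(task))  # one control-recipe task token per row
```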
3. Brief description of each module through an illustrative application
The typical multipurpose batch processes addressed in this work are general network processes, which correspond to the most general case, in which material balances must be taken into account explicitly. Consequently, the simulation models have to incorporate several general characteristics, such as disjunctive and cumulative resource constraints, various storage and transfer policies, fixed and/or batch-size-dependent processing times, mixing and splitting of batches, etc.
Figure 2: Topology of the unit considered in the example (preheater/mixer, REACTOR 1, REACTOR 2 and a distillation column with condenser and reboiler, connected by valves and pumps).
In this example, the topology of the unit is shown in figure 2. From the energy point of view, the reactors and the preheater consume electricity to maintain the operating conditions, while the column requires high-pressure steam (HP) at the boiler and a coolant (CW) at the condenser. The synthesis of product P1 necessitates the preheating of reactant A, next a reaction (Reaction 1: A + B → IntAB) and finally a distillation to separate the final product P1 from the residue P2. If we suppose that the intermediate IntAB already exists, the generic recipe of the second final product P3 requires the preheating of a reactant C followed by a reaction (Reaction 2: C + IntAB → P3). Reaction 1 can be performed indifferently in the two reactors, while Reaction 2 can be performed only in REACTOR 2. Finally, a zero-wait transfer policy is chosen between the preheater and the reactors.
+RW$
7² 3UHKHDWLQJ
&ROXPQ
=:
(OHF ∞∞
3UHKHDWHU 0L[HU
% ∞
7² 3UHKHDWLQJ
+3 ∞∞
=:
3 ∞
&: ∞∞
5HDFWRU
+RW&
7 6HSDUDWLRQ
7² 5HDFWLRQ
& ∞
,QW$%
3 ∞
7² 5HDFWLRQ
7² 5HDFWLRQ
3 ∞
)LJ(571YLHZRIWKHVLWHUHFLSHRIWKHSURFHVV
Commonly, the support of a graphical formalism allows users who are not experts in simulation and optimization to describe problems in an unambiguous and intuitive way through specific construction rules, while ignoring the mathematical machinery used for their resolution. In this framework, the Extended Resource Task Network (ERTN) formalism
has been developed for the modeling of recipes. Based on the well-known Resource Task Network (RTN) formalism, new semantic elements have been introduced by [1], notably to handle explicitly cumulative resources (such as utilities) and multimodal resources. For our example, the ERTN representing the procedure of the site recipe is shown in figure 3. Several excellent reviews clearly point out that Mixed Integer Linear Programming (MILP) has been widely used for solving the batch process scheduling problem [2]. As the generic nature of the ERTN formalism offers a direct correspondence between the graphical elements and the mathematical constraints, several MILP formulations have been implemented. Concerning the example, the scheduling of a single production order of P1 is computed; it determines the sequencing decisions on each processing unit (starting dates, etc.) as well as the number and the size of batches.
Figure 4: Macro-place and task token: the decision center of the couple <Reaction1, REACTOR 1> contains the parameterized macro-place ControlledReaction(res), the mutex place ResAvailable for resource availability, a condition on material availability, queue management, and the task token <T> (e.g. Task .name = T2, .StartingDate = 0.0, .Size = 700 mol).
Each operation of the ERTN is represented in the simulation model of PrODHyS by a macro-place parameterized by the equipment unit that performs the operation (figure 4). This macro-place thus defines a couple <Operation, EquipmentUnit>, and it is included in a specific ODPN structure called a decision center. The ODPN of the control recipe is built by assembling a set of decision centers (figure 5). Thus, operations carried out by several processing units must be duplicated, as is done in the ERTN formalism. This case concerns the operation Reaction 1, performed either in REACTOR 1 or REACTOR 2. In addition, if the same resource res is used by several operations opi, then each decision center associated with a couple <opi, res> shares the same mutex place, named ResAvailable, which models the availability of the resource res. This case concerns, for example, REACTOR 2, which performs both Reaction 1 and Reaction 2. Each task (i.e. each triplet <Operation, EquipmentUnit, BatchSize>) established by the scheduling is instantiated and associated with a taskToken object <T>. Figure 5 shows the ODPN of the control recipe at the procedure level, corresponding to the ERTN of figure 3 instantiated with the aforementioned schedule. The simulation is then performed by following the production plan so defined. In figure 5, the successive execution of two batches of identical size in the same device is also shown. The curves show that the durations of each
batch are not equal in simulation (different feed rates due to a gravity transfer), while they are considered identical and fixed at the scheduling level. This case highlights the modeling gap (the models are different by nature) existing between the two modules, and the need to provide decisional autonomy to the simulator for the launching of batches.
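To illustrate the decision-center mechanism in a hedged way, the sketch below mimics how a task token might fire a macro-place only when its starting date has been reached, the resource mutex is free, and the material is available. The class names and the simplified firing rule are our own illustration, not the ODPN semantics of PrODHyS.

```python
# Toy illustration of a decision center: a task token <T> is dispatched to the
# macro-place <Operation, EquipmentUnit> only when the scheduled date is
# reached, the equipment mutex is free and the feed material is available.
# This simplified firing rule stands in for the ODPN semantics of PrODHyS.
from dataclasses import dataclass

@dataclass
class TaskToken:
    name: str
    operation: str
    equipment: str
    size_mol: float
    starting_date: float

class DecisionCenter:
    def __init__(self, operation: str, equipment: str):
        self.operation, self.equipment = operation, equipment
        self.resource_free = True          # the ResAvailable mutex place

    def can_fire(self, token: TaskToken, clock: float, stock_mol: float) -> bool:
        return (token.operation == self.operation
                and token.equipment == self.equipment
                and clock >= token.starting_date   # scheduled start reached
                and self.resource_free             # mutex token present
                and stock_mol >= token.size_mol)   # material availability condition

center = DecisionCenter("Reaction1", "REACTOR1")
t2 = TaskToken("T2", "Reaction1", "REACTOR1", 700.0, 0.0)
print(center.can_fire(t2, clock=0.0, stock_mol=900.0))  # True: T2 is launched
```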
Figure 5: ODPN of the control recipe (procedure level); the figure also shows the Gantt chart of the computed schedule (tasks T1-T21 on the preheater, the two reactors and the column) and the simulated retention profiles in the vessels.
4. Conclusion
A normally ended simulation indicates that the production plan is validated, and the analysis of the operational and physico-chemical properties can be made. If time constraints are violated, the user has to analyze the simulation results to undertake corrective actions (refinement of durations, shifting within margins, etc.). According to the objective of the study, the simulation results can be used to reset the data of the mathematical model and thus improve the production plans obtained, through an iterative procedure. Another strategy is to simulate each operation independently for a set of parameters in order to obtain accurate initial data for the scheduling module. The effectiveness of this framework has been demonstrated, and several studies on batch processes have been conducted with success.
References
[1] Agha, M., 2009, Integrated management of energy and production: scheduling of batch processes and CHP plants, Thèse de Doctorat, INP de Toulouse, France.
[2] Méndez, C.A., Cerdá, J., Grossmann, I.E., Harjunkoski, I., Fahl, M., 2006, State-of-the-art review of optimization methods for short-term scheduling of batch processes, Computers and Chemical Engineering, vol. 30, issues 6-7, pages 913-946.
[3] Perret, J., Hétreux, G., Le Lann, J.M., 2004, Integration of an object formalism within a hybrid dynamic simulation environment, Control Engineering Practice, vol. 12, pp. 1211-1223.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Recipe-based Batch Process Engineering Tool for Development Workflow Jae Hyun Cho*, Junghwan Kim, Il Moon Department of Chemical and Biomolecular Engineering, Yonsei University 262 Seongsanno, Seodaemun-gu, Seoul 120-749, Korea * [email protected]
Abstract
This paper presents extensive upgrades of Aspen Batch Process Developer™ (ABPD), a recipe-based batch process modeling and simulation environment designed to be employed over the entire life cycle of a process, from early synthetic route selection to plant-scale engineering. The user interface, in particular the Operations dialog rewritten in C#, is highly customizable. A number of enhanced provisions have been incorporated to support data management. A set of challenging scheduling issues that routinely arise in industrial processes, but for which commercial tools do not yet provide complete solutions, is addressed. A generic approach to industrial solutions is implemented, combining automatic scheduling with interactive scheduling constraints over multiple levels of the process representation hierarchy.
Keywords: Batch Processes, Modelling and Simulation, Batch Process Modelling Tools, Scheduling.
1. Introduction
With intensifying worldwide competition in the pharmaceutical industry and dynamic market change, involving rising costs, tightening new-drug authorization processes, and a paradigm shift in quality assurance, pharmaceutical companies face imperative business challenges: delivering quality-assured products, reducing time-to-market, reducing cost of goods, increasing return on capital from manufacturing assets, and ensuring regulatory compliance. The need for an optimal process development workflow among laboratory chemists, pilot-plant engineers, process engineers, and production-floor operators has never been greater. This situation has led to the emergence of commercial modeling and scheduling tools. In academia, the focus has been on modeling and scheduling of batch processes rather than on incorporating into the system provisions for helping find and develop a new pharmaceutical product, which bottlenecks the success of the pharmaceutical business. In the field of scheduling and planning tools, no comprehensive application of commercial tools to industrial pharmaceutical development has been carried out since those tools were introduced into the batch process industry; this reflects a considerable gap between the impending needs of industry and the applicability of those tools. Intrinsically, batch processes require tools supporting both modeling and scheduling/planning in an interactive and responsive fashion. A number of studies have taken different approaches, the majority of which are based on mathematical programming methods. A few heuristic rule-based approaches have been presented, claiming to overcome the limitations of mathematical formulations in handling the complexities of real plant operation, as well as the computational effort that grows excessively with problem size. He and Hui (2008) introduced a heuristic rule-based genetic
algorithm for large-size single-stage multi-product scheduling problems, in which applying a heuristic rule ahead of the genetic algorithm is the key factor in obtaining better feasible solutions. Henning and Cerdá (2000) introduced a knowledge-based predictive and reactive scheduling system designed to maximize the involvement of human expertise in decision-making by adopting object-oriented technology. Recent reviews are given by Méndez, Cerdá, Grossmann, Harjunkoski, and Fahl (2006) and by Floudas and Lin (2004). The first commercial batch simulator was BATCHES, from Batch Process Technologies, in the mid-1980s (BATCHES User's Manual, 2001). SuperPro Designer from Intelligen, Inc. (Toumi, Jürgens, Jungo, Maier, Papavasileiou and Petrides, 2010) supports combined batch and continuous process modeling; modeling and simulation information can be exported to and imported from SchedulePro to perform scheduling and planning. In the reality of the pharmaceutical industry, the majority of investment focuses first on the development of a new pharmaceutical product through a set of clinical trials; the focus then moves to process scale-up and an optimal manufacturing configuration. Unfortunately, until now there has been no comprehensive commercial tool helping the pharmaceutical industry perform the entire development workflow from route selection through bench/pilot-plant process engineering to full-scale manufacturing. Aspen Batch Process Developer (ABPD) from Aspen Technology, Inc. (Aspen Batch Process Developer User Manual, 2010) is a recipe-oriented batch process modeling environment that spans the value chain of batch process development from discovery through manufacturing. Modi and Musier (1998) presented an overview of the system and its architecture. This paper describes extensive upgrades of ABPD: a new user interface completely rewritten in C#, a number of new features supporting the entire process development workflow, and new advanced generic scheduling functionality providing industrial solutions.
2. New User Interface
The user interface for the Operations dialog has been completely rewritten in C#, based on the issues raised by the ABPD usability focus group. The new Operations dialog is accessed the same way as before, by double-clicking on an operation in the Recipe View in ABPD. In addition to the Operations dialog, many of the other dialogs in the Data menu have been rewritten. These are highly customizable with respect to toolbars, keyboard accelerators, menus, and user-defined tools, so that a power user can create one environment appropriate for chemists, another appropriate for design engineers at a pilot plant, and yet another appropriate for manufacturing engineers.
3. Data Management for Development Workflow
3.1. Equipment-Independent Recipe
Once chemists complete route selection, process chemists and engineers build a more rigorous recipe for the various alternatives, aiming to take the lab procedure written as paragraphs of text and then add details in a tabular recipe for scale-up. Such a recipe does not have any equipment assigned; instead, generic stages are assigned as placeholders for equipment, making the recipe independent of equipment. ABPD enables users to define equipment-independent recipes with a generic equipment class, GenericBatch. Each new project created has a conceptual facility as part of the project, which is populated with several generic-batch equipment units. A minimal sketch of this idea follows.
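The following Python sketch illustrates the concept of an equipment-independent recipe under stated assumptions: the class and field names are hypothetical and do not reproduce the ABPD data model; only the string "GenericBatch" is taken from the text above.

```python
# Illustrative sketch (hypothetical names, not the ABPD API): a recipe
# whose operations reference a generic stage rather than a concrete
# vessel, so the same recipe can later be mapped onto lab, pilot, or
# plant equipment.
from dataclasses import dataclass

@dataclass
class Stage:
    """Placeholder for equipment; mimics ABPD's generic equipment class."""
    name: str
    equipment_class: str = "GenericBatch"

@dataclass
class Operation:
    kind: str          # e.g. "Charge", "Heat", "React"
    stage: Stage       # bound to a stage, never to a specific unit
    parameters: dict

# The recipe is equipment-independent: only at scale-up time are the
# stages assigned to real units of a chosen facility.
stage_a = Stage("Stage-A")
recipe = [
    Operation("Charge", stage_a, {"material": "A", "amount_kg": 50}),
    Operation("Heat",   stage_a, {"target_C": 80}),
    Operation("React",  stage_a, {"duration_h": 2.0}),
]
assignment = {"Stage-A": "R-101"}  # hypothetical plant-scale mapping
for op in recipe:
    print(op.kind, "->", assignment[op.stage.name])
```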
3.2. Importing/Exporting Data
All data in ABPD (Steps and Preferences, Text Recipe, Equipment, Material, and Utility) can be imported and exported, allowing data and information to be shared and used to further document a batch process. In addition, Step data can be exported to XML or BatchML (a schema based on the ANSI/ISA-88 standards). The information exported from the Step database is transformed into the BatchML schema, where the recipe information is represented by a master recipe that contains the recipe procedure structure represented with recursive elements (a sketch is given below).
3.3. Exporting Data to Excel
A tool for custom-exporting data to an Excel workbook is provided. The interface provides a hierarchical view of the databases, so there is no need to understand the database relations.
3.4. Working with ABPD Compound Documents
ABPD Compound Documents include all the ABPD project databases (project, equipment, material, and steps). ABPD transforms all the databases into XML format and compresses them. Because of this compression mechanism, the size of this file is significantly smaller than that of the ABPD databases, making it practical to send by e-mail or to store in document management systems. Additional documents (for example, Excel reports, Word documents, and so on) can also be added to this file. In order to compress additional documents, the Access database is transformed into XML.
3.5. Custom Operations and Excel/VBA Models
A comprehensive collection of models underlies the various algorithms available in ABPD, comprising three broad classes: operation, physical property, and vapor emission models, which are solved when a recipe is simulated. Currently, 88 operations are available in the operations library, each represented by one or more rigorous models. Since no library can cover all operations, a user-defined operation, named a custom operation, is supported; it can be represented either by a built-in shortcut or by a user-defined model written in Excel/VBA. Any third-party application invoked from either Excel or VBA can be linked to ABPD, which allows Excel to act as glue between ABPD and other calculation engines.
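As a sketch of the Step-data-to-XML transformation mentioned in Section 3.2, the Python fragment below serializes a nested recipe procedure into recursive XML elements. The element names here are simplified placeholders chosen for illustration, not the actual BatchML schema.

```python
# Illustrative sketch of exporting step data as recursive XML, in the
# spirit of the BatchML export of Section 3.2; element names are
# simplified placeholders, not the ANSI/ISA-88 BatchML schema.
import xml.etree.ElementTree as ET

steps = {"name": "MakeIntermediate",
         "children": [{"name": "Charge", "children": []},
                      {"name": "React",  "children": []}]}

def to_xml(step: dict) -> ET.Element:
    """Recursive elements mirror the recipe procedure structure."""
    elem = ET.Element("RecipeElement", {"ID": step["name"]})
    for child in step["children"]:
        elem.append(to_xml(child))
    return elem

root = ET.Element("MasterRecipe")
root.append(to_xml(steps))
print(ET.tostring(root, encoding="unicode"))
```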
4. Advanced Scheduling Features
4.1. Batch Process Representation Concept
In ABPD, a recipe is represented at multiple levels of abstraction, the highest of which is the process. A process produces one key output intermediate (the product) from a key input intermediate (the main raw material). A process may comprise one or more steps. A step usually produces one key output intermediate from one key input intermediate; ABPD also allows second and third output and input intermediates to be specified for steps. Each step occurs in a facility. A step may comprise one or more unit procedures. A unit procedure is usually performed in one main equipment unit, although several different equipment units may be involved. A unit procedure may comprise one or more operations. An operation is the lowest level of abstraction for a process; operations are the basic building blocks of the process. Each operation represents a processing task such as charge, heat, react, ferment, dry, extract, transfer, crystallize, and so on.
4.2. Step-Level Automatic Schedule Options
At the step level, three advanced schedule options are provided: Exact Recipe Sequence (ERS), Auto-Parallel, which automatically determines which operations can
occur in parallel, and Just-In-Time (JIT). In ERS, an operation cannot start before the end of the previous operation in the recipe sequence. The Auto-Parallel option releases the constraints coming from the recipe sequence, allowing sets of operations or unit procedures to run in parallel. JIT tries to eliminate gaps in the schedule by delaying the start times of operations whenever this delay does not increase the cycle time: for each equipment unit in the batch, ABPD loops backwards over the list of operations occurring in that unit, and whenever there is a gap between the start time of an operation and the end of the previous one, it tries to move the previous operation forward in order to close the gap (a sketch of this pass is given after Section 4.4). The three options give four possible choices: ERS, ERS + JIT, Auto-Parallel, and Auto-Parallel + JIT.
4.3. Interactive Schedule Control at the Operation/Unit Procedure Levels
In contrast to the automatic schedule options at the step level, arbitrary, more detailed schedule control at the unit procedure and operation levels is supported through an interactive and flexible control environment, so that users are intuitively and easily guided toward their desired schedule. The focus is on four schedule synchronization types: start-to-start, start-to-end, end-to-start, and end-to-end. The start-to-start synchronization (namely, the parallel block) ties the start of an operation or unit procedure to the start of other operations or unit procedures, so that a set of operations or unit procedures may be carried out either in series or in parallel with each other. Within the parallel block, each parallel branch may also be delayed with respect to the first branch using the Parallel-Delay constraint. The start-to-end synchronization is provided by the Start-After constraint, which synchronizes or delays an operation or unit procedure with respect to the end of other operations or unit procedures, together with an optional Delay constraint. The other two synchronization types can be embodied with the Start-After constraint combined with the optional Delay and Parallel-Delay constraints. Users have access to all these constraints from the Operations dialog, the Gantt chart, and the text recipe.
4.4. Solutions to Shared-Equipment Problems
Most batch equipment is allocated to only one batch or operation at a time, so that a batch vessel can process only one operation at any one time (exclusive equipment). Shared equipment, in contrast, may be allocated to multiple batches or operations at the same time. If parallel events in a vessel occur in different operations, the corresponding parallel calculations cannot be carried out, because the simulation engine cannot combine the equations from those operations into a single model; ABPD has to solve parallel events within the frame of a sequential-modular simulator rather than an equation-oriented one. To overcome this issue, the shared-equipment contents calculation can be based on composite contents computed from the input and output streams instead of the initial and final regular equipment contents. However, most parallel events yield negative shared-equipment contents under this method. This problem can be fixed by appropriately imposing the set of schedule control constraints proposed above: Parallel Block, Start-After, and Parallel-Delay with optional Delays.
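The following is a minimal sketch of the JIT gap-closing pass described in Section 4.2, under an assumed data model (operations as start/end pairs in recipe order for one equipment unit); it is not the ABPD implementation.

```python
# Minimal sketch of the JIT pass of Section 4.2 (assumed data model):
# loop backwards over one unit's operations and delay each operation's
# start to close the gap to the operation that follows it.
def jit(ops):
    """ops: list of [start, end] pairs in recipe order for one unit."""
    for i in range(len(ops) - 2, -1, -1):      # loop backwards
        gap = ops[i + 1][0] - ops[i][1]        # idle time after op i
        if gap > 0:                            # move op i forward
            ops[i][0] += gap
            ops[i][1] += gap
    return ops

# Gaps are eliminated without delaying the last operation, so the
# cycle time of the unit does not increase.
print(jit([[0, 2], [5, 7], [10, 11]]))  # -> [[6, 8], [8, 10], [10, 11]]
```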
4.5. Interleaving of Equipment in a Multiple-Batch Schedule
In multiple batches, equipment used in the current batch cannot be allocated until the previous batches have released it, which sometimes bottlenecks the optimal schedule. To resolve this limit, we introduce shared-exclusive equipment, which may be allocated to multiple batches, but only by one operation at a time, whenever there are gaps in previous batches for the shared-exclusive equipment to interleave into; operations using shared-exclusive equipment can thus be interleaved into previous batches even before the equipment is released from them. However, at the moment a shared-exclusive equipment unit interleaves into a previous batch, all the subsequent
shared-exclusive equipment contents become incorrect, because ABPD executes multiple batches by simulating and scheduling batch by batch instead of all batches at once, even though the schedule of the entire set of batches is correct. This contents problem can be overcome by re-simulating the multiple batches based on the schedule obtained from the first simulation, maintaining the same batch-by-batch execution method (see the sketch below).
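The control flow of this two-pass strategy can be sketched as follows; all names are hypothetical stand-ins for exposition, not ABPD functions.

```python
# Sketch of the two-pass strategy described above (hypothetical names):
# pass 1 simulates and schedules batch by batch, letting shared-exclusive
# equipment interleave into earlier batches; pass 2 re-simulates against
# the frozen pass-1 schedule so the equipment contents come out right.
def run_campaign(batches, simulate):
    # Pass 1: batch-by-batch simulation + scheduling; the overall
    # schedule is correct, but interleaving can leave some equipment
    # contents inconsistent.
    first_pass = [simulate(b, fixed_schedule=None) for b in batches]
    # Pass 2: re-simulate batch by batch with the pass-1 schedule
    # imposed, which restores consistent equipment contents.
    return [simulate(b, fixed_schedule=s) for b, s in zip(batches, first_pass)]

# Toy stand-in for the simulator: returns the imposed schedule, or
# derives one when none is given.
demo = run_campaign([1, 2], lambda b, fixed_schedule: fixed_schedule or {"batch": b})
print(demo)
```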
5. Concluding Remarks
ABPD is a recipe-based, workflow-oriented batch process modeling environment. To support modeling activities over the entire life cycle of a process, a number of new data management features have been incorporated: importing/exporting data, exporting data to Excel, compound documents, and user-defined operations with Excel/VBA models. In line with the workflow-based design, a highly customizable user interface has been completely rewritten in C#. In addition, new generic approaches for overcoming a set of fundamental technical limitations embedded in the sequential-modular simulator paradigm have been developed, and industrially applicable general solutions have been successfully implemented, focusing on schedule synchronization primitives and the classification of equipment allocation types. From a software usability perspective, the resulting environment provides two main benefits: the two key tasks of rigorous modeling and advanced scheduling can be performed within a unified framework; and a good balance is achieved between automatic scheduling at the step level and detailed, interactive schedule control by human expertise at the unit procedure and operation levels.
Acknowledgement
This work was supported by the Ministry of Education of Korea through its BK21 program. The author Jae Hyun Cho would like to acknowledge the other members of the Aspen Batch Process Developer development team.
References
Aspen Batch Process Developer User Manual (2010). Aspen Technology, Inc., Burlington, MA, USA.
BATCHES User's Manual (2001). Batch Process Technologies, Inc.
Floudas, C. A. and Lin, X. (2004). Continuous-time versus discrete-time approaches for scheduling of chemical processes: a review, Computers & Chemical Engineering, 28, 2109-2129.
He, Y. and Hui, C. (2008). A rule-based genetic algorithm for the scheduling of single-stage multi-product batch plants with parallel units, Computers & Chemical Engineering, 32, 3067-3083.
Henning, G. P. and Cerdá, J. (2000). Knowledge-based predictive and reactive scheduling in industrial environments, Computers & Chemical Engineering, 24, 2315-2338.
Méndez, C. A., Cerdá, J., Grossmann, I. E., Harjunkoski, I. and Fahl, M. (2006). State-of-the-art review of optimization methods for short-term scheduling of batch processes, Computers & Chemical Engineering, 30, 913-946.
Modi, A. and Musier, R. (1998). Systematic Batch Process Development and Analysis via an Advanced Modeling and Simulation Tool, in Proceedings of the Batch Process Design and Operations Conference, AIChE Spring National Meeting, New Orleans, LA.
Toumi, A., Jürgens, C., Jungo, C., Maier, A. B., Papavasileiou, V. and Petrides, D. (2010). Design and Optimization of a Large Scale Biopharmaceutical Facility Using Process Simulation and Scheduling Tools, Pharmaceutical Engineering, 30(2), www.ispe.org.
Superstructure Approach to Batch Process Scheduling by S-graph Representation

B. Bertok,a R. Adonyi,a F. Friedler,a L. T. Fanb

a Department of Computer Science and Systems Technology, University of Pannonia, Egyetem u., Veszprém, Hungary
b Department of Chemical Engineering, Kansas State University, Manhattan, Kansas, USA
Abstract
Scheduling plays a key role in batch process operation; it has a major effect on the process's performance. Available methods for determining the optimal schedule are primarily based either on MILP/MINLP formulations in conjunction with mathematical programming (Floudas and Lin, 2004; Vaklieva-Bancheva and Kirilova) or on graph representations in conjunction with combinatorial algorithms (Sanmartí et al.). The current work comprises three major contributions. First, an algorithm has been crafted to generate a superstructure for a scheduling problem. The problem is defined in the form of an S-graph representing the recipe. The superstructure contains exclusively every step potentially performed by any of the functional or operating facilities or equipment units capable of completing at least one task to be scheduled; these steps involve executions of tasks and changeovers from one task to another. Second, an MILP formulation is elaborated on the basis of the superstructure, which guarantees the optimal solution of the scheduling problem. Third, a relaxation of the MILP model is incorporated into the S-graph algorithms to support the selection of subproblems and decision variables in the branch-and-bound procedure.
Keywords: scheduling, S-graph, superstructure, MILP
1. Introduction
The major difficulty in applying mathematical programming resides in the definition of a mathematical model of minimal complexity that still gives rise to at least one optimal solution of the original problem. In time-point or time-interval based models, the time horizon is discretized by a predefined number of time points or time slots; nevertheless, no approach is available to determine the number of time points sufficient for the globally optimal solution (Castro et al.). Precedence-based MILP (Mixed-Integer Linear Programming) formulations do not entail the specification of the number of time points a priori; on the other hand, the size of the model is highly sensitive to the number of batches, and some constraints are difficult to implement (Kopanos et al.).
In the S-graph framework, the mathematical model for the optimization is a directed graph. The temporal ordering of tasks is expressed by two sets of arcs: recipe arcs and schedule arcs. The recipe, defined by the recipe arcs, serves as the input to the branch-and-bound based optimization algorithm; the schedule arcs, representing the scheduling decisions, are incorporated into the graph through the optimization. The S-graph framework ensures that the resulting solution of the problem is globally optimal and that infeasible or suboptimal solutions are never generated. The problem-specific model and algorithms usually result in lower computational needs, but require deeper insight and
programming skills to extend the framework to a previously unaddressed class of scheduling problems.
2. S-graph framework
The S-graph framework developed by Sanmartí et al. aims at the short-term scheduling of multipurpose batch plants. This framework comprises a representation, a mathematical model, a solution method termed the basic algorithm, and acceleration tools. The underlying notion is to explore a problem formulation that manifests the unique structure of a class of scheduling problems, and a solution procedure that exploits this unique structure (Friedler; Hegyhati and Friedler). The resultant approach is endowed with the following advantages: (1) globally optimal solutions are determined; (2) no infeasible solutions are obtained in terms of cross-transfers; (3) the search space is significantly reduced; and (4) it consists of a continuous formulation without the necessity of determining the time points.
An S-graph can represent the recipe as well as the solution of a batch process scheduling problem, with recipe-graphs and schedule-graphs, respectively. An S-graph is defined to be a schedule-graph for a recipe-graph if it satisfies four axioms (Sanmartí et al.); the violation of any axiom will not result in a feasible schedule. A subgraph of a schedule-graph, i.e., a component-graph, incorporates the set of arcs representing the tasks and changeovers to be performed by a given equipment unit. A minimal encoding of these notions is sketched below.
For illustration, suppose that four products need to be produced, where products A, B and D are produced in three consecutive steps and product C is produced in two consecutive steps. Equipment units E1, E2, ... are available to manufacture the products. The recipe of this illustration is given by the table below, and the first figure below depicts the corresponding recipe-graph together with a component-graph for one of the equipment units. To activate the initial task of an equipment unit in this schedule, and to facilitate prescribing the corresponding mathematical model, a node representing the equipment unit in its initial state and an arc to that task are included in the graph. The component-graph consists of a series of three consecutive directed arcs indicating the schedule of the equipment unit, leading from its initial state to its first task, then from the first task to the second, and finally from the second task to the third.

[Table: recipe of products A, B, C and D, giving for each product its tasks, their processing times (h), and the candidate equipment units]
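A minimal Python encoding of the recipe-graph and schedule-arc notions is sketched below. It is illustrative only: the exact endpoint rules for schedule-arcs in the S-graph framework depend on the storage policy and are simplified here.

```python
# One possible encoding of the S-graph notions above (illustrative;
# schedule-arc placement is simplified relative to the framework).
recipe_nodes = ["A1", "A2", "A3", "ProdA",   # product A: three task nodes
                "C1", "C2", "ProdC"]         # product C: two task nodes
recipe_arcs = [("A1", "A2"), ("A2", "A3"), ("A3", "ProdA"),
               ("C1", "C2"), ("C2", "ProdC")]
schedule_arcs = []  # sequencing decisions, added during optimization

def assign_unit_sequence(unit, task_sequence):
    """Record a component-graph: the unit performs the tasks in order."""
    arcs = [("init(" + unit + ")", task_sequence[0])]
    arcs += list(zip(task_sequence, task_sequence[1:]))
    schedule_arcs.extend(arcs)

# e.g. equipment unit E1 performs A1 first and C1 afterwards:
assign_unit_sequence("E1", ["A1", "C1"])
print(schedule_arcs)  # [('init(E1)', 'A1'), ('A1', 'C1')]
```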
3. Superstructure for scheduling represented by S-graphs
An S-graph can be considered as the superstructure for a scheduling problem if it includes at least one schedule-graph representing an optimal schedule for the problem. The superstructure proposed in the current contribution contains every schedule-graph representing a feasible schedule. The superstructure illustrated below for the example incorporates
the recipe-graph as well as additional nodes and arcs expressing the potential changeovers of the equipment units among the tasks.
[Figure: recipe-graph and a component-graph of one equipment unit for the example]
[Figure: superstructures for the scheduling problem of the example]
4. Superstructure generation
Assigned to each equipment unit are a node denoting its initial position and arcs representing all of its potential for performing tasks as well as changeovers. The figure above shows the potential tasks and changeovers of each equipment unit for the example as separate S-graphs; finally, these S-graphs are merged to form the superstructure.
Clearly, each component-graph representing a feasible schedule of an equipment unit is a subgraph of this superstructure. Thus, any systematic method based on it has the potential for yielding the optimal, or alternative feasible, schedules.
A superstructure can be generated by algorithm SG-Superstructure (see the figure and the sketch below). The algorithm enumerates all potential scheduling decisions and represents them as potential schedule-arcs to be added to the recipe-graph.
[Figure: the SG-Superstructure algorithm for superstructure generation]
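The following Python sketch captures the spirit of the generation step, under stated assumptions; the actual SG-Superstructure algorithm is the one given in the paper's figure. For every equipment unit, it adds a node for the unit's initial state, arcs to every task the unit can perform, and changeover arcs between every ordered pair of its candidate tasks.

```python
# Hedged sketch of superstructure generation (illustrative, not the
# paper's SG-Superstructure pseudocode): enumerate, per equipment unit,
# the initial-state node, potential first-task arcs, and potential
# changeover arcs between every ordered pair of its candidate tasks.
from itertools import permutations

def superstructure(capable):
    """capable: dict mapping an equipment unit to the tasks it can do."""
    nodes, arcs = set(), set()
    for unit, tasks in capable.items():
        init = f"init({unit})"
        nodes.add(init)
        nodes.update(tasks)
        for t in tasks:                        # potential first assignments
            arcs.add((init, t, unit))
        for t1, t2 in permutations(tasks, 2):  # potential changeovers
            arcs.add((t1, t2, unit))
    return nodes, arcs

nodes, arcs = superstructure({"E1": ["A1", "B1"], "E2": ["A2", "B2", "C1"]})
print(len(nodes), len(arcs))  # 7 nodes, 13 potential schedule-arcs
```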
5. MILP model for scheduling based on the superstructure
An MILP model can be defined on the basis of the generated superstructure. In the model, a binary variable is assigned to each arc of the superstructure, expressing a scheduling decision concerning the execution of a task or changeover by an equipment unit: if the equipment unit is assigned to it, i.e., the corresponding schedule-arc is included in the schedule-graph, the binary variable is 1; otherwise it is 0. In addition to the binary variables related to the arcs, continuous variables are assigned to each node of the superstructure, expressing the starting time of an equipment unit or task, or the production time of a product. The makespan is represented by a continuous variable as well.
The MILP model contains constraints on the starting times of each task, based on the arcs, and on the relationship between the makespan and the starting times of the tasks. Moreover, there are three additional classes of constraints: first, the equipment unit must be conserved at each node; second, each task is required to be performed by exactly one equipment unit; third, the number of functionally equivalent equipment units, i.e., resources, is limited. A minimal sketch of such a model is given below.
An MILP model based on the superstructure gives the optimal solution of the scheduling problem; the figure below illustrates the schedule-graph of the minimal-makespan solution for the example. Note that if cross-transfer is to be circumvented, a problem with no intermediate storage requires non-zero changeover times in the MILP formulation, or the utilization of the graph algorithms from the S-graph framework in the search procedure.
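The toy model below illustrates the idea on two tasks and one unit, using PuLP as an off-the-shelf MILP wrapper; it is a deliberately simplified sketch, not the paper's full formulation (equipment conservation and resource-count constraints are omitted).

```python
# Minimal MILP sketch of the idea (PuLP is used only as an illustrative
# solver wrapper): binaries select schedule-arcs, continuous variables
# hold task start times, and big-M constraints activate the precedence
# implied by a chosen arc.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

tasks = {"A1": 3, "B1": 2}           # processing times
arcs = [("A1", "B1"), ("B1", "A1")]  # potential changeovers of one unit
M = 100                              # big-M constant

prob = LpProblem("makespan", LpMinimize)
start = {t: LpVariable(f"s_{t}", lowBound=0) for t in tasks}
y = {a: LpVariable(f"y_{a[0]}_{a[1]}", cat=LpBinary) for a in arcs}
makespan = LpVariable("makespan", lowBound=0)
prob += makespan                     # objective: minimize the makespan

# A chosen arc (i, j) forces task j to start after task i finishes.
for (i, j) in arcs:
    prob += start[j] >= start[i] + tasks[i] - M * (1 - y[(i, j)])
# The single unit must perform its two tasks in one order or the other.
prob += lpSum(y.values()) == 1
for t, p in tasks.items():
    prob += makespan >= start[t] + p

prob.solve()
print(makespan.value())  # 5.0: the two tasks are sequenced on the unit
```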
[Figure: minimal-makespan schedule-graph for the example]
6. Concluding remarks
The current paper presents an algorithm to generate a superstructure for a scheduling problem. It also highlights a potential synergy between the methodologies rooted in the S-graph framework and those rooted in MILP formulations.
The problem and the concomitant superstructure are defined in the form of an S-graph. The superstructure comprises every step potentially performed by any of the equipment units capable of completing at least one task to be scheduled. An MILP formulation has been elaborated on the basis of the superstructure, which invariably yields the optimal solution of the scheduling problem. The relaxation of the MILP model has been incorporated into the S-graph algorithms to support the selection of subproblems and decision variables during the branch-and-bound procedure.
Acknowledgement
The authors acknowledge the support of the Hungarian Research Fund (OTKA).
References
P. Castro, A. P. F. D. Barbosa-Póvoa and H. Matos, An improved RTN continuous-time formulation for the short-term scheduling of multipurpose batch plants, Industrial & Engineering Chemistry Research.
C. A. Floudas and X. Lin (2004), Continuous-time versus discrete-time approaches for scheduling of chemical processes: a review, Computers and Chemical Engineering, 28, 2109-2129.
F. Friedler, Process Integration, Modelling and Optimisation for Energy Saving and Pollution Reduction, Applied Thermal Engineering.
M. Hegyhati and F. Friedler, Overview of Industrial Batch Process Scheduling, Chemical Engineering Transactions.
E. Sanmartí, F. Friedler and L. Puigjaner, Combinatorial Technique for Short Term Scheduling of Multipurpose Batch Plants Based on Schedule-Graph Representation, Computers & Chemical Engineering (Suppl.).
E. Sanmartí, T. Holczinger, L. Puigjaner and F. Friedler, Combinatorial framework for effective scheduling of multipurpose batch plants, AIChE Journal.
G. M. Kopanos, L. Puigjaner and M. C. Georgiadis, Optimal production scheduling and lot-sizing in dairy plants: the yogurt production line, Industrial & Engineering Chemistry Research.
N. G. Vaklieva-Bancheva and E. G. Kirilova, Cleaner manufacture of multipurpose batch chemical and biochemical plants: scheduling and optimal choice of production recipes, Journal of Cleaner Production.