PROCESS SYSTEMS ENGINEERING 2003 (Part A)
COMPUTER-AIDED CHEMICAL ENGINEERING
Advisory Editor: R. Gani

Volume 1: Distillation Design in Practice (L.M. Rose)
Volume 2: The Art of Chemical Process Design (G.L. Wells and L.M. Rose)
Volume 3: Computer Programming Examples for Chemical Engineers (G. Ross)
Volume 4: Analysis and Synthesis of Chemical Process Systems (K. Hartmann and K. Kaplick)
Volume 5: Studies in Computer-Aided Modelling, Design and Operation
          Part A: Unit Operations (I. Pallai and Z. Fonyo, Editors)
          Part B: Systems (I. Pallai and G.E. Veress, Editors)
Volume 6: Neural Networks for Chemical Engineers (A.B. Bulsari, Editor)
Volume 7: Material and Energy Balancing in the Process Industries - From Microscopic Balances to Large Plants (V.V. Veverka and F. Madron)
Volume 8: European Symposium on Computer Aided Process Engineering-10 (S. Pierucci, Editor)
Volume 9: European Symposium on Computer Aided Process Engineering-11 (R. Gani and S.B. Jørgensen, Editors)
Volume 10: European Symposium on Computer Aided Process Engineering-12 (J. Grievink and J. van Schijndel, Editors)
Volume 11: Software Architectures and Tools for Computer Aided Process Engineering (B. Braunschweig and R. Gani, Editors)
Volume 12: Computer Aided Molecular Design: Theory and Practice (L.E.K. Achenie, R. Gani and V. Venkatasubramanian, Editors)
Volume 13: Integrated Design and Simulation of Chemical Processes (A.C. Dimian)
Volume 14: European Symposium on Computer Aided Process Engineering-13 (A. Kraslawski and I. Turunen, Editors)
Volume 15: Process Systems Engineering 2003 (Bingzhen Chen and A.W. Westerberg, Editors)
COMPUTER-AIDED CHEMICAL ENGINEERING, 15
PROCESS SYSTEMS ENGINEERING 2003 Part A
8th International Symposium on Process Systems Engineering, China
Edited by
Bingzhen Chen Department of Chemical Engineering Tsinghua University Beijing 100084, China
Arthur W. Westerberg Department of Chemical Engineering Doherty Hall Carnegie Mellon University Pittsburgh, PA 15213 USA
2003 ELSEVIER Amsterdam - Boston - London - New York - Oxford - Paris - San Diego - San Francisco - Singapore - Sydney - Tokyo
ELSEVIER SCIENCE B.V., Sara Burgerhartstraat 25, P.O. Box 211, 1000 AE Amsterdam, The Netherlands
© 2003 Elsevier Science B.V. All rights reserved.

This work is protected under copyright by Elsevier Science, and the following terms and conditions apply to its use:

Photocopying
Single photocopies of single chapters may be made for personal use as allowed by national copyright laws. Permission of the Publisher and payment of a fee is required for all other photocopying, including multiple or systematic copying, copying for advertising or promotional purposes, resale, and all forms of document delivery. Special rates are available for educational institutions that wish to make photocopies for non-profit educational classroom use.

Permissions may be sought directly from Elsevier Science & Technology Rights Department in Oxford, UK: phone: (+44) 1865 843830, fax: (+44) 1865 853333, e-mail: [email protected]. You may also complete your request on-line via the Elsevier Science homepage (http://www.elsevier.com), by selecting 'Customer support' and then 'Obtaining Permissions'.

In the USA, users may clear permissions and make payments through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA; phone: (+1) (978) 7508400, fax: (+1) (978) 7504744, and in the UK through the Copyright Licensing Agency Rapid Clearance Service (CLARCS), 90 Tottenham Court Road, London W1P 0LP, UK; phone: (+44) 207 631 5555; fax: (+44) 207 631 5500. Other countries may have a local reprographic rights agency for payments.

Derivative Works
Tables of contents may be reproduced for internal circulation, but permission of Elsevier Science is required for external resale or distribution of such material. Permission of the Publisher is required for all other derivative works, including compilations and translations.

Electronic Storage or Usage
Permission of the Publisher is required to store or use electronically any material contained in this work, including any chapter or part of a chapter. Except as outlined above, no part of this work may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without prior written permission of the Publisher. Address permissions requests to: Elsevier Science Global Rights Department, at the fax and e-mail addresses noted above.

Notice
No responsibility is assumed by the Publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein. Because of rapid advances in the medical sciences, in particular, independent verification of diagnoses and drug dosages should be made.
First edition 2003

Library of Congress Cataloging in Publication Data
A catalog record from the Library of Congress has been applied for.

British Library Cataloguing in Publication Data
A catalogue record from the British Library has been applied for.
ISBN: 0-444-51404-X
ISSN: 1570-7946 (Series)
∞ The paper used in this publication meets the requirements of ANSI/NISO Z39.48-1992 (Permanence of Paper).
Printed in Hungary.
Preface

PSE2003 is the eighth in the triennial series of international symposia on process systems engineering initiated in 1982. The purpose of these meetings is to bring together the worldwide PSE community of researchers and practitioners who are involved in the creation and application of computing-based methodologies for planning, design, operation, control, and maintenance of chemical processes. The composition of these meetings is international by design, with representation from the three main geographic zones of Asia and Pacific, Europe and Africa, and the Americas. The conference was initiated by the Executive Committee of the Process Systems Engineering Symposium series, which draws its representation from the Asian Pacific Confederation of Chemical Engineering, the European Federation of Chemical Engineering, and the Inter American Confederation of Chemical Engineering. In keeping with the international scope of the conference series, the previous conferences in the series were held in Keystone, Colorado, USA (2000); Trondheim, Norway (1997); Kyongju, Korea (1994); Montebello, Canada (1991); Sydney, Australia (1988); Cambridge, England (1985); and Kyoto, Japan (1982). PSE2003 is the first time the conference has convened in China.

Selecting the conference theme was a first goal. All involved in organizing exchanged e-mails suggesting ideas. In the midst of a dampened business climate, an informal conversation involving Vladimir Mahalec (AspenTech), Bing Tjoa (Mitsubishi Chemicals) and Art Westerberg (Carnegie Mellon) at the AIChE meeting in Reno, Nevada, in November 2001 proposed supporting business decision making. We all knew this topic was an exciting possibility. An exchange of e-mails among the organizers led to the vision for a conference in which the participants would come more with questions than answers, a conference involving lively discussions and one where everyone would learn from that discussion. The suggestion arose to have two introductory plenary sessions from the "customer" - i.e., the business and/or government community. If we are to help them, they should define their needs. We next elected to have a first keynote lecture to relate the terminology of the business leaders to that of the PSE community and vice versa. Finally, we asked all organizers and presenters to think how their contributions would speak to this theme. While we would have the typical topics of PSE for the sessions, each would be asking about supporting business decision making rather than simply reporting the latest research results.

A later conversation with Mike Doherty (Univ. of California, Santa Barbara) helped to set the desired tone for this conference. He noted that we as engineers like to think of ourselves as clever, which is often the basis for our presentations to management. Management already believes we are clever. We need to understand much better the kinds of decisions they must make and the type of information they need. We are useful to them when we can provide this information. This understanding should certainly alter what we do.

The program features 2 plenary and 10 keynote papers as well as 225 regular conference papers. They cover topics on PSE and business decision making, product discovery and
design, enterprise and supply chain optimization, control and operations, design and operation, PSE education, PSE in formulation development, integration of business with information and process systems, information technology, and bio-chemicals and materials.

The PSE 2003 conference is conducted under the auspices of the Executive Committee for PSE, chaired by Professor G. V. Reklaitis, Purdue University. The technical program was assembled with the active participation of the International Program Committee. We would like to express our gratitude for the highly professional volunteer efforts of the IPC in reviewing abstracts and manuscripts and providing feedback to authors. We would also like to acknowledge the participation of the National Organizing Committee. Finally, we gratefully acknowledge the joint sponsorship of the conference by the China Petroleum & Chemical Corporation, PetroChina Company Limited and the Process Systems Engineering Society of China.

Bingzhen Chen
Tsinghua University
Co-chair PSE 2003
Arthur W. Westerberg
Carnegie Mellon University
Co-chair PSE 2003
Executive Committee for PSE

Gintaras V. Reklaitis (USA, Chair)
Iori Hashimoto (Japan)
John Perkins (UK)
Jeffrey J. Siirola (USA)
Sigurd Skogestad (Norway)
Kun Soo Chang (Korea)
International Program Committee

Co-Chairs
Bingzhen Chen (China)
Arthur W. Westerberg (USA)
Members

PSE & Business Decision Making
Gintaras V. Reklaitis (USA, Chair)
Lixing Wang (China, Vice Chair)
Herbert I. Britt (USA)
Christodoulos A. Floudas (USA)
Vipin Gopal (USA)
Vincent G. Grassi (USA)
Michael F. Malone (USA)
Vasilios I. Manousiouthakis (USA)
Sunwon Park (Korea)
Jeffrey J. Siirola (USA)
Youqi Yang (China)
Challenges in the New Millennium
Lorenz T. Biegler (USA, Chair)
Efstratios N. Pistikopoulos (UK, Vice Chair)
Paul I. Barton (USA)
Jim Davis (USA)
Furong Gao (China)
Shinji Hasebe (Japan)
Xiaorong He (China)
Knut W. Mathisen (Norway)
Masahiro Oshima (Japan)
Gerhard Schembecker (Germany)
En Sup Yoon (Korea)
CPI Applications
Sigurd Skogestad (Norway, Chair)
Robin Smith (UK, Vice Chair)
David Bogle (UK)
Min-Sen Chiu (Singapore)
Jian Chu (China)
Peter Glavic (Slovenia)
Xavier Joulia (France)
Andrzej Kraslawski (Finland)
Manfred Morari (Switzerland)
Maurizio Rovaglio (Italy)
Wen-Teng Wu (China)
Tezi Mokhtar Zadeh (USA)
Nontraditional Applications
Rafiqul Gani (Denmark, Chair)
Hirokazu Nishitani (Japan, Vice Chair)
Xiao Feng (China)
Costas Maranas (USA)
Jack W. Ponton (UK)
Jose Romagnoli (Australia)
George Stephanopoulos (Japan)
William Y. Svrcek (Canada)
Venkat Venkatasubramanian (USA)
Xigang Yuan (China)
Frank Zhu (USA)
Support Technology
Rene Banares-Alcantara (Spain, Chair)
Barbara S. Nyland (USA, Vice Chair)
Joan Cordiner (UK)
Wolfgang Gutermuth (Germany)
Sten B. Jørgensen (Denmark)
Chiaki Kuroda (Japan)
Conor M. McDonald (USA)
Wolfgang Marquardt (Germany)
Ming Rao (Canada)
Jan van Schijndel (Netherlands)
PSE Education
Geoffrey Moggridge (UK, Chair)
Yu Qian (China, Vice Chair)
Chuei-Tin Chang (China)
Michael Henson (USA)
Andrew N. Hrymak (Canada)
Milan Kubicek (Czech Republic)
In Beum Lee (Korea)
Peter Lee (Australia)
Katsuaki Onogi (Japan)

National Organizing Committee

Honorary Chair: Cheng, Siwei, NSFC
Chair: Wang, Jiming, SINOPEC
Vice Chair: Yuan, Qingtang, SINOPEC
General Secretary: Wang, Lixing, SINOPEC

Committee members:
Hu, Renan, Automation Research Institute of Metallurgy
Meng, Chunxu, PetroChina
Gu, Jifa, System Engineering Society of China
Ye, Liaoyuan, Kunming Univ. of Sci. & Tech.
Feng, Quanli, Yunnan Univ.
Guo, Jingbiao, Research Institute of Petroleum Processing, SINOPEC
Han, Fangyu, Qingdao Inst. Chem. Tech.
Hu, Shanying, Tsinghua Univ.
Jin, Yihui, Tsinghua Univ.
Sheng, Lizhong, Kunming Univ. of Sci. & Tech.
Sun, Hongwei, NSFC
Yao, Pinjing, Dalian Univ. of Tech.
Wang, Hongshui, Baoshan Steel Co.
Wang, Jianhong, Beijing Univ. Chem. Tech.
Wang, Mei, Sichuan Univ.
Zhao, Jianhua, SINOPEC

Conference Secretariat
Process Systems Engineering Institute
Department of Chemical Engineering
Tsinghua University
Beijing 100084, China
E-mail:
[email protected] URL: http://www.chemeng.tsinghua.edu.cn
Contents

Plenary Papers

Wang, J.M.
SINOPEC's Reform and IT Development

Kawachi, S.
Technological competitiveness in the chemical industry

Keynote Papers

Ydstie, B.E., Jiao, Y.
The Distributed Enterprise Integrates Business, IT and Process Systems Engineering

Grossmann, I.E.
Challenges in the New Millennium: Product Discovery and Design, Enterprise and Supply Chain Optimization, Global Life Cycle Assessment

Edgar, T.F.
Control and Operations: When Does Controllability Equal Profitability?

Ng, K.M.
MOPSD: A Framework Linking Business Decision-Making to Product and Process Design

Seider, W.D., Seader, J.D., Lewin, D.R.
PSE and Business Decision-Making in the Chemical Engineering Curriculum

Kim, S.
Informatics in Pharmaceutical Research

Hasebe, S.
Design and operation of micro-chemical plants - Bridging the gap between nano, micro and macro technologies

Marquardt, W., Nagl, M.
Workflow and Information Centered Support of Design Processes

Cordiner, J.L.
Challenges for the PSE community in formulations

Floudas, C.A., Siirola, J.J.
A Summary of PSE2003 and the Impact of Business Decision Making on the Future of PSE
Contributed Papers

PSE & Business Decision Making

Aoyama, A., Naka, Y.
Multi-Scale and Multi-Dimensional Formalism for Enterprise Modeling

Arellano-Garcia, H., Martini, W., Wendt, M., Li, P., Wozny, G.
An Evaluation Strategy for Optimal Operation of Batch Processes under Uncertainties by Chance Constrained Programming

Balasubramanian, J., Grossmann, I.E.
Scheduling Multistage Flowshops with Parallel Units - An Alternative Approach to Optimization under Uncertainty
Batres, R., Lu, M.L., Wang, X.Z.
Concurrent Process Engineering and Integrated Decision Making

Bezzo, F., Macchietto, S., Pantelides, C.C.
An Object-Oriented Approach to Hybrid CFD/Multizonal Modelling

Cheng, H.N., Qian, Y., Li, X.X.
Integration of Decision Tasks in Chemical Operation Process

Cheung, K.Y., Hui, C.W., Sakamoto, H., Hirata, K.
Marginal Values Analysis for Chemical Industry

Elsass, M., Saravanarajan, R., Davis, J.F., Mylaraswamy, D., Reising, D.V., Josephson, J.
An Integrated Decision Support Framework for Managing and Interpreting Information in Process Diagnosis

Ender, L., Filho, R.M.
Neural Networks Applied to a Multivariable Nonlinear Control Strategies

Espuña, A., Rodrigues, M.T., Gimeno, L., Puigjaner, L.
Decision Support System for multi-layered negotiated management in Supply Chain networks

Ha, J.K., Lee, E.S., Yi, G.B.
Optimal operation strategy and production planning of multi-purpose batch plants with batch distillation process

Heller, M., Westfechtel, B.
Dynamic Project and Workflow Management for Design Processes in Chemical Engineering

Hugo, A., Pistikopoulos, E.N.
Environmentally Conscious Planning and Design of Supply Chain Networks

Huss, R.S., Malone, M.F., Doherty, M.F., Alger, M.M.
Challenge Problem Approach to Business Dynamics and Decision-Making for Process Engineers

Jia, Z.Y., Ierapetritou, M.
Incorporation of Flexibility in Scheduling Decision-Making

Jiang, L., Biegler, L.T., Fox, V.G.
Design and Optimization of Pressure Swing Adsorption Systems with Parallel Implementation

Kim, M.J., Lee, Y.H., Han, I.S., Han, C.H.
Quality Improvement in the Chemical Process Industries using Six Sigma Technique

Kim, K.H., Ahn, S.J., Shin, M.W., Yoon, E.S.
Risk Analysis of Chemical Process Using Multi-distinction Equipment Screening Algorithm

Koshijima, I., Shindo, A., Umeda, T.
Process Integration Framework for Business Decision-Making in the Process Industry

Lee, Y.H., Yun, K.U., Kim, M.J., Han, C.H.
Application of multivariate statistical process control to supervising NOx emissions from large-scale combustion systems

Li, Z.H., Hua, B.
Study of Heat Storage Setting & Its Release Time for Batch Processes

Li, W.K., Hui, C.W.
Debottlenecking and Retrofitting for a Refinery Using Marginal Cost Analysis, Sensitivity Analysis and Parametric Programming

Maravelias, C.T., Grossmann, I.E.
A General Continuous State Task Network Formulation for Short Term Scheduling of Multipurpose Batch Plants with Due Dates

Martinez-Miranda, J., Aldea, A., Bañares-Alcántara, R.
A tool to support the configuration of work teams

Mujtaba, I.M., Greaves, M.A., Hussain, M.A.
A Quick Efficient Neural Network Based Business Decision Making Tool in Batch Reactive Distillation

Musulin, E., Bagajewicz, M., Nougués, J.M., Puigjaner, L.
Design of sensor networks to optimize PCA monitoring performance

Pintarič, Z.N., Kravanja, Z.
An Approximate Novel Method for the Stochastic Optimization and MINLP Synthesis of Chemical Processes under Uncertainty

Reddy, P.C.P., Karimi, I.A., Srinivasan, R.
Short-term Scheduling of Refinery Operations from Unloading Crudes to Distillation

Rogers, M.J., Gupta, A., Maranas, C.D.
Real Options Based Approaches to Decision Making Under Uncertainty

Romero, J., Badell, M., Bagajewicz, M., Puigjaner, L.
Risk management in integrated budgeting-scheduling models for the batch industry

Shi, L., Huang, J., Shi, H.C., Qian, Y.
Modeling Cleaner Production Promotion with Systems Dynamics Methodology: a Case Study of Process Industries in China

Shimizu, Y., Yoo, J.K., Tanaka, Y.
Web-based Application for Multi-Objective Optimization in Process Systems

Siirola, J.D., Hauan, S., Westerberg, A.W.
Computing Pareto Fronts Using Distributed Agents

Skogestad, S.
Self-optimizing control: From key performance indicators to control of biological systems

Stec, L.Z., Bell, P.K., Borissova, A., Fairweather, M., Goltz, G.E., McKay, A., Wang, X.Z.
Reconfigurable batch processes: innovative design of the engineering side of chemical supply chains

Wang, Y.J., Zhang, H., He, Y.R.
Refinery Scheduling of Crude Oil Considering Two Different Uses of Naphtha

Yamashita, Y.
Dimensionality Reduction in Computer-Aided Decision Making

Yen, C.H., Wong, D.S.H., Jang, S.S.
Information Directed Sampling and Ordinal Optimization for Combinatorial Material Synthesis and Library Design

Yi, H.S., Kim, J.H., Han, C.H.
Balanced production cost estimation of coke oven gas in iron and steel making plant

Yi, G.B., Reklaitis, G.V.
Optimal Design of Batch-Storage Network with Recycling Streams

Zeng, M.G., Hua, B., Liu, J.P., Xie, X.A., Hui, C.W.
Investment Decision-making for Optimal Retrofit of Utility Systems

Zheng, X.S., Feng, X., Cao, D.L.
Design water allocation network with minimum freshwater and energy consumption

Zyngier, D., Marlin, T.E.
Monitoring, diagnosing and improving the performance of LP-based real-time optimization systems
Challenges in the New Millennium

Atmaniou, L., Dietz, A., Azzaro-Pantel, C., Zarate, P., Pibouleau, L., Domenech, S., Lann, J.M.L.
A MultiObjective Genetic Algorithm optimization framework for batch plant design

Briesen, H., Marquardt, W.
Adaptive Multigrid Solution Strategy for the Dynamic Simulation of Petroleum Mixture Processes: A Case Study

Chan, P., Cheung, K.Y., Hui, C.W., Sakamoto, H., Hirata, K.
Electricity Contract Optimization for a Large-Scale Chemical Production Site

Chang, M.H., Park, Y.C., Lee, T.Y.
Iterative Dynamic Programming of Optimal Control Problem Using a New Global Optimization Technique

Chen, C.L., Wang, B.W., Lee, W.C., Huang, H.P.
The optimal profit distribution problem for a supply chain network

Chen, H., He, X.R., Chen, B.Z., Qiu, T.
A multi-period optimization model for refinery inventory management under demand uncertainty

Cheng, L.F., Subrahmanian, E., Westerberg, A.W.
Multi-Objective Decision Processes under Uncertainty: Applications, Problem Formulations and Solutions

Choi, J., Lee, J.H., Realff, M.J.
Simulation Based Approach for Improving Heuristics in Stochastic Resource-Constrained Project Scheduling Problem

Dave, D., Zhang, N.
Hybrid Methods Using Genetic Algorithm Approach For Crude Distillation Unit Scheduling

Eden, M.R., Jørgensen, S.B., Gani, R., El-Halwagi, M.M.
Reverse Problem Formulation Based Techniques for Process and Product Synthesis and Design

Foltz, C., Luczak, H.
Analyzing Chemical Process Design Using an Abstraction-Decomposition Space

Gisnas, A., Srinivasan, B., Bonvin, D.
Optimal Grade Transition for Polyethylene Reactors

Guillén, G., Bagajewicz, M., Sequeira, S.E., Tona, R., Espuña, A., Puigjaner, L.
Integrating pricing policies and risk management into scheduling of batch plants

Heo, S.K., Son, H.R., Lee, K.H., Lee, H.K., Lee, I.B., Lee, E.S.
Production and Distribution of Polyvinyl Chloride Considering Demands of Warehouses

Herder, P.M., Stikkelman, R.M.
Decision making in the methanol production chain - A screening tool for exploring alternative production chains

Hua, B., Yuan, J.B., Hui, C.W.
The Coevolutionary Supply Chain

Hurme, M., Tuomaala, M., Turunen, I.
General Approach to Connect Business and Engineering Decisions in Plant Design

Kang, J.S., Kim, H.D., Lee, T.Y.
Sharing Benefits of Eco-Industrial Park by Multiobjective Material Flow Optimization

Kulay, L., Jiménez, L., Castells, F., Bañares-Alcántara, R., Silva, G.A.
A case study on the integration of process simulation and life cycle inventory for a petrochemical process

Li, P., Wendt, M., Wozny, G.
Optimal Production Planning under Uncertain Market Conditions

Li, X., Shao, Z.J., Qian, J.X.
Complexity Analysis for Hybrid Differentiation in Process System Optimization

Li, S.J., Wang, H., Yang, Y.R., Qian, F.
Multi-objective programming in refinery planning optimization

Li, X.X., Qian, Y., Huang, Q.M., Jiang, Y.B.
Multi-scale ART2 Used in State Identification of Process Operation Systems

Lim, Y.I., Jørgensen, S.B.
Application of a time-space CE/SE (Conservation Element/Solution Element) method to the numerical solution of chromatographic separation processes

Lin, B., Chavali, S., Camarda, K., Miller, D.C.
Using Tabu Search to solve MINLP Problems for PSE

Lin, X.X., Chajakis, E.D., Floudas, C.A.
Continuous-Time Scheduling of Tanker Lightering in Crude Oil Supply Chain

Lin, P.H., Jang, S.S., Wong, D.S.H.
Dynamical Supply Chains Analysis Via a Linear Discrete Model - A Study of z-transform Modeling and Bullwhip Effects Simulation

Liu, H., Yin, K.K., Yin, G.G.
Making Decisions under uncertainty - Applications of Hierarchical Approach in Industrial Practice

Mitova, E., Glasser, D., Hildebrandt, D., Hausberger, B.
Finding Candidates for Multidimensional Attainable Regions

Nishi, T., Konishi, M.
An Integrated Optimization of Production Scheduling and Logistics by a Distributed Decision Making: Application to an Aluminum Rolling Processing Line

O'Grady, A.R.F., Bogle, I.D.L., Fraga, E.S.
An adaptive interval algorithm to identify the globally optimal process structure

Roe, B., Shah, N., Papageorgiou, L.G.
A Hybrid CLP and MILP Approach to Batch Process Scheduling

Sand, G., Engell, S.
Risk Conscious Scheduling of Batch Processes

Seodigeng, T., Hausberger, B., Hildebrandt, D., Glasser, D., Kauchali, S.
DSR Algorithm for Construction of Attainable Region Structure

Shi, L., Yang, Y.Q.
Green process systems engineering: challenges and perspectives

Shimizu, Y., Wada, T.
Logistic Optimization for Site Location and Route Selection under Capacity Constraints Using Hybrid Tabu Search

Shin, D., Lee, G., Yoon, E.S.
Distributed Multi-Agents and Cooperative Problem Solving for On-Line Fault Diagnosis

Sugiyama, H., Hirao, M.
Development of Process Design Methodology Based on Life Cycle Assessment

Wan, X.T., Orcun, S., Pekny, J.F., Reklaitis, G.V.
A Simulation Based Optimization Framework to Analyze and Investigate complex supply chains

Wang, X.Z., Garcia-Flores, R., Hua, B., Lu, M.L.
Advanced information systems for process control and supply chain integration

Wang, K.F., Salhi, A., Fraga, E.
Cluster analysis and Visualisation Enhanced Genetic Algorithm

Wu, L.Y., Hu, Y.D., Xu, D.M., Hua, B.
Solving Batch Production Scheduling Using Genetic Algorithm

Xue, D.F., Li, Y.R., Shen, J.Z., Hu, S.Y.
Synthesis of eco-industrial system considering environmental value using adaptive simulated annealing genetic algorithms

Yamaba, H., Tomita, S.
On an Object-Oriented Modeling of Supply Chain and its Operational Strategy

Zhang, J., Chen, B.Z., He, X.R., Hu, S.Y.
Variable Decomposition Based Global-Optimization Algorithm for Process Synthesis

Zheng, X.P., Hu, S.Y., Li, Y.R., Shen, J.Z.
The Approach of Multi-factor Life Cycle Assessment and Product Structure Optimization

Zhou, W., Jiang, Y.H., Jin, Y.H.
Coordination of Multi-Factory Production Planning in the Process Industry Based on Internal-Prices

Zhou, Z.Y., Cheng, S.W., Hua, B.
A Hierarchical Architecture of Modelling on Integrated Supply Chain Optimization Systems of Continuous Process Industries

Zhou, X., He, X.R., Chen, B.Z., Qiu, T.
Computer-aided molecular design with BP-ANN and global optimization algorithm
Chemical Process Industry Applications

Almeida-Rivera, C.P., Swinkels, P.L.J., Grievink, J.
Multilevel modelling of spatial structures in reactive distillation units

Arrimadas, C., Jiménez, L., Bañares-Alcántara, R., Caballero, J.A.
Inclusion of quantitative safety evaluations in superstructure optimisation

Aziz, N., Hussain, M.A., Mujtaba, I.M.
Implementation of Neural Network Inverse-Model-Based Control (NN-IMBC) Strategy in Batch Reactors

Balkema, A.J., Preisig, H.A., Otterpohl, R., Lambert, A.J.D.
Augmenting design with sustainability

Berard, F., Azzaro-Pantel, C., Domenech, S., Pibouleau, L., Hoquet, P.
Combinatorial study of multipurpose batch plant planning

Bildea, C.S., Kiss, A.A., Dimian, A.C.
Stable plantwide control of recycle systems

Brueggemann, S., Marquardt, W.
Rapid Screening of Regular and Thermally Coupled Design Alternatives for Nonideal Multiproduct Distillation Processes

Cavin, L., Fischer, U., Hungerbühler, K.
Identifying the optimal design of a single chemical process to be implemented in an existing multipurpose batch plant by the use of Tabu Search

Chatzidoukas, C., Kiparissides, C., Perkins, J.D., Pistikopoulos, E.N.
Optimal grade transition campaign scheduling in a gas-phase polyolefin FBR using mixed integer dynamic optimization

Chen, Q.S., Feng, X.
Potential Environmental Impact (PEI) Analysis of Reaction Processes

Chen, Q.L., Wang, S.P., Yin, Q.H., Zhu, X.F., Hua, B.
Environmental-exergoeconomic Strategies for Modelling and Optimization of Energy Systems

Chien, I.L., Chao, H.Y., Teng, Y.P.
Design and Control of a Complete Heterogeneous Azeotropic Distillation Column System

Chin, J., Lee, J.W.
Critical composition regions and the feasibility of extractive distillation

Choi, S.H.
Calculation of Probability Distributions of Output Variables in Process Simulation

Cziner, K., Hurme, M.
Process Evaluation and Synthesis by Analytic Hierarchy Process Combined with Genetic Optimization

Dadhe, K., Engell, S., Gesthuisen, R., Scharf, T., Volker, M.
Multi-Model Trajectory Optimisation for Batch and Semi-Batch Processes

Du, H.B., Yao, P.J.
Closed loop robust stability of MIMO dynamic matrix control with input saturation

Du, J., Yu, H.M., Fan, X.S., Yao, P.J.
Integration of Mass and Energy in Water Network Design

Eggersmann, M., Schneider, R., Marquardt, W.
Understanding the interrelations between synthesis and analysis during model based design

Ekawati, E., Bahri, P.A.
Variable Redundancy Elimination and Automatic Structure Selection within Dynamic Operability Framework

El-Farra, N.H., Mhaskar, P., Christofides, P.D.
Hybrid Control of Uncertain Process Systems

Goel, H.D., Weijnen, M.P.C.
Process design for reliability and maintainability: A case of multipurpose process plants

Hoffmaster, W.R., Hauan, S.
Generating and evaluating reactive distillation column designs using difference points and feasible regions

Holland, S.T., Tapp, M., Hildebrandt, D., Glasser, D., Hausberger, B.
Novel Separation System Design Using "Moving Triangles"

Hu, Y.D., Xu, D.M., Li, Y.M., Wei, Q.Y., Hua, B.
Hybrid genetic algorithm for the optimum design of complex distillation column based on exergy economics objectives

Huang, H.P., Wu, M.C.
Monitoring and Fault Detection for Dynamic Systems Using Dynamic PCA on Filtered data

Jia, X.P., Han, F.Y.
A Hierarchical Multiobjective Cooperative Problem Solving for Process Systems Synthesis and Optimization

Jin, S.Y., Zhou, C.G., Zhao, W., Yang, C.H.
Detecting Changes of Process Steady States Using the Period Extending Strategy

Jin, X.M., Rong, G., Wang, S.Q.
On Applying Model Predictive Control for Distillation System of Linear Alkylbenzene (LAB) Complex

Kheahwom, S., Hirao, M.
A methodology for environmentally benign process design under uncertainty

Kim, M.J., Han, I.S., Han, C.H.
Modified PLS Method for Inferential Quality Control

Kim, J.H., Yi, H.S., Han, C.H., Park, C.H., Kim, Y.J.
Plant-wide Multiperiod Optimal Energy Resource Distribution and Byproduct Gas Holder Level Control in the Iron and Steel Making Process under Varying Energy Demands

Kittisupakorn, P., Moolasartsatorn, O., Arpornwichanop, A., Kaewpradit, P.
Optimal operation and control scheme design of pervaporative membrane reactor

Kong, M.F., Chen, B.Z., He, X.R.
Gross Error Identification for Dynamic System

Kwon, S.P., Song, S.O., Cho, J., Yoon, E.S.
On-line Monitoring of Process Transient States by the Intelligent Multivariable Filtering System based on the Rule-Based Method

Lang, P., Modla, G., Kotai, B.
Batch Heteroazeotropic Rectification under Continuous Entrainer Feeding: II. Rigorous Simulation Results

Lee, J.H., Lee, J.M., Tosukhowong, T., Lu, J.
On Interfacing Model Predictive Controllers with a Real-Time Optimizer

Lee, J.M., Yoo, C.K., Lee, I.B.
Efficient Fault Detection using Multivariate Exponentially Weighted Moving Average and Independent Component Analysis

Li, Y.G., Han, F.Y., Zheng, S.Q., Xiang, S.G., Tan, X.S.
An Automatic Approach for Designing Water Utilization Network

Li, Y., Du, J., Yao, P.J.
Wastewater minimization through the combination of process integration techniques and multi-objective optimization

Li, W.K., Hui, C.W., Hua, B., Tong, Z.X.
Material and Energy Integration in a Petroleum Refinery Complex

Li, H.W., Hansen, C.A., Gani, R., Jørgensen, S.B.
Analysis of Optimal Operation of an Energy Integrated Distillation Plant

Li, X.N., Rong, B.G., Lahdenpera, E., Kraslawski, A., Nystrom, L.
Conflict-based Approach for Multi-objective Synthesis

Li, Z.M.
Pinch Analysis of Hydrogen System in Refineries

Liefeldt, A., Engell, S.
A Modelling and Simulation Environment for Pipeless Plants

Lin, S.W., Yu, C.C.
Interaction between Design and Control for Heat-Integrated Recycle Plants: Ternary System with Two Recycles

Maurya, M.R., Rengaswamy, R., Venkatasubramanian, V.
Qualitative Trend Analysis of the Principal Components: Application to Fault Diagnosis

Modla, G., Lang, P., Kotai, B., Molnar, K.
Batch Heteroazeotropic Rectification under Continuous Entrainer Feeding: I. Feasibility Studies

Morais, E.R.D., Toledo, E.C.V.D., Filho, R.M.
Mixed Coolant Flow For Optimal Design of Fixed Bed Catalytic Reactors

Méndez, C.A., Cerda, J.
Short-Term Scheduling of Multistage Batch Processes Subject to Limited Finite Resources

Papaeconomou, I., Jørgensen, S.B., Gani, R., Cordiner, J.L.
Integrated Synthesis, Design and Modelling of Batch Operations

Qian, Y., Jiang, Y.Y., Wen, Y.Q., Li, X.X., Jiang, Y.B.
Novel Hybrid Representation of Knowledge in the Expert System for Real-time Faults Diagnosis

Rejowski, R. Jr., Pinto, J.M.
Efficient MILP formulations for multiproduct pipeline scheduling

Rudinger, A.C.A., Spogis, N., Nunhez, J.R.
Optimizing mixing and power consumption through the use of computational fluid dynamics (CFD)

Saha, P., Cao, Y.
Globally optimal control structure selection using Hankel singular value through Branch and Bound method

Sakizlis, V., Perkins, J.D., Pistikopoulos, E.N.
Parametric Controllers in Simultaneous Process and Control Design

Sato, C., Ohtani, T., Nishitani, H.
Steps toward Novel Grade Transition Control for Gas-Phase Polymerization Process

Sawaya, N.W., Grossmann, I.E.
A cutting plane method for solving linear generalized disjunctive programming problems

Smania, P., Pinto, J.M.
Mixed Integer Nonlinear Programming Techniques for the Short Term Scheduling of Oil Refineries

Tang, Y.T., Huang, H.P., Chien, I.L.
Design of a Complete Ethyl Acetate Reactive Distillation Column System

Tapp, M., Holland, S.T., Hausberger, B., Hildebrandt, D., Glasser, D.
Expanding the Operating Leaves in Distillation Column Sections by Distributed Feed Addition and Sidestream Withdrawal

Torgashov, A.Y., Park, K.C., Choi, H.C., Choe, Y.K.
Stability analysis of distillation control using vector Lyapunov function

Wang, F.Y., Bahri, P.A., Lee, P.L., Cameron, I.T.
A Multiple Model Approach to Robust Control and Operation of Complex Non-linear Processes

Wang, G.B., Liang, J.J., Own, S.M., Lee, H.Y.
Robust Tuning of PI Controllers Based on Taguchi Method

Wu, J.Y., He, X.R., Chen, B.Z., Qiu, T.
A General MILP Model for Scheduling of Batch Process with Sequence-dependent Changeover

Xu, D.M., Hu, Y.D., Hua, B., Wang, X.L.
Optimum design of water utilize systems featuring regeneration re-use for multiple contaminants

Xu, Q., Chen, B.Z., He, X.R.
Optimization for Semi-continuous Dynamic Process under Uncertainties

Yamamoto, S., Yoneda, M., Yabuki, Y.
The control of heterogeneous azeotropic distillation in industry considering entrainer ratio in reflux

Yamashita, Y., Kidane, N., Nishitani, H.
Observer design for nonlinear systems described by differential-algebraic equations

Yan, L.X., Wei, D.G., Ma, D.X.
Line-up Competition Algorithm for Separation Sequence Synthesis

Yao, Z.X., Qian, Y., Jiang, Y.B.
A Description of Chemical Processes Based on State Space

Yu, W.F., Hariprasad, J.S., Zhang, Z.Y., Hidajat, J., Ray, A.K.
Application of multi-objective optimization in the design of SMB in chemical process industry

Yuan, X.G., An, W.Z.
Synthesis of fully thermally coupled distillation columns for multicomponent separation via stochastic optimization

Zhang, J.
Reliable Optimal Control of a Batch Polymerisation Reactor Based on Neural Network Model with Model Prediction Confidence Bounds

Zhao, W., Zhou, C.G., Jin, S.Y., Zhang, L., Han, F.Y.
Studies of RSR system characteristic curves

Zhu, P., Feng, X., Liu, Y.Z.
Exergy-based environmental impact analysis in the industrial processes

Zhuang, H.L., Ohshima, M., Chiu, M.S.
A Multiple Models Based Predictive Control Strategy Applied In Polymerization Reactor Control

Nontraditional Applications

Aoyama, A., Naka, Y.
Technological Information Infrastructure for R&D Management in Product Centered Manufacturing

Chang, W., Lee, T.Y.
Observation of Adsorption and Permeation Phenomena in Silica Membrane System through Molecular Dynamics Simulation

Chen, H.P., Shao, M.J., Zhang, S.J.
Modeling and prediction of crystal growth for vitamin C

Dai, Z.Y., Jing, Z.H., Zhou, H., Liu, W.
QSAR Studies of Cl-Bridged Flu-Cp Complexes of Zirconium Metallocene

Eden, M.R., Jørgensen, S.B., Gani, R., El-Halwagi, M.M.
Property Cluster based Visual Technique for Synthesis and Design of Formulations

Karimi, I.A., Tan, Z.Y.L., Bhushan, S.
An Improved MILP Formulation for Scheduling an Automated Wet-etch Station

Kim, B.M., Kim, S.W., Yang, D.R.
Cybernetic Modeling of the Cephalosporin C Fermentation Process

Li, M.H., Shi, D., Christofides, P.D.
Feedback control of HVOF thermal spray process: A study of the effect of process disturbances on closed-loop performance

Lou, Y.M., Christofides, P.D.
Real-Time Estimation of Growth Rate and Surface Roughness in Thin Film Growth Using Kinetic Monte-Carlo Models

Pinto, J.F., Filho, R.M.
Computational prediction of the solid phase extraction of herbicides in water

Thaysen, M., Jørgensen, S.B.
Process Software Sensor for Plant Optimization

Vally, T., Hildebrandt, D., Rubin, D., Crowther, N., Glasser, D.
Application of Process Synthesis Methodology to Biomedical Engineering for the Development of Artificial Organs

Wang, Y.H., Huang, D.X., Gao, D.J., Jin, Y.H.
Wavelet Networks Based Soft Sensors and Predictive Control in Fermentation Process

Wang, Y., Sun, Z.C., Wang, A.J., Li, C., Li, X., Zhao, B., Yao, P.J.
A Kinetic Study on Hydrodesulfurization of Dibenzothiophene Catalyzed by Sulfided Ni-Mo/MCM-41

Wu, W., Huang, M.Y.
Flexible Output Regulation Designs for Nonlinear Bioreactors

Zhang, X.P., Zhang, S.J., Li, C.S.
Simulation of protein crystallization on the basis of thermodynamic model

Zhou, H., Da, Z.J., Xu, W., Zhang, L.W., Mu, X.H.
Applications of Molecular Simulation Techniques in Petroleum Processing

Support Technology
Abbas, A., Romagnoli, J.
A modelling environment for the advanced operation of crystallisation processes

Bayer, B., Becker, S., Nagl, M.
Integration Tools for Supporting Incremental Modifications within Design Processes in Chemical Engineering

Bañares-Alcántara, R., Kokossis, A., Aldea, A., Jiménez, L., Linke, P.
A knowledge management platform to extract and process information from the web

Du, W.L., Liu, Z.M., Qian, F., Liu, M.D., Zhang, K.
4-CBA Online Soft-measurement Via Fuzzy GMDH Networks

Han, F.Y., Jia, X.P., Tan, X.S.
Two Key Support Tools for Environmentally Friendly Process Optimal Synthesis

Jin, Y.H., Kurooka, T., Yamashita, Y., Nishitani, H.
Modeling and Simulation of Human Errors in Plant Operations

Kim, J.H., Yi, H., Han, C.H., Park, C., Kim, Y.
The Development of the Real Time Plant-wide Optimal Byproduct Gas Supply Management System

Kristensen, N.R., Madsen, H., Jørgensen, S.B.
A Unified Framework for Systematic Model Improvement

Lahdenpera, E., Li, X.N.
A Cluster Computing Approach Using Parallel Simulated Annealing for Multiobjective Process Optimisation

Li, W.M., Li, W.K., Hui, C.W.
Integrating Neural Network Models for Refinery Planning

Li, S.J., Yang, Y.R., Qian, F.
Mass Exchanger Network Synthesis Using Genetic-Alopex Algorithms

Lu, M.L., Moore, E.
A Flexible Topology Management Software Component To Support Business Process Integration

Lu, M.L.
A Model Architecture For Concurrent Process Engineering

Ngigi, G., Glasser, D., Hildebrandt, D.
MaPS (Managed Process Synthesis). A methodology, integrated with the experimental programme, to develop a flow sheet. A first step

Park, Y.C., Chang, M.H., Lee, T.Y.
A Novel Global Optimization Algorithm Using D.C. Envelope and Convex Cut Function

Reis, M.P.S.D., Saraiva, P.M.
Multiscale Latent Variable Analysis of Industrial Data

Romanenko, A., Oliveira, N.M.C., Santos, L.O., Afonso, P.A.F.N.A.
A system for chemical process control and supervision based on real-time Linux

Rong, B.G., Kraslawski, A., Turunen, I.
Synthesis of heat-integrated thermally coupled column configurations for multicomponent distillations

Shi, L., Chen, P., Shi, H.C., Qian, Y.
Supporting cleaner production audit with systems approaches: a case study of PVC process

Su, D.P., Chen, W.M., Feng, J.S., Wang, H.S., Su, Y.C.
Data Mining in Metallurgical Industry Process

Sun, L., Fan, X.S., Yao, P.J.
Study on Multi-Objective Fuzzy Optimization Algorithm for Chemical Process

Tsai, C.S., Wang, S.Y., Chang, C.T., Huang, S.H.
Multi-variate Run Rules

Wen, J., Rao, M.
Human, Machine and Management Integration through In-time Knowledge Processing to Ensure Process Operation Safety

Wibowo, C., O'Young, L., Ng, K.M.
Workflow Management in Chemical Process Development

Yan, L.X., Hu, S.H., Ma, D.X.
Visual Method for Operating Optimization of Process System

Yang, S.H., Chen, X., Yang, L.L.
Global Support to Process Plants over the Internet

Yang, L., Hu, S.Y., Shen, J.Z., Li, Y.R., Wu, Z.J., Wang, T.
Development of a Management Information System for an Eco-industrial Park

Yi, H.S., Kim, J.H., Han, C.H.
Optimal Grade Transition of a HDPE Plant by the Modified Two-Step Hierarchical Dynamic Optimization

Yin, Q.H., Yin, Q.Y., Deng, T., Qian, Y., Hua, B., Chen, Q.L.
Metasynthesis Methodology of Technology and Management for New Product Development Project of Petrochemical Enterprises

Zhang, Q.H., Qian, Y., Xu, B.G.
A Simulating System for Intelligent Control of Electric Drives

Zhang, L., He, X.R.
Extended Association Rule Extraction from Process Operational Data

Zhao, C.H., Bhushan, M., Venkatasubramanian, V.
A Multi-Agent Architecture for Automated Process Safety Analysis

Zheng, S.Q., Yue, J.C., Li, Y.G., Tan, X.S., Han, F.Y.
Study on Multiobjective Process Synthesis in Modular Simulator Environment

Zou, Z.Y., Cao, B.Y., Gui, X.J., Chao, C.G.
The Development of A Novel Type Chemical Process Operator-Training Simulator
PSE Education

Hausmanns, C., Zerry, R., Goers, B., Urbas, L., Gauss, B., Wozny, G.
Multimedia-supported teaching of process system dynamics using an ontology-based semantic network

Moggridge, G.D., Cussler, E.L.
Teaching Chemical Product Design

Orapimpan, O., Kurooka, T., Yamashita, Y., Nishitani, H.
Support Systems for Knowledge Inheritance from Video Images of Plant Operations

Rao, M., Wen, J., Zhang, Y.
Incident Prevention Training Simulator

Srinivasan, R., Doiron, J.A.G., Song, M.
Enhancing Process Control Education using a Web-based Interactive Multimedia Environment

Author Index
Process Systems Engineering 2003
B. Chen and A.W. Westerberg (editors)
© 2003 Published by Elsevier Science B.V.
SINOPEC's Reform and IT Development

Wang Jiming
China Petroleum and Chemical Corp., Beijing, China

Abstract

The theme of the 8th International Conference of Process Systems Engineering is to provide the process industry with PSE for sound and professional decision-making. With the fast development of information technology, the global economic, industrial and corporate structures are being remade, and enterprise business models and operations are also changing significantly. The vast application of information technology has been playing an important role in such changes and reform. After the early irrationality, a lesson has been learnt that only when combined with the conventional industries can information technology show its vitality. IT development, as the Chinese government points out, is a necessity in China's industrialization and modernization. Recognizing this, China Petroleum & Chemical Corporation upgrades its businesses with information technology to meet the challenges of globalization and WTO accession.

1. SINOPEC'S PRESENT SITUATION AND OPERATION PERFORMANCE

The petroleum and petrochemical industry, as the energy and raw material provider, is the backbone of China's economy. In 1998, the government made the decision to strategically restructure the petroleum and petrochemical industry. In July 1998, China Petrochemical Corporation (SINOPEC Group) was founded, a vertically integrated company with upstream and downstream businesses and domestic and foreign trade. In 1999, further restructuring of businesses, assets, finance, organization and personnel was carried out. On Feb. 22, 2000, China Petrochemical Corporation set up China Petroleum & Chemical Corporation (SINOPEC Corp.) with its quality assets. SINOPEC Corp. was publicly listed in Hong Kong, New York and London in October 2000 and in Shanghai in August 2001, becoming the first Chinese company to go public on four stock exchanges at home and abroad. Through the public listing, SINOPEC Corp. not only raised capital, diversified its equity structure and increased company competitiveness, but also pushed itself towards more standard management and supervision and promoted mechanism reform within the company. A milestone in the history of China's petroleum and petrochemical industry, SINOPEC's public listing paved the way for the company's speedy take-off, and the company embraced a new era of development.

Over the past three years, SINOPEC Corp. has basically set up a corporate governance structure of unified rights and responsibilities, smooth operation and effective supervision. The company defined the functions of the headquarters, the strategic business units and the
subsidiaries as the decision-making centre, profit-generating centre and cost-control centre, respectively. In this way, unified management over operation strategy, financing and investment, capital operation, major research work and information systems has been achieved. Faced with fierce competition and the pressure from the stock market and investors, SINOPEC has actively promoted its resources strategy, marketing strategy, investment strategy, and technology and human resources strategy over the past three years. As a result, the company's overall competitiveness and operation performance have improved.

Stable growth of economic volume. The output of oil and gas has kept growing steadily. The total newly added proven reserves of oil in the past three years were 585 million tons, and those of natural gas were 212.2 billion cubic metres. The output of crude and natural gas over the three years was 110.785 million tons and 11.95 billion cubic metres, respectively. Compared with 1999, the output of crude and natural gas increased by 10% and 127%, respectively, in 2002, and reserves were up by 14.3%. In 2002, the company processed 104.94 million tons of crude, an increase of 19.1% over 1999. The production of refined products was 62.42 million tons, an increase of 19.1%, and that of light chemical feedstock was 15.05 million tons, up by 36.1%. Through technical upgrading and product line readjustment, chemicals production also saw remarkable growth. Ethylene production in 2002 was 2.72 million tons, up by 32.6%. The production of synthetic resin, synthetic rubber and synthetic fibre also increased tremendously.

Increase of market share and competitiveness. Through acquisition, building and rebuilding service stations and storage and transport facilities, a complete marketing network has been set up and the sales volume of refined products grows year on year, especially for retail, direct sales and export. The sales volume of refined products in 1999 was 63.49 million tons. The number jumped to 70.09 million tons in 2002. Retailed refined products reached 34.73 million tons, up by 181.4% over 1999. The company's retail market share in its principal market reached 68%, up by 28 percentage points. To explore the overseas market, the company exported 5.02 million tons of refined products in 2002, an increase of 34.6%.

Fruitful technological achievement. SINOPEC always attaches great importance to technology innovation, which provides strong back-up for economic efficiency and industry upgrading. There have been 20 major technology breakthroughs in the three years, among which Loop Reactor Polypropylene Technology, VRFCC (vacuum residue fluid catalytic cracking) technologies and RHT (residue hydrotreating technology) won the first prize of the national technology advancement award. Three other technologies won the second prize of the national invention award and another 13 won the second prize of the national technology advancement award. The company made 1,830 patent applications and was granted 709 patent rights, bringing its total patent rights to 3,610.

Continuous cost reduction and rising profitability. Maximizing the company's profit and shareholders' return is SINOPEC's business goal. Therefore, the company has been reducing cost unswervingly. In 2001, the company reduced cost by RMB 2.28 billion, and it further reduced cost by RMB 2.5 billion in 2002. Although competition grew ever fiercer, the company achieved relatively good operating results in the three years.
Profit of 2002 was RMB 25.9 billion and that of 2001 was RMB 21.6 billion. The company's 2002 revenue was RMB 312 billion with a profit of RMB 21.84 billion. Over the past three
years, the company's total assets increased from RMB 276.9 billion to RMB 375.0 billion, the debt-to-equity ratio was reduced from 53.35% to 28.2%, and shareholders' equity increased from RMB 87.12 billion to RMB 148.5 billion. These achievements laid a solid foundation for the company's future growth.

2. SPEED UP IT DEVELOPMENT AND IMPROVE THE COMPANY'S COMPETITIVENESS
The above achievements would not have been obtained without information technology. From the beginning, SINOPEC Corp. has pursued the strategy of upgrading its business with information technology and has been committed to promoting IT construction. During the IT construction process, we have focused on all-round development; that is, along with the IT construction, management, technology, economic efficiency and marketing ability should also be promoted. We have set up databases, a performance evaluation system and a cost control system. At the same time, we optimized crude procurement and the production process, and improved management in financial information, sales information and e-business for materials and chemicals purchase. We also implemented human resources management, oil field information gathering and the gathering of other economic and technological indices. These efforts played a vital role in improving the company's competitiveness. In the past three years, we have made achievements in IT construction in the following five aspects.

2.1. The implementation of the ERP system accelerated business process reengineering

Since its founding in 2000, SINOPEC Corp. has set up an ERP programme. The company aims to build up a basic framework in three years and complete the project in five years. After the completion, further expansion and perfection of functions will be carried out according to market changes. The company achieved substantial results in its production and sales subsidiaries during the ERP pilot and promotion stage. Under the guiding principle of 'international standards combined with SINOPEC characteristics', the successful implementation of ERP at subsidiaries in Zhenhai, Yizheng, Tianjin, etc. integrated finance, materials, sales and planning, optimized business processes, integrated materials flow, capital flow and information flow, reduced management layers and increased the subsidiaries' decision-making power. It also helped in reducing inventory, strengthening budgeting, and improving the company's internal control and responsiveness. Through these pilots, the company advanced its management concepts and internal reform, which is conducive to the full promotion of the ERP system.

2.2. The implementation of supply chain technology optimized resources utilization

SINOPEC has a strong demand for supply chain technology. The company's 33 refineries process more than 60 types of crude, and crude imports in 2002 accounted for 54% of the company's total annual throughput. The sales of refined products also cover a
wide area of 20 provinces through vessels, pipelines, trains and other means of transportation. In crude processing, we set up production models with PIMS software in the 33 refineries and an integration model at the headquarters. This measure provides valuable information when making import or production plans, and it has proved effective in allocating resources. In sales, we formed a refined products sales information network, collecting price and inventory information from more than 20,000 service stations. We also set up storage-to-station models for optimizing product transport and distribution, effectively reducing transportation costs.
2.3. The implementation of e-business elevated the company's management level

SINOPEC's annual materials procurement is about RMB 50 billion and its annual sales of petrochemical products are close to RMB 100 billion. To standardize the procurement procedure, enlarge the pool of suppliers and strengthen supplier management, the company started a materials procurement e-business system. A petrochemical products sales e-business system was also started for better transparency of prices, better understanding of customers' needs and an increased direct sales proportion. In August 2000, the chemical products sales e-business web site opened to the public. Product information is publicized and viewers can make inquiries on line. The web site provides a platform for collecting orders for products and feedback from customers. Currently, there are more than 700 customer subscribers and 35 manufacturer subscribers. More than 1,000 kinds of products are available on line. By the end of 2002, online transaction volume had reached 5.69 million tons, with a revenue of RMB 33.34 billion. On our B2B online material procurement web site, demand for certain materials and guiding prices are publicized, as are inquiries and bidding announcements. Contract negotiation and contract drafting can also be done on line. So far, materials procured online have extended to 56 categories and some 116,000 kinds. Online suppliers now total 2,500 and there are 6,100 subscribers. By the end of 2002, cumulative procurement volume had amounted to RMB 20.8 billion. With the business process standardized and optimized, more materials are procured online, which has reduced procurement expenses by RMB 520 million.

2.4. The implementation of advanced control technology improved profitability

Improving the control of SINOPEC's major refining and chemical plants is an effective way to enhance operation performance. We formed strategic alliances and partnerships with automation and software suppliers such as Honeywell and AspenTech. We set safety, stability, longer run length, full workload and optimization as our goals and utilized advanced technology to resolve problems such as high energy consumption, low yield and poor efficiency. With the new technologies, we achieved stable production, higher product quality and longer run lengths in distillation units, FCC and polyolefin units. In production operation, we are able to optimize the product line distribution and increase target product volumes. We also use process control and optimization technology to estimate variables that cannot be measured directly and take them into consideration during production. We utilized these technologies on over 60 units and realized stable output. These technologies require low investment but provide high return. In this way, we were able to recover our project investment within one year.
2.5. The infrastructure construction backed up the IT application

Standardizing information codes is an important task for the company in building its information system. Unified definition of data sources and a standardized code system are necessary for the company's integrated information system. The company set up an internal code system which employs some 130,000 pieces of code for 108 kinds of articles in the categories of work unit, product, fixed asset, material, finance and personnel. Each code is unique. As for network construction, we have set up the IT infrastructure within and between the headquarters and all our subsidiaries. The main computer network links the intranets of the headquarters and subsidiaries as well as the internet. We have our own satellite system covering all SINOPEC subsidiaries. Firewalls and other anti-hacking software have also been installed, which safeguard security and reliability.
3. DEVELOPMENT GOALS AND FUTURE PLANNING OF SINOPEC'S IT DEVELOPMENT
According to the blueprint of building a well-off society in an all-round way, China will continue to see sustainable and fast economic growth. Given this, domestic demand for crude, natural gas, refined products and chemicals will increase by a large margin. This will provide SINOPEC with a valuable opportunity for development. The company will definitely seize the opportunity to enhance its profitability and international competitiveness. It will focus on resources, investment, marketing, technology and human resources to further restructure the company's operation and management. Reform and restructuring, technology innovation, scientific management and sound financial operation will serve together to improve the company's competence. The development goal for SINOPEC is to become an internationally competitive, world-class, integrated energy company with prominent core business, quality assets, diversified equity structure, innovative technologies, advanced management and prudent financial practice by 2010. In another ten years, we wish to see a much more competitive company with strong market exploration ability, technology innovation ability and profit making ability.

In SINOPEC's IT development, the company aims to leverage the petrochemical industry with information technology. The company compiled SINOPEC's Technology Development Plan and SINOPEC's ERP System Plan, making sure that the information system develops in a fast and all-round way. In the years to come, SINOPEC will try its best to make breakthroughs in upgrading technology, enhance responsiveness to market changes, improve internal control and increase efficiency. The detailed objectives are to integrate the streams of materials and goods, information and cash flow, to set up the three-centre management system (with the headquarters as the decision-making centre, the strategic business units as the profit-making centre, and the subsidiaries as the cost-control centre), to execute the performance evaluation system and cost assessment system, and to fully carry out integrated supply chain management.
The company will also build a unified information technology platform for the subsidiaries to share real-time production and operation information. During the 'tenth five-year-plan' period, the major tasks for the company are as follows. The company will utilize advanced information technology in resources allocation, exploration and development and production to a wider extent. Such technology will also be used in marketing and procurement, enabling the company to be more responsive to the market. In addition, the company will build an integrated ERP operation management platform and Manufacturing Execution System (MES) platform. Focusing on the company's resources strategy, marketing strategy, investment strategy and technology and human resources strategy, relevant research work should be done on corporate strategy management, asset management, risk management, technology management, human resources management and market prediction, which would form the supporting system for decision making.

SINOPEC's information system mainly consists of ERP, MES and PCS (Process Control System). On the management level, apart from the conventional ERP, we have database technology and have set up a performance evaluation system and a cost control system. Meanwhile, ERP is further integrated with the e-business system and the supply chain system. IT construction is also developing fast at the strategic business units and subsidiaries. With the special systems, streams of goods, capital and information are optimized and resources are properly allocated. Oil field information integration and customer relationship management are applied. Decision-making is now based more on data than on experience. At the subsidiaries, system integration is carried out with the aim of reducing cost and improving product quality. Cost assessment is disciplined and the production process is advanced. However, owing to constraints in mechanisms, personnel and technology, SINOPEC still lags behind international petrochemical majors in the application of information technology. The company has yet to form a complete database and the system integration level needs improving. Therefore, we would like to learn from other companies and share experiences.

4. CONCLUSION

The 21st century poses opportunities and challenges for SINOPEC. On the one hand, we are faced with much fiercer competition from international companies after the WTO accession. In addition, resources and the environment are putting more pressure on us. On the other hand, we are given the opportunity to cooperate with international companies, to learn their management concepts and models, and to upgrade the company's competitiveness. In this respect, SINOPEC will adopt new development concepts in IT construction and constantly leverage the petroleum and petrochemical industry through information technology. We believe all this will help build SINOPEC into a world-class integrated energy and chemical company at an early date.
Process Systems Engineering 2003
B. Chen and A.W. Westerberg (editors)
© 2003 Published by Elsevier Science B.V.
Technological competitiveness in the chemical industry
Satoshi Kawachi
Senior Managing Director, Sumitomo Chemical Company, Limited, Tokyo, Japan
Abstract

This is to describe several issues of business environments surrounding the chemical industry and suggest ways to reinforce competitiveness for winning the race in the industry, in particular, various approaches to the use of technologies in R&D, including actual experiences in novel process developments. Also explained are the necessity of responsible care activity to ensure environment, safety and health and its technology buildup, which in turn leads to further reinforcement of competitiveness together with deepening of technology in specific disciplines and fusion of technologies between different disciplines.

1. INTRODUCTION

In year 2001, the Japanese chemical industry shipped 24.4 trillion yen of products, which accounted for about 4.8% of Japan's GDP, and its employees numbered over 370,000, while the value of shipments of the American chemical industry in the same year was 427 billion dollars, about 4.6% of the GDP of the U.S. Namely, the relative importance of the chemical industry in the respective national economies was almost the same. When we look at the sales of chemical companies in the world, the largest is that of BASF (30.8 billion dollars), and the second largest is that of DuPont (28.4 billion dollars). Among Japanese chemical companies, Sumitomo Chemical ranks Number 12 in the world (9.4 billion dollars), Mitsubishi Chemical Number 13 (9 billion dollars), and Mitsui Chemical Number 15 (8.7 billion dollars). Sumitomo and Mitsui have a plan to merge, and when these two are consolidated, the new company will become the Number 6 chemical company in the world. Recently, alignment of the chemical industry is progressing by concentrating business resources on comparatively advantageous fields, and as a result, many companies have changed into ones which specialize in specific fields such as petrochemicals, basic chemicals, specialty chemicals or life science chemicals.
On the other hand, Sumitomo Chemical has chosen the way of a diversified chemical company, where I believe there are certain advantages. It has a portfolio which makes it easier to adjust itself to changes in the business environment. Its technologies are commonly usable among various disciplines within the company. It sometimes brings about a new and unexpected discovery through fusion of knowledge between different disciplines. It is my belief that a diversified company is particularly advantageous in creating new technologies and developing human resources.

2. ROLES OF THE CHEMICAL INDUSTRY

One of the characteristics of the chemical industry is that its products are immensely rich in variety: pharmaceuticals, agricultural chemicals, plastics, etc. These are the products which are manufactured and processed down to the finished products by the chemical industry, while there are other types of chemicals which are used by other industries as raw materials for manufacturing their final goods. When we look at the chemical industry's customers, the chemical industry itself is the largest in sales, and others are agriculture, food, paper, printing, metal, machinery, electronics, automobile and many other industries. I would say almost all kinds of industries are our customers. Talking about finished goods, clothing, cosmetics, pharmaceuticals, home electric appliances, automobiles and many other daily commodities are the finished goods for which chemical materials are used. Let me take a liquid crystal television as an example: chemicals are used in various parts, such as the body, film, liquid crystal, and IC encapsulation. Recently technological innovation has taken place in display devices through inventions of new materials such as liquid crystal and organic LED. I do not think it is an exaggeration to say that new functional materials or high performance products invented by the chemical industry are giving great impetus to other industries which are also eagerly working on new product development. The driving force to create highly functional or high performance materials is nothing but R&D, and depending on R&D strategy, a great difference will appear in the company's future business. In order for us to win the race in new product development, we must satisfy the following three conditions. (1): to discover chemicals or materials which demonstrate such functions as are exactly required.
(2): to discover a manufacturing process which makes it possible to supply a newly invented product stably at a reasonable price. (3): to put it on the market before our competitors do.

3. R&D ORGANIZATION

In launching a new product, design of product functions and discovery of its manufacturing process are inseparable. If either of them is missing, it is impossible to launch any new product competitively in the market. In Sumitomo Chemical, in order to quickly feed back the requirements of the market to R&D for new product development, R&D groups are organized within business sectors, by which R&D is directly connected to marketing endeavors. With respect to the required manufacturing process, if it is just a matter of improvement of current technology, we handle it at the sector laboratory simultaneously with development of the new product, but if an entirely new process is sought, we usually study it at a corporate laboratory. When I say "corporate laboratories", in our case, they are the Organic Synthesis Research Laboratory, where they are engaged in discovery of new catalysts and new synthesis methods, the Process & Production Technology Center, which develops manufacturing processes, and the Environmental Health Science Laboratory, where they assess product safety and its possible impact on humans and the environment. Again, speaking of manufacturing processes, there are many fundamental technologies which commonly apply to various fields of products, and, therefore, we think it is meaningful to work on process development and build up the company's technology base at a corporate level laboratory like our Process & Production Technology Center. Corporate level research is of course not limited to process development, but is expected to create entirely new business opportunities, as its major mission, through its inventions. Thus, in our case, new business creation and common technology development are done by corporate research laboratories while product development catering to customers' needs is done by division research laboratories, and this method is working well.

4. R&D
Combinatorial synthesis and high-throughput screening (HTS) methods have been invented to accelerate discovery of new functional materials. Sumitomo Chemical has introduced an HTS unit for the purpose of speeding up development of new synthesis catalysts. In the field of pharmaceuticals, discovery of new drugs by genomics has started. Sumitomo also has positively capitalized on this innovative approach, judging that employment of such a new method is indispensable for new drug discoveries. Even when a new functional material is invented, if it is not accompanied by a manufacturing process which makes it possible to supply the product at a reasonable price, business is not practically feasible. Conversely, a novel process sometimes brings new products. We have experience in the polyolefin case, where a new process together with a novel catalyst brought about the creation of new materials. It may well be said that R&D of manufacturing processes or technology is one of the key elements for winning competitions. We are accelerating process development with simultaneous cooperation between the manufacturing group and the research group by utilizing process systems engineering techniques (modeling, simulation, optimization, etc.). Studies on existing processes are also important. Since the product life of basic chemicals is generally long, process improvement brings us substantial economic benefits, and it also has a great impact on the environment. Response to fluctuating component ratios of raw materials or products as a result of changes in market conditions is another challenging job. Sumitomo has research programs in this particular area; for example, we have developed an innovative process to manufacture caprolactam which does not produce any by-products. A new process to solely produce propylene oxide is another example, whereas a conventional method is simultaneous production of propylene oxide and styrene. The third example is our newly developed technology to oxidize hydrochloric acid which is by-produced from processes using chlorine and to recycle it back to chlorine, reducing the load to the environment substantially.

5. RESPONSIBLE CARE

It is a responsibility of the manufacturing group to manufacture the products without fault, which is certainly one of the fundamentals for winning the industrial competition. In addition, responsible care (RC) is another fundamental element for doing business in the chemical industry. RC is a voluntary activity to earn the trust of society by ensuring "environment, safety and health" regarding the company's products throughout their entire life cycles and keeping communications with the society. It is classified into such activities as "maintenance of environment", "security & prevention of disaster", "occupational safety" and "product safety". In Sumitomo Chemical, one more activity is added, that is "quality assurance". This way of thinking is applied from the beginning of work for development, and our
assessment system is working up to manufacturing under the company's development and industrialization rule. Among assessment items, those which are related to product safety and impact on the environment are handled by the Environmental Health Science Laboratory. If their assessment result is negative about any newly developed product, we may give up launching such a product. Within our Process & Production Technology Center, we have organized a group for studying safety engineering, who study security & prevention of disaster and feed new knowledge back to the process group if they find any problems so that they can improve the manufacturing process accordingly. Through such self assessment of our own products and processes, we are able to build up technologies to find problems and solve them, which leads to further development of our R&D.

6. DEEPENING OF TECHNOLOGY IN A SPECIFIC DISCIPLINE AND TECHNOLOGY FUSION BETWEEN DIFFERENT DISCIPLINES
Sumitomo Chemical has research programs not only for polymerization catalysts for polyethylene but also for the synthesis of pharmaceuticals. These two may appear to be different from each other, but in reality they are built on a common technology base called "metal complex catalysts". Likewise, knowledge about the ligand and the central metal, which are key elements of metallocene catalysts, is shared by fine chemicals specialists, and actually scientists of both disciplines are jointly carrying out research on post-metallocene catalysts. This is one of the areas where fusion of technology between different disciplines is expected to bear fruit. One more similar example is joint work between heterogeneous catalyst researchers and organic synthesis researchers of our company, through which they arrived at the idea of modifying the catalyst surface by organic synthesis methods, and as a result, their study made remarkable progress. There is no question as to the importance of building up the technology base of a specific discipline in R&D and technology accumulation in RC for reinforcing competitiveness. In addition, technology fusion between different disciplines is also useful in creating new business for the company's future.

7. CONCLUSION

In the above, Sumitomo Chemical's case for approaches to reinforcement of competitiveness in the chemical industry was mainly described. As a matter of fact, decision making in R&D is quite complicated and extremely difficult. Sumitomo Chemical is committed to continued endeavors for new process developments by fully utilizing the company's long-fostered technology and know-how.
Process Systems Engineering 2003
B. Chen and A.W. Westerberg (editors)
© 2003 Published by Elsevier Science B.V.
The Distributed Enterprize Integrates Business, IT and Process Systems Engineering

B. Erik Ydstie*, Yu Jiao†

Carnegie Mellon, Department of Chemical Engineering, Pittsburgh PA 15213

Abstract: In this paper we discuss the convergence of business systems like TQM and 6sigma, Information Technology and Process Systems Engineering. We propose that the proper framework advancing the conflux of these ideas is provided by distributed network computing. We furthermore suggest that the integrated systems of software, hardware and physical devices should be viewed as a complex adaptive system which is ever changing in non-predictable ways. The prospect of embedding such a system into a rigid, hierarchical and centralized computational structure is therefore dim. In the paper we discuss very briefly the emergence of a theory for complex adaptive systems. We outline the basic tenets of business finance and emphasize the importance of using value analysis for strategic decision making. We motivate a network model for the value chain and relate this to an electrical circuit description. This description allows us to draw conclusions about stability and optimality which are not pursued further in this paper. We also describe very briefly the basis for evaluating business performance, process and product development and risk. These activities are supported by case studies. We finish by reviewing very briefly how the drive towards excellence can make it easier for universities and industry to carry out cooperative R&D.

Keywords: Technology management, information technology, finance, business systems, process control, lean manufacture.
* To whom all correspondence should be addressed: Carnegie Mellon, Department of Chemical Engineering, 5000 Forbes Avenue, Pittsburgh PA 15213, e-mail [email protected].
† PPG Industries Inc., Glass Technology Research Center. An extended version of the paper with a complete list of references is available on request from the authors.
1 Introduction
The purpose of this paper is to review issues and tools that have (1) found wide-spread acceptance in industrial management and resource allocation and (2) indicate in the most minimal way possible how the PSE community can help sharpen and develop these tools further. In order to do this we need to review the nature of the value generating enterprize. The main goal of such an enterprize is to survive and grow over time by generating products and services that have value in the eyes of its customers. In order to contribute towards value generation the effective leader
1. defines clear business objectives,
2. develops plans to achieve the objectives,
3. systematically monitors progress against the plan and
4. adapts the objectives and the plans as new needs and opportunities arise.

Books have been written and courses have been developed that instruct managers and inspire them to develop objectives, plans and monitoring systems. Most of the concepts and paradigms that have been developed are highly intuitive, graphical and descriptive. An even wider range of methods have been developed to show the manager how to distribute the decision making processes. Distributed decision making makes the enterprize more manageable and adaptive so that it can rapidly respond to changing pressures in the market place. These tools are also highly descriptive and intuitive. They all rely on the use of highly distributed Information Technology (IT) systems. IT encompasses hardware and software for data storage, retrieval and high bandwidth communication. IT as a field grew out of the computer revolution and is responsible for the most profound changes to affect our industry since mass production became common early last century. IT tools allow frequent monitoring. They can be used to track transactions, inventory and human resource allocation in real time. Use of such tools has led to enormous savings and improved efficiency. A well implemented IT system simplifies data entry, unifies information representations, keeps
track of transactions, smoothes data flow and facilitates information exchange. Business systems, under various guises like lean manufacturing, just in time production, Total Quality Management (TQM), pull scheduling etcetera, grew out of the idea that globalization, increased competition and a changing customer base should motivate low inventory, high flexibility and low wastage. The classical example is the Toyota manufacturing system which developed into 6sigma. These techniques are now implemented and adapted for use in almost all major US manufacturing industries and contribute towards faster turn around, lower inventory and improved cost structure.

IT and business systems do not as a rule rely on the use of models that have predictive capability. Process Systems Engineering (PSE) on the other hand provides the models and numerical methods needed to optimize products, processes and their operation. It is therefore natural to attempt to interface PSE, business systems and information technology. The convergence of the three areas has proven to be difficult for a variety of reasons. One important reason is that the development of PSE followed a different trajectory than that of IT and business systems like 6 sigma. Computer and information networks evolved and adapted so that they could be applied to highly distributed systems. These tools are therefore amenable for direct application for decision support in large organizational structures with little centralized governance. Most of the tools developed for analysis and design in the process systems area are not well adapted for distributed decision making since they rely on hardware and software architectures developed in the 1960ies and 1970ies. PSE tools are having a considerably smaller impact than what one might expect at the business level in the global chemical industry.

An organization that consists of physical and computational devices that act in concert and are coordinated over a high bandwidth communication network is called an adaptive enterprize. The adaptive enterprize is built around the idea that decision making processes are distributed and organized in a manner that allows the system to be flexible and agile so that it can respond and adapt to changing needs. In order to extend the reach of process systems engineering and make it truly useful for decision
support in an adaptive enterprize environment it is imperative therefore to develop methods that take advantage of distributed computing and high bandwidth communication. We need to address issues of concern to the business leader and communicate solutions in a language that he can appreciate. We also must develop new ways to think about the systems we study.

The paper is organized as follows. In the next section we describe very briefly the main components of complex adaptive systems theory. Business is about making money so we review the main components of financial analysis to set the stage for the next section, which is about our approach to modelling the distributed enterprize. In the following chapter we review the dynamics of competition, the motivation for lean manufacture and the current focus on quality and cost. In the following section we review the rationale for developing new processes and products and how to enter into new businesses through mergers and acquisitions. We then give overviews of the problems of how to manage risk. We finish by explaining how the mission of the university has changed and how this gives new opportunities for cooperative university-industry research.

Complex Adaptive Systems

An adaptive system is a system that is able to adapt to its environment and changing needs. Most biological organisms are adaptive, human beings are adaptive and business systems are to an increasing degree adaptive as well. An adaptive system is furthermore said to be complex if it has the ability to learn new behaviors and develop in ways that do not simply depend on its internal working principles, but also on the stimuli it receives through sensors from the environment. The environment is in a constant state of flux and may itself be adaptive. This means that a system consisting of two or more adaptive organisms that adapt to each other can evolve in an ever changing manner (Figure 1).

Complex adaptive systems are often made out of adaptive devices that in and of themselves may be quite simple. However, complex behavior emerges as these are connected together to form large, integrated networks where the adaptive devices compete or cooperate to meet distributed design goals. The future behavior and structure of such objects is not completely predictable and it may display sensitivity with respect to the choice of initial conditions. However, the very nature and complexity of the adaptive organism makes it able to behave in a good manner in unforeseen circumstances. The classical example of an adaptive system is an ant colony which grows and adapts to the changing environment. Another good example is the development of internet based, industrial computer control systems.

The architecture for industrial control systems during the 1960ies and 1970'ies was built around what was then a powerful centralized computer where all information was stored and calculations took place in a sequential, pipeline machine as shown in Figure 2. The computer, often built around hardware and software developed by DEC (the VAX series), would interface directly with field devices, like sensors and actuators, through the AD/DA interface. During the 1980'ies this picture changed quite dramatically. The computer resources became more segmented as Honeywell, Fisher and other process control system vendors developed distributed control systems. More and more computers were linked together using a data highway based on a Token Ring concept. Deterministic message passing ensured high reliability and fault tolerance and the "smarts" of the system was now distributed throughout the plant as shown in Figure 3. These networks however, although highly distributed, were not adaptive since they were based on proprietary technology. It became increasingly expensive and difficult to maintain and configure such networks as their size and complexity grew. Similar difficulties emerged in the software industries and hardware and software vendors were forced to develop open architecture products.

The major new development, which took place during the last decade, was the development of the non-deterministic, very high bandwidth communication system, which forms the basis for modern computer networking through ethernet and wireless. This architecture for process operation and control, shown in Figure 4, is highly flexible and adaptable. Computers, software, and field devices can be added, modified, exchanged and integrated into the system in a "plug and play" manner.
Figure 1: In a complex adaptive system there are two or more adaptive devices that communicate, forming a larger adaptive unit.
Figure 2: The centralized computer architecture was built around the idea of a powerful centralized processing unit which would be capable of handling most process control and process operation related calculations.
Figure 3: The hardware architecture of the distributed control systems was based on the idea of linking several centralized control systems together using industrial networking technology for message passing.
Figure 4: The hardware architecture of the modern distributed control systems is based on asynchronous communication using internet technology and allows for rapid re-configuration, frequent updates and changes as the technology develops.
The "smarts" of the system is now highly distributed and more and more calculations take place at the device level. For example, instruments and actuators come with their own, onboard computer systems with models for calibration, error checking, linearization, model prediction and fault detection. What is emerging are distributed devices as shown for example in Figure 5. Adaptability and flexibility are further enhanced by the development of new and cheaper sensors for accurate process state measurement and information sharing over the internet. All process information is in principle available anywhere and the need for a physical presence of trained personnel has been significantly reduced. It is important to note, however, that these hardware and software systems are highly distributed and are not coordinated by a central computing facility.

The study of complex adaptive systems has taken on importance in many fields, including biological systems, fluid mechanics, astronomy and geo-sciences, computer science and business systems. In fact, as we illustrate below, the dynamics of large enterprizes can be modelled using very simple ideas from electrical networks and analog computing. The PSE community has recently begun to study how such network systems behave and more importantly how we can modify their structure and organization so that the integrated network can respond in an agile manner to changing needs of the business and remain competitive in the long run.
Business Finance

The instantaneous cash performance of a business unit can be evaluated by tracking the flow and storage of assets and assigning an internal cost to each and every activity. Assets are characterized by the triplet A = {v, c, z}. The function v represents the amount stored at a given location, c represents the value of one unit of a specific asset and finally, z represents the specification. The specification identifies an asset by its name, SKU-number, chemical composition, patent number or some other identifier which should be unique. All these variables are functions of time and geographical coordinates. In financial terms the number

b = v c

is the book value of the holdings of a specific asset at a specific location and the number

V = \sum_{\text{assets}} v c

represents the book value of the company.

There has recently been a trend towards activity based analysis [2]. In this approach we attach a specific value to the benefit generated by an activity. The activity may represent a production line, business segment, movement of an asset or any form of service that generates value. The value of the activity depends on the physical process in question and how it enters into the value chain. Activities can therefore be characterized by the triple {f, x, w}. The scalar f gives the activity rate, x gives the specification of the activity in question and finally w represents the value added. It is now important to note that assets and activities are related in a continuous manner so that

w = c_s - c_p

where c_p is the value of the asset before the activity takes place and c_s is the value of the asset after the activity is completed. We view c_p as the internal purchase price and c_s as the internal sales price per unit normalized to a common standard, for example mass or energy. The internal cost of the activity can be expressed by the formula

\mathcal{E} = f w    (1)

We choose the sign so that asset flow is positive in the direction of increasing value and negative in the direction of decreasing added value. The internal cost is then always positive but the added value from performing an activity may be negative. For example, the added value in going from destination a to destination b is the negative of the added value of going from b to a, indicating that cyclical activity does not generate value, it only adds costs. A more detailed assessment is given by Taylor and Brunt [8]. The instantaneous business performance is evaluated by taking the difference between revenues R and expenditures E so that

T = R - E,

where the expenditures are the sum of all activity costs so that

E = \sum_{\text{activities}} f w
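As a concrete reading of this book-keeping, the following minimal Python sketch computes the book value V, the activity costs f w and the instantaneous performance T = R - E. All record names and numbers below are invented for illustration and are not data from the paper.

# Sketch of the asset/activity accounting defined above (illustrative values only).
# An asset is (v, c, z): amount stored, unit value, specification.
# An activity is (f, x, w): rate, specification, value added per unit.

assets = [
    {"z": "soda-ash",   "v": 120.0, "c": 0.18},   # amount on site and assumed unit value
    {"z": "flat-glass", "v": 40.0,  "c": 2.10},
]
activities = [
    {"x": "melting", "f": 15.0, "c_p": 0.18, "c_s": 0.55},   # assumed internal purchase/sale prices
    {"x": "cutting", "f": 14.0, "c_p": 0.55, "c_s": 0.60},
]

# Book value of the company: V = sum over assets of v*c.
V = sum(a["v"] * a["c"] for a in assets)

# Value added per unit of each activity, w = c_s - c_p, and total expenditure E = sum of f*w.
for act in activities:
    act["w"] = act["c_s"] - act["c_p"]
E = sum(act["f"] * act["w"] for act in activities)

# Instantaneous performance T = R - E for an assumed revenue rate R.
R = 12.0
T = R - E
print(f"book value V = {V:.2f}, expenditure E = {E:.2f}, performance T = {T:.2f}")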
Figure 5: The virtual sensor module.
However, the rate of profit T varies considerably from one instant to the next and it also ignores the value of asset accumulation. To overcome "jumpiness" it is common to integrate over a period of time and define the profit P during the period {t_0, t_1} so that

P = \int_{t_0}^{t_1} T \, dt

For example, if {t_0, t_1} represents a quarter then P gives the quarterly net income from operations. A range of similar measures including profit margin, operating profit and ratios like Return on Assets, ROI, ROCE and Price Earnings (P/E) ratio are also used to evaluate business performance [2]. The book value of the company therefore evolves according to the formula

V(t) = \int_{t-\tau}^{t} T \, dt + V(t - \tau)

These numbers represent faithfully the value of the company assets and are used for tax and other reporting purposes. However, as argued in [2], focusing on accounting measures alone gives a myopic view of the value of a business since it does not include the time value of accumulated assets and the future business performance. In fact, savvy business leaders are more concerned about the future than the past.

There has therefore been a tendency to move away from the use of measures based purely on profit and past performance towards the use of measures that include value creation. For example, the financier Warren Buffett [1] argues that the performance should be evaluated using an underlying and potentially subjective measure he calls the "intrinsic value" of the company. The intrinsic value is defined to be the "discounted value of cash that can be taken out of a business during its remaining life [1]." This measure is expressed in mathematical terms by the conditional expectation

\Pi(t) = \int_{t}^{t+\tau} a^{s} E\{T(t+s) \mid F(t)\} \, ds + V(t), \quad s > 0    (2)

The number 0 < a < 1 is the discount factor and the filtration F(t) represents all information available at time t that is used to estimate the future cash flow [3]. The book value of the company at time t is clearly F(t) measurable, in fact it has to be according to the laws that govern business conduct. However, the remainder of the formula relates to the future and captures the best use of models and forecasts that the business leader has available to make business decisions.

The number \Pi(t) clearly depends on a large number of factors inside and outside the reach of control. The forecast also needs to be adjusted and updated as the context changes. This is where
PSE comes into the picture in full force.
Process systems engineering provides the integrated approach that allows us to combine predictive modelling, efficient solution procedures, optimization methods, distributed control and statistical techniques needed to evaluate and compare different alternatives as expressed by equation 2.
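One hedged way to read equation (2) is as a discounted, probability-weighted forecast of future cash flow added to the current book value. The Python sketch below estimates such a quantity by crude Monte Carlo over a hypothetical cash-flow scenario model; the horizon, discount factor and scenario model are assumptions made for illustration and do not represent the authors' forecasting method.

import random

def intrinsic_value(book_value, cash_flow_model, horizon, a=0.9, n_scenarios=1000):
    """Estimate Pi(t) ~ sum over s of a**s * E[T(t+s) | F(t)], plus V(t), by Monte Carlo.

    cash_flow_model(s) draws one sample of the cash flow s periods ahead; whatever
    information F(t) contains is assumed to be encoded in the model itself.
    """
    total = 0.0
    for s in range(1, horizon + 1):
        expected_T = sum(cash_flow_model(s) for _ in range(n_scenarios)) / n_scenarios
        total += (a ** s) * expected_T
    return total + book_value

# Illustrative scenario model: mean cash flow drifts upward, uncertainty grows with s.
random.seed(0)
model = lambda s: random.gauss(10.0 + 0.5 * s, 2.0 + 0.3 * s)
print(round(intrinsic_value(book_value=100.0, cash_flow_model=model, horizon=8), 2))

Replacing the toy scenario model with predictive process and market models is exactly where the PSE toolset described above would enter.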
However, the emergence of complex networks poses considerable challenges for the research and business community that transcend the mere application of tools that have been developed for optimization and control during the last decades. The centralized (autocratic) approach which brings all information together is likely to fail. What is needed are systematic tools that allow us to integrate predictive models, diverse information written in different formats, different standards and distributed agent based optimization systems that reflect the structure of the adaptive system we seek to control. Early attempts based on genetic algorithms and simulated annealing showed that some problem classes can be handled efficiently using distributed computing.

Remarks:
1. The accuracy of the predictions depends on how much information one has available at a given time. More information is available for company insiders, especially with regards to the development of new technology and negotiation of contracts. Such developments are therefore guarded until they are consummated or protected.
2. The importance of evaluating intrinsic value as accurately as possible cannot be overestimated since it impacts the business strategy and may motivate significant changes in business operation.
3. It is important to consider the fact that we seek robust, rather than optimal, solutions. The main objective of a large corporation is not simply to make the most profit possible in a given period. This may lead to extreme solutions like breaking up the company and selling off all assets. As stated in the introduction, the objective is to make business decisions that allow sustainable development so that the company can prosper and generate value over a long and indefinite period of time. The implied risk of a decision is therefore as important to consider as optimality, maybe more.

The Enterprize as a Distributed Network

There are seven distinct factors that need to be taken into account when we develop an abstract model of an enterprize system in the commodity chemicals industry. These include
1. Topology.
2. Transportation.
3. Shipping and receiving.
4. Activities.
5. Storage and holding.
6. Forecasting and signal processing.
7. Performance evaluation.
In addition one needs to consider intangible assets like legal position, know-how, intellectual property and how well this is protected, customer-supplier relationships and the quality of workforce and management. With the elements above we can represent the enterprize as a graph. A graph is normally characterized by a set of vertices and a set of edges. In light of the discussion above, we find it useful to introduce more structure and represent the topology of the supply chain using a graph with five classes of elements. These represent production, storage, terminals, routing points and activities. Transportation is represented by edges, while the other four elements are represented by vertices. More formally we have the graph

G = (P, S, T, R, H)

Elements p_i \in P, i = 1, ..., n_p represent the production sites where we assemble or disassemble chemical constituents or parts into pre-cursors or products. Elements s_i \in S, i = 1, ..., n_s denote the
storage facilities. Elements t_i \in T, i = 1, ..., n_t denote terminals for receiving and shipping. Elements r_i \in R, i = 1, ..., n_r represent points where material, energy, money and data can be routed in different directions. Finally, h_i \in H, i = 1, ..., n_h denote transportation. An activity is a collection of connected nodes and edges. By working on the basis of a common unit and the fact that cyclical activity does not add value we can write the equations

0 = \sum_{\text{activities}} w    (3)

0 = \sum_{\text{routing point}} f    (4)

The first equation corresponds to Kirchoff's voltage law and the second to the current law. In process modeling the first equation corresponds to the fact that pressures, chemical potentials and temperatures are equal at all interphases¹. One interesting feature of networks that obey Kirchoff's laws is that they have certain topological properties that can be described in terms of Tellegen's theorem of electrical network theory [4]. One consequence of this theory is that the simulated behavior of the network can give optimality and stability.

A graph can be used to represent information, material and service flow of very complex manufacturing processes. The model can represent a single business unit or a group of businesses working together when free terminals are connected so that there is at least one continuous path from every terminal connecting raw material suppliers and customers. All activities, including financial ones, should be part of at least one such path in order to be considered part of the business system. We also need to map the internal activity costs so that we can evaluate the relative merits of different activities as well as the relative merits of parallel activities representing different contracts, technologies and work practices. Once the system dynamics and the cost structure are available we can develop control systems to balance the load so that all system requirements and constraints are met and the activity costs are minimized in a distributed and asynchronous manner without referring to a central organizing structure. The system can now evolve and adapt to changing needs. However, in order to develop a theory for such a complex adaptive system and develop distributed control mechanisms for load balancing we need to address questions relating to stability, transient response and optimality of systems represented by network flow on a graph. Some of these are addressed in classical circuit theory and its extensions to network thermodynamics, a field of more recent origin.

¹ The theory can be extended to systems with discontinuities and shocks when the minimum entropy principle is invoked to select the appropriate characteristic at the shock front.
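A toy encoding of the five-class supply-chain graph and of the balance conditions in equations (3) and (4) might look as follows in Python; the node names, flows and added values are invented for illustration and are not taken from the paper.

# Toy supply-chain graph with the element classes named above: production (P),
# storage (S), terminals (T), routing points (R); transportation edges carry a
# flow f and a value added w per unit.  All names and numbers are invented.

nodes = {"plant1": "P", "tank1": "S", "hub1": "R", "dock1": "T"}
edges = [
    # (from, to, flow f, value added w per unit)
    ("plant1", "tank1", 10.0, 0.30),
    ("tank1",  "hub1",  10.0, 0.05),
    ("hub1",   "dock1",  6.0, 0.10),
    ("hub1",   "dock1",  4.0, 0.10),   # a parallel activity, e.g. a second contract
]

# Equation (4): the net flow through every routing point should balance to zero.
for r in (name for name, kind in nodes.items() if kind == "R"):
    inflow = sum(f for (_, dst, f, _) in edges if dst == r)
    outflow = sum(f for (src, _, f, _) in edges if src == r)
    print(f"routing point {r}: net flow = {inflow - outflow}")

# Equation (3): around a closed cycle the added values sum to zero, because the
# value added going from a to b is minus the value added going from b to a.
cycle = [("a", "b", 0.2), ("b", "a", -0.2)]
print("value added around the cycle =", sum(w for (_, _, w) in cycle))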
Business Systems: The Dynamics of Competition

A number of business systems have been developed under the headings of 6 sigma, lean manufacturing, CFW (continuous flow manufacturing), TQM, Just In Time (JIT) production and cycle time reduction. The idea in all cases is that variations in product quality, wastage, inventory level and overhead should be reduced while the service level should increase so that the customer needs are satisfied. Many of these ideas can be traced back to the developments at Toyota during the latter half of the 20th century [5]. Implementation of such business systems requires a more highly skilled workforce than the mass manufacture that dominated 20th century industrial production.

The basic principles of lean production can be summarized by the following five points
1. Value as defined by the customer.
2. The value stream, which consists of the actions needed to bring the product/service to the customer.
3. The flow and continuity of products and services.
4. The responsiveness (pull) of the customer needs.
5. The pursuit of total quality (perfection, 6 sigma etc.)

These items are covered in depth in 6sigma or related courses at nearly all major US chemical companies.
Point 2 in the list above is especially important and typically illustrated using a diagram of the type shown in Figure 6. More generally we find that the value generation may vary as a function of the throughput rate. There may be a minimum start up cost before any activity can start; beyond this the activity cost may vary in a stepwise manner until at some point the activity saturates. At this level the activity is constraining and represents a bottleneck for further production improvement.

Business systems of the type reviewed above are highly distributed and they can be used effectively to improve quality and reduce cost at every level in the organization. At a deeper level these systems are necessitated by the dynamics of competition. These can be understood at one level by making a comparison with network models. In the first case we consider a non-competitive market where the prices are stabilized by common understanding among the major producers (which is illegal), government regulation or price controls. In the network analogy we can model such a system by using resistors and capacitors where the per unit production cost increases as the production rate increases. There is therefore no incentive to expand the business. The integrated system therefore stabilizes at a point which corresponds to minimum dissipation. This follows by a simple application of the Helmholtz minimum heat theorem. The profit will be distributed throughout the network but the customers suffer by having to pay high prices for services and products.

In a competitive market one or more of the producers will try to expand their business by offering incentives, lowering the per unit cost as the production volumes increase. The minimum heat theorem no longer applies and what was originally a passive system now destabilizes as the market will tend to favor the low cost producer. The other producers will have to try to match the low cost producer in quality and price to stay in business. This in turn leads to lower prices and an advantage for the end user.

Case Study: Glass Manufacture
In the glass process, sand, soda ash and other raw materials enter the batch house where they are mixed according to the given recipe and fed to the furnace (the hot end), where they are melted, refined and transported by fluid flow to the "tin-bath". The glass there spreads out on molten tin and cools until it is viscous enough to be lifted out as a continuous sheet. This sheet is about 2 mm thick and 9 ft wide for production of automotive windshield glass and may be more than 20 mm thick for other applications. Continuous and slow cooling after the tin-bath reduces thermal stress and finally the glass is checked for quality, cut, sorted and packaged. Scrap pieces and rejects are broken up and recycled.

The objectives are to improve yield, maintain quality and stability and reduce product changeover costs. Diverse objectives and system complexity precluded us from simply "wrapping a loop around the entire process" and applying one particular brand of control theory. We therefore used the inventory based control approach to develop the control system structure. In this approach we write material and energy balances for each sub-activity so that

dv/dt = r + p

where v represents, as explained above, the amount of stored assets (energy and materials), r = \sum_i f_i represents the net flow of assets to the control volume and p the net rate of production. We used a variety of models to find where to place new sensors, how to decompose the plant into smaller sub-sections, structure control systems and design estimators for unmeasured variables. The control system evolved over time and continues to be refined. It now includes nonlinear and constrained predictive control, Kalman filters, adaptation, stochastic control, vision systems with pattern recognition, and discrete feedback logic. The control loops are implemented on a number of different computer platforms, integrated via local area network and coordinated with the business system through the internet. The system has been implemented on most PPG glass plants and contributes towards significant yield improvements and improving the bottom line.
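The inventory balance dv/dt = r + p that structures the control system can be illustrated with a small discrete-time simulation in which a proportional controller manipulates the inflow to hold the stored amount near a target. The gain, setpoint and disturbance below are assumptions for illustration and do not describe the PPG implementation.

# Discrete-time sketch of inventory-based control, dv/dt = r + p, where r is the
# net flow of assets to the control volume and p the net rate of production.
# Gain, setpoint and disturbance are assumed for illustration.

dt = 1.0
v = 50.0          # stored amount (material or energy) in the sub-section
v_sp = 100.0      # inventory setpoint
K = 0.2           # proportional gain on the manipulated inflow

for k in range(25):
    p = -2.0                     # net consumption inside the section (disturbance)
    inflow = K * (v_sp - v)      # manipulated feed, simple proportional control
    outflow = 0.5                # flow passed on to the next sub-section
    r = inflow - outflow         # net flow to the control volume
    v = v + dt * (r + p)         # inventory balance, explicit Euler step
print(f"inventory after 25 steps: {v:.1f} (setpoint {v_sp})")

Note that this proportional-only loop settles with an offset below the setpoint; an integral term, or the estimation and predictive layers mentioned above, would be needed to remove it.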
New Product and Process Technology

New process and product development often takes place inside the technical organization. One example is the development of the HFC-134 series
Figure 6: Implementation of the value stream mapping tool.
Figure 7: Implementation tool for project management.
Figure 8: The control system for a large glass furnace was developed by dividing the process into subsections and representing the storage and flow of energy and materials from one sub-section to the next. The furnace itself was divided into five sub-sections as indicated in the Figure.
of refrigerants by duPont to respond to the environmental challenge posed by the growing ozone hole. However, a large company may also be in a position to buy technology and expand its sphere of interest by purchasing new production capacity and expanding into areas where it has little expertise.

Different expansion projects can be represented as shown in Figure 9. Each project is represented by a larger circle whose area represents the intrinsic value of the project as evaluated by equation (2) and the area of the smaller circle represents the NPV of the investment needed to carry out the project. The area of the shaded doughnut represents the expected return relative to investing in monetary instruments. The coordinates are in this case labelled as old/new market and old/new product, an indication of the strategic impact of the investment decision. These coordinates can be replaced by old/new process or low/high risk. The representation illustrated here gives a quick overview of different strategic choices.

A tool which can be refined using process systems engineering is the cost-curve, used to motivate buying, selling or modernizing production equipment and for plant acquisitions. Such a curve is shown in Figure 10. Each bar represents a production facility. The horizontal axis represents the production volume of a plant site. The total adds up to the world production if a global market is considered. The vertical axis represents the production cost per unit produced. The lower the cost the cheaper the production. Suppose for example that a company owns plant sites with production characteristics indicated by the darker shade. It might consider divesting the high cost site, investing in the large site in an effort to reduce cost and buying capacity in the geographical areas corresponding to the lower cost producers at the left of the diagram.

Case Study: Glass Production

The development of the Pilkington glass process gives a good example of the development of disruptive process technology. Glass is a very old material, in fact it is one of the oldest manufactured materials. Exquisite examples of highly refined and masterfully shaped glass artifacts have been discovered that go all the way back to the Etruscans. The modern, continuous float glass process very briefly described above is of relatively new origin, however. Each process step represented a major innovation and resulted in higher throughput, lower cost and higher quality. The last innovation was the development of the tin-bath process by the Pilkington brothers in the 1950'ies. The Pilkington process allowed the production of distortion free glass without the need for time consuming and expensive polishing. It led to significant increases in throughput, lower cost and easier production. The R&D that went into the development of the tin-bath process was far from simple and cheap. It took over a decade before the first float glass was produced. However, the development led to a tripling of productivity through a series of process improvements. The length of the production line was reduced by a factor of two, the use of costly abrasives was eliminated, the energy costs were reduced by 50% and the labor costs by 80%. The old method of making glass was totally eliminated in a matter of a few years.

Case Study: Solar Grade Silicon Production

Silicon is prized for its semi-conductor properties, its abilities to form polymer chains, and its excellent alloying properties with materials like aluminum and steel. A by-product of the process, called microsilica, is used as a binder to form high strength cement. However, these commodity markets are fairly stable and even the market for electronics grade silicon (42,000 MT in 2000) is severely depressed due to the slump in the micro-electronics industry and the trend towards smaller and smaller circuits that use less silicon per transistor.

The market for photo-voltaics grade silicon is growing at a remarkably robust and stable rate of 25-30% per year. This market is in part supported by government subsidies in countries like Japan and Germany, the energy needs of highly distributed wireless communication systems and third world countries with unreliable and poorly developed electrical grids. The supply chain leading to the manufacture of photo-voltaic grade silicon is highly complex however and relies heavily on the use of reject materials from the micro-electronics grade industry. This source of silicon is rapidly drying up due to the rapid growth, and new sources and production technology are needed to support the growing solar cell industry. Many companies are engaged in
Figure 9: The figure shows the industry wide cost curve for silicon production units.
Figure 10: The figure shows the industry wide cost curve for silicon production units.
developing simplified and cheap routes for developing solar grade material. KPMG estimates that if such a material is produced at a cost of around $15 per kg then large scale production methods will reduce the cost of electricity generated by solar panels sufficiently to compete with coal and oil-based electricity generating plants in countries like Japan and Germany.

Managing Risk

Risk can be viewed as movement away from the current state in the three areas of new product, new market and new process technology as shown in Figure 11 [9]. Companies were very quick in adopting the concept of lean manufacture since the risk is low and the pay-off directly measurable using classical accounting tools. Risk plays a more important role in new process technology, markets or products; however the payoff is greater in these areas and businesses need from time to time to enter new business areas to ensure a reasonable rate of capital investment. Staying in the same line of business, markets and products for a long period of time generally leads to lower margins and reduced value of capital investment. Risk in a technical program can be assessed by the formula [9]

R = p · E

where p is the probability of failure and E is expenditure. This expression can be used to compare different technologies and programs and evaluate their relative risks. Another approach is to use a Pareto chart. The most common way to manage risk is to break large programs, with a substantial cost and pay-off, into smaller sub-programs with specific milestones. This approach distributes the risk and allows intermediate assessment of go and no-go decision criteria. In the area of process control it is easier to motivate and implement a distributed control system where incremental gains can be realized along the way. The development effort behind a centralized control system is larger and more expensive.
Figure 11: The figure shows three directions in the space of risks.
The pay-off is offered at the end. The inherent risk is higher and the maintenance more cumbersome and costly. There are other ways to minimize risk, including developing partnerships and joint ventures, engaging consultants, out-sourcing and buying a company that has the capabilities needed. All these strategies lessen the impact of uncertainty. Significant risks are associated with the market outlook, exchange rates, and legal and international trade agreements or lack thereof.

Case Study: Carbothermic Aluminum
ALCOA, ELKEM and Carnegie Mellon have for the past four years been working on the development of a new process for making aluminum. The new process is based on carbothermic reduction rather than electrolysis as is the case with the classical Hall-Heroult process. The new process is estimated to reduce emission of greenhouse gases. The capital and operating costs are estimated to be about 30% smaller than current technology. However, the new technology is expensive to develop and test since it involves extremely high temperatures in the smelting stages. One of the stages is expected to operate at 2050 °C. A simple way to visualize the risk associated with the development of the carbothermic aluminum process is to express sensitivities of the intrinsic value through a "tornado diagram" as shown in Figure 12. Each bar represents the variation of NPV as a function of a set of chosen sensitivity parameters. The calculations that go into generating such a diagram are clearly quite extensive since each point
in principle represents a specific plant design optimized and controlled to the given conditions. In this case the uncertainty in the London Metal Exchange (LME) price of aluminum dominates. This illustrates that market forecasts and market uncertainty dominate over technology in the area of finance.
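The calculation summarized by a tornado diagram can be sketched as a one-at-a-time sweep of each uncertain parameter between low and high estimates while the others are held at their base case values. The NPV model and the parameter ranges below are placeholders for illustration, not the ALCOA/ELKEM project data.

# One-at-a-time sensitivity sweep of the kind summarized in a tornado diagram.
# The NPV model and the parameter ranges are placeholders for illustration.

def npv(p, rate=0.15, years=10):
    """Crude NPV: yearly margin = volume * (price - unit_cost), minus capital."""
    yearly = p["volume"] * (p["price"] - p["unit_cost"])
    return sum(yearly / (1 + rate) ** t for t in range(1, years + 1)) - p["capital"]

base = {"price": 1.4, "unit_cost": 0.9, "volume": 100.0, "capital": 250.0}
ranges = {"price": (1.1, 1.7), "unit_cost": (0.8, 1.1), "capital": (200.0, 320.0)}

bars = []
for name, (lo, hi) in ranges.items():
    values = [npv(dict(base, **{name: value})) for value in (lo, hi)]
    bars.append((name, min(values), max(values)))

# Sort by swing so the widest bar comes first, as in a tornado diagram.
for name, low, high in sorted(bars, key=lambda b: b[2] - b[1], reverse=True):
    print(f"{name:10s} NPV range [{low:8.1f}, {high:8.1f}]")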
Our experience is that managers trust their technologists and their ability to solve difficult technical problems, and believe that they are able to design engineering systems that work well. They find it difficult to justify why and how technical solutions can be used to generate value. Finally, it is easier for the technologist to learn to speak the language of finance than it is for the Chief Financial Officer to learn how to evaluate an engineering solution based on its technical merits. An objective of PSE should therefore be to design better interfaces with business finance.
The Drive Towards Excellence
Globalization of business and increased competition have led to the development of business systems that emphasize quality and the needs of the customer. Businesses that do not provide an acceptable service level at competitive prices do not survive in the long run. This trend has led to re-structuring and significant shifts in the manufacturing industry. Similar trends have emerged in the universities.
Figure 12: The tornado diagram represents the sensitivity of the NPV (at 15%, in $000) with respect to variations in the expected financial parameters relative to the base case value: LME price of aluminum ($/lb), cost of power ($/kWh), capital cost ($MM), coke price ($/MT), pilot development cost ($000), hourly labor (hr/MT), alumina (MT/MT) and basic R&D cost ($000).
A significant shift in the mission and philosophy of higher education and university R&D has taken place. The classical idea of the university as a national, cultural institution has all but disappeared [6]. Universities no longer primarily serve national interests, except in narrowly defined areas like defence and homeland security. It has become very difficult to support theoretical and knowledge-based research which cannot be directly linked to economic objectives that are realized in the short run. Most of the programs of the US Government are mission oriented and grow out of perceived industry and society needs. Universities therefore have become subject to the same market forces and competitive pressures as those that characterize industrial markets. They have become global and compete for the best students, star professors and scarce resources. These trends have led to a re-alignment of research interests and the establishment of centers of excellence that to an increasing degree support industry as a whole. This is especially apparent in the area of PSE, where such centers proliferate and grow. We find such centers at Imperial College, Kyoto University, Purdue, Carnegie Mellon, and in Aachen and Lausanne. Larger, joint programs are also being formed. Examples include a joint program between the University of Wisconsin and the University of Texas at Austin, and another one between UMass and UCSB. The centers provide service by leveraging industry and government research. The rationale for their existence is that university research can focus on generality, depth and the education of high-level technologists who
will contribute to value creation by developing new process technology, new products and new markets. However, there are many hurdles to overcome, and diverging interests put demands on technology management, funding structure and communication that university and industry R&D must prepare to meet. A common language and framework must be established so that issues of concern to the company, like technology protection, technology ownership, licensing and exclusive rights of use, have been dealt with. Financial implications must be analyzed. Academic concerns, like the right to publish results of general interest, degrees of freedom to pursue fundamental problems, opportunities to fulfill educational objectives and, most importantly, the opportunity to write MS and PhD theses, must also be taken into account. Publication challenges security, as sensitive data may become distributed more widely. Our experience, however, is that diverging objectives can be overcome and that win-win situations are becoming easier to establish as university R&D becomes more market oriented and industry R&D has become more decentralized, downsized and focused on short-term business needs.2

2 There are some signs that this discouraging trend is reversing and that some larger companies are in the process of re-enforcing their centralized R&D organizations. Examples that come to mind include GE and IBM.
Summary and Conclusions
Distributed computing has emerged as the new paradigm for modelling complex financial and engineering systems. Complex adaptive systems theory is slowly emerging as the paradigm for understanding how these systems can be designed and how they evolve over time and adapt to new and unforeseen needs. Distributed computing arises naturally in several ways. The systems we want to model are distributed - we can no longer model process segments, business units and enterprises in a meaningful way without considering their geographical coordinates and how they are integrated into the global market. Information, physical infrastructure and human resources are distributed across the globe, and the computer networks we use for information exchange are also distributed. Moreover, the topology of the network changes and adapts rapidly as new needs arise and old sub-systems are exchanged with newer ones, new products and processes are brought on line, and new businesses are added or old ones closed. Adaptive networks integrate physical devices, information systems, finance and personnel, and they must be designed to operate in a stable and close to optimal manner.

PSE has so far focused most of its efforts on scientific and large-scale computing using modelling and optimization techniques and concepts based on methodologies, computer hardware and software architectures established in the 1960s and 1970s. What is needed now is the development of process systems engineering tools suitable for distributed decision making and network computing.

This will not entail a complete rethinking of the field or a need to abandon or change research directions that, after all, have been extremely successful. However, there is a need to broaden the scope of PSE and focus on the structural properties and architectures of complex adaptive systems. What is needed is a coherent and systematic theory that allows us to design, analyze and operate highly integrated and distributed systems of semi-independent devices and agents. Progress in this direction will allow the convergence of IT, business systems and PSE. A conflux of these three areas promises to change the way business is being conducted and how we think about PSE as a field. New paradigms will emerge. These will include distributed networks of computational devices adapted to the physical system through extensive sensory systems. These networks will be capable of modelling the system, and they will be able to generate forecasts that can be used for resource allocation and strategy development.

References
[1] Buffett, W. (2002), "Berkshire Hathaway Shareholder Letters", www.berkshirehathaway.com/letters.html
[2] Helfert, E. A. (2001), Financial Analysis Tools and Techniques, McGraw-Hill, New York.
[3] Overhaus, M., A. Ferraris, T. Knudsen, R. Milward, L. Nguyen-Ngoc and G. Schindlmayr (2002), Equity Derivatives, Wiley Finance, John Wiley & Sons, New York.
[4] Penfield, P., R. Spence and S. Duinker (1970), Tellegen's Theorem and Electrical Networks, Research Monograph 58, The MIT Press, Cambridge, MA.
[5] Monden, Y. (1992), Toyota Production System: An Integrated Approach.
[6] Readings, B. (1996), The University in Ruins, Harvard University Press, Cambridge, MA.
[7] Stern, C. W. and G. Stalk, Jr. (1998), Perspectives on Strategy from the Boston Consulting Group, John Wiley & Sons, New York.
[8] Taylor, D. and D. Brunt (2000), Manufacturing Operations and Supply Chain Management: The Lean Approach, Thomson Learning, London.
[9] Steele, L. W. (1988), Managing Technology: The Strategic View, McGraw-Hill, New York.
Challenges in the New Millennium: Product Discovery and Design, Enterprise and Supply Chain Optimization, Global Life Cycle Assessment Ignacio E. Grossmann Department of Chemical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A.
Abstract. This paper first provides an overview of the financial state of the process industry, major issues it currently faces, and job placement of chemical engineers in the U.S. These facts combined with an expanded role of Process Systems Engineering, are used to argue that to support the "value preservation" and "value growth" industry three major future research challenges need to be addressed: Product Discovery and Design, Enterprise and Supply Chain Optimization, and Global Life Cycle Assessment. We provide a brief review of the progress that has been made in these areas, as well as the supporting methods and tools for tackling these problems. Finally, we provide some concluding remarks.
Keywords: Process Systems Engineering, Process and Product Design, Supply Chain Optimization, Life Cycle Analysis. 1. INTRODUCTION When Professors Chen and Westerberg invited me to give a keynote lecture on research challenges in the new millennium at the PSE2003 meeting I was rather hesitant to accept their invitation since I was unsure whether I would be in a position to contribute to such an imposing topic. However, after realizing I did not have to provide a research agenda for the entire next millennium, but at best for the next few years, what also made me decide to accept were three major facts. The first is that the future of Process Systems Engineering (PSE) has been very much in my mind through an AIChE perspectives article that I coauthored with Art Westerberg (Grossmann and Westerberg, 2000), and through my involvement in a committee of the National Research Council, which is in the final stages of putting together the report "Beyond the Molecular Frontier: Challenges in Chemistry and Chemical Engineering in the 21 st Century." Second, having been department Head at Carnegie Mellon for the last eight and one half years, my interactions with employers and advisory boards has given me a useful perspective on the educational and research needs of the chemical industry. Third, I have had the opportunity to be involved in the Council for Chemical Research, and in the AIChE Journal as an associate editor. These activities, as well as the headship at Carnegie Mellon, have helped me to gain a broader appreciation of our profession. Therefore, it is in this context that I would like to take the opportunity to speculate about future research challenges in Process Systems Engineering.
The paper is organized as follows. We first define the landscape in industry by presenting several financial facts, economic and social challenges, and job placement data in the process industry. Next we discuss the major trends in the industry, as well as the expanding role of Process Systems Engineering. Finally, we discuss the future challenges, identifying Product Discovery and Design, Enterprise and Supply Chain Optimization, and Global Life Cycle Assessment as major themes for future research.

2. LANDSCAPE OF THE PROCESS INDUSTRY
Since Process Systems Engineering has as a major objective to develop new methods and tools that allow industry to meet its needs by tying science to engineering, it is important first to identify the major facts and trends in industry that motivate the challenges for future research in this area. In this section we consider first several financial facts about the chemical process industry. We then discuss some of the major economic and social issues that are being faced by the industry, and lastly we present some data for placement of chemical engineers. We restrict the data to the U.S. industry for the sake of consistency. 2.1 Financial facts. While it has become trendy over the last few years to question the future of chemical engineering and the process industry, it is important to note that the latter still remains a major sector of the economy. As an example, Chemical Engineering News reported in its June 24 issue of 2002 (pp.44-57) that the total revenues of chemicals in the U.S. in 2001 amounted to $447 billion. The breakdown by product types in billions is shown in Table 1. If we expand the process industry by adding the petroleum industry, the revenues reported by major U.S. oil companies in 2001 were $595 billion. Thus, between chemicals and petroleum, the total revenue in the U.S. is close to $1,000 billion per year. Table 1. Revenues of chemicals in the U.S. in 2001 (billions) Basic chemicals $146.8 Pharmaceuticals 119.9 Specialty chemicals 108.6 Consumer products 48.4 Crop protection 13.7 Fertilizers 10.4
Expenditures in capital spending of the top 25 U.S. chemical producers (i.e. basic and specialty chemicals) were $7.074 billions in 2001, which is considerably lower than the $9.333 billion in 1998. The R&D expenditures were $4.798 billions in 2001 versus $5.023 in 1998. In contrast, the pharmaceutical industry saw their R&D expenditures increase from $15.2 billion in 1998 to $20.9 billion in 2001. It is also interesting to note that in 2001 one in 12 researchers in industry worked for a chemical or pharmaceutical company, while the expenditure of R&D in the chemical sector was about 10% among all industrial sectors.
To provide another perspective, the revenue in 2001 of major companies in the process industry is shown in Table 2.

Table 2. Revenues of major U.S. chemical companies in 2001 (billions)
ExxonMobil              $191.6
ChevronTexaco             99.7
Merck                     47.7
Procter & Gamble          39.2
Johnson & Johnson         33.0
Pfizer                    32.2
Dow                       27.8
DuPont                    26.8
Bristol-Myers Squibb      21.7
Amgen                      3.5
Genentech                  1.7

From the figures in this table it is clear that the petroleum companies have the largest revenues, followed by pharmaceuticals, consumer products and chemical companies. Biotechnology companies are last with relatively small revenues. In terms of profitability and approximate return, however, the order is first biotechnology (20-30%), then pharmaceutical (15-20%), petroleum (6-10%) and finally chemical companies (5-
8%). 2.2 Economic and social issues. The chemical process industry faces very important economic and social issues (Siirola, 2000). Globalization of the industry has opened new markets. While potentially this can help to increase the standard of living throughout the world, globalization has also resulted in growing worldwide competition. Furthermore, the introduction of e-commerce is producing greater market efficiencies, while at the same time greatly reducing the profit margins. Added to these challenges are increased investor demands for predictable earnings growth despite the cyclical behavior inherent in most of the chemical industry, which tends to be capital intensive. Socially, sustainability and protection of the environment will become even more important challenges for the process industries. Many of the raw materials used, especially those derived from oil, gas, and some plants and animals have been, and in some cases continue to be, depleted at rates either large compared to known reserves, or faster than replenishment. Also, by the very nature of chemistry, there are always contaminants in the raw materials, incompletely converted raw materials, unavoidable byproducts, or spent catalysts and solvents that produce waste. These challenges also apply to the production of the energy from the fuels produced by or consumed by the processing industries. Another concern that has recently received great attention is the potential detrimental effects of carbon dioxide emissions to the atmosphere. Recent estimates indicate that the level of carbon dioxide in the atmosphere has increased by a third since the beginning of the industrial age, and that it currently contributes about 73% to the potential for global wanning. Finally, another concern is the management of water, which is expected to become a major problem in this century. Almost all chemical manufacturers in the U.S., and increasingly world-wide, now subscribe to a program called Responsible Care, a pledge by the manufacturers to make only products
31 that are harmless to the environment and to its living occupants, and by processes that are also environmentally and biologically benign. Closely related to the environmental challenges are the energy challenges. Currently about 85-90% of the world's energy is obtained by burning fossil fuels (petroleum, natural gas, and coal), but this must change at some point. Regarding alternative energy sources for the process industry, better ways need to be devised to use solar and wind energy to effectively convert them to electricity. Combustion needs to be eventually replaced by fuel cell technology, and the safe and economic use of hydrogen needs to be made a reality. There are also important future implications for petroleum companies if rechargeable batteries become practical in electric motor vehicles, obviating the need of gasoline engines. Finally, the industry also has to respond to diseases and poverty, particularly in the developing world. For instance, a major challenge is the development and manufacture of low cost drugs to combat diseases such as AIDS. 2.3 Placement of chemical engineers. Another angle that is important to analyze is the change in the job placement of graduates in chemical engineering, which has been a result of changes in the industry. Twenty years ago most graduating students would join petroleum and chemical companies. Nowadays this has greatly changed. Figs. 1(a) and 1(b) which are a result of a survey conducted by AIChE, show the distribution of jobs offered to graduates with B.S. and Ph.D. degrees in 2001. The data is for about 60% of the graduating students in each case. At the B.S. level the remaining 40% works for the government (2%), pursues a graduate or professional degree (12%), is unemployed (3.5%), or their employment status is unknown (20%). At the Ph.D. level the other 40% works for the government (0.8%), takes on a faculty position (17%), does postdoctoral work (13%), is unemployed (2.8%), or their employment status is unknown (6%).
What is striking from Fig. 1 is the diversity of industries that employ chemical engineers. At the B.S. level chemicals, fuels and food/consumer product companies hired almost 50% of the students. Electronics hired as many as the fuels companies (16%), while the share of biotechnology and pharmaceutical, and engineering services and consulting, was close to 10% for each. At the Ph.D. level the trends are similar, although chemicals, fuels and consumer products show a share of only 35%, which is only slightly above electronics at 30%. Also biotechnology and pharmaceuticals has a 16% share. To appreciate the significant changes that have taken place over the last ten years in the placement of chemical engineers, the percent changes at the B.S. and Ph.D. levels in the four major sectors is shown in Table 3. There is a striking decrease in chemicals, and increase in electronics. Bio/pharma also shows growth but at half the rate of electronics. The drop in fuels is also noticeable, but not as dramatic as in chemicals. Food and consumer products is fairly constant.
Fig. 1. Distribution of 2000-01 industry placement of (a) B.S. and (b) Ph.D. graduates in the U.S.
Table 3. Change in percentage placement of graduates in the U.S.
                        B.S. 91   B.S. 01   Ph.D. 91   Ph.D. 01
Chemicals                 43.7      23.3      46.0       21.3
Fuels                     21.2      15.7      15.3       10.6
Food/Consumer Prods.       7.2      10.6       5.5        4.3
Electronics                2.4      15.9       4.8       29.5
Bio/Pharma                 3.1       9.3       4.0       15.9
3. DISCUSSION OF TRENDS IN INDUSTRY The material that we presented in section 2 could be the subject of a separate article. Our intent here is to use these data and issues as a basis to motivate the future research agenda in Process Systems Engineering in terms of three major themes: Product Discovery and Design, Enterprise and Supply Chain Optimization, and Global Life Cycle Assessment. The data of the previous section clearly show that although the traditional chemical industry (i.e. chemicals, petroleum, consumer products) has greatly shrunk in terms of employment and R&D expenditures, its revenues are still very large (see Tables 1 and 2). The data also show that the biotechnology and pharmaceutical sector has become of significant importance even though their revenues are one quarter of the total industry in the U.S. (see Table 1). Surprisingly, the electronics sector has become almost as important as the other sectors in terms of employment. In this case, companies such as IBM (revenue $86 billion), HP (revenue $45 billion), and 1NTEL (revenue $27 billion), which are dominated by electrical and computer engineers, have hired chemical engineers largely for their process skills, which are important in chip manufacturing.
33 One implication of the above, and of the financial and placement data in section 2, is that it is important to recognize on the one hand the very large magnitude of the "value preservation" part of the industry (e.g., large-scale commodity chemicals, petroleum), and on the other hand the great potential of the "value growth" part of the industry (e.g., specialty chemicals, biotechnology and pharmaceutical products). Obviously, most companies are in fact dealing with both aspects, particularly since the specialty chemical today becomes the commodity chemical of tomorrow, and this transition seems to be accelerating, even in the case of the pharmaceutical industry. Therefore, for companies in the process industry to remain competitive and economically viable, this requires for the "value preservation" part of the industry the optimization of the enterprise and its supply chain by reducing costs and inventories, operating efficiently and continuously improving product quality. For the "value growth" part it means innovating and becoming smarter and quicker in product discovery and design, as well as in their commercialization, particularly in reducing the time to market. But the challenges posed by the energy and the environment discussed in section 2.2 must also be urgently addressed, as they are likely to have a profound effect on the long-term viability and acceptance of the chemical industry. It is here where global life cycle assessment will become an essential task that must be undertaken effectively. 4. EXPANDING THE SCOPE OF PROCESS SYSTEMS E N G I N E E R I N G
Process Systems Engineering has traditionally been concerned with the understanding and development of systematic procedures for the design, control and operation of chemical process systems (Sargent, 1991). However, as discussed in Grossmann and Westerberg (2000), the scope of PSE can be broadened by making use of the concept of the "chemical supply chain" shown in Fig. 2. The supply chain starts at the molecular level with of chemicals that must be discovered or synthesized. Subsequent steps aggregate the molecules into clusters, particles and films as single and multiphase systems that finally take the form of macroscopic mixtures. Through scale-up, we next move to the design and analysis of the production units that must be integrated in a process flowsheet. Finally, that process becomes part of a site with several plants that are connected through suppliers, warehouses and distribution centers, which ultimately defines a commercial enterprise. Based on Fig. 2, Grossmann and Westerberg (2000) define Process Systems Engineering as the field that is concerned with the improvement of decision making processes for the creation and operation of the chemical supply chain. It deals with the discovery, design, manufacture and distribution of chemical products in the context of many conflicting goals. A major change with this definition of PSE is the move away from process units and plants. In one direction the move is towards the molecular level in order to aid in the discovery and design of new molecular structures. In the other direction, the need is to move towards the enterprise level in order to help the coordination of the logistics for manufacturing and production planning with the entire supply chain. These two trends are consistent with the themes Product Discovery and Design, and Enterprise and Supply Chain Optimization, respectively. Furthermore, since the objective is to ultimately integrate from the R&D level through the process manufacturing level, and finally to the distribution level, this gives rise
to the theme of Global Life Cycle Assessment that requires covering all the length and time scales of the subsystems involved in Fig. 2.
Fig. 2. The "chemical supply" chain*. 5. FUTURE CHALLENGES Based on the previous sections, it would appear that Product Discovery and Design, Enterprise and Supply Chain Optimization, and Global Life Cycle Assessment are likely to emerge as major research challenges in the PSE area over the next decade. Methods and tools, traditional strengths of PSE, will also continue to be of central importance. Below we discuss each of these themes.
5.1 Product Discovery and Design. In order to move towards the molecular level, traditional process design is expanding to include molecular product design (Joback and Stephanopoulos, 1989). Promising work in this area, which has been labeled as CAMD (Computer-Aided Molecular Design), has been recently summarized in the book by Achenie et al. (2002). Good progress has been made here in developing optimization models for synthesizing molecular structures of solvents, refrigerants and polymers (e.g. see Pretel et al., 1994; Camarda and Maranas, 1999; Duvedi and Achenie, 1997; Sahinidis and Tawarmalani, 2000). A major challenge that remains is the need to develop more accurate predictive capabilities for properties of compounds in order to apply the optimization methodologies (Sinha et al., 2002). Ideally one would like to resort to molecular simulation models (De Pablo and Escobar, 2002), which are increasingly "This figure is courtesy of Professor Wolfgang Marquardt, RWTH Aachen (Marquardtet al, 2000).
35 providing very good predictions as for instance shown in the work by Bowen et al. (2002). Closely related to CAMD is the design and analysis of proteins and biological molecules (see Maranas et al, 2003), where predictive properties tend to be probabilistic as they large rely on experimentation. On the other hand, force-field and ab-initio models are being developed for protein structure prediction as reported by Floudas and Klepeis (2001), Klepeis and Floudas (2002) and Westerberg and Floudas (1999), who have made use of global optimization techniques to solve these problems. To support the expansion to R&D through product discovery, optimal planning and scheduling techniques for new product development are also receiving increased attention to better coordinate the pipeline of new products and their testing in the agrochemical and pharmaceutical industry (e.g. see Blau et al., 2000; Maravelias and Grossmann, 2001; Subramanian et al., 2003; Shah, 2003) At the macroscopic level, product design is also emerging as an intriguing area of research (Cussler and Moggridge, 2001; Westerberg and Subramanian, 2000), although industry has practiced product design for a long time. The emphasis in research is the tie of new products to market needs, and the systematic exploration of alternatives for developing new products, which normally must be accomplished in multidisciplinary teams that are composed by scientists, engineers of other disciplines and business people. An interesting problem here that has not received enough attention is the integration of product and process design, an example of which is the work by Vaidyraman and Maranas (1999) on synthesis of refrigeration synthesis and refrigerant selection. Also, the design of electronic and optic devices would seem to be a worthwhile area given the number of chemical engineers who are hired by the electronics industry. Process design will of course still involve significant research challenges. The lack of commercial tools for the synthesis of large-scale commodity flowsheets clearly indicates the need for new developments and approaches (e.g. see Gadewar et al., 2001; Ismail et al., 1999; Shelley and Halwagi, 2000; Wilson and Maniuosiouthakis, 2000; Yeomans and Grossmann, 1999). Also, among the assessment of operability measures safety is the one that still remains a major challenge (e.g. see Huang et al., 2002). The interaction between design and control also continues to attract attention (Bansal et al., 2002; Design and control of batch and biological processes will also become more prominent. Another related challenge is the area of process intensification that requires discovering novel unit operations that integrate several functions and that can potentially reduce the cost and complexity of process systems (Stankiewicz and Moulin, 2000). While significant progress has been made in the area of reactive distillation (e.g. see Nasoli et al. 1997; Lee et al., 2000; Jackson and Grossmann, 2001), there is not yet a systematic procedure for synthesizing more general units, let alone processes, that integrate functions in a novel way. Other areas that are likely to receive increased attention due to the growth in new industries include synthesis of micro-systems and design of micro-structured materials, where for instance crystallization is a problem of great significance in the pharmaceutical industry (e.g. see Winn and Doherty, 1998). 
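Returning to the molecular design (CAMD) discussion earlier in this section, the basic idea can be sketched in a few lines, assuming a brute-force enumeration in place of the MILP/MINLP formulations used in the works cited above, and using illustrative group-contribution values in the spirit of Joback-type correlations (not validated data):

```python
# Sketch of a tiny computer-aided molecular design (CAMD) search:
# enumerate acyclic candidates built from a few groups, estimate a property by
# group contributions, and keep candidates that meet a target window.
# Group set, valences and contribution values are illustrative placeholders.
from itertools import product

groups = {            # name: (valence, boiling-point contribution in K)
    "CH3": (1, 23.6),
    "CH2": (2, 22.9),
    "OH":  (1, 92.9),
}
TB_BASE = 198.0             # illustrative base value of the correlation, K
TARGET = (340.0, 400.0)     # desired normal boiling point window, K

candidates = []
for counts in product(range(0, 6), repeat=len(groups)):
    n = dict(zip(groups, counts))
    if sum(n.values()) < 2:
        continue
    # Structural feasibility for an acyclic, connected molecule:
    # sum_i n_i * (2 - valence_i) == 2
    if sum(n[g] * (2 - groups[g][0]) for g in groups) != 2:
        continue
    tb = TB_BASE + sum(n[g] * groups[g][1] for g in groups)
    if TARGET[0] <= tb <= TARGET[1]:
        candidates.append((tb, n))

for tb, n in sorted(candidates):
    formula = " + ".join(f"{c} {g}" for g, c in n.items() if c)
    print(f"Tb ~ {tb:6.1f} K   {formula}")
```

Real CAMD tools of the kind cited above use many more groups, richer structural feasibility rules and property models, and pose the search as a mathematical program rather than an enumeration.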
Also, biologically related processes, starting at the level of the genome (Hatzimaikatis, 2000), going through metabolic networks (Stephanopoulos, 2002) and finally to process units and flowsheets (Lee et al., 1997; Steffens et al., 2000), will give rise to design and control problems that are likely to attract attention, and lead to problems that have not been investigated previously (e.g. synthesis of separation of very dilute systems). Finally, biomedical applications such as drug delivery (Parker and Doyle, 2001) and anesthesia (Morari and Gentilini, 2001) provide new opportunities for applications of process control methodologies. Surprisingly, there are only very few applications in the electronics area (Edgar, 2000), despite the fact that this sector hires a large number of chemical engineers, as was seen in section 2.3.
5.2. Enterprise and Supply Chain Optimization. This area is attracting a great deal of attention from industry and academia, as was shown in the recent FOCAPO2003 meeting in Coral Springs. While the applications are aimed mostly at commodities, there are also increasingly applications in specialties, pharmaceuticals (Shah, 2003), and in the food industry (Masini et al, 2003). Major challenges in this area include development of models for strategic and tactical planning for process networks (Shapiro, 2003) that often require the solution of large-scale multiperiod optimization problems. Furthermore, these models must be eventually integrated with scheduling models. While very significant progress has been made, these models still lack sufficient generality despite significant advances made in this area (e.g. Bassett et al, 1997; Kondili et al., 1993; Pantelides, 1994; Ierapetritou and Floudas, 1998). Also, the efficient solution of these models and extension to rescheduling (Mendez and Cerda, 2003) is required for their application to real-time scheduling and their handling of uncertainties (e.g. Honkomp et al., 1997; Balasubramanian and Grossmann, 2002). The incorporation of uncertainty in planning and scheduling models through stochastic optimization still remains a great challenge due to the very large computational requirements that are needed (Sahinidis, 2003). However, this is clearly an area that might be ripe for significant progress. Another interesting challenge is the characterization of dynamics in supply chains and the application of control methodologies in order to improve responsiveness (Vargas-Villamil and Rivera, 2000; Perea et al, 2001; Nishi et al., 2002). Interestingly the work by Vargas-Villamil and Rivera deals with semi-conductor manufacturing. In order to fully address the enterprise and supply chain optimization requires a tighter integration with the operation at the plant level. Here the areas of data reconciliation and sensor location still offer interesting challenges for optimization (see Bagajewicz, 2003; Chmieliwski et al., 2002), while the areas of verification, abnormal events management, and synthesis of operating procedures (Barton, 2003; Venkatsubramanian, 2003) are being approached by knowledge-based systems, and increasingly with emerging methods for hybrid systems (Silva et al., 2001). The development of effective plant-wide model predictive control is also very relevant in order to provide a seamless integration of the plant with the planning and supply chain optimization models. The major pending research problem that still remains is the integration of planning, scheduling and control, whether at the plant level, or at the supply chain level. Major difficulty is ensuring consistency, feasibility and optimality across models that are applied over large changes in times scales (years, months, down to days and seconds). Another outstanding problem is the design of supply chains in the face of restructuring in the industry.
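As a small sketch of the kind of multiperiod planning model referred to above, consider a two-product, three-period linear program with hypothetical costs, demands and a shared capacity; industrial tactical-planning models of the kind cited are far larger and typically mixed-integer:

```python
# Sketch of a small multiperiod production-planning LP (2 products, 3 periods).
# All costs, demands and capacities are hypothetical.
import numpy as np
from scipy.optimize import linprog

products, periods = 2, 3
demand = np.array([[40.0, 60.0, 50.0],      # product A demand per period
                   [30.0, 30.0, 70.0]])     # product B demand per period
capacity = np.array([100.0, 100.0, 100.0])  # shared production capacity per period
prod_cost = np.array([2.0, 3.0])            # $/unit produced
hold_cost = np.array([0.5, 0.4])            # $/unit held per period

n = products * periods
def x(p, t): return p * periods + t         # index of production variable
def s(p, t): return n + p * periods + t     # index of end-of-period inventory

c = np.zeros(2 * n)
for p in range(products):
    for t in range(periods):
        c[x(p, t)] = prod_cost[p]
        c[s(p, t)] = hold_cost[p]

# Inventory balance (zero initial inventory): s[p,t] - s[p,t-1] - x[p,t] = -demand[p,t]
A_eq, b_eq = [], []
for p in range(products):
    for t in range(periods):
        row = np.zeros(2 * n)
        row[s(p, t)] = 1.0
        if t > 0:
            row[s(p, t - 1)] = -1.0
        row[x(p, t)] = -1.0
        A_eq.append(row)
        b_eq.append(-demand[p, t])

# Shared capacity: sum over products of x[p,t] <= capacity[t]
A_ub, b_ub = [], []
for t in range(periods):
    row = np.zeros(2 * n)
    for p in range(products):
        row[x(p, t)] = 1.0
    A_ub.append(row)
    b_ub.append(capacity[t])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("total cost ($):", round(res.fun, 2))
print("production plan (products x periods):")
print(res.x[:n].reshape(products, periods))
```

Even this toy instance exhibits the coupling across periods that drives such models: the third-period demand exceeds capacity, so the cheaper-to-hold product is pre-built and carried as inventory.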
5.3. Global Life Cycle Assessment.
Supporting the goal of Responsible Care by the chemical industry will require development of systematic methods and tools for design of environmentally benign products and processes. At the process level significant progress has been made in the synthesis and optimization of water networks (for a review see Bagajewicz, 2000). Progress has also been made to better understand the implications of waste at the level of the synthesis and analysis of a process flowsheet (e.g. Pistikopoulos et al., 1994; E1-Halwagi, 1997; Cabezas et al., 1999; Linninger and Chakraborty, 1999; Skidar and E1-Halwagi, 2001). Little work, however, has been made to assess environmental implications at the level of product design, and the integration with processing. Examples have been the work by Sinha et al. (1999) for design of environmentally benign solvents, and the work by Hostrup et al. (1999) for design of solvents and synthesis of separation systems. More importantly, however, is the need to adopt broader approaches to the Life Cycle Assessment of Products and Processes in order to predict more accurately their long-term sustainability (Heijungs et al., 1996; Koreevar, 2001; Nebel and Wright, 2002). While few interesting measures have been proposed to support this assessment in the PSE community in terms of thermodynamics (Bakshi, 2000) and in terms of IT to document all the elements involved in the life cycle of a plant design (Schneider and Marquardt, 2002), an open question is still what are good measures for sustainability. Also more comprehensive approaches to sustainability are required as has been done in other disciplines (e.g. Hendrickson et al., 1998). An interesting related question is how to effectively recycle excess products by using a "reverse supply chain" (Biehl et al., 2000). Also, the consideration of atmospheric chemistry and global climate change (Seinfeld and Pandis, 1997; Seinfeld, 2000) should be incorporated to provide a truly global assessment. Furthermore, this would also provide a stronger basis for investigating the carbon sequestration problem (Johnson and Keith, 2001). 5.4. PSE Methods and Tools.
A comprehensive approach to Product Discovery and Design, Enterprise and Supply Chain Optimization, and Global Life Cycle Assessment will involve the solution of a number of challenging problems, including the integration of several parts of the chemical supply chain in Fig. 2. This will require the multi-scale modeling ranging from the atomic level to the enterprise level, as well as the development and improvement of supporting methods for simulation, optimization, control and information processing. While significant progress has been made in optimization in areas such as nonlinear and mixed-integer optimization (Biegler and Grossmann, 2002) there are still many outstanding problems (Grossmann and Biegler, 2002). The solution of very large-scale differentialalgebraic methods whether for dynamic simulation and real-time optimization of entire plants involving millions of variables, or for simulating systems at multiple scales (e.g. fluid mechanics and molecular dynamics) is a capability that is still at relatively early stages (Biegler et al, 2002). There is also need for methods for simulating and optimizing under uncertainty, a capability that is still limited to fairly small problems due to the great potential computational expense (Sahinidis, 2003). Related capabilities where good progress has been made are new approaches to flexibility analysis (Ierapetritou, 2001; Novak and Kravanja, 1999) and parametric programming (Dua and Pistikopoulos, 1999), although current methods are still restricted to relatively small problems. Another important capability will be advanced optimization tools that can handle mixed-integer, discrete-logic and quantitative-
38 qualitative equations to model synthesis and planning and scheduling problems more effectively (Grossmann et al., 2002; Grossmann, 2002). Generalized Disjunctive Programming (Lee and Grossmann, 2003) and Constraint Programming (Hooker, 2000) are novel approaches that offer alternative solution methods, with the former appearing to be especially useful for synthesis problems, and the latter on scheduling problems. An interesting problem here is the development of hybrid methods that effectively combine these techniques (Jain and Grossmann, 2001). Another very important problem that is still in its infancy is mixed-integer dynamic optimization (Bansal et al., 2001). Global optimization has also seen significant progress for solving problems with a specific structure (Adjiman et al., 1998; Ryoo and Sahinidis, 1996; Sahinidis, 1996). Nevertheless, there is still a need to effectively address the solution of large-scale problems, problems involving arbitrary functions, equations (Lucia, 2002) and problems involving differential equations (Papamichail and Adjiman, 2002). A very different approach that is being explored for the approximate global optimization is agent-based computations (Siirola et al, 2002), which has been motivated by the area of complex systems (Ottino, 2003) and that has been recently applied to supply chain problems (Julka et al., 2002) In the area of process control vigorous efforts continue in the areas of model predictive control (Rawlings, 2000; Morari et al., 2003), which is being increasingly applied to new systems such as simulated moving bed chromatography system (Natarajan and Lee (2000). Other efforts include system identification (Jorgensen. and Lee, 2002), process monitoring (Kourti and MacGregor, 1996), and fault diagnosis (Zhao et al., 1998; Chiang et al., 2001). Intriguing problems that merit further research is the use of passivity theory (Hangos et al., 1999; Ydstie and Alonso, 1997) for providing a stronger physical foundation to process control, and network theory for structured modeling in control (Mangold et al. 2002). A new interesting development in process control has been the consideration of PDE models (Baker and Christofides, 1999; Christofides, 2001) that in principle will allow tackling dynamic problems at a much greater level of detail, although the computational challenges are formidable. Another very interesting development has been hybrid dynamic systems that involve discrete and continuous variables (Avraam et al, 1998; Benmporad and Morari, 2001; Kowalewski, 2002; Morari, 2002), and that provide a framework for integrating regulatory control with higher level supervisory control functions. A challenging development in hybrid systems has been the incorporation of global optimization methods (Barton, 2003). Finally, the integration of measurements, control and information systems will emerge as a problem of increased importance with advances in IT (Ydstie, 2001). Modeling will continue to be a major focus of PSE research. New directions include greater integration of traditional algebraic or DAE flowsheet models with PDE models for Computational Fluid Dynamics calculations (Oh and Pantelides, 1996), integration of macroscopic models with molecular simulation models (Stefanovi6 and Pantelides, 2000) in order to support multi-scale computations. 
Also, modeling tools are needed for the natural specification of logic and discrete decisions (Vecchietti and Grossmann, 2000) as well as for accommodating hierarchical decisions such as in conceptual design (Douglas, 1985). Information modeling tools (Davis et al., 2001; Eggersmann et al., 2002), will also become increasingly important for supporting integration problems, and for problem solving by large and globally distributed teams (Eggesmann et al., 2003).
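Returning to the model predictive control efforts mentioned above, the receding-horizon idea can be sketched for a first-order, single-input model; the model and tuning are illustrative only, and industrial MPC additionally handles constraints, multivariable models and disturbance estimation:

```python
# Sketch of an unconstrained, receding-horizon model predictive controller for a
# discrete-time SISO model, using only numpy. Model and tuning are illustrative.
import numpy as np

# Plant/model: x(k+1) = a*x(k) + b*u(k), y = x   (first-order, hypothetical)
a, b = 0.9, 0.5
N = 10            # prediction horizon
lam = 0.1         # input penalty weight
setpoint = 1.0

def mpc_input(x0):
    """Solve min_u ||y_pred - setpoint||^2 + lam*||u||^2 over the horizon."""
    # Predictions: y = F*x0 + G*u, with y_i the output i steps ahead.
    F = np.array([a ** i for i in range(1, N + 1)])
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = a ** (i - j) * b
    # Regularized least squares: (G^T G + lam I) u = G^T (setpoint - F x0)
    rhs = G.T @ (setpoint * np.ones(N) - F * x0)
    u = np.linalg.solve(G.T @ G + lam * np.eye(N), rhs)
    return u[0]   # receding horizon: apply only the first move

x = 0.0
for k in range(15):
    u = mpc_input(x)
    x = a * x + b * u
    print(f"k={k:2d}  u={u:6.3f}  y={x:6.3f}")
```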
39 Advances in computing, both in performance growth of computing hardware and in object oriented software development will help to realize some of the supporting tools described above. Higher number of cycles and larger memories can be expected in the future, which will help in addressing a number of the larger problems described above (e.g. Mallya et al., 1999, for process simulation). The need for advanced computing has led to the development of cheap, high performance clusters, such as the Beowulf class computers (e.g. see http://beowulf.cheme.cmu.edu). These have leveraged the availability cost effective components (microprocessors, motherboards, disks and network interface cards) and publicly available, hardware independent software. Beowulf clusters allow the possibility of large-scale parallel computing for the price of standard components. Finally, wireless computing may also create new needs for effectively supporting team work by diverse and distributed specialists. 6. CONCLUDING REMARKS The financial trends, issues and changes in job placement, and the broadening of PSE, would indicate that in order to support "value preservation" and "value growth" in the process industry will require new advances and developments from PSE in three major areas: Product Discovery and Design, Enterprise and Supply Chain Optimization, and Global Life Cycle Assessment. Furthermore, to make progress in these areas continued work is required in basic PSE methods and tools. While this is not a surprising conclusion, we can make a few remarks and observations based on the data and review of recent work that we presented: 1. While a shift to product design is a welcome change in PSE to support "value growth," it should not be separated from process design, which is a core capability of chemical engineers and PSE. Furthermore, to support discovery it is paramount to connect the molecular level design with properties at the macroscopic level. 2. The area of enterprise and supply chain optimization offers a unique opportunity to PSE given its potential to impact the "value preservation" part of the industry, which is under great competition. Supply chain optimization of course also offers opportunities for the "value growth" industry such as pharmaceuticals. 3. Despite the great importance of sustainability and environmental issues, the research efforts from the PSE community have been rather timid. A bolder and more creative approach is clearly needed. One intriguing opportunity might be process intensification as a way to revolutionize chemical plants. Another opportunity could be stronger interaction between product and process design as part of a life cycle analysis of chemicals. 4. There are many potential and exciting possibilities in biological related research in PSE. However, industrial growth in that area is unlikely to become sufficiently large so that it should become the major focus of PSE research. 5. The electronics area has received be little attention by the PSE community despite the significant number of chemical engineers who have been hired by that industry. PSE should be able to contribute to the simulation and optimization of chip manufacturing processes, as
40 well as in the area of product design by exploiting our knowledge o f chemistry and chemical engineering. 6. Since many of the computational challenges of PSE tools arise with large problem sizes, there might be the temptation to think that faster and more advanced computers is all that is needed. While these developments will obviously have significant impact, the breakthroughs will come with synergy of new theories and algorithms as it has happened in the case of LP and MILP optimization (Bixby, 2002). Finally, we hope that this paper has shown that in PSE there are many problems that are intellectually challenging and that are relevant to industry. The important goal is to make sure that new research directions in the "new millennium" emphasize both points.
ACKNOWLEDGMENTS. Many thanks to Christos Maravelias, Conor McDonald and Stratos Pistikopoulos for their useful comments and feedback on this paper.
REFERENCES Achenie, L.E.K., R. Gani and V. Venkatasubramanian (Editors), "Computer Aided Molecular Design: Theory and Practice," Elsevier Publishers (2002) Adjiman, C.S., S. Dallwig, C.A. Floudas, and A. Neumaier, "A Global Optimization Method, aBB, for General Twice-Differentiable Constrained NLPs - I. Theoretical Advances", Computers and Chemical Engineering, 22, pp. 1137-1158 (1998). Adjiman, C.S., I.P. Androulakis, and C.A. Floudas, "A Global Optimization Method, aBB, for General TwiceDifferentiable Constrained NLPs - II. Implementation and Computational Results", Computers and Chemical Engineering, 22, pp. 1159-1179 (1998). Ahmed, S. and N. V. Sahinidis, Analytical Investigations of the Process Planning Problem, Computers & Chemical Engineering, 23( 11- 12), 1605-1621, 2000. Avraam, M. P., N. Shah, and C. C. Pantelides, "Modelling and Optimisation of General Hybrid Systems in the Continuous Time Domain," Computers and Chemical Engineering, 22, Suppl., $221-$228 (1998). Bagajewicz, M., "A Review of Recent Design Procedures for Water Networks in Refineries and Process Plants," Computers and Chemical Engineering, 24, 2093-21 I4 (2000). Bagajewicz, M., "Data Reconciliation and Instrumentation Upgarde. Overview and Challenges," Proceedings FOCAPO 2003 (Eds. I.E. Grossmann and C.M. McDonald), pp. 103-116, CACHE (2003). Baker, J. and P. D. Christofides, "Output Feedback Control of Parabolic PDE Systems with Nonlinear Spatial Differential Operators," Ind. & Eng. Chem. Res., 38, 4372-4380, 1999. Bakshi, B.R. "A thermodynamic framework for ecologically conscious process systems engineering," Computers and Chemical Engineering, 24, 1767-1773 (2000). Balasubramanian, J. and I.E. Grossmann, "Scheduling to Minimize Expected Completion Time in Flowshop Plants with Uncertain Processing Times," Computers and Chemical Engineering 26, 41-57(2002).
41 Bansal, V., V. Sakizlis, R. Ross, J. D. Perkins, E. N. Pistikopoulos, "New Algorithms for Mixed-Integer Dynamic Optimisation," Report Centre for Process Systems Engineering, Imperial College, London (2001). Bansal, V.; Perkins, J. D. and Pistikopoulos, E. N., "A Case Study In Simultaneous Design and Control Using Rigorous, Mixed-Integer Dynamic Optimization Models," Industrial & Engineering Chemistry Research 41, 760-778 (2002). Barton, P. and C.K. Lee, "Design of Process Operations using Hybrid Dynamic Optimization," Proceedings FOCAPO 2003 (Eds. I.E. Grossmann and C.M. McDonald), pp. 89-102, CACHE (2003). Bassett, M.H., J. F. Pekny and G. V. Reklaitis, "Using Detailed Scheduling to Obtain Realistic Operating Policies for a Batch Processing Facility", Ind. Eng. Chem. Res., 36, 1717-1726 (1997). Bemporad A. and M. Morari Optimization-based hybrid control tools Proc. of the 2001 American Control Conference, Arlington (VA), US, Vol. 2, pp. 1689-1703 (2001) Biegler, L. T., Cervantes A. and Waechter, A. "Advances in Simultaneous Strategies for Dynamic Process Optimization," Chemical Engineering Science, 57, pp. 575-593 (2002). Biegler, L.T. and Ignacio E. Grossmann, "Retrospective on Optimization," Computers and Chemical Engineering, submitted (2002). Biehl, M., E. Prater, and M. Realff, "Modeling and Simulation of a Reverse Supply Chain in an Uncertain Environment," INFORMS, San Jose, 2002. Bixby, R.E., "Solving Real World Linear Programs: A Decade and more of Progress," Operations Research, 50, pp. 3-15 (2002). Blau, G., B. Mehta, S. Bose, J. Pekny, G. Sinclair, K. Keunker and P. Bunch, "Risk Management in the Development of New Products in Highly Regulated Industries," Computers and Chemical Engineering, 24, pp.659-664 (2000). Bowen, T.C., J. L. Falconer, R. D. Noble, A. I. Skoulidas, and D.S. Sholl, "Comparison of Atomistic Simulations and Experimental Measurements of Light Gas Permeation Through Zeolite Membranes," Industrial and Engineering Chemistry Research, 41, 1641-1650 (2002). Cabezas, H., Bare, J. C., & Mallick, S. K., "Pollution prevention with chemical process simulators: the generalized waste reduction (WAR) algorithm-full version. Computers & Chemical Engineering, 23,623 (1999). Camarda, K.V. and C.D. Maranas, "Optimization in Polymer Design using Connectivity Indices," Industrial & Engineering Chemistry Research 38, 1884-1892 (1999). Cano-Ruiz, J. A., & McRae, G. J. (1998). Environmentally conscious process design. Annual Review on Energy Environment, 23,499 (1998). Chiang, L. H., E. L. Russell, and R. D. Braatz. Fault detection and diagnosis in industrial systems. SpringerVerlag, London (2001). Chmielewski, D. J., T. Palmer, V. Manousiouthakis, "On the Theory of Optimal Sensor Placement," AIChE J., 48 (5), 1001-1012 (2002).
42 Christofides, P. D., "Control of Nonlinear Distributed Process Systems: Recent Developments and Challenges," AIChE J., 47, 514-518, 2001 Cussler, E.L. and G. D. Moggridge, Chemical Product Design, Cambridge University Press Davis, J.G., E. Subrahmanian, S.L.Konda, H. Granger, M. Collins, A.W. Westerberg, "Creating Shared information Spaces for Collaborative Engineering Design," Information Systems Frontier, 3(3), 377-392 (2001). De Pablo, Juan J. and F. A. Escobedo, "Molecular Simulations in Chemical Engineering: Present and Future," AIChE J., 48, pp. 2716-2721 (2002) Douglas, J.M., "A Hierarchical Decision Procedure for Process Synthesis," AIChE J., 31,353-362 (1985). Dua, V. and E.N. Pistikopoulos, "Algorithms for the Solution of Multi-Parametric Mixed-Integer Non-Linear Optimization Problems", Ind. Eng. Chem. Res., 38, 3976-3987 (1999). Duvedi, A. P. and Achenie, L. E. K., "On the Design of Environmentally Benign Refrigerant Mixtures: A Mathematical Programming Approach," Computers & Chemical Engineering, 21, 8, 915-923, 1997 Edgar, T.F., S. Butler, W.J. Campbell, C. Pfeiffer, C. Bode, S.B. Hwang, K.S. Balakrishnan and J. Hahn. Automatic Control in Microelectronics Manufacturing: Practices, Challenges, and Possibilities. Automatica Vol. 36(11): pp. 1567-1603, 2000. Eggersmann, M., R. Schneider and W. Marquardt, "Modeling Work Processes in Chemical Engineering - From Recording to Supporting, ", European Symposium on Computer Aided Process Engineering - 12, (Eds. Grievink, J. v. Schijndel), Elsevier, 871-876 (2002). Eggersmann, M., S. Gonnet, G.P. Henning, C. Krobb, H.P. Leone, W. Marquardt, "Modeling and Understanding Different Types of Process Design Activities," Lat. Am. Appl. Res. 33, pp. 167-175 (2003). EI-Halwagi, M. M., "Pollution Prevention through Process Integration: Systematic Design Tools," Academic Press, 1997. Floudas, C.A. and J.L. Kleipis, "Deterministic Global Optimization for Protein Structure Prediction" Book in Honor of C. Caratheodory, N. Hadjisavvas and P.M. Pardalos, eds., 31, 2001. Gadewar, S. B., Doherty, M. F. And Malone, M. F., "A Systematic Method for Reaction Invariants and Mole Balances for Complex Chemistries," Computers Chem. Engng., 25, 1199-1217 (2001). Grossmann, I.E., "Review of Nonlinear Mixed-Integer and Disjunctive Programming Techniques for Process Systems, Engineering," Optimization and Engineering, 3,227-252 (2002). Grossmann, I.E. and L.T. Biegler, "Future Perspective on Optimization, " Computers and Chemical Engineering, submitted (2002). Grossmann, I.E., S.A. van den Heever and I. Harjunkoski, "Discrete Optimization Methods and their Role in the Integration of Planning and Scheduling," AIChE Sympsium Series No. 326, Vol. 98, pp.150-168 (2002) Grossmann, I.E. and A.W. Westerberg, "Research Challenges in Process Systems Engineering," AIChE J. 46, pp.1700-1703 (2000). Hangos, K., A. A. Alonso, J. P. Perkins and B. E. Ydstie, "Structural Stability of Process Systems using Thermodynamics and Passivity Analysis", AIChE Journal, Vo145, pp 802-816 (1999).
Process Systems Engineering 2003, B. Chen and A.W. Westerberg (editors), © 2003 Published by Elsevier Science B.V.
Control and Operations: When Does Controllability Equal Profitability?
Thomas F. Edgar
Department of Chemical Engineering, University of Texas, Austin, Texas 78712, U.S.A.
Abstract

The justification of process control in the context of business decision-making may include the following economic or operating considerations: increased product throughput, increased yield of higher valued products, decreased energy consumption, decreased pollution, decreased off-specification product, improved safety, extended life of equipment, improved operability, and decreased production labor. However, identifying a direct relationship between each type of economic benefit (profitability) and how controllers are designed or operated (controllability) is an elusive target. Perspectives on how process control has influenced business decision-making have changed radically over the brief history of process control (1950 to the present). Thus it is valuable to have an historical view of the changing role of process control in operations and profit/loss measures. Today the influence of process control on business decision-making is at its highest level ever, but there are still many challenges that must be met for process control to maximize its economic impact on an enterprise-wide scale.

Keywords: process control, profitability, optimization, business drivers

1. INTRODUCTION

Profitability is the criterion by which most if not all decisions are made in the chemical industry. It is necessary to quantify profitability mathematically in order to apply modern tools used in process design, operations, and control. However, when process control strategies are to be determined or changed, the key economic considerations, or business drivers, are not easily formulated as a single objective function. Table 1 lists six business drivers for process control that are being used today. Different drivers have been emphasized at different times during the past 50 years. In the 1960s, a plant was considered successful if the products met customer specifications and could be manufactured reliably and more or less consistently (BD1). The view of process control then was a minimalist one; plant operations personnel were mainly concerned that the controllers did not harm the process, which led to a mentality of "don't touch that dial" once the process was making the desired product. Automation systems in the 1970s utilized supervisory control based on rudimentary optimization tools to maximize profits, thus justifying the investment in computing equipment (BD2). By optimizing the steady-state operating conditions, the return on hardware investment was calculated from the increased profits, but process dynamics and feedback control (i.e., "controllability")
played no explicit role in determining economic feasibility. In the 1980s the statistical quality control movement focused on minimizing product variability in order to achieve profitability (BD3). The focus on quality recognized the importance of making the right product that met quality specifications the first time, which eliminated the negative effects on profitability of waste, rework, blending, or reduced selling price (when off-spec products were made). Feedback control became a principal tool for achieving BD1. Meeting safety and regulatory requirements via process control became much more important during the 1980s as well, because when violations occurred there was sometimes a large penalty cost (BD4). In the 1990s additional imperatives on manufacturing were added, namely that process equipment should be used fully (maximum asset utilization) and that the plant should be operated as flexibly as possible, in order to adapt to market, raw materials, and energy considerations (BD5). This led to the watchwords of "good, fast, cheap, and clean" to describe the goals of manufacturing. Improving the efficiency of information and control systems and workforce productivity became an added driving force in the late 1990s (BD6).
Table 1. 21st Century Business Drivers for Process Control (adapted from Ramaker et al. [1] and Shunta [2])

BD1. Deliver a product that meets customer specifications consistently.
BD2. Maximize the cost benefits of implementing and supporting control and information systems.
BD3. Minimize product variability.
BD4. Meet safety and regulatory (environmental) requirements.
BD5. Maximize asset utilization and operate the plant flexibly.
BD6. Improve the operating range and reliability of control and information systems and increase the operator's span of control.
This paper covers the chronology of how the business drivers for process control have evolved, in the context of major "epochs" since 1950:
(1) the early days (1950-70)
(2) the energy crisis and the adoption of digital control systems (1970-80)
(3) the quality movement and addressing health, safety, and environmental factors (1980-90)
(4) the 21st century enterprise view of process control (1990-present)
An important dichotomy in relating process control to economic benefits occurs in batch vs. continuous processing; batch processing has a more natural connection to profitability through an explicit economic objective function, as explained in this paper.

2. PROCESS CONTROL IN THE EARLY DAYS

In the 1950s the PID controller was the foundation for the practice of automatic process control in the process industries. However, the important controller design question, namely how to select the values of Kc, τI, and τD that give satisfactory closed-loop performance, had no direct connection to process economics. Most operating plants used a combination of manual control and PID control in order to make the desired products. During the 1950s and 1960s, optimization theory and stability analysis received heavy emphasis in the development of control theory due to the aerospace programs in the U.S.S.R. and the U.S.A. This approach was called "modern" control theory, which provided a more sophisticated alternative to the classical control theory used to design PID controllers. Oil and chemical companies began to recognize that more sophisticated mathematical tools and the use of digital computers held considerable promise to improve plant operations and thus plant profitability. However, the cost of a plant-wide computer control system for a large process plant could range from $250,000 to $1 million, which meant significant benefits had to be derived from steady-state optimization in order to justify such a large capital investment. In the 1960s steady-state optimization (such as linear programming) was normally performed in an off-line computer once a day or once a week; the results were then given to the operator, who would change set points manually. Tests in 1957 using Direct Digital Control (DDC) were carried out in the Gulf Coast region by Texaco and Monsanto, but these early applications did not achieve the economic benefits predicted or hoped for. As a result, the chemical industry resisted changing from reliable PID controllers, which worked well enough, were not terribly expensive, and were easy to understand. Why trade them for extremely expensive systems that were unreliable and required extensive research and development as well as highly trained personnel for implementation? How could such systems increase profitability? In the 1960s and early 1970s there were a few documented studies on economic benefits resulting from process control. Barkelew [3] reported a 17% reduction in utility costs and 25% increase in capacity with feedforward control. Harter [4] described an ethylene plant where a multilevel scheme with supervisory optimizing control and regulatory control was installed in 1969. With so few successful applications of digital computer control, it became clear that aerospace control technology was not easily transferred to the process industries, whose companies did not benefit from government funding and had processes that were difficult to model. Modern control theory generated controllers whose performance was quite sensitive to model inaccuracies, hence it was not considered to be practical for chemical processes. When it came to control system design, the chemical industry used maxims
such as "you can get 80% of the profit with 20% of the effort" and "what can go wrong will go wrong" (Murphy's "feedback" law). Once a plant was making a satisfactory product, any efforts to change the plant or to optimize the operating conditions were opposed. In spite of the resistance in industry, academic researchers continued to pursue the holy grail of optimal control during the 1960s and 1970s as the ultimate answer to industrial process control problems. Because of the plethora of academic papers published on the subject of optimal control, a well-publicized "gap" developed between control theory and practice. At that time it appeared that a number of important process control problems could be solved by the minimum principle, e.g., the minimum-time startup of a batch reactor or the maximum conversion in a tubular reactor. Although fundamental models for such systems were fairly simple, implementation of an optimal controller on an actual process was a formidable challenge because of hardware and software limitations. It was not until the 1970s that such algorithms were successfully applied to pilot plants. Ten years later they became commercially feasible. Because the general optimal control solution was an open-loop rather than feedback type of control, there was considerable academic interest in the linear-quadratic problem (LQP), which is now called the LQG (linear-quadratic-Gaussian) problem. The LQG problem is formulated as minimizing a quadratic objective function subject to a linear state equation. This is an important type of optimal control problem for which there exists a closed-form analytical representation for the optimal controller in closed-loop or feedback form, u(t) = -Kx(t), where x and u are the state and control vectors, respectively. For LQG, the gain matrix K is obtained independent of the initial conditions. The key question is how to tune the controller performance by adjusting the state weighting matrix Q or the input weighting matrix R in the objective function
J = \int_0^{t_f} \left[ x^T Q x + u^T R u \right] dt \qquad (1)
The use of an objective function in Eq. (1) misled some researchers into thinking it was a meaningful economic measure of control. Unfortunately, the cost of deviation of a state variable (e.g., a concentration) about a desired set point is usually different for positive deviations (profit reduction) and negative deviations (off-spec product). The cost of control can range from zero to the cost of utilities, which may be significant. When positive control changes incur a significant cost, a pure feedback control framework is not optimal except in some fortuitous cases where the control effort could be positive for all time, thus giving economic meaning to the uᵀRu term. In general this will not happen, because a well-tuned output response will yield overshoot in both state and control variables. Explicit control weighting in the objective function can be eliminated by adding a quadratic term involving du/dt. Thus control effort is ignored but changes in the input are penalized. This approach effectively incorporates integral control into the overall feedback control law. Penalizing the rate of change of the control induces more inertia into the controller, causing it to change position less often, which is desirable because constant adjustment of a control valve causes faster valve wear (a hidden cost). Another hidden cost might be the disturbances caused in other units by an overactive control system.
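For a given linear model and weighting matrices Q and R, the constant feedback gain K in u(t) = -Kx(t) follows from the algebraic Riccati equation. The sketch below illustrates this calculation; the two-state model, the weights, and the use of SciPy are assumptions for illustration and are not taken from this paper.

```python
# Minimal LQR sketch: compute the constant state-feedback gain K in u = -Kx
# for dx/dt = Ax + Bu with quadratic weights Q and R.  The model matrices and
# weights below are illustrative assumptions only.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])      # assumed linear process model
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])          # state weighting (penalize deviations)
R = np.array([[0.1]])             # input weighting (penalize control effort)

P = solve_continuous_are(A, B, Q, R)   # solves A'P + PA - PBR^(-1)B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)        # K = R^(-1) B' P
print("LQR gain K =", K)

# Closed-loop eigenvalues confirm a stable feedback design.
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```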
Chintapilli and Douglas [5] performed a study of optimal control of a nonlinear reactor using various types of objective functions. For this system they found that a linear feedback controller derived from a quadratic objective function and linearized model was almost as good as the open-loop optimal controller computed for an actual profit function and the nonlinear model. Thus the LQG could be used to synthesize multivariable controllers having profitability implications. O'Connor and Denn [6] showed that the optimal solution for a first-order-plus-time-delay model and a quadratic error objective function could be put in the form of a PID controller, with settings very similar to those for Ziegler-Nichols tuning. Subsequently other approaches such as Internal Model Control (IMC) or Box-Jenkins minimum variance control have derived similar tuning relationships (although with a smoothing filter). While an optimal control formulation can yield a familiar control structure like PID, it was not a "silver bullet" for ensuring profitability of a plant. Addressing profitability in the chemical industry required a two-pronged (or hierarchical) approach combining steady-state optimization and regulatory control. In the 1970s it became apparent that most of the direct financial gain arose from steady-state optimization that maintained production within specifications and calculated improved set points when operating conditions changed. The main value of dynamic control was to provide satisfactory regulatory control and set point changes, although dynamic control could permit operation at unstable steady states which are desirable economically (e.g., in crystallization and polymerization). Once deviations became small enough, there was little economic value in making them smaller. The two-pronged approach also suggested that two types of models were required, a linear or nonlinear steady-state model and a linear dynamic model.

3. THE ENERGY CRISIS AND THE ADVENT OF DISTRIBUTED CONTROL SYSTEMS

A number of events occurred during the late 1970s that stimulated the widespread introduction of computer control in the chemical and refining industries. Because of the high cost of computers and associated control equipment and the lack of demonstrated economic benefits from closed-loop computer control, early installations were justified based on a return on investment by using the computer to perform supervisory control (set point optimization), as discussed in the previous section. Typically a payback period of one year or less was required to justify a computer control project. A broader view began to emerge on potential financial benefits of process control, which included improvements in yield, throughput, quality, and energy usage. Process yield could be improved by reducing physical losses of materials in vents and waste streams and by reducing chemical losses due to excessive recycle of reactants. Throughput improvements offered tremendous benefits for a product where the market is sold out and additional products can be sold at the same price. Quality improvements were harder to quantify because quality changes were not always reflected in higher sales volume or selling price. The importance of energy savings relative to yield or quality depended on the nature of the process; specialty chemicals or pharmaceuticals typically do not have a high energy cost contribution, while refining processes are more energy-
intensive. All of these considerations can be incorporated into profitability calculations carried out on a daily or hourly basis, which is now called real-time optimization or RTO. In order to perform supervisory control or real-time optimization (RTO) for an operating plant, an objective function (or economic model) must be defined, which includes the costs of raw materials, values of products, and costs of production as functions of operating conditions, projected sales or interdepartmental transfer prices, and so on. An operating model must be developed that relates product variables to operating conditions (the set points). Both the operating and economic models typically include constraints on operating conditions, feed and production rates, storage and warehousing capacities, and product impurities. In addition, safety or environmental constraints might be added, such as a temperature limit or an upper limit on a toxic species. Once models and constraints are defined, optimization methods can be applied to find the optimal set points.

3.1 The Energy Crisis

The tripling of fuel costs in the mid-1970s due to the Arab States oil embargo was a shock to industry and individual consumers alike and caused a permanent change in thinking in the process industries, i.e., energy was no longer "cheap". This event provided the impetus for the initiation of wide-ranging programs on energy conservation, which ranged from housekeeping items such as checking steam traps or adding more insulation to computer control and RTO on equipment such as distillation columns, boiler systems, and refrigeration units. To estimate the potential significance of energy conservation via process control, a 1% energy savings due to improved control and a fuel cost of $2/MBtu would save around 300 million dollars per year (based on 15×10^15 Btu/year consumed in the U.S.A. process industries). An oft-cited figure that distillation columns consume 3% of all energy used in the U.S.A. gives an indication of the large incentives to optimize distillation energy usage. For example, most columns in the 1970s were manually operated at high reflux ratios in order to meet product quality specifications, which required a re-balancing of the economics of column operation using automation. Buckley [7] and Shinskey [8] discussed several design and operations steps to minimize energy consumption in distillation, based on improved control techniques and optimizing operating conditions. Adding equipment for heat recovery purposes entailed recovering or reusing heat contained in the column product streams, using multiple-effect distillation or vapor recompression. One impediment at that time was the added control requirements for auxiliary condensers or reboilers, which was not recommended by Buckley (1981) because of added investment cost and increased instrumentation and control complexity. Turning auxiliary reboilers and condensers on and off was problematic, so a small load on these heat exchangers was always maintained, which obviously wastes energy. An analysis by Luyben [9] showed that multi-point composition control yields minimum energy consumption in a distillation column. When the compositions of both product streams from a binary distillation column are controlled simultaneously, the quantity of product which might be discarded is also reduced. Beyond distillation columns, energy management strategies were developed for boilers and steam systems in order to reduce costs (Bouilloud [10]; Edgar et al. [11]). During
the 1970s, DuPont reported that energy management systems were optimized for several types of problems [12]. In individual boilers, boiler excess oxygen was optimized using CO measurement, by adjusting excess air in the boiler feed. The steam load was distributed among multiple boilers by using linear programming. If the desired load could be achieved by running only six of seven available refrigeration machines, shutting down one unit sometimes reduced costs. Turning equipment on and off involved mixed-integer programming or penalty functions. However, computer control systems in the 1970s had to be extensively customized in order to implement such optimization strategies.
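As an illustration of the steam-load allocation problem mentioned above, the sketch below distributes a required steam demand among three boilers by linear programming; the cost coefficients, load limits, and demand are invented for illustration and are not DuPont's actual data.

```python
# Minimal steam-load allocation sketch: meet a total steam demand at minimum
# fuel cost subject to each boiler's minimum and maximum load.  All numbers are
# illustrative assumptions, not plant data.
from scipy.optimize import linprog

demand = 250.0                     # total steam demand, klb/h
fuel_cost = [2.1, 2.4, 2.8]        # $ per klb of steam for boilers 1-3 (assumed)
lo = [50.0, 40.0, 0.0]             # minimum stable loads, klb/h
hi = [120.0, 110.0, 100.0]         # maximum capacities, klb/h

# minimize sum(cost_i * load_i)  s.t.  sum(load_i) = demand,  lo <= load <= hi
res = linprog(c=fuel_cost,
              A_eq=[[1.0, 1.0, 1.0]], b_eq=[demand],
              bounds=list(zip(lo, hi)),
              method="highs")

print("boiler loads (klb/h):", res.x)
print("hourly fuel cost ($):", res.fun)
```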
3.2 Computer Hardware Developments

Development of the microcomputer and the cumulative effects of Moore's law started a revolution in the application of computing in the process industries. The reductions in hardware cost of process control computers (due to increases in computing speed by a factor of 100 each decade) were a significant impetus for adopting computer control and advanced control techniques. The emergence of standard software packages and architectures also facilitated applications in process control. Computer control algorithms based on optimization were not feasible for real-time computing in the 1970s, but during the 1980s such methods became usable because of increased processing speed. Early digital installations used for process control were not failure-proof and required a totally redundant system in case of component failure. In most cases, the backup system was the analog (pneumatic) system used before the introduction of computer control, which involved extra costs. The distributed computer process control system (DCS) was pioneered during the 1970s by Honeywell. The DCS is still dominant in the process industries (although new architectures have emerged in the 1990s) and employs a hierarchy of computers, with a single microcomputer controlling 8 to 16 individual control loops. Detailed calculations such as supervisory control are performed using workstations that receive information from the lower-level devices. Set points are sent from the higher level to the lower level. The hierarchical design was compatible with different supervisory and regulatory functions and the need for database accessibility. Software could be located where calculations need to be made. Because the system was modularly designed, failure at any one point in the network did not shut down the entire system [13].

4. NEW PLANT OPERATING OBJECTIVES

During the 1980s the effect of global competition on the chemical process industry caused a heightened awareness of the importance of product quality in affecting profitability. Process control began to be employed to ensure satisfactory product quality, in many cases using statistical process control rather than feedback control. Because past hazardous waste disposal practices created a number of pressing environmental problems, the trend of increasingly more stringent environmental regulations also began in this period. Chemical companies changed design and operating strategies (via process control) to minimize waste production because of the prohibition against discharge and/or disposal of toxic substances. New plants moved toward a "zero-
55 discharge" concept, and protecting the safety of operating personnel took on heightened emphasis after the Three-Mile Island, Chernobyl, and Bhopal incidents. Quality control (and its inherent variability) has an overriding effect on profitability, and quality can be controlled on a dynamic basis. Lower variability means that the process can be operated closer to constraints in order to maximize throughput []4]. Making the specified product also maximizes equipment utilization because equipment does not have to be shut down for blending or rework. With lower product variability, cycle time (the elapsed time between receipt of raw materials and shipping finished products) can be minimized. The effectiveness of the control system in ensuring consistent product quality depends on speed of response of the controllers and the effectiveness of the control strategy and the measurement system (e.g., all product quality variables are measured online). A 1988 study by E.I. DuPont de Nemours estimated that increased profits of $200 to $500 million dollars/year could be realized in their facilities through implementation of advanced control and optimized operating conditions. Downs and Doss [15] discussed the relationship between process variability and control system design using a reactor feed preheater. From a process control point of view, the control system relocates variability in the feed exit temperature to variability in the hot stream flow rate. As the control algorithm and/or the controller tuning change, different amounts of the variation are transferred. Depending on the process it may be more desirable to have all the variation in the flow rate of the hot stream (tight control) or all the variation in the feed exit temperature (controller in manual or no control at all). The control algorithm adjusts the amount of inlet temperature variation to be shared between the outlet temperature and the hot stream flow rate. However, understanding the objectives of the process are critical to the design and tuning of the controller. The use of advanced control techniques such as adaptive control or a nonlinear control may be warranted for tight control of the reactor feed temperature, but this requires a clear statement of the process objective. Decreasing the variability of the process became a principal way of quantifying the potential benefits of improved process control. Figure 1 shows a typical pattern for product quality variabilit~ TM. The limit represents a hard constraint, in other words, one that should not be violated, such as a product specification or an unacceptable impurity level. If the product quality can be maintained more consistently near the upper limit, the resulting cost savings are roughly proportional to the size of A 2 - A] in Fig. 1.
Figure 1. Product variability over time: (a) before improved control; (b) after. (Each panel plots an operating variable against time, showing its average, A1 or A2, relative to the limit.)
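To make the A2 − A1 idea concrete, the sketch below estimates how far the average operating point can be moved toward the limit when the standard deviation of the controlled variable is reduced, assuming (purely for illustration) normally distributed variations and a fixed allowable probability of violating the limit; none of these numbers come from the paper.

```python
# Illustrative back-off calculation: with normally distributed variations and a
# fixed allowable violation probability, the average can sit z*sigma below the
# limit, so reducing sigma allows the average to move up by z*(sigma1 - sigma2).
# All numbers are assumptions for illustration only.
from scipy.stats import norm

limit = 100.0          # hard upper limit on the quality variable
p_violate = 0.001      # allowable probability of exceeding the limit
sigma1 = 2.0           # standard deviation before improved control
sigma2 = 0.8           # standard deviation after improved control

z = norm.ppf(1.0 - p_violate)          # back-off in standard deviations (~3.09)
A1 = limit - z * sigma1                # achievable average before
A2 = limit - z * sigma2                # achievable average after
print(f"A1 = {A1:.2f}, A2 = {A2:.2f}, shift A2 - A1 = {A2 - A1:.2f}")
```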
A corresponding frequency distribution or histogram for variations in product quality is another standard interpretation [2], [13]. While the use of computer control can sometimes be justified solely by safety and environmental concerns, the decreased variability of the process is a principal way of quantifying the potential benefits of improved process control [16]. The rebirth of the process control field in the 1980s, both in theory and practice, was highlighted by the emergence of a new generation of model-based control theory that was tailored to the successful operation of modern plants. The most noteworthy algorithm was model predictive control (MPC), where a mathematical model is explicit in developing a control strategy that handles multivariable interactions and manipulated and controlled variable constraints. MPC is based on selecting the current and future control actions using a dynamic model to predict the process dynamic behavior. In MPC, control actions are obtained from on-line optimization (usually by solving a quadratic program, or QP), which handles the process variable constraints. MPC also unifies treatment of load and set-point changes via the use of disturbance models and the Kalman filter. MPC was essentially a straightforward extension of the LQG problem discussed earlier, but was more effective due to the greatly improved computational ability, especially for input constraints.

5. AN ENTERPRISE VIEW OF PROCESS CONTROL

The latest era of process control, which began in the 1990s, places equal emphasis on process modeling, control, and optimization. The capability of using more sophisticated mathematical models in automation and control has grown during the past twenty years. Given the current state of the art in control and optimization theory, the major uncertainty in controller design is selection of the model and its level of detail and complexity. Once the model is actually chosen and verified, there are usually several methods available to compute a control or optimization strategy. Edgar et al. [11] have shown there are five levels of control activities in a manufacturing process where various optimization, control, monitoring, and data acquisition activities are employed. Data from the plant (flows, levels, temperatures, pressures, compositions) as well as so-called enterprise data, consisting of commercial and financial information, are used to make decisions in a timely fashion. The highest level (level 5) deals with planning and scheduling, sets production goals to meet supply and logistics constraints, and addresses time-varying capacity and manpower utilization decisions; this is called enterprise resource planning (ERP). In a refinery the planning and scheduling model can be optimized to obtain target levels and prices for inter-refinery transfers, crude and product allocations to each refinery, production targets, inventory targets, optimal operating conditions, stream allocations, and blends for each refinery (Shobrys and White [17]). In level 4, RTO is utilized to coordinate the network of process units and provide cost-effective set points for each unit, as discussed in previous sections of this paper. For multivariable control or processes with active constraints, set-point changes are performed in level 3 (e.g., model predictive control), while for single-loop or multi-loop control the regulatory control level is also carried out in level 3. Level 2 (safety, environment, and equipment
protection) includes activities such as alarm management and emergency shutdowns. Level 1 (process measurement and actuation) provides data acquisition and on-line analysis and actuation functions, including some sensor validation. The time scale for decision-making at the highest level (planning and scheduling) may be of the order of months, while at lower levels (e.g., process control), decisions affecting the process can be made frequently, e.g., in fractions of a second. Even with the multi-level view of process control, RTO is still a significant economic driver; operating profits can exceed $200,000 per day (Bailey et al. [18]). The scale at which industrial RTO can be implemented is impressive, solving problems with over 100,000 variables and equality/inequality constraints. Other recent examples of integrating operations and business life cycles through optimizing asset utilization have been discussed by Howell et al. [19]. Marlin and Hrymak [20] reviewed a number of industrial applications of RTO, mostly in the petrochemical area. They reported that typically there are more manipulated variables than controlled variables, so some degrees of freedom exist to carry out economic optimization as well as establish priorities in adjusting manipulated variables while simultaneously carrying out feedback control. Skogestad [21] and Perkins [22] have discussed the interplay of constraints, the selection of the optimal operating conditions and the preferred control structure. Skogestad identified three different cases for RTO: (a) constrained optimum, (b) unconstrained flat optimum, and (c) unconstrained sharp optimum. Cases (a) and (b) are preferred. In both cases (b) and (c), if the process is operated at the optimum, a change in the sign of the process gain occurs around the optimum, which makes linear feedback control more difficult. Significant potential benefits can be realized by using a combination of MPC and RTO. At the present time, most commercial MPC packages integrate the two methodologies in a configuration such as the one shown in Fig. 2. The MPC calculations are embedded in the prediction and controller blocks and are carried out quite often (e.g., every 1-10 min). The prediction block predicts the future trajectory of all controlled variables, and the controller achieves the desired response while keeping the process within limits.
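The control-calculation block of Fig. 2 typically solves a small quadratic program at each execution. The following is a minimal sketch of such a constrained MPC move calculation for a linear discrete-time model; the model, horizon, weights, limits, and the use of the CVXPY modeling package are illustrative assumptions, not the formulation of any particular commercial product.

```python
# Minimal MPC sketch: at each execution, solve a QP that trades off set-point
# tracking against input effort over a prediction horizon, subject to input
# limits.  The model, weights, and limits are illustrative assumptions.
import numpy as np
import cvxpy as cp

# Discrete-time linear model x[k+1] = A x[k] + B u[k], y = C x (assumed)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.2]])
C = np.array([[1.0, 0.0]])

N = 20                      # prediction horizon
x0 = np.array([0.0, 0.0])   # current state estimate
y_sp = 1.0                  # set point (target from the steady-state calculation)
u_max = 2.0                 # input limit
wy, wu = 1.0, 0.05          # tracking vs. input-effort weights

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))

cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    cost += wy * cp.sum_squares(C @ x[:, k + 1] - y_sp)   # tracking error
    cost += wu * cp.sum_squares(u[:, k])                   # input effort
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= u_max]

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("first control move to implement:", u.value[:, 0])   # receding horizon
```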
Figure 2. Block diagram for Model Predictive Control (set-point/target calculations, prediction, and control calculations acting on the process; the process and a process model run in parallel, and the residuals between their outputs are fed back to the prediction block).

The targets for the MPC calculations are generated by solving a steady-state optimization problem (LP or NLP) based on a linear process model, which also finds the best path to
achieve the new targets (Backx [23]). These calculations may be performed as often as the MPC calculations. Linear model predictive control based on a quadratic performance index has been successfully applied to many continuous plants, which has encouraged the consideration of control strategies based on nonlinear fundamental models. Backx [23] has stated that the performance and robustness of the control systems are directly related to the quality and accuracy of the prediction models in the control scheme. It is important that the models describe all relevant process dynamics and cover the full operating range, consisting of operating points as well as transition states or trajectories, which may not be possible with linear models. To generalize further, one can define objective functions that include profits earned along a trajectory plus a capital inventory term. For continuous processes this permits computing an optimal transition between operating conditions or the optimal path to recover from disturbances to the normal operating point. The control strategies based on an explicit economic objective function can change depending on different prices for product quality and on market conditions.

6. BATCH PROCESSING

Batch process control has historically received much less attention than continuous process control from the process control community. Because the volume of product is normally small, large product demands are achieved by repeating the process on a predetermined schedule. It is usually not economically feasible to dedicate processing equipment to the manufacture of a single product due to the small product volumes. Instead, batch processing units are organized so that a range of products (from a few to possibly hundreds) can be manufactured with a given set or subset of process equipment. The key challenge for batch plants is to consistently manufacture each product in accordance with its specifications while maximizing the utilization of available equipment. Benefits include reduced inventories and shortened response times to make a specialty product (vs. larger continuous processing plants). Typically it is not possible to use blending operations in order to obtain the desired product quality, so product quality specifications must be satisfied by each batch. Maximization of yield is a secondary objective to obtaining the specified product quality [24]. Batch processing is widely used to manufacture specialty chemicals, metals, electronic materials, ceramics, polymers, food and agricultural materials, biochemicals and pharmaceuticals, multiphase materials/blends, coatings, and composites - an extremely broad range of processes and products. In the U.S.A. there are more batch chemical plants than plants using continuous processing, but this is not apparent from reviewing the process control literature. Batch control systems operate at various levels:
(1) Control during the batch: This includes defining an optimal trajectory for the batch plus feedback control of flow rate, temperature, pressure, composition, and level, also called "within-the-batch" control (Bonvin [25]).
(2) Run-to-run control: Also called batch-to-batch control, this is a supervisory function based on off-line product quality measurements at the end of a run. Operating conditions and profiles for the batch are optimized between runs to improve the product quality.
(3) Batch production management: Scheduling of process units is based on availability of raw materials and equipment and customer demand.

Bonvin [25] and Juba and Hamer [26] have discussed the operational challenges for optimal control during a batch, notably nonlinear behavior and constrained operation. There are several advantages of a batch process over an equivalent continuous process in meeting product quality requirements:
1. The batch duration can be adjusted in order to meet quality specifications.
2. Because a batch process is repetitive in nature, it offers the possibility of making improvements on a run-to-run basis.
3. Batch processes tend to be fairly slow so that operating conditions can be optimized in real time.
In a batch process, frequently it is possible to use optimal control to determine the most economically advantageous trajectory for the process variables during the batch. For a multi-step reaction such as A + B → C and 2A → D in a fed-batch reactor, the feed-rate profile that maximizes the concentration of C after a specified batch time consists of a singular/bang-bang control, i.e., maximum flow rate followed by a period of unconstrained flow (singular arc) and then a time period at minimum flow. Usually it is possible to relate the conversion to C directly to profitability concerns. Another example of direct formulation of an economic objective function for a batch process is the minimum-time addition of the reactant such that the undesirable product D is minimized at the final time. By minimizing the batch time, asset (reactor) utilization can be maximized. Bonvin et al. [27] distinguish between model-based optimization (MBO) used in batch processing and model predictive control (MPC) used in continuous processing. The goal in MPC is to choose inputs to track a reference signal, and it has an objective function (typically a quadratic error function) that reflects the quality of control. In contrast, MBO defines an actual cost function to be optimized. MPC applications almost always deal with continuous processes. One practical matter with batch processing is that there is no steady-state operating point but rather a trajectory, which makes developing a linearized model problematic. In MBO the key issues are feasibility and optimal feedback control. MBO typically has solutions that lie on the constraints. In contrast, MPC is typically designed by introducing a compromise between tracking performance and input effort.
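Returning to the fed-batch example above, a direct way to compute such a feed-rate profile numerically (rather than via the minimum principle) is to parameterize it as piecewise-constant segments and optimize the segment values. The sketch below does this for A + B → C and 2A → D, assuming elementary kinetics, a feed containing A, and illustrative rate constants, bounds, and batch time; none of these numbers or modeling choices come from this paper.

```python
# Minimal fed-batch feed-profile optimization sketch.  Moles of A, B, C, D and
# volume V are integrated with a piecewise-constant feed rate u, and the
# segment values are optimized to maximize the final concentration of C.
# All parameters are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

k1, k2 = 0.5, 0.2          # assumed rate constants
cA_feed = 2.0              # mol/L of A in the feed (assumed)
t_f, n_seg = 4.0, 8        # batch time (h) and number of feed-rate segments
u_max = 1.0                # maximum feed rate, L/h

def rhs(t, y, u_profile):
    nA, nB, nC, nD, V = y
    seg = min(int(t / (t_f / n_seg)), n_seg - 1)
    u = u_profile[seg]                 # feed rate in this segment
    cA, cB = nA / V, nB / V
    r1 = k1 * cA * cB * V              # A + B -> C
    r2 = k2 * cA ** 2 * V              # 2A -> D (consumes 2 A per D formed)
    return [u * cA_feed - r1 - 2 * r2, -r1, r1, r2, u]

def neg_final_cC(u_profile):
    y0 = [0.0, 1.0, 0.0, 0.0, 1.0]     # initial charge: 1 mol B in 1 L, no A
    sol = solve_ivp(rhs, (0.0, t_f), y0, args=(u_profile,), rtol=1e-8, atol=1e-10)
    nC, V = sol.y[2, -1], sol.y[4, -1]
    return -nC / V                     # maximize final concentration of C

res = minimize(neg_final_cC, 0.5 * np.ones(n_seg), method="SLSQP",
               bounds=[(0.0, u_max)] * n_seg)
print("piecewise-constant feed profile (L/h):", np.round(res.x, 3))
print("final concentration of C (mol/L):", -res.fun)
```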
6.1. Run-To-Run Control

Recipe modifications from one run to the next are common in specialty chemicals manufacture. Typical examples are modifying the reaction time, feed stoichiometry, or reactor temperature. In run-to-run control, modifications are made at the beginning of a run (rather than during a run). Run-to-run control is frequently motivated by the lack of on-line measurements of the product quality during a batch run. In batch chemical production, on-line measurements are often not available during the run, but the product can be analyzed by laboratory samples at the end of the run [25]. The task of the run-to-run
controller is to adjust the recipe after each run to reduce deviations of the output product from the stated specifications. In semiconductor manufacturing, the goal is to control qualities such as film thickness or electrical properties, which are difficult, if not impossible, to measure in real time in the process environment. Run-to-run control is particularly useful to compensate for drifting processes where controlled variable fluctuations are correlated in time. For example, in a chemical vapor deposition process, the reactor walls may become fouled due to byproduct deposition. This slow drift in the reactor chamber condition requires occasional changes to the batch recipe in order to ensure that the controlled variables remain on-target. Eventually, the reactor chamber must be cleaned to remove the wall deposits, effectively causing a step disturbance to the process outputs when the inputs are held constant (Edgar et al. [28]; Moyne et al. [29]).
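One widely used adjustment scheme in the run-to-run literature (not described in this paper, but it illustrates the idea) is the exponentially weighted moving average (EWMA) controller. The sketch below simulates such a controller compensating a slowly drifting process; the process gain, drift rate, noise level, and target are invented for illustration.

```python
# Minimal EWMA run-to-run control sketch: after each batch, update the estimated
# process offset from the end-of-run measurement and adjust the recipe input
# for the next batch.  Gain, drift, noise, and target are assumptions.
import numpy as np

rng = np.random.default_rng(0)
target = 50.0          # desired quality (e.g., film thickness, nm)
gain = 1.2             # assumed process gain from recipe input to output
drift = 0.3            # per-run drift in the process offset (e.g., wall fouling)
lam = 0.4              # EWMA filter weight (0 < lam <= 1)

offset_est = 0.0       # controller's estimate of the process offset
true_offset = 5.0
u = (target - offset_est) / gain       # recipe input for the first run

for run in range(25):
    y = gain * u + true_offset + rng.normal(0.0, 0.5)           # measurement
    offset_est = lam * (y - gain * u) + (1 - lam) * offset_est  # EWMA update
    u = (target - offset_est) / gain                            # next recipe
    true_offset += drift                                        # slow drift
    print(f"run {run:2d}: y = {y:6.2f}, next input u = {u:6.2f}")
```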
6.2 Batch Production Management

A production run typically consists of a sequence of a specified number of batches using the same raw materials and making the same product to satisfy customer demand; the accumulated batches are called a lot. When a production run is scheduled, the necessary equipment items are assigned and the necessary raw materials are allocated to the production run. As the individual batches proceed, the consumption of raw materials must be monitored for consistency with the original allocation of raw materials to the production run, because parallel trains of equipment may be involved. Various optimization techniques can be employed to solve the scheduling problem, ranging from linear programming to mixed-integer nonlinear programming (Pekny and Reklaitis [30]).

7. CONCLUSIONS AND FUTURE CHALLENGES

The business drivers for process control have evolved over the past 50 years from a single one (meeting product specifications) to a multi-objective set that requires practical trade-offs in order to maximize plant profitability. Using a multilevel view of process control, greater efficiencies can be achieved by having all levels work together in harmony, rather than as a set of decoupled control functions. This is especially true at higher levels such as planning and scheduling and real-time optimization, where large potential improvements in profits are possible. Increased usage of batch processing will permit process industries to emphasize rapid delivery of smaller quantities of differentiated products, which will allow plants to be downsized and located closer to customers. Such plants will also be more flexible (or agile) in operation and will more easily satisfy increasingly stringent safety, health, and environmental regulations, but they will require a more sophisticated level of process control. The increased usage of advanced batch process control may make direct optimization of an economic criterion more common compared to its use in continuous processing.

REFERENCES
[1] B.L. Ramaker, H.K. Lau, and E. Hernandez, AIChE Symp. Ser., 316 (1997) 1.
[2] J.P. Shunta, Achieving World Class Manufacturing Through Process Control, Prentice-Hall, Englewood Cliffs, NJ, 1995.
[3] C.H. Barkelew, AIChE Symp. Ser., 159 (1976) 13.
[4] M.D. Harter, AIChE Symp. Ser., 159 (1976) 146.
[5] P. Chintapilli and J.M. Douglas, IEC Fund., 14 (1975) 1.
[6] G.E. O'Connor and M.M. Denn, Chem. Engr. Sci., 27 (1972) 121.
[7] P.S. Buckley, Chemical Process Control 2, 347, T.F. Edgar and D.E. Seborg (eds.), Engineering Foundation, New York, 1982.
[8] F.G. Shinskey, Energy Conservation Through Control, Academic Press, New York, 1978.
[9] W. Luyben, IEC Fund., 14 (1975) 321.
[10] P. Bouilloud, Hydrocarb. Proc. (August, 1969) 127.
[11] T.F. Edgar, D.M. Himmelblau, and L.S. Lasdon, Optimization of Chemical Processes, 2nd ed., McGraw-Hill, New York, 2001.
[12] Simpkins, Chemical Process Control 2, 433, T.F. Edgar and D.E. Seborg (eds.), Engineering Foundation, New York, 1982.
[13] D.E. Seborg, T.F. Edgar, and D.A. Mellichamp, Process Dynamics and Control, Wiley, New York, 1989.
[14] P.L. Latour, ISA Trans., 25(4) (1986) 13.
[15] J.J. Downs and J.E. Doss, Chemical Process Control IV, 53, Y. Arkun and W.H. Ray (eds.), CACHE/AIChE, Austin, TX, 1991.
[16] T. Marlin, Process Control, 2nd ed., McGraw-Hill, New York, 1999.
[17] D.E. Shobrys and D.C. White, Comput. Chem. Engng., 26 (2002) 149.
[18] J.K. Bailey, A.N. Hrymak, S.S. Treiber, and R.B. Hawkins, Comput. Chem. Engng., 17 (1993) 123.
[19] A. Howell, K. Hanson, V. Dhole, and W. Sim, Chem. Engr. Prog. (September, 2002) 54.
[20] T.E. Marlin and A.N. Hrymak, AIChE Symp. Ser., 316 (1997) 156.
[21] S. Skogestad, Comput. Chem. Engng., 24 (2000) 569.
[22] J.D. Perkins, AIChE Symp. Ser., 320 (1998) 15.
[23] T.C. Backx, AIChE Symp. Ser., 326 (2002) 43.
[24] P. Terwiesch, M. Agarwal, and D.W.T. Rippin, J. Proc. Cont., 4 (1994) 238.
[25] D. Bonvin, J. Process Cont., 8 (1998) 355.
[26] M.R. Juba and J.W. Hamer, Chemical Process Control - CPC III, M. Morari and T.J. McAvoy (eds.), 139, CACHE-Elsevier, Amsterdam, 1986.
[27] D. Bonvin, B. Srinivasan, and D. Ruppen, AIChE Symp. Ser., 326 (2002) 255.
[28] T.F. Edgar, S.W. Butler, W.J. Campbell, C. Pfeiffer, C. Bode, S.B. Hwang, K.S. Balakrishnan, and J. Hahn, Automatica, 36 (2000).
[29] J. Moyne, E. del Castillo, and A.M. Hurwitz (eds.), Run to Run Control in Semiconductor Manufacturing, CRC Press, Boca Raton, FL, 2001.
[30] J.F. Pekny and G.V. Reklaitis, AIChE Symp. Ser., 320 (1998) 91.
Process Systems Engineering 2003, B. Chen and A.W. Westerberg (editors), © 2003 Published by Elsevier Science B.V.
MOPSD" A Framework Linking Business Decision-Making to Product and Process Design Ka M. Ng Department of Chemical Engineering, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong SAR Multiscale objective-oriented process synthesis and development, MOPSD, relates business decision-making to the design and development of products and processes. Business decisions are made in a hierarchical manner, from corporate goals, marketing decisions, product design, to plant design and development. To implement such a framework, the RATIO concept is introduced. The objective, information, tools, time needed, activities, and human and monetary resources for completing each step of the business project are identified. Keywords: Business Process, Decision-Making, Product Design, Process Design, Process Development 1. INTRODUCTION The chemical processing industry (CPI) is the largest global industrial sector with a total shipment of US$1.59 trillion in 1999 [1]. This is higher than the 2001 China GNP of US$1.16 trillion [2]. The CPI, similar to other industries, has been striving to innovate in response to new technological developments and changes in the world economy. During the 70s, improvement of equipment and process performance was the focus of much research and development, building on a better understanding of transport phenomena, and improved simulation and optimization techniques. In the 80s, the CPI made a significant amount of effort using the pinch technology to minimize energy consumption and advanced control to maximize productivity. These efforts have led to notable results. For example, between 1982 and 2001, the operating cost for downstream petroleum processing in the US has declined from US$10 per barrel to approximately US$4 per barrel in constant year 2000 dollars [3]. However, due to competition, the gross margin has also decreased by the same magnitude, resulting in no gain in net margin. It became clear in the 90s that one should look at the entire supply chain for additional savings. To meet this need, companies such as Aspentech [4], i2 [5], SAS [6] and PricewaterhouseCoopers [7] offer a wide range of tools for enterprise resource planning, demand, production and distribution planning, etc. In the past several years, much attention has turned to the design and manufacturing of differentiated products [8-12]. In hindsight, this is hardly surprising in view of the profit margin in different industrial sectors of the CPI. Most chemical companies have their profit margin hovering around 8%, whereas it is 12% and 20% for specialty chemical and pharmaceutical companies, respectively. This of course does not
64 imply that drug firms which tend to have a higher price-earning ratio are a better investment. The rationale for the numerous reorganizations, spin-offs, mergers and acquisitions of the CPI in the past decade was varied. Some such as ICI attempted to shift from commodity chemicals to specialty chemicals, thus placing more weight on productcentered processing rather than process-centered processing (Figure 1). Some mergers such as those between BP and Amoco, and Exxon and Mobil enhanced economy of scale. Spin-offs such as DuPont and Conoco, Kodak and Eastman Chemical, and Monsanto and Solutia resulted in an improved corporate focus. All of these M&A activities, particular the mega ones such as that of Pfizer, Pharmacia and Searle have significantly changed the landscape of the global CPI.
Corporate Strategy
Process-Centered Processing
Product-Centered Processing
Business Process Model
Figure l. The corporate strategy decides on the mix of high-volume or high-value-added products. This in turn affects the business process as well as corporate R&D. All of these changes, either technical or financial, are the results of deliberate decision making. Indeed, thousands of decisions are made every day in a corporation. Corporate-wide strategic decisions can have a life span of tens of years and affect stakeholders around the globe. Decisions made in business units, such as a pigment division or a monomer division, tend to have a shorter duration in time. For example, they tend to focus on seasonal, monthly, weekly, and daily demand and production planning. To meet these production targets, engineers and technicians have to make decisions on equipment operations. Business decision-making is not limited to management and manufacturing. The R&D effort has to be aligned with the corporate-wide strategies, business unit directions, plant operations and product requirements. Decisions in R&D also span a wide range of length and time scales. The researcher may have to consider the entire process, each equipment unit, the transport phenomena within each unit, and the molecular nature of the product [ 13-16]. Indeed, this multiple length and time scale approach is expected to play a key role in process systems engineering [ 17]. This article proposes a framework for viewing a chemical enterprise from a multiple length and time scale perspective. Similar to Douglas' procedure for conceptual design [ 18], this framework is hierarchical in nature, with decision-making divided into a number of levels. By making decisions more or less in the order of decreasing length and
time scales, iterations among the various levels are minimized. Thus, corporate goals guide marketing strategies, and customer desires determine product attributes, which in turn dictate materials selection and the process flowsheet. The objective, information, tools, time, activities, and resources in terms of personnel and money involved at each scale are also identified [19].
2. MULTISCALE OBJECTIVE-ORIENTED PROCESS DESIGN AND DEVELOPMENT
2.1. Length and Time Scales in CPI
Let us begin with a review of the length and time scales considered in this framework (Figure 2) [16]. The length scale spans from the size of a molecule to that of an enterprise. Here, 10^8 m is roughly the circumference of the earth, suggesting a global company, whereas 10^9 s is roughly 32 years, signifying a long-term corporate strategy. Following the enterprise, we have production plants, equipment inside the plant, transport phenomena within the equipment, and the molecules involved in the reactions.
[Figure 2 schematic not reproduced. Recoverable elements: overlapping bands for Molecular/Electronic, Reaction Chemistry, Particle Nucleation and Growth, Fluid Dynamics and Transport, Equipment, Plant, and Enterprise on length and time axes extending to about 10^8 m and 10^9 s.]
Figure 2. The length and time scales covered in MOPSD. Note that the different scales overlap to various extents. The more the overlap, the more the interactions among them. For example, there is an overlap between enterprise and plant. Corporate strategy helps determine the products to be manufactured for the market, and plant design determines the appropriate manufacturing process. There is considerable overlap between equipment, transport, reaction, and particle formation, indicating the significant interplay of these factors in determining the overall performance of a piece of equipment.
2.2. Relating Shareholder Value Added to the Objectives in the MOPSD Framework
An international chemical company may have thousands of employees working for the company. The employees at each level may have a different objective. For example, the business VP has to balance the demand and production of a particular product, whereas a plant manager focuses primarily on ensuring smooth plant operations and product qualities, and improving uptime. Despite the diversity of job functions, shareholder value added (SVA) is perhaps the singular financial metric that should be shared by all employees:

SVA = After Tax Operating Income - Cost of Capital x Net Investment    (1)

It captures the common goal of a corporation - creation of wealth for the shareholders. (The State-Owned Enterprises in China, which have to meet certain social responsibilities, are an exception.) It represents the gain above the amount that the shareholders' investment could earn in the financial market. To relate SVA to plant design and operations, we can express the ratio of after tax operating income to net investment in terms of return on net assets (RONA):

RONA = (Sales Volume x Selling Price - Costs) / (Net permanent investment + Working capital)    (2)
This can be seen more explicitly in a corporate cash flow diagram (Figure 3). The after tax operating income is derived from the sales. Construction, financed with equity, borrowing, and operating cash flow, results in permanent investment.
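To make Equations (1) and (2) concrete, the short sketch below evaluates SVA and RONA for a hypothetical business unit; every number (price, volume, tax rate, cost of capital) is an invented illustration rather than data from this paper.

```python
# Illustrative evaluation of Eqs. (1) and (2); all figures are hypothetical.

def after_tax_operating_income(sales_volume, selling_price, costs, tax_rate):
    """After tax operating income implied by sales, costs and an assumed tax rate."""
    earnings_before_tax = sales_volume * selling_price - costs
    return earnings_before_tax * (1.0 - tax_rate)

def rona(sales_volume, selling_price, costs, net_permanent_investment, working_capital):
    """Return on net assets, Eq. (2)."""
    return (sales_volume * selling_price - costs) / (net_permanent_investment + working_capital)

def sva(atoi, cost_of_capital, net_investment):
    """Shareholder value added, Eq. (1)."""
    return atoi - cost_of_capital * net_investment

# Hypothetical business unit: 100 kt/yr of product sold at $1.2/kg (monetary values in MM$/yr)
volume, price, costs = 100.0, 1.2, 90.0
net_permanent, working = 150.0, 30.0
atoi = after_tax_operating_income(volume, price, costs, tax_rate=0.30)

print(f"RONA = {rona(volume, price, costs, net_permanent, working):.1%}")    # 16.7%
print(f"SVA  = {sva(atoi, 0.10, net_permanent + working):.1f} MM$/yr")       # 3.0 MM$/yr
```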
[Figure 3 diagram not reproduced. Recoverable elements: sales, earnings before income tax (EBIT), investment tax credit, after tax operating income (ATOI), dividends, operating cash flow (OCF), cash, and construction.]
Figure 3. Corporate cash flow diagram showing that cash is generated with sales, shareholder investments and borrowings, and that it is used for wages, taxes, dividends and reinvestment in the form of manufacturing plants.
Equation (2) clearly shows that we can improve SVA by raising the price or increasing the sales volume, or alternatively by cost reduction. High-value-added products are more likely to have better selling prices. Product design methods such as Quality Function Deployment can be used to capture the customer and technical information. Commodity chemical companies tend to have capacity expansion and cost reduction as their business strategy. In addition, a company can improve the uptime to reduce the necessary permanent investment and minimize inventory to reduce working capital. For the latter, supply chain management plays an important role to achieve the desirable outcome by optimizing the entire cycle of buy, make, move, store and sell. Other financial metrics such as after tax profit margin, sales growth, controlled fixed cost productivity (CFC), etc. can also be used to relate SVA to the various technical objectives of the MOPSD framework. Consider controlled fixed cost productivity, which is defined as follows:

CFC = Sales / Controlled fixed costs    (3)
Here, the controlled fixed costs include payroll and benefits. CFC serves as an important measure of the performance of a batch plant because labor costs constitute a much larger percentage of the total cost than in a continuous process. Thus, it provides a possible optimization objective in the design and scheduling of batch processes [20]. In general, the technical objectives in product and process design should be set with the cash flow diagram, SVA or other financial metrics in mind, if possible.
2.3. Individual Levels of MOPSD
We need the participation of all organizational levels in the company to carry out the corporate strategy. The organizational levels are equivalent to the levels (or length scales) in MOPSD. The number of levels in MOPSD should be chosen according to the culture and capability of the company, business unit, plant site, research division, laboratory, etc., and thus is company specific. Nonetheless, let us illustrate this concept with a greatly simplified example (Table 1).
Table 1. Various objectives and length scales (i.e., organizational levels) in a typical chemical company are presented in column 1 and 2, respectively. The personnel involved at each level are also shown. The sub-columns of column 2 show the role of the personnel in meeting the various objectives.

Organizational levels and personnel: Corporation (CEO, CTO, CFO, Board members); Business Unit (Business VPs, Marketing managers); Manufacturing Site (Plant Managers, Operating personnel); R&D Laboratories (R&D Director, Chemists and Engineers).

Corporate Goals: Corporation - set corporate goals and allocate resources; Business Unit - identify business opportunities to meet corporate goals; Manufacturing Site - identify inter-business operational improvements to meet corporate goals; R&D Laboratories - identify new products and processes across business units to meet corporate goals.

Business Unit Strategy: Corporation - listen and review; Business Unit - set business and marketing plans; Manufacturing Site - identify inter-production-site operational improvements to meet business goals; R&D Laboratories - identify new products and processes to meet business unit goals.

Production Processes and Plans: Corporation - listen and review; Business Unit - listen and review; Manufacturing Site - reduce downtime, safety, quality assurance, etc.; R&D Laboratories - develop new methods and tools for manufacturing.

New Products and Processes: Corporation - listen and review; Business Unit - listen and review; Manufacturing Site - listen and review; R&D Laboratories - allocate resources to meet long and short term R&D objectives.
The broad objectives in Table 1 have to be broken down into sub-objectives for project planning. For example, the design of a new process by the R&D laboratory requires conceptual design, determination of basic data, process simulation and optimization, control system design, etc.
2.4. RATIO for the Implementation of MOPSD
RATIO is the acronym for objective, information, tools, time, activities, and resources (Table 2). It describes the key components in the execution of each sub-objective in MOPSD. The broad objective as well as the sub-objectives has to be defined first. For business decisions, some objectives such as customer satisfaction cannot be measured quantitatively. Often, one has to deal with multi-criteria decision-making and Pareto-optimality.
Table 2. The components in the execution of MOPSD - RATIO [19]
Define objective of the task
Determine the input and output information
Identify appropriate tools
Estimate the time needed to meet the objective
Identify the activities to be performed
Identify human and monetary resources to perform the activities
Next, we identify and obtain the necessary input information. While historical data may be available in company archives, one has to take advantage of the human resources. Experience shows that chemists and engineers involved in similar projects can point out the right directions and potential pitfalls, thus greatly enhancing the chance of success and reducing the time and effort. Appropriate tools should also be identified. This
can be software such as the wide variety of computer programs for process simulation and modeling, and supply chain management. They can also be systematic design methods for process synthesis such as those for distillation [21], crystallization [22-24] and reactions [25-27]. Likewise, these can be experimental setups and procedures. For example, the many high-throughput screening techniques can expedite the identification of the best catalyst for a given reaction. The use of such tools by the people involved constitutes activities, which can be estimation, synthesis, modeling, simulation, experimentation, etc. Finally, we allocate human resources and capital for these activities and tools. As mentioned, an objective can be further broken down into sub-objectives. Figure 4 shows that RATIO is applied to each sub-objective to achieve the overall objective. This represents the essence of this objective-oriented approach in which tasks are purposely performed.
Figure 4. A hierarchy of objectives.
An accurate estimate of the time required to complete a given task or to achieve a certain objective is important in the implementation of MOPSD. Such estimates allow the maximum number of tasks to be performed concurrently and help predict the time needed for the overall project [16].
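As a purely illustrative way of representing RATIO and the hierarchy of objectives in Figure 4, the sketch below stores each task as a record with the six RATIO components and rolls estimated times up a tree of sub-objectives; the task names and durations are invented and are not part of the MOPSD framework itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    """One (sub-)objective described by the RATIO components."""
    objective: str
    information: List[str] = field(default_factory=list)   # input and output information
    tools: List[str] = field(default_factory=list)         # software, design methods, experiments
    time_weeks: float = 0.0                                 # estimated duration of this task alone
    activities: List[str] = field(default_factory=list)    # estimation, synthesis, experimentation, ...
    resources: List[str] = field(default_factory=list)     # people and budget
    subtasks: List["Task"] = field(default_factory=list)

    def duration(self, concurrent: bool = True) -> float:
        """Roll estimated time up the hierarchy; concurrent sub-tasks take the longest branch."""
        if not self.subtasks:
            return self.time_weeks
        child_times = [t.duration(concurrent) for t in self.subtasks]
        return self.time_weeks + (max(child_times) if concurrent else sum(child_times))

# Hypothetical decomposition of the Section 2.3 example: design of a new process by an R&D laboratory
project = Task(
    objective="Design a new process",
    subtasks=[
        Task("Conceptual design", tools=["heuristics", "flowsheet synthesis"], time_weeks=6),
        Task("Determination of basic data", activities=["experimentation"], time_weeks=10),
        Task("Process simulation and optimization", tools=["process simulator"], time_weeks=8),
        Task("Control system design", time_weeks=4),
    ],
)
print(project.duration(concurrent=True))    # 10.0 weeks if the tasks can run in parallel
print(project.duration(concurrent=False))   # 28.0 weeks if they must run sequentially
```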
2.5. MOPSD: An Integration of Business Process Engineering and Process Systems Engineering
Much has been achieved in business process engineering as well as business process reengineering [28]. Kirchmer [29] argued that there should be a market- and product-oriented design of business processes. Smart et al. [30] pointed out the five stages of a business reengineering methodology:
Stage 1: Identify or create corporate, manufacturing and information technology strategies
Stage 2: Identify key processes and process objectives
Stage 3: Analyze existing processes
Stage 4: Redesign processes
Stage 5: Implement
MOPSD follows a similar strategy but has two major differences. First, we follow the natural length and time scales of the entire business and manufacturing process. Therefore, we can more easily identify both business and technical sub-objectives. Second, we use SVA as the overall objective to ensure that the development of new products and manufacturing technologies is in alignment with the corporate directions.
3. AN EXAMPLE - MOPSD AND PRODUCT-CENTERED PROCESSING
Let us consider Figure 5, which shows a systematic procedure for the synthesis and development of chemical-based consumer products [9, 31]. The Head Office has identified a family of products for which our company has a competitive advantage in terms of marketing, technical know-how and IP position. For this reason, we have decided to carry out a product and process development project. At the enterprise level, market trends are used to identify the product forms, the functionalities of the product, and the projected demand. At this stage, existing and potential competitors are identified as well. With an estimated product cost, capital budgeting is performed to determine the internal rate of return. Assuming that the rough estimate satisfies the corporate financial return target, the project moves forward. The quality factors are identified. These are related to technical factors which are met by properly selecting the ingredients and by synthesizing the process alternatives for the transformation of the ingredients into the final product. In Figure 5, the round-cornered rectangles represent the outcomes, i.e., the output information. The vertical arrows indicate the activities. The input information and tools for each activity are given in the rectangles on the right.
4. CONCLUSIONS
A conventional company tends to have a business ladder and a technical ladder for its employees. Often, there is limited interaction between business personnel, and chemists and chemical engineers within the company. This problem is compounded for a global enterprise for which business and technical decisions are made with people in different parts of the world. This gap has to be narrowed to produce the right product, improve product quality, lower production cost and reduce time-to-market. To this end, MOPSD provides a framework linking business decision-making to the synthesis and development of products and processes. In a hierarchical manner, from large scale to progressively smaller scales, company strategy is executed through all the organizational levels within the company. Process design is treated in a similar manner by including more fine details as one proceeds through the hierarchy.
[Figure 5 flow diagram not reproduced. Recoverable elements: market trends; product conceptualization; financial analysis; identification of quality factors; quality factors and performance indices; selection of ingredients and microstructure; ingredients and structural attributes; generation of process alternatives; generic flowsheet of manufacturing process; process alternatives and operating conditions; process and product evaluation; manufacturing process; product prototyping; product packaging; capital budgeting; financial metrics; typical quality factors and performance indices; performance vs. material and structure; high-throughput screening techniques for material selection; functionality and packaging; knowledge base and know-how; equipment units; heuristics; structure vs. operation.]
Figure 5. Step-by-step procedure for product-centered process synthesis and development.
To implement such a framework, a product and process design project is divided into a number of tasks, each with its own objective. These tasks should be executed concurrently if possible in order to minimize development time, but whether this is feasible depends on the required input information and the availability of resources. Thus, it is important to clearly identify the objective, information, tools, time, activities and resources (RATIO) for each task in planning a project. MOPSD attempts to integrate business process engineering and process systems engineering. With a changing global environment, the demarcation between disciplines has become blurred and process systems engineering is bound to expand its scope. Biology is now widely considered to be a foundation science of chemical engineering. Will management be next for PSE?
ACKNOWLEDGMENTS
My thinking on the relationship between business decision-making and process development has been influenced by many of my industrial collaborations. In particular, I have greatly benefited from my interactions with George Stephanopoulos, Haruki Asatani, Hironori Kageyama, Takeshi Matsuoka, Toshiyuki Suzuki, and many others at Mitsubishi Chemical Corporation, and Lionel O'Young and Christianto Wibowo of CWB Technology. I would also like to thank Bruce Vrana for his teachings on corporate finance during my stay at DuPont Central R&D. Finally, the financial support of the Research Grants Council, HKUST6018/02P, is gratefully acknowledged.
REFERENCES
[11 "Facts and Figures from the Chemical Industry," C&EN, June 26 (2000) 48. [2] International Monetary Fund, World Bank [3]
[4] [5] [6] [7] [8] [9] [10] [ 1l] [12] [13] [14] [15] [16] [17] [ 18] [ 19] [20] [21 ]
C. J. Kim, "Supply Chain Management in Process Industry," keynote presentation at PSE Asia, 2002, Taipei. www.aspentech.com www.i2.com www.sas.com www.pwcglobal.com C. Wibowo, and K. M. Ng, "Product-Oriented Process Synthesis and Development: Creams and Pastes," AIChE J., 47 (200 l) 2746. C. Wibowo, and K. M. Ng, "Product-Centered Processing: Chemical-Based Consumer Product Manufacture," AIChE J., 48 (2002) 1212. K. Y. Fung, and K. M. Ng, "Product-Centered Process Synthesis and Development: Pharmaceutical Tablets and Capsules," accepted for publication in AIChE J. (2002). A. W. Westerberg, and E. Subrahmanian, "Product Design," Comp. Chem. Eng., 24 (2000) 959. E. L. Cussler, and J. D. Moggridge, Chemical Product Design, Cambridge University Press, Cambridge, UK (2001). J. Villermaux, "Future Challenges in Chemical Engineering Research," Trans. IChemE 73 (part A) (1995) 105. A. V. Sapre, and J. R. Katzer, "Core of Chemical Reaction Engineering: One Industrial View," Ind. Eng. Chem. Res. 34 (1995) 105. J.J. Lerou, and K. M. Ng, "Chemical Reaction Engineering: A Multiscale Approach to a Multiobjective Task," Chem. Eng. Sci., 51 (1996) 1595. K. M. Ng, "A Multiscale-Multifaceted Approach to Process Synthesis and Development," ESCAPE 1l, Ed. R. Gani and S. B. Jorgensen, Elsevier (200 l) 41. I.E. Grossmann, and A.W. Westerberg, "Research Challenges in Process Systems Engineering," AIChE J. 46 (2000) 1700. J.M. Douglas, Conceptual Design of Chemical Processes, McGraw-Hill, New York (1988). C. Wibowo, L. O'Young, and K. M. Ng, "Workflow Management in Chemical Process Development," paper in preparation. L.T. Biegler, I. E. Grossmann, and A.W. Westerberg, Systematic Methods of Chemical Process Design, Prentice Hall, New Jersey (1997). M.F. Malone, and M. F. Doherty, "Separation System Synthesis for Nonidela Liquid Mixtures," AICHE Symp. Series 91 (1995) 9.
73 [22] C. Wibowo, and K. M. Ng, "Unified Approach for Synthesizing Crystallization-Based Separation Processes," AIChE J., 46 (2000) 1400. [23] K.D. Samant, and K. M. Ng, "Representation of High-Dimensional Solid-Liquid Phase Diagrams for Ionic Systems," AIChE J. 47 (2001) 861. [24] C. Wibowo, K. D. Samant, and K. M. Ng, "High-Dimensional Solid-Liquid Phase Diagrams Involving Compounds and Polymorphs," AIChE J. 48 (2002) 2179. [25] K.D. Samant, and K. M. Ng, "Synthesis of Extractive Reaction Processes," AIChE J. 44 (1998) 1363. [26] K.D. Samant, and K. M. Ng, "Synthesis of Prepolymerization Stage in Polycondensation Processes," AIChE J. 45 (1999) 1808. [27] V. V. Kelkar, and K. M. Ng, "Development of Fluidized Catalytic Reactors- Screening and Scale-up," AIChE J. 48 (2002) 1486. [28] A. W. Scheer, Business Process Engineering, 2nd ed., Springer-Verlag, Berlin (1994) [29] D.J. Elzinga, T. R. Gulledge, and C. Y. Lee, ed., Business Process Engineering: Advancing the State of the Art, Chapter 6, Kluwer Academic Publishers, Norwell, MA (1999). [30] D.J. Elzinga, T. R. Gulledge, and C. Y. Lee, ed., Business Process Engineering: Advancing the State of the Art, Chapter 12, Kluwer Academic Publishers, Norwell, MA (1999). [31] K. M. Ng, "Teaching ChE to Business and Science Students," Chem. Eng. Edu., Summer (2002) 222.
PSE and Business Decision-Making in the Chemical Engineering Curriculum
Warren D. Seider a, J. D. Seader b, and Daniel R. Lewin c
a Department of Chemical and Biomolecular Engineering, University of Pennsylvania, Philadelphia, Pennsylvania 19104-6393
b Department of Chemical and Fuels Engineering, University of Utah, Salt Lake City, Utah 84112-9203
c PSE Research Group, Department of Chemical Engineering, Technion, Israel Institute of Technology, Haifa 32000, Israel
Abstract
This manuscript discusses approaches for providing chemical engineering students a modern experience in product and process design with an objective of exposing them to process systems engineering (PSE) and business decision-making. After typical mechanisms for business decision-making are reviewed, a template is introduced that presents the steps in product and process design. These involve a blend of heuristic and algorithmic methods, with emphasis on the usage of modern computer tools, especially the process simulators. Then emphasis is placed on the use of case studies, design projects, and other aspects of product and process design courses, to teach students the role of these steps and computer tools in supporting the high-level business decisions required in the process industries.
Keywords: product design, process design, simulators, equipment design, cost estimation
1. INTRODUCTION
Most departments of chemical engineering teach courses in process design and process control as vehicles for introducing students to process systems engineering (PSE) techniques. Recently, the growing involvement of chemical engineers in product design has spurred interest in either including this subject in the process design course or adding it as a new course to the curriculum. In this paper, the objective is to consider the role of PSE and business decision-making in the chemical engineering curriculum through a close examination of a modern product and process design course(s). In this introduction, both the scope of business decision-making and the scope of product and process design are reviewed, followed by brief statements concerning the origin of design projects. Then, in the next section, the key steps in product and process design are examined, many of which result from or influence business decisions. Finally, the last section focuses on those steps and computer tools used by students (and practitioners) that support most directly high-level business decisions.
1.1 Business Decision-Making
Companies in the chemical industry, and many other commercial and governmental organizations, have business decision makers, typically within high-level management, who receive inputs from the many sources discussed below. The many inputs are processed and decisions are issued in three principal categories: (1) the concept is approved, with the authors of the proposals authorized to proceed to the next step, usually to prepare a more detailed evaluation, keeping preliminary capital limits in mind; (2) the concept is recycled for further study, given reviews that are the basis for the decision; and (3) the concept is rejected. Note, however, that rejected proposals are often not entirely rejected. In many cases, research and development managers find some combination of time, equipment, and motivated employees able to rework the proposal with a "new look." The inputs, or proposals, often come from business managers, whose teams work with current customers, seeking to learn about customer needs. Inputs also come from application groups, who interact closely with business managers, working to synthesize solutions using existing technologies and promising laboratory results. Often ideas from business managers and application groups are fed to research and development (R&D) groups, who work to invent new products and technologies. Their most promising ideas and concepts are sent usually to a business center in a request for a budget to carry out an engineering study. When approved, process systems engineering work is undertaken to carry out product and process synthesis, preliminary design studies, cost estimates, and profitability analyses. When promising results are obtained, these are the basis for proposals, or inputs, to the business decision makers. Another source of inputs for business decision makers comes from manufacturing sites, which often work to resolve operating problems, leading to ideas for variations on existing products, retrofits, or even new processing methods. Often these ideas are fed to R&D groups, with engineering studies undertaken, as described above. Here, also, the most promising results provide inputs to business decision makers. Finally, it is important to note that most business areas have a group of financial analysts who carry out detailed economic analyses, including sensitivity and uncertainty analyses, to accompany inputs to the business decision makers.
1.2 Product and Process Design
The design of chemical products begins with the identification and creation of potential opportunities to satisfy societal needs and to generate profit. Thousands of chemical products are manufactured, with companies like Minnesota Mining and Manufacturing (3M) having developed over 50,000 chemical products since being founded in 1904. The scope of chemical products is extremely broad. They can be roughly classified as: (1) basic chemical products, (2) industrial products, and (3) consumer products. As shown in Figure 1a, basic chemical products are manufactured from natural resources. They include commodity and specialty chemicals (e.g., commodity chemicals - ethylene, acetone, vinyl chloride; specialty chemicals - difluoroethylene, ethylene-glycol mono-methyl ether, diethyl ketone), bio-materials (e.g., pharmaceuticals, tissue implants), and polymeric materials (e.g., ethylene copolymers, polyvinylchloride, polystyrene). The manufacture of industrial products begins with the basic chemical products, as shown in Figure 1b. Industrial products include films, fibers (woven and non-woven), and paper. Finally, as shown in Figure 1c, consumer products are manufactured from basic chemical and industrial products. These include dialysis devices, hand-warmers, Post-it notes, ink-jet cartridges, detachable wall hangers, solar desalination devices, transparencies for overhead projectors, drug delivery patches, fuel cells, cosmetics, detergents, pharmaceuticals, and many others. Many chemical products are manufactured in small quantities and the design of a product focuses on identifying the chemicals or mixture of chemicals that have the desired properties, such as stickiness, porosity, and permeability, to satisfy specific consumer needs. For these, the challenge is to create a product that has sufficiently high market demand to command an attractive selling price. After the chemical mixture is identified, it is often necessary to design a manufacturing process. Other chemical products, often referred to as commodity chemicals, are required in large quantities. These are often intermediates in the manufacture of specialty chemicals and industrial and consumer products. These include ethylene, propylene, butadiene, methanol, ethanol, ethylene oxide, ethylene glycol, ammonia, nylon, and caprolactam (for carpets), together with solvents like benzene, toluene, phenol, methyl chloride, and tetrahydrofuran, and fuels like gasoline, kerosene, and diesel fuel. These are manufactured in large-scale processes that produce billions of pounds annually in continuous operation. Since they usually involve small well-defined molecules, the focus of the design is on the process to produce these chemicals from various raw materials.
[Fig. 1 block diagrams not reproduced: (a) natural resources -> manufacturing process -> basic chemical products (commodity and specialty chemicals, bio-materials, polymeric materials); (b) basic chemical products -> manufacturing process -> industrial products (films, fibers, paper, ...); (c) basic chemical and industrial products -> manufacturing process -> consumer products (dialysis devices, Post-it notes, transparencies, drug delivery patches, cosmetics, ...).]
Fig. 1 Manufacture of chemical products
Design projects have many points of origin. Often they originate in the research labs of chemists, biochemists, and engineers who seek to satisfy the desires of customers for chemicals with improved properties for many applications (e.g., textiles, carpets, plastic tubing). In this respect, several well-known products, such as Teflon (polytetrafluoroethylene), were discovered by accident. (At DuPont, a polymer residue that had accumulated in a lab cylinder of tetrafluoroethylene was found to provide a slippery surface for cookware and was capable of withstanding elevated temperatures, among many similar applications.) In other cases, an inexpensive source of a raw material(s) becomes available and process engineers are called on to design processes that use this chemical, often with new reaction paths and methods of separation. Other design problems originate when new markets are discovered, especially in the developing countries of Southeast Asia and Africa. Yet another source of design projects are engineers themselves, who often have a strong feeling that a new chemical or route to produce an existing chemical may be very profitable, or that a market exists for a new chemical product. When a new chemical product is envisioned, the design project can be especially challenging, as much uncertainty often exists in the chemical(s) or mixture of chemicals best suited for the product, as well as the product configuration, the modes of manufacture, and the market demand. For the manufacture of a commodity chemical, the design project is usually less comprehensive, as the focus is usually on the design of the manufacturing facility or chemical process.
2 STEPS IN PRODUCT AND PROCESS DESIGN
In this section, the objective is to introduce a template that presents the steps in product and process design. These involve a blend of heuristic and algorithmic methods, with emphasis on modern computer tools, especially the process simulators. Note that, because the inputs to and responses from business decision makers are not always clearly positioned, the focus of Section 3 is on those steps and computer tools that support high-level decision-making in the product and process design courses. Figure 2 shows many of the steps in designing chemical products and processes. Beginning with a potential opportunity, a design team creates and assesses a so-called primitive problem. When necessary, the team seeks to find chemicals or chemical mixtures that have desired properties and performance. Then, when a process is required to produce the chemicals, process creation (or invention) is undertaken. When the gross profit is favorable, a base-case design is developed. In parallel, algorithmic methods are employed to find better process flowsheets. Also, in parallel, plantwide controllability assessment is undertaken to eliminate processes that are difficult to control. When the process looks promising and/or when a configured industrial or consumer product is to be designed, the design team carries out detailed design, equipment sizing, and optimization. These steps are expanded upon in the subsections that follow.
2.1 Create and Assess Primitive Problem
Product and process designs begin with a potential opportunity, often a gleam in the eye of an engineer. Usually, the opportunity arises from a customer need, often identified by interviewing customers. Given an array of needs, an effort is made to arrive at specifications
[Fig. 2 flowchart not reproduced. Recoverable steps and decision points include: create and assess primitive problem (needs, ideas, market and business studies); is the chemical structure known?; find chemicals or chemical mixtures that have the desired properties; is a process required to produce the chemicals?; process creation; development of base case; detailed process synthesis (algorithmic methods); plantwide controllability assessment; is the process still promising?; detailed design, equipment sizing, and optimization (including safety analysis); is the process and/or product feasible?; written design report and oral presentation.]
Fig. 2 Steps in product and process design
for the product; for example, desired density, viscosity, and latent heat of crystallization for a solution to be used in a hand-warmer. In many cases, a design team engages in a formal session to identify needs and generate ideas for the product. This involves brainstorming to arrive at concepts that are potentially promising in satisfying the needs. Given an array of ideas, an attempt is made to select the most promising from among them using the principles of thermodynamics, chemical kinetics, heat and mass transfer, etc. In so doing, designers and design teams create and assess primitive problems that are most worthy of further research and development. These steps are discussed thoroughly in Chemical Product Design by Cussler and Moggridge [1].
2.1.1 Typical Opportunities and Primitive Problems As examples, consider the following three potential opportunities and the kinds of primitive design problems generated:
Example 1 Dialysis Device Consider the possibility of designing an inexpensive (say, less than $10) throw-away product for patients with temporary or permanent kidney failure; a device that provides the only treatment for patients with end-stage renal disease (ESRD), whose kidneys are no longer capable of their function. This treatment, which is required three times per week, for an average of 3-4 hours per dialysis, was performed on more than 200,000 patients in the United States in 1996.
Example 2 Drug to Counter Degradation of Blood In the manufacture of pharmaceuticals, consider the possible production of plasminogen activators, which are powerful enzymes that trigger the proteolytic (breaking down of proteins to form simpler substances) degradation of blood clots that cause strokes and heart attacks. Since the mid-1980s, Genentech, a U. S. company, has manufactured tissue plasminogen activator (tPA), which they currently sell for $2,000 per 100 mg dose, with annual sales of $300 million. Given that their patent will expire soon, Genentech has developed a next generation, FDA-approved, plasminogen activator called TNK-tPA, which is easier and safer for clinicians to use. With a rapidly growing market, the question arises as to whether an opportunity exists for another company to manufacture a generic (i.e., without a brand name) form of tPA that can compete favorably with TNK-tPA.
Example 3
Vinyl Chloride Manufacture
Consider the need to manufacture vinyl chloride, CH2=CHCl, a monomer intermediate for the production of polyvinyl chloride, (-CH2-CHCl-)n,
an important polymer (usually referred to as just vinyl) that is widely used for rigid plastic piping, fittings, and similar products. An opportunity has arisen to satisfy a new demand for vinyl chloride monomer, on the order of 800 million pounds per year, in a petrochemical complex on the Gulf Coast, given that an existing plant owned by the company produces 1 billion pounds per year of this commodity chemical. Because vinyl-chloride monomer is an extremely toxic substance, it is required that all new facilities be designed carefully to satisfy governmental health and safety regulations. Clearly, potential opportunities and primitive problems are generated regularly in the fast-paced corporate, government, and university research environments. In product design, especially, it is important to create an environment that encourages the identification of consumer needs, the generation of product ideas, and the selection from among these ideas of the most promising alternatives; see Sections 3.2 and 3.3.
2.1.2 Selecting Alternatives - Assessing the Primitive Problem
Normally, the designer or a small design team generates many potential ideas for products and processes as potential solutions for the primitive problem, particularly if these individuals are well familiar with the existing products or situation. Ideas may also come from potential customers, who may be frustrated with the existing products or situation. The ideas are best generated in a non-critical environment. Often, the best ideas may initially be those that might otherwise receive the most criticism. All of the ideas are collected, organized, discussed, and carefully assessed. Cussler and Moggridge [1] present extensive lists of ideas for several products. From the list of ideas, a selection of the most promising alternatives is made, based upon technical or marketing considerations; for example, thermodynamics or advertising potential. For the three primitive problems above, Seider et al. [2] present design alternatives that are typical of those selected from lists of ideas that serve as a base on which to begin the engineering of a product or a process. At this stage, the alternatives require further, more detailed study, and hence it is important to recognize that, as the engineering work proceeds, some alternatives are rejected and new alternatives are generated. This is a crucial aspect of design engineering. On the one hand, it is important to generate large numbers of ideas leading to a few promising alternatives. On the other hand, to meet the competition with a product designed and manufactured in a timely fashion, it is important to winnow those alternatives that might require too extensive an engineering effort to be evaluated; e.g., a process that requires exotic materials of construction or extreme conditions of temperature and/or pressure.
2.1.3 Literature Survey
When generating alternative specific problems, design teams in industry have access to company employees, company files, and the open literature. These resources include the SRI Design Reports, encyclopedias, handbooks, indices, and patents, many of which are available electronically, with an increasing number available on the Internet. For product design, especially, patents are important sources of which the design team must be aware to avoid the duplication of designs protected by patents. After the 17 years that protect patented products and processes in the United States are over, patents are often helpful in the design of next-generation processes to produce the principal chemicals, or chemicals that have similar properties, chemical reactions, and so on. Often patents are licensed for fees on the order of 3-6 percent of gross sales.
2.1.4 Auxiliary Studies While creating and assessing a primitive design problem, design teams often initiate studies of (1) technical feasibility, (2) marketing, and (3) business considerations. For a promising product, the technical feasibility study identifies existing and potentially new manufacturing methods, the advantages and disadvantages of each method, and eventually (as design proceeds), the reasons for selecting a specific method. In the marketing analysis, other manufacturers are identified, as well as plant capacities, price histories of the raw materials and products, regulatory restrictions, and principal uses of the product. Marketing considerations often far outweigh technical considerations. Many products or process designs are rejected by management for marketing reasons. For each promising design, business objectives and constraints are normally considered, usually in a business study. These include plant capacity, product quality, likely size of first plant expansion, mechanical completion and startup dates, maximum capacity available, maximum operating costs as a function of capacity, seasonal demand changes, inventory requirements, and minimum acceptable return on investment. As mentioned in Section 1.1, these studies often justify requests to business centers for a budget to carry out the engineering work, including product and process synthesis, preliminary design studies, cost estimates, and profitability analyses.
2.1.5
Stimulating Innovation in Product Design
The invention and commercialization of new products, and chemical products in particular, benefits from a corporate organization that encourages interactions between researchers, marketers, sales-people, and others. In this regard, companies like 3M and General Electric (G.E.) are noted for their corporate policies that seek to maintain a climate in which innovation flourishes. These include the fifteen percent rule, in which managers are expected to allow employees 15% of their time to work on projects of their own choosing, tech forums designed to encourage technical exchange and a cross-fertilization of ideas between persons working in many corporate divisions at widely disparate locations, stretch goals intended to stretch the pace of innovation (for example, at least 30 percent of annual sales should come from products introduced in the last four years), process innovation technology centers staffed with chemical engineers and material scientists to help researchers scale-up a new idea from the bench to production, and six sigma strategies for quality control in manufacturing. For discussions of these approaches, with examples, see the books by Gundling [3], Coe [4], and Seider et al. [2].
2.1.6
Pharmaceutical Products
Special considerations are needed for the design of pharmaceutical products. As the design team creates and assesses the primitive problem, the typical development cycle or time line for the discovery and development of a new chemical entity plays a key role, as discussed by Pisano [5]. The key steps begin with discovery, in which exploratory research identifies molecules that prove safe and effective in the treatment of disease, usually involving the exploration of thousands of compounds to locate a handful that are sufficiently promising for further development, and the application of data mining techniques to locate the most promising proteins, and the cells within which they can be grown, from numerous databases of laboratory data. Next, in preclinical development, a company seeks to obtain sufficient data on a drug to justify the more expensive and risky step of testing in humans. Then, in clinical trials, which are administered over three phases (each of duration 1-2 years), the drug is tested on human volunteers. During the latter phase, the drug is administered to thousands of patients at many locations over several years. Finally, to gain approval, an application is prepared for the FDA requesting permission to sell the drug. Note that as Phases 1 and 2 of the clinical trials proceed, process design is undertaken to produce large quantities of the drug, first for Phase 3 testing and then for commercial operation. These steps are considered in detail for a plant to produce tissue plasminogen activator (tPA) in Seider et al. [2]. Note also that the profit margins are sufficiently high to accelerate the process design of the facility for Phase 3 testing, with little if any process optimization. Subsequently, when FDA approval is obtained, the Phase 3 process is used for commercial operation.
2.2 Find Chemicals or Chemical Mixtures Having Desired Properties and Performance
Having created and assessed the primitive problem, the design team often undertakes molecular structure design. For those primitive problems in which desired properties and performance have been specified, it is often necessary to identify chemicals or chemical mixtures that meet these specifications. Examples include: (1) thin polymer films to protect electronic devices, having a high glass-transition temperature and low water solubility, (2) refrigerants that boil and condense at desired temperatures and low pressures, while not reacting with ozone in the Earth's stratosphere, (3) environmentally friendly solvents for cleaning, for example to remove ink pigments, and for separations, as in liquid-liquid extraction, (4) low-viscosity lubricants, (5) proteins for pharmaceuticals that have the desired therapeutic effects, (6) solutes for hand warmers that remain supersaturated at normal temperatures, solidifying at low temperatures when activated, and (7) ceramics having high tensile strength and low viscosity for processing. Often design problems are formulated in which the molecular structure is manipulated, using optimization methods, to achieve the desired properties. For this purpose, methods of property estimation are needed, which often include group contribution methods, and increasingly molecular simulations (using molecular dynamics and Monte-Carlo methods). The search for molecular structure is often iterative, involving heuristics, experimentation, and the need to evaluate many alternatives in parallel, especially in the discovery of pharmaceutical proteins, as discussed by Seider et al. [2].
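As one small illustration of the property-estimation step mentioned above, the sketch below applies a Joback-type group-contribution correlation to estimate a normal boiling point from molecular groups; the group values shown are rounded, illustrative numbers, and real design work would use a vetted property package or database.

```python
# Minimal group-contribution sketch (Joback-type correlation for the normal boiling point).
# The contributions below are approximate, illustrative values.
TB_GROUP_CONTRIBUTIONS = {
    "-CH3": 23.58,
    "-CH2-": 22.88,
    "-OH": 92.88,
}

def estimate_boiling_point_K(groups):
    """Tb (K) = 198.2 + sum over groups of (count * contribution)."""
    return 198.2 + sum(TB_GROUP_CONTRIBUTIONS[g] * n for g, n in groups.items())

# Ethanol, CH3-CH2-OH: one of each group
tb = estimate_boiling_point_K({"-CH3": 1, "-CH2-": 1, "-OH": 1})
print(f"Estimated normal boiling point of ethanol: {tb:.0f} K")
# Roughly 337 K versus a measured value near 351 K: group methods screen candidate
# molecules quickly, but they do not replace experimental data.
```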
For some chemical products, like creams and pastes, the specification of desired properties is a key to successful product design. Creams and pastes are colloidal systems that contain immiscible liquid phases, as well as solid particles. As described by Wibowo and Ng [6], the first step in their design involves the identification of product quality factors, including functional quality factors (e.g., protects, cleans, and decorates the body, delivers an active pharmaceutical ingredient, ...), rheological quality factors (e.g., pours easily, spreads easily on the skin, does not flow under gravity, stirs easily, coats uniformly, ...), physical quality factors (e.g., remains stable for an extended period, melts at a specified temperature, releases an ingredient at a controlled rate, ...), and sensorial quality factors (e.g., feels smooth, does not feel oily, appears transparent, opaque, or pearlescent, does not cause irritation, ...). Given these specifications, the second step involves product formulation,
which involves selection of the ingredients, the emulsion type (if applicable), the emulsifier (if necessary), and determination of the product microstructure. Then, the process creation step, as discussed below, and the product evaluation step follow. In a second paper, Wibowo and Ng [7] expand upon these steps, and apply them for several products, including dry toner, laundry detergent, shampoo, and cosmetic lotion.
2.3 Process Creation
For those primitive problems for which a process must be designed to produce the chemical products, when funded by a business center, the design team carries out the process creation step. Since strategies for process flowsheet synthesis are well documented [8, 9, 2], especially as they apply to the manufacture of commodity chemicals, these steps are not reviewed herein. Note, however, that when process equipment is selected in the so-called task-integration step, the operating mode (that is, continuous, batch, or semi-continuous) is selected. For processes to manufacture specialty chemicals and pharmaceuticals, especially, the design and scheduling of batch processes gains importance.
2.3.1 Product Type Returning to Figure 1, the manufacturing processes shown differ significantly depending on the chemical product type. For the manufacture of basic chemical products, the process flowsheet involves chemical reaction, separation, pumping, compression, and similar operations. On the other hand, for the manufacture of industrial products, extrusion, blending, compounding, and stamping are typical operations. Note also that the focus on quality control of the product shifts from the control of the physical, thermal, chemical, and rheological properties to the control of optical properties, weatherability, mechanical strength, printability, and similar properties. Most strategies for process synthesis focus on the manufacture of basic chemical products, with much work remaining to elucidate strategies for the manufacture of industrial products. For the design of configured industrial and consumer products, where emphasis is placed on the design of three-dimensional products, chemical engineers are normally not involved in the design of the manufacturing process, which includes parts making, parts assembly and integration, and finishing.
2.4 Development of Base Case Process
To address the most promising flowsheet alternatives for the manufacture of basic chemicals, the design team is usually expanded or assisted by specialized engineers, to develop base-case designs. Again, since these strategies are well documented, they are not reviewed herein. Regarding computer-aided process simulation, it is noteworthy that batch process simulators (for example, BATCH PLUS and SUPERPRO DESIGNER) are gaining favor for the design of processes to manufacture specialty chemicals, especially pharmaceuticals.
2.5 Detailed Process Synthesis using Algorithmic Methods
While the design team develops one or more base-case designs, detailed process synthesis is often undertaken using algorithmic methods. For continuous processes, these methods include: (1) create and evaluate chemical reactor networks for conversion of feed to
product chemicals, separation trains for recovering species in multicomponent mixtures, and reactor-separator-recycle networks, and (2) locate and reduce energy usage, create and evaluate efficient networks of heat exchangers with turbines for power recovery, and networks of mass exchangers. For batch processes, these methods create and evaluate optimal sequences and schedules for batch operation. In the manufacture of industrial chemicals, such as films, fibers, and paper, processes are synthesized involving extrusion, blending, compounding, sealing, stamping, and related operations. These processes normally involve large throughputs, and consequently, are usually continuous, with some discrete operations that occur at high frequency, such as sealing and stamping. Methods of process synthesis rely heavily on heuristics and are not as well developed as for the manufacture of basic chemical products. For these processes, the emphasis is on the unit operations that include single- and twin-screw extrusion, coating, and fiber spinning. See, for example, the Principles of Polymer Processing by Tadmor and Gogos [10] and Process Modeling by Denn [11].
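One standard way of setting the energy-use target in item (2) for continuous processes is the problem-table (heat cascade) calculation of pinch analysis; the sketch below is a minimal version, and the four-stream data set at the bottom is hypothetical rather than taken from this paper.

```python
# Problem-table (heat cascade) sketch for minimum utility targets; stream data are hypothetical.
def utility_targets(hot_streams, cold_streams, dt_min):
    """Each stream is (T_supply, T_target, FCp). Returns (Q_hot_min, Q_cold_min)."""
    half = dt_min / 2.0
    shifted = [(ts - half, tt - half, fcp, +1) for ts, tt, fcp in hot_streams] + \
              [(ts + half, tt + half, fcp, -1) for ts, tt, fcp in cold_streams]
    bounds = sorted({t for ts, tt, _, _ in shifted for t in (ts, tt)}, reverse=True)

    interval_heat = []
    for hi, lo in zip(bounds, bounds[1:]):
        net = 0.0
        for ts, tt, fcp, sign in shifted:
            if max(ts, tt) >= hi and min(ts, tt) <= lo:   # stream spans the whole interval
                net += sign * fcp * (hi - lo)
        interval_heat.append(net)

    running, largest_deficit = 0.0, 0.0
    for q in interval_heat:
        running += q
        largest_deficit = min(largest_deficit, running)
    q_hot = -largest_deficit                 # hot utility added at the top of the cascade
    q_cold = q_hot + sum(interval_heat)      # heat rejected to cold utility at the bottom
    return q_hot, q_cold

hot = [(180.0, 60.0, 0.20), (150.0, 30.0, 0.45)]     # (deg C, deg C, MW per deg C)
cold = [(20.0, 135.0, 0.30), (80.0, 140.0, 0.50)]
print(utility_targets(hot, cold, dt_min=10.0))       # (minimum hot, minimum cold) utility in MW
```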
2.6 Plant-wide Controllability Assessment
An assessment of the controllability of the process is initiated after the detailed process flowsheet has been completed, beginning with the qualitative synthesis of control structures for the entire flowsheet. Then measures are utilized that can be applied before the equipment is sized in the detailed design stage, to assess the ease of controlling the process and the degree to which it is inherently resilient to disturbances. These measures permit alternative processes to be screened for controllability and resiliency with little effort and, for the most promising processes, they identify promising control structures. Subsequently, control systems are added and rigorous dynamic simulations are carried out to confirm the projections using the approximate measures discussed previously. See, for example, the books by Luyben et al. [12], Luyben [13], and Seider et al. [2].
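One widely used screening measure of this kind, computable from steady-state gains alone before any equipment is sized, is the relative gain array; the paper does not prescribe this particular measure, and the 2x2 gain matrix below is hypothetical.

```python
# Relative gain array (RGA) sketch for screening control-structure pairings.
import numpy as np

def rga(gain_matrix):
    """RGA = G (elementwise product) transpose of inverse(G)."""
    G = np.asarray(gain_matrix, dtype=float)
    return G * np.linalg.inv(G).T

# Hypothetical steady-state gain matrix: two controlled outputs versus two manipulated inputs
G = [[0.8, -0.4],
     [0.3,  0.9]]
print(np.round(rga(G), 2))
# Diagonal elements near 1 favor pairing output i with input i; elements that are large
# or negative warn of strong interaction and likely control difficulty.
```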
2.7 Detailed Design, Equipment Sizing, and Optimization - Configured Product Design
Depending on the primitive design problem, the detailed design involves equipment sizing of units in a new process for commodity and specialty chemicals, that is, detailed process design, and/or the determination of the product configuration for a configured industrial or consumer product (that uses the chemicals or chemical mixtures produced). Here, also, the steps in equipment sizing and cost estimation for processes involving commodity and specialty chemicals are well documented [14, 2]. When the primitive design problem leads to a configured industrial or consumer product, much of the design activity is centered on the three-dimensional structure of the product. Typical chemically-related industrial and consumer products include, for example, hemodialysis devices, solar desalination units, automotive fuel cells, hand-warmers, multi-layer polymer mirrors, integrated circuits, germ-killing surfaces, insect repelling wristbands, disposable diapers, inkjet cartridges, transparencies for overhead projectors, sticky wall hangers, and many others. In many cases, the product must be configured for ease of use and to meet existing standards, as well as to be manufactured easily. Increasingly, when determining the product configuration, distributed-parameter models, involving ordinary and partial differential equations, are being created. Simple discretization algorithms are often used to obtain solutions, as well as the Finite-Element Toolbox in MATLAB and the FEMLAB package.
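As a small illustration of the distributed-parameter models and simple discretization algorithms mentioned above, the sketch below integrates one-dimensional transient diffusion by explicit finite differences, loosely in the spirit of ingredient release from a thin film; the geometry, diffusivity, and boundary conditions are invented for illustration.

```python
# Explicit finite-difference sketch for 1-D transient diffusion, dc/dt = D * d2c/dx2.
# All parameters are illustrative; a real configured-product model would be far more detailed.
D = 1.0e-10           # diffusivity, m^2/s
L = 1.0e-4            # film thickness, m
N = 50                # number of interior grid points
dx = L / (N + 1)
dt = 0.4 * dx**2 / D  # satisfies the explicit stability limit dt <= dx^2 / (2 D)

c = [1.0] * (N + 2)   # dimensionless concentration, initially uniform in the film
c[0] = c[-1] = 0.0    # perfect-sink boundary conditions at both faces

t = 0.0
while t < 10.0:       # march ten seconds of release
    new = c[:]
    for i in range(1, N + 1):
        new[i] = c[i] + D * dt / dx**2 * (c[i + 1] - 2.0 * c[i] + c[i - 1])
    c, t = new, t + dt

released = 1.0 - sum(c[1 : N + 1]) / N   # fraction of the initial load released
print(f"Fraction released after 10 s: {released:.2f}")
```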
The product invention phase begins in which ideas are generated and screened, with detailed three-dimensional designs developed. For the most promising inventions, prototypes are built and tested on a small scale. Only after the product passes these initial tests do the design engineers focus on the process to manufacture the product in commercial quantities. This scale-up often involves pilot-plant testing, as well as consumer testing, before a decision is made to enter commercial production. Clearly, the methods of capital cost estimation, profitability analysis, and optimization are applied before and during the design of the manufacturing facility, although when the product can be sold at a high price, due to large demand and limited production, detailed profitability analysis and optimization are less important. For these products, it is most important to reduce the design and manufacturing time to capture the market before competitive products are developed.
3 FOCI ON STEPS AND COMPUTER TOOLS IN SUPPORTING HIGH-LEVEL BUSINESS DECISIONS
Given the need to expose chemical engineering students to the principal steps and computer tools in designing chemical products and processes, several vantage points can be taken. In the previous section, these steps and computer tools were elucidated, but the inputs to and responses from business decision makers were difficult to define explicitly, as they occur at various stages, depending on the product and/or process and company situation or policy. In this section, the focus is on imparting to students an appreciation of the steps and computer tools that support the high-level business decisions required in the process industries.
3.1
Case Studies and Design Projects
Virtually all students are engaged in open-ended problem solving in the core courses of chemical engineering. However, most do not solve a comprehensive design project until their senior year. When framed well, students are presented with a letter from management stating a societal need(s) and suggesting a primitive problem(s) to be solved in trying to meet this need(s). See, for example, the three examples in Section 2.1.1. Then, as discussed in Section 2.1.3, student design teams search the literature for competitive products, especially the patent literature for existing businesses and successful products. Also, as discussed in Section 2.1.4, students are expected to initiate studies of marketing and business considerations. These involve assessing consumer needs and the size of the market, as well as determining the sources of competitive product(s), the level of production, the selling prices, potential gross profits, and related issues of importance to business decision makers.
3.2
Management Stimuli
The actions by business decision makers, discussed in Section 2.1.5, contribute toward creating an environment that stimulates technical people to innovate and invent new products. These have been key elements of 3M's business strategy over the past century and have led to their success in introducing numerous successful chemical products [3]. Rather than taking a passive role, management actively seeks to stimulate innovation.
3.3
Idea Generation in Response to Business Trends and Customer Needs
The idea generation phase, in which a design team brainstorms to arrive at long lists of potential ideas worthy of consideration in developing a viable product(s), is at the heart of product design [1]. In addition, together with marketing people, the design team interviews potential customers to obtain their assessment of their most important needs. While difficult to carry out quantitatively in a university setting, it is important that students appreciate these key steps in responding to business trends and customer needs in the commercial world.
3.4
Detailed Process Simulation, Equipment Design, and Cost Estimation
In solving their design projects, students are taught to formulate decision trees and to use screening measures to eliminate the least promising alternatives, initially using approximate measures such as gross profits, and gradually refining these measures as the technical analysis becomes more quantitative. In carrying out this analysis, students often use process simulators and rigorous methods to size equipment and estimate costs. These usually involve the use of extensive data banks of thermophysical property data, equipment-sizing data, and purchase cost data. While business managers do not examine the details, the use of these packages provides business decision makers with a common basis for comparing the bottom lines of competitive designs; for example, involving the investor's rate of return.
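The screening calculations mentioned above can be as simple as a gross-profit estimate per unit of product followed by an order-of-magnitude capital cost; the sketch below shows both, using invented prices, an invented raw-material requirement, and a six-tenths-rule scale-up of an assumed base equipment cost.

```python
# Gross-profit screening and order-of-magnitude capital costing; all numbers are illustrative.

def gross_profit_per_kg(product, consumption, prices):
    """consumption: kg of each raw material per kg of product (byproduct credits as negatives)."""
    raw_material_cost = sum(kg * prices[species] for species, kg in consumption.items())
    return prices[product] - raw_material_cost

def scaled_capital_cost(base_cost, base_capacity, capacity, exponent=0.6):
    """Six-tenths-rule scaling: C = C_base * (S / S_base) ** n."""
    return base_cost * (capacity / base_capacity) ** exponent

# Hypothetical process: 1.1 kg of feed A consumed per kg of product P
prices = {"A": 0.50, "P": 0.95}                          # $/kg, invented
gp = gross_profit_per_kg("P", {"A": 1.1}, prices)
print(f"Gross profit: {gp:.2f} $/kg of product")         # 0.40 $/kg

# Scale an assumed 10 kt/yr base plant costing 20 MM$ up to 50 kt/yr
print(f"Scaled capital: {scaled_capital_cost(20.0, 10.0, 50.0):.0f} MM$")   # about 53 MM$
```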
3.5 Sensitivity and Uncertainty Analysis
Furthermore, through the use of computer tools to compare competitive designs, students learn the ease and speed with which they can prepare detailed design studies. They learn to vary parameters in sensitivity analyses, as well as in optimization studies. By the same token, students also learn the ease with which these systems can be misused, possibly leading a design team to present management with a misguided recommendation for a business venture. The need to check calculations and fully document the underlying assumptions and approximations, with estimates of uncertainties, is another important lesson students must learn.
3.6 Design Reports
Training students to transmit their ideas, both orally and in writing, is a key aspect of their education. The experience of writing an extensive design report, with its letter of transmittal, is an important vehicle for teaching students to communicate with persons not involved in the technical details of their project. In addition, the preparation, delivery, and evaluation of an oral presentation to a critical audience, usually populated by faculty and industrial persons, is another key aspect of learning to influence business decision makers.
4 YOUNG FACULTY PERSPECTIVES
Recent changes in young faculty perspectives suggest future impacts of PSE on business decision-making. With recent advances in experimental synthesis methods at the nano- or meso-scale, a large fraction of the young faculty have been completing doctoral research on structured materials, protein structures and interactions, and similar applied
chemistry, physics, and biology projects. These persons are likely to be more stimulated by the design of configured chemical products, such as biochemical sensors, integrated circuits, and multi-layer polymer mirrors, than by the design of commodity chemical processes. The development of these products, which often command high selling prices (for example, specialized pharmaceuticals), can have a major impact on business decision-making. It seems clear that chemical engineers will focus increasingly on the design of these kinds of products, with significant business and legal considerations.
5 CONCLUSIONS
As the steps in product and process design are being elucidated and computer tools are gaining sophistication, the ties between PSE and business decision-making are being conveyed to undergraduate students more effectively. In addition to clarifying the steps and computer tools, this manuscript has focused on many of these ties.
REFERENCES
[1] E. L. Cussler and G. D. Moggridge, Chemical Product Design, Cambridge University Press, 2001.
[2] W. D. Seider, J. D. Seader and D. R. Lewin, Product and Process Design Principles: Synthesis, Analysis, and Evaluation, Second Edition, Wiley, New York, 2003.
[3] E. Gundling, The 3M Way to Innovation: Balancing People and Profit, Kodansha International, Tokyo, 2000.
[4] J. T. Coe, Unlikely Victory: How General Electric Succeeded in the Chemical Industry, AIChE, New York, 2000.
[5] G. P. Pisano, The Development Factory: Unlocking the Potential of Process Innovation, Harvard Business School Press, Cambridge, 1997.
[6] C. Wibowo and K. M. Ng, Product-Oriented Process Synthesis and Development: Creams and Pastes, AIChE Journal, 47, 12 (2001) 2746-2767.
[7] C. Wibowo and K. M. Ng, Product Centered Processing: Manufacture of Chemical-Based Consumer Products, AIChE Journal, 48, 6 (2002) 1212-1230.
[8] J. M. Douglas, Conceptual Design of Chemical Processes, McGraw-Hill, New York, 1988.
[9] L. T. Biegler, I. E. Grossmann, and A. W. Westerberg, Systematic Methods of Chemical Process Design, Prentice-Hall, New Jersey, 1997.
[10] Z. Tadmor and C. G. Gogos, Principles of Polymer Processing, Wiley, New York, 1979.
[11] M. M. Denn, Process Modeling, Longman, New York, 1986.
[12] W. L. Luyben, B. D. Tyreus and M. L. Luyben, Plantwide Process Control, McGraw-Hill, New York, 1999.
[13] W. L. Luyben, Plantwide Dynamic Simulators in Chemical Processing and Control, Marcel Dekker, New York, 2002.
[14] M. S. Peters, K. D. Timmerhaus, and R. West, Plant Design and Economics for Chemical Engineers, Fifth Ed., McGraw-Hill, New York, 2003.
Informatics in Pharmaceutical Research
S. Kim
Vice President/Information Officer, Lilly Research Laboratories, Lilly Corporate Center, Indianapolis, Indiana 46285, United States of America.
Abstract
The traditional role of information technology (IT) in pharmaceutical R&D has been facilitating cycle compression, i.e., speeding up the time-to-market and increasing the return on R&D investments. In the post-genomic era, the great surge in data creates additional challenges and roles for IT. A new profile of the R&D pipeline is taking shape, based on the concept of massively parallel operations in Discovery research (target discovery and development, high throughput screening and lead optimization). This increase in opportunities, however, must be tempered with the realization that the later stages of R&D are the most costly as viewed from the tracking of a particular molecular entity through the pipeline. Evolving in silico technologies that selectively encourage early attrition in the pipeline, i.e., by exploiting advances at the interface of chemistry, biology, and systems engineering (systems biology), are important parts of the modern landscape for pharmaceutical informatics. The presentation will highlight the observation that concepts from process systems engineering are especially relevant for organizing both scientific and management principles.
Design and operation of micro-chemical plants - Bridging the gap between nano, micro and macro technologies
Shinji Hasebe
Department of Chemical Engineering, Kyoto University, Kyoto 606-8501, Japan
E-mail: [email protected]
Abstract: The design and operation problems of micro chemical plants which are used for production are treated in this research. First, the design problems of micro chemical plants are classified into two sub-problems: the design of the micro unit operations and the design of the entire micro plant. Then, for each of the sub-problems, the features of the micro systems are explained and the dominant problems which must be solved by process systems engineers are pointed out. Finally, the characteristics of the instrumentation and control problems of micro chemical plants are summarized.
Keywords: micro chemical plant, CFD simulation, numbering up, optimal design
1. INTRODUCTION
Few Micro-Chemical Plants (MCPs) have been used for production; therefore, engineers do not have experience in designing MCPs, and also do not have any systematic tools for their design and control. One of the dominant characteristics of MCPs is that research results can be transferred into production much faster. Thus, it is very important to elucidate the problems which occur in the design and control of MCPs and propose solutions for those problems. From this viewpoint, the emphasis is placed on explaining the characteristics of MCPs and pointing out the future research subjects to be solved. The concrete techniques to solve the problems are not explained.
2. POSSIBILITY OF MCPs
When an MCP is developed for real production, the reason for using micro devices must be clear. It is meaningless to say that a product which can be produced in a conventional plant can also be produced in an MCP. We must aim at the production of materials which cannot be produced in conventional chemical plants, or the production efficiency of which is drastically
improved by using an MCP. At present, many kinds of materials are produced by using micro devices [1]-[4]. However, the number of products which satisfy the above conditions is not clear. Scientists and engineers engaged in the research of MCPs must always evaluate their results from the viewpoint of real production. The production rate is one of the dominant problems of an MCP when it is applied to the production process. It is meaningful to discuss the amount of production of MCPs by using an example. Figure 1 shows a conventional batch plant. It is assumed that the batch size is 1.0 m³ of product, and that ten kinds of different products are produced by changing the raw material and/or production conditions. It is also assumed that two days are required for one batch of production, and the process is operated 320 days in a year. In this case, 16 m³ of each product is produced in a year when every product is produced equally. Suppose the case of producing the same amount of product in an MCP. If one train of the MCP is assigned to one product, the flow rate of each production train which is sufficient to match the output of the batch plant is around 2000 cm³/h. This amount of production can be attained by 16 square channels, each of which has a cross-sectional area of 600 μm × 600 μm, if the average flow speed is 0.1 m/s (see Fig. 2). However, the residence time is one second if the length of the device is 10 cm. From this example, it becomes clear that the problem is not the size of the device but the residence time. In this case, much effort should be devoted to increasing the reaction rate. One of the features of MCPs is that precise temperature control is easy. If a device can be operated at a higher temperature, an increase of the reaction rate may be possible. This example shows that if a sufficient reaction rate can be achieved in micro-devices, MCPs can be used not only for the production of small volume specialty chemicals but also for the production of commodity chemicals of medium volume.
Fig.1 Batch plant
Fig. 2 Micro chemical plant
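The arithmetic in this example can be checked with a short calculation. The sketch below reproduces the numbers quoted in the text (annual volume per product, required flow rate per train, and channel residence time); the only added assumption is that the micro plant operates continuously for the same 320 days per year.

```python
# Check of the production-rate example: batch plant vs. micro chemical plant.

batch_size_m3  = 1.0        # product per batch
days_per_batch = 2
operating_days = 320
n_products     = 10

batches_per_year   = operating_days / days_per_batch                  # 160 batches in total
volume_per_product = batches_per_year / n_products * batch_size_m3    # 16 m3/year per product

hours_per_year      = operating_days * 24                             # continuous MCP operation assumed
flow_rate_cm3_per_h = volume_per_product * 1e6 / hours_per_year       # ~2080 cm3/h per train

# 16 square channels, 600 um x 600 um cross-section, 0.1 m/s average velocity
channel_area_m2 = 600e-6 * 600e-6
velocity_m_s    = 0.1
n_channels      = 16
capacity_cm3_per_h = n_channels * channel_area_m2 * velocity_m_s * 1e6 * 3600   # ~2070 cm3/h

residence_time_s = 0.10 / velocity_m_s    # 10 cm long device -> 1 s

print(f"required flow rate  : {flow_rate_cm3_per_h:.0f} cm3/h per train")
print(f"16-channel capacity : {capacity_cm3_per_h:.0f} cm3/h")
print(f"residence time      : {residence_time_s:.1f} s")
```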
3. DESIGN PROBLEM OF MCPs
3.1 Design of micro unit operations
As the size of the device is drastically smaller than that of a conventional device, a new design method is required that is different from that of the conventional unit operations such as heat exchange, mixing, absorption, and adsorption. Therefore, to distinguish them from the usual unit operations, they are hereafter called "micro unit operations" (MUOs). Characteristics of the design problems of MUOs are explained in this section.
a) Design margin
When chemical equipment is designed, some amount of design margin is added to each variable so as to compensate for unforeseen variations. This method can be used when the design margin always has a beneficial effect on the production efficiency of the device. In MUOs, the size of the device strongly affects its function. In other words, the functional design and physical design cannot be executed separately. In this case, the design margin may not work as well as expected. In the simple micro device shown in Fig. 3, the cross-sectional area and the residence time are assumed to be the dominant factors which affect the function of the device. It is obvious that the function expected from the device is not satisfied when the cross-sectional area is increased. If the device is lengthened or the number of channels is increased, the residence time is also increased. It is clear from this example that in MCPs the uncertainties of the model and parameters may not be compensated by design margins. Laminar flow characteristics are exhibited in the channels of a micro-device. Recent advances in computer technologies and simulation algorithms enable us to simulate the flows with reaction in micro-devices by using a Computational Fluid Dynamics (CFD) simulator. The advance of CFD simulation algorithms creates new possibilities for embedding the CFD simulator into the design system and designing devices which do not require any design margin.
Fig. 3 Effects of design margin (a larger cross-section lowers the velocity and lengthens the residence time; a longer channel or a larger number of channels also lengthens the residence time)
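A rough way to see why a design margin distorts the function of a micro device is to track the mean residence time τ = N·A·L/Q for the simple channel device of Fig. 3. The sketch below uses the channel dimensions from the earlier example as assumed values and shows how adding margin to the cross-sectional area, the length, or the number of channels shifts τ away from its design value when the total flow is fixed.

```python
# Hedged sketch: effect of "design margin" on the mean residence time of a
# multi-channel micro device. Geometry values are taken from the earlier
# example (600 um x 600 um channels, 10 cm long, 16 channels) as assumptions.

def residence_time(n_channels, area_m2, length_m, flow_m3_s):
    """Mean residence time = total channel volume / volumetric flow rate."""
    return n_channels * area_m2 * length_m / flow_m3_s

Q = 5.76e-7                      # m3/s, total flow (~2070 cm3/h), held constant
base = dict(n_channels=16, area_m2=3.6e-7, length_m=0.10, flow_m3_s=Q)

tau0       = residence_time(**base)                                     # design value, ~1 s
tau_area   = residence_time(**{**base, "area_m2": 1.2 * 3.6e-7})        # +20% cross-section
tau_length = residence_time(**{**base, "length_m": 0.12})               # +20% length
tau_number = residence_time(**{**base, "n_channels": 20})               # +25% channels

print(f"design: {tau0:.2f} s, +area: {tau_area:.2f} s, "
      f"+length: {tau_length:.2f} s, +channels: {tau_number:.2f} s")
```

Each added margin lengthens the residence time, so a device oversized "for safety" no longer performs the function it was designed for, which is the point made in the text.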
b) Shape of the device
In a conventional design problem, the unit operations are modeled by using terms such as perfect mixing, piston flow, steady state, and overall heat transfer coefficient. Each unit operation is modeled by using terms whose values do not depend on the location in the device. Convection and diffusion in the device strongly affect the functions of micro devices, and the convection and diffusion conditions are affected by the shape of the device. Thus, it is very important to include the shape of the device in the design variables. The shape of the device has a large number of degrees of freedom; thus, it is almost impossible to derive the best shape if no constraints are added to the shape. However, introducing constraints on the shape may interfere with the emergence of devices designed from a completely new idea. We must carefully select the constraints which are embedded in the design problem. In addition to the constraints used in the design problem of the conventional unit operation, new types of constraints, such as those on the average residence time, the residence time distribution, and the temperature distribution in the device, are included in the design problem. Figure 4 shows a concept of the future design system in which the shape of the device satisfying all constraints is gradually improved so as to optimize the given objective function.
c) Robust design
In a conventional chemical plant, a feedback control system is used to keep the plant at the desirable state. It is difficult to measure and control the flow condition in micro devices, although the flow condition is affected by changes in the physical properties of the fluid such as the viscosity. Therefore, the micro device should be designed so that the effect of changes in the physical properties on the flow condition is as small as possible. One method is to design the optimal shape by taking the various operating conditions into account. The introduction of stochastic programming methods is a more theoretical approach. A living organism can effectively adjust to the disturbances which occur inside and outside of the body without using any mechanical system. So an interesting research area would be the installation of a self-adjusting function in the device.
Fig. 4 Concept of a shape design system
d) Reevaluation of the neglected terms
Though a micro device is fairly small compared with conventional chemical equipment, it is still very large compared with atoms or molecules. In principle, the physical laws established in the conventional world can be used to describe the behavior in the device. However, it is probable that some terms which have been neglected in the conventional design cannot be neglected in the design of MUOs. As an example, the results of the efficiency analysis of heat exchangers are explained in the next subsection.
e) Influence of physical properties of materials on heat transfer behavior [5]
Plate-fin micro heat exchangers are representative of micro heat exchangers. Figure 5 shows the counter-flow plate-fin micro heat exchanger investigated in this work. A number of plates are stacked, and on each plate a hot or cold stream flows. The Fluent® code, which uses the control volume method to solve conservation equations for mass, momentum, and energy, has been used to calculate temperature profiles inside plate-fin micro heat exchangers. Under the conditions shown in Fig. 5, CFD simulations of micro heat exchangers were performed to analyze the influence of physical properties of materials on the heat transfer performance. Three types of materials -- copper, stainless steel, and glass -- were examined. Their thermal conductivities are listed in Table 1. Temperature changes of the heat transfer fluids were used to evaluate the performance of the micro heat exchangers. The simulation results summarized in Table 1 show that the heat transfer efficiency of micro heat exchangers made of stainless steel or glass is higher than that of copper.
Fig. 5 Schematic view of counter-flow plate-fin micro heat exchanger.
Table 1 Heat transfer performance achieved by using three kinds of materials
Materials          Thermal conductivity [W m-1 K-1]   Temperature change [K]
Copper             388                                59.4
Stainless steel    16.3                               72.7
Glass              0.78                               64.8
Fig. 6 Temperature profiles of heat transfer fluids (left: high thermal conductivity, nearly flat wall temperature; right: low thermal conductivity, temperature gradient along the wall)
Fig. 7 Image of micro heat exchanger
When the copper micro heat exchanger is used, the temperature profile inside the wall (the device itself) becomes flat in the longitudinal direction due to high heat conduction, as shown in Fig. 6 (left). The stainless steel or glass micro heat exchanger generates an appropriate temperature gradient inside the wall in this case study, as shown in Fig. 6 (right). Therefore, higher heat transfer efficiency is not necessarily achieved by using materials with higher thermal conductivity, which leads to the conclusion that the heat transfer behavior depends largely on the longitudinal heat conduction inside the walls. This result does not mean that materials with lower thermal conductivity are suitable for achieving higher heat transfer performance. In designing micro heat exchangers, it is necessary to select appropriate design and operating conditions that maximize their performance. Longitudinal heat conduction inside the walls is ignored in the design of conventional macro heat exchangers. However, the effect of longitudinal heat conduction cannot be neglected in designing micro heat exchangers, because the ratio of wall volume to channel volume is large, as shown in Fig. 7.
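One common way to quantify this effect is the axial (longitudinal) wall-conduction parameter λ = k_w·A_w/(ṁ·c_p·L), which compares heat conducted along the wall with heat carried by the stream; large values of λ degrade the effective temperature driving force. The sketch below evaluates λ for the three wall materials of Table 1 using an invented but plausible geometry and water as the working fluid, so the numbers only illustrate the trend, not the CFD results of the paper.

```python
# Hedged sketch: axial wall-conduction parameter for a micro heat exchanger,
# lambda = k_wall * A_wall / (m_dot * cp * L). Geometry and fluid data are
# assumed for illustration and are NOT the conditions of the paper's CFD study.

k_wall = {"copper": 388.0, "stainless steel": 16.3, "glass": 0.78}  # W/(m K), Table 1

# Assumed geometry: one 600 um x 600 um channel, 10 cm long, wall cross-section
# taken as 9 times the channel cross-section (large wall-to-channel volume ratio).
A_channel = 600e-6 * 600e-6      # m^2
A_wall    = 9 * A_channel        # m^2, assumed
L         = 0.10                 # m

# Assumed stream: water at 0.1 m/s average velocity
rho, cp, u = 1000.0, 4180.0, 0.1   # kg/m^3, J/(kg K), m/s
m_dot = rho * u * A_channel        # kg/s

# As a rough guide from compact heat-exchanger practice (an assumption here),
# lambda values that are not small compared with ~0.01 indicate that axial
# conduction noticeably reduces the exchanger effectiveness.
for material, k in k_wall.items():
    lam = k * A_wall / (m_dot * cp * L)
    print(f"{material:16s} lambda = {lam:8.4f}")
```

With these assumed values the copper wall gives a λ roughly two orders of magnitude larger than glass, which is consistent with the qualitative conclusion drawn from Table 1 and Fig. 6.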
3.2 Process synthesis
The study of process synthesis has been carried out by assuming that the unit operations which can be used in the process are given in advance. Under that condition, methods which derive the optimal structure of the process have been developed. On the other hand, in MCPs the micro unit operations which can be used in process synthesis are not clear. For example, there are no standard types of micro reactors or micro absorbers. Therefore, the process synthesis of MCPs becomes the problem of deriving the optimal combination of the functions which are requested in the final process, not the problem of deriving the optimal combination of MUOs. After the combination of the required functions is decided, the physical structures of the devices and the plant are generated. The concept of unit operations has been effectively used to develop new plants. However, it is not suitable for generating a completely new structure of the device and plant. If the plant can be synthesized by using the idea explained above, it is possible to generate a new type of plant systematically by computers. Figure 8 shows an example of process integration [6], [7]. In this example, eleven units have been merged into one column consisting of reaction, distillation, reactive distillation, and extractive distillation sections. This is not the case for MCPs: the number of devices in an MCP increases as the production rate increases. Thus, the development of a complex device in which various functions are executed is a very important problem in order to avoid complexity of the micro plant. A complex device such as the one shown in Fig. 8 is usually developed ad hoc, on the basis of an engineer's experience. In other words, there is no systematic procedure for developing a complex device. Systematic generation of complex device hardware from knowledge of the process functions to be performed in the device is a promising research area in process synthesis.
Fig. 8 Integration of the functions
Fig. 9 Four types of plant structures: (a) aggregation of unit micro plants; (b) aggregation of micro devices having the same function, possibly combined with a conventional device; (c) aggregation of micro devices with distribution and mixing units; (d) hybrid system
3.3 Numbering-up
Figure 9a shows a typical case of the numbering-up of unit micro plants. The production rate can be increased by increasing the number of unit micro plants operated in parallel. This is an easy way to increase the production rate, but this structure may not be economically or technologically optimal. Four types of structures are shown in Fig. 9. Figure 9b shows the structure in which the micro devices having the same function are collected into one aggregated device. If an efficient MUO does not exist for a type of unit operation, a combination of micro devices and a conventional device is also possible (see Fig. 9b). The structure shown in Fig. 9d is the hybrid structure of Fig. 9a and Fig. 9b. Many factors affect the optimal structure. It must be determined by taking the following factors into account:
a) Types of micro unit operations suitable for the process
The MUOs or the train of MUOs which is suitable for the given process must be made clear. If a conventional unit operation is used in a part of the process, the structure shown in Fig. 9b must be adopted.
b) The time allowable to transfer to the next device
In an MCP, the residence time in the connection device cannot be neglected. If a shorter residence time in the connection device is desirable, the structure in Fig. 9c [8] is better than that in Fig. 9b.
c) Operating temperature
As is shown in Section 3.1e), the heat transfer in the longitudinal direction cannot be neglected in a micro device. Thus, it is not desirable to aggregate two MUOs which are operated at different temperatures. If two MUOs are operated at different temperatures, the structures shown in Fig. 9b and Fig. 9c are better than that in Fig. 9a.
d) Requirement of actuators
There are some proposals for micro pumps. However, for the production process, it is desirable to use conventional pumps to transfer the materials among devices. If a pump is required at every connection between unit operations, the structures shown in Fig. 9a and Fig. 9c are not recommended.
e) The possibility of cost reduction by aggregation
Micro devices are manufactured in many ways. Usually, devices having the same function can be manufactured on the same board. In this case, the structures shown in Fig. 9b and Fig. 9c can be manufactured easily.
f) Flexibility of the production
When we adopt the structure shown in Fig. 9a, the production rate can be easily changed by changing the number of unit plants to be operated. However, the production path cannot be changed.
g) Ease of instrumentation and control
This term is explained more precisely in the next section.
4. INSTRUMENTATION AND CONTROL OF MCPs
4.1 Instrumentation of MCPs
A catalyst may progressively degrade with time, and the environmental temperature and pressure also change with time. In order to produce desirable products, information on the precise operating condition of the plant is indispensable. There is much research on the development of analytical systems using micro systems. There is also much research in which measurement systems are used to analyze the behavior of micro systems. However, most of it is not aimed at MCPs which are operated for a long time. For conventional chemical plants, it is possible to add new measurement devices after the construction of the plant. On the other hand, it is almost impossible to add new measurement devices to an MCP after construction. Thus, the plant and the instrumentation and control systems must be designed simultaneously. To develop a suitable measurement and control system design, all of the dominant disturbances which affect the process performance must be enumerated, and countermeasures against each of the disturbances must be developed. Then, the observed, manipulated and controlled variables are selected so as to be able to execute the countermeasures derived above. If there are no measurement or manipulation devices for some countermeasures, the structure of the process itself must be changed. There is no experience with the operation of MCPs, so a dynamic simulator should be used intensively to evaluate the effectiveness of each countermeasure.
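The design procedure described in the preceding paragraph can be written down as a simple bookkeeping exercise: list the dominant disturbances, attach a countermeasure to each, and collect the measurements and manipulations the countermeasures require; a countermeasure without a feasible device signals that the process structure itself must be changed. The entries in the sketch below are invented examples, not cases from the paper.

```python
# Hedged sketch of the simultaneous plant/instrumentation design bookkeeping
# described above. The disturbances and countermeasures are invented examples.

disturbances = {
    "catalyst degradation": {
        "countermeasure": "raise heating-medium temperature over time",
        "measure": ["outlet conversion"],
        "manipulate": ["heating-medium temperature"],
        "device_available": True,
    },
    "feed viscosity change": {
        "countermeasure": "adjust flow distribution among channels",
        "measure": ["channel pressure drop"],
        "manipulate": ["distribution-valve opening"],
        "device_available": False,   # no suitable micro actuator -> change process structure
    },
}

sensors, actuators, structure_changes = set(), set(), []
for name, entry in disturbances.items():
    if entry["device_available"]:
        sensors.update(entry["measure"])
        actuators.update(entry["manipulate"])
    else:
        structure_changes.append(name)

print("sensors needed       :", sorted(sensors))
print("actuators needed     :", sorted(actuators))
print("redesign required for:", structure_changes)
```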
4.2 Control of MCPs
In conventional chemical plants, feedback control is predominantly used to keep the plant at the desirable condition. For example, the temperature of the reactor is controlled as shown in Fig. 10a. The temperature of the liquid in the reactor is measured, and the flow rate of the heating medium is adjusted so that the temperature of the liquid in the reactor becomes the predefined value. When the number of controlled variables is small in an MCP, it is possible to adopt a feedback control scheme. However, as the number of controlled variables increases, it becomes economically difficult to adopt the feedback control scheme (see Fig. 10b). Thus, the controlled variables must be selected carefully, taking the variation of the conditions and the effect of each variation into account. One possible approach is to design hardware that is robust to disturbances and changes in operating conditions. If such a robust device is designed successfully, the total number of sensors and actuators can be reduced. In this case, it may be possible to leave the remaining variables uncontrolled. Indirect control by using the characteristics of the micro devices is another approach. As shown in Fig. 10c, the temperature of the liquid in the micro device can be controlled by keeping the temperature of the heating medium constant because of the good thermal conductivity in the micro device.
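The indirect control idea can be made concrete with a lumped thermal balance: in a channel with a very large surface-to-volume ratio, the liquid temperature relaxes to the wall (heating medium) temperature with a time constant τ = ρ·c_p·(V/A)/h of only a fraction of a second, so holding the medium temperature constant effectively fixes the liquid temperature. The sketch below estimates τ for water in a 600 μm channel under laminar flow; the fluid properties and the laminar Nusselt number are textbook approximations, and the geometry is an assumption carried over from the earlier example.

```python
# Hedged sketch: thermal time constant of the liquid in a micro channel,
# illustrating why keeping the heating-medium temperature constant (indirect
# control) is often sufficient. Geometry and fluid are assumed, not from the paper.

# Assumed square channel and approximate water properties
side = 600e-6                        # m, channel side
d_h  = side                          # hydraulic diameter of a square channel equals its side
k_f, rho, cp = 0.6, 1000.0, 4180.0   # W/(m K), kg/m^3, J/(kg K)

Nu = 3.66                            # fully developed laminar flow, constant wall temperature
                                     # (circular-tube value used as an approximation)
h = Nu * k_f / d_h                   # convective heat transfer coefficient, ~3700 W/(m^2 K)

# Lumped balance: rho*cp*V dT/dt = h*A*(T_wall - T)  ->  tau = rho*cp*(V/A)/h
V_over_A = side / 4.0                # volume-to-surface ratio of a square channel
tau = rho * cp * V_over_A / h

print(f"h   = {h:.0f} W/(m^2 K)")
print(f"tau = {tau*1000:.0f} ms (liquid settles to the wall temperature within ~{5*tau:.2f} s)")
```

With these assumed values τ is on the order of 0.2 s, i.e. the liquid temperature closely tracks the heating-medium temperature without a dedicated feedback loop on every channel.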
Fig. 10 Structures of temperature control systems
Fig. 11 Control systems of micro chemical plants
It is not difficult to embed many actuators and sensors in micro devices by using the techniques developed in the field of Micro-Electro-Mechanical Systems (MEMS). However, the number of data cables increases as the number of observed and manipulated variables increases. To avoid this "spaghetti condition," some functions should be assigned to the actuators and sensors themselves (see Fig. 11). If valves which are driven by deviations of the pressure, temperature, flow rate or concentration of some material can be developed, the flow rate can be directly controlled without using electrical signals. For example, a tube which shrinks and expands with changes of temperature can be used as a valve.
5. CONCLUSION - Bridging the gap between nano, micro and macro technologies -
The dominant problems to be solved in the field of the design and control of micro chemical plants were explained. In the design problems of conventional chemical plants, it has been difficult to develop a precise model because of many uncertain factors. Thus, the latest results of chemical engineering research have not been used in the design problems. In most cases, it is possible to assume laminar flow in micro devices. Thus, the development of a precise model is much easier than for conventional plants. That is, the design of micro chemical plants can be regarded as a most appropriate subject for applying a new design method based on chemical engineering science, systems science and computer technology. Such new design methods have not been proposed yet, and it is not easy to develop them. However, the author believes that micro chemical plants act as a bridge between chemical engineering science (micro area) and process systems engineering (macro area) (see Fig. 12).
Fig. 12 Innovation of process design
REFERENCES
[1] E. Ehrfeld (Ed.), "Microreaction Technology: Industrial Prospects," Proceedings of IMRET 3, Springer (1999).
[2] I. Rinard (Ed.), Proceedings of IMRET 4, AIChE National Meeting, March 5-9, Atlanta (2000).
[3] M. Matlosz, W. Ehrfeld and J. P. Baselt (Eds.), "Microreaction Technology," Proceedings of IMRET 5, Springer (2001).
[4] P. Baselt et al. (Eds.), Proceedings of IMRET 6, AIChE Spring Meeting, March 10-14, New Orleans (2002).
[5] O. Tonomura et al., Proceedings of PSE Asia 2002, Dec. 4-6, Taipei (2002) 109.
[6] A. I. Stankiewicz and J. A. Moulijn, Chem. Eng. Prog., 96(1), (2000) 22.
[7] J. J. Siirola, "An Industrial Perspective on Process Synthesis," AIChE Symposium Series, No. 304, Vol. 91 (Proceedings of FOCAPD'94), (1995) 222.
ACKNOWLEDGMENTS
The author thanks Professors I. Hashimoto, K. Mae, M. Ohshima, M. Kano and M. Noda (Kyoto University) for their valuable discussion. The financial support of the Micro Chemical Process Technology Research Association is gratefully acknowledged.
Workflow and Information Centered Support of Design Processes
Wolfgang Marquardt a and Manfred Nagl b
a Lehrstuhl für Prozesstechnik, b Lehrstuhl für Informatik III (Software Engineering), RWTH Aachen University, D-52056 Aachen, Germany
Abstract
Design process excellence is considered a major differentiating factor between competing enterprises since it determines the constraints within which plant operation and supply chain management are confined. The most important prerequisite to establish such design process excellence is a proper management of all the design process activities and the associated information. Starting from an analysis of the characteristics of chemical engineering design processes, some important open research issues are identified. They include the development of an integrated information model of the design process, a number of innovative functionalities to support collaborative design, and the a-posteriori integration of existing software tools into an integrated design support environment. Some of the results obtained and experiences gained in the last years in the collaborative research center IMPROVE at RWTH Aachen University are presented.
Keywords: computer-aided design, information modeling, software engineering, tool integration, business processes, workflow, work process
1. MANUFACTURING AND DESIGN IN THE 21st CENTURY
The markets and hence the requirements on manufacturing in the process industries have been changing tremendously in the last decades. Growing market volume and limited, often largely local competition dominated manufacturing in the seventies and eighties. Today, the process industry is facing largely saturated markets in many geographical regions of the world. Internet technology has been successfully used in e-commerce solutions to achieve almost complete market transparency. At the same time, transportation costs have been decreasing significantly. Hence, every manufacturer is facing truly global competition. Economic success is only possible if new ideas can be quickly transformed into new marketable products or if the production cost of established products can be diminished substantially to counteract decreasing profit margins. Product innovation, process design as well as manufacturing processes have to be continuously improved to reduce the time to market of a new product, to minimize manufacturing cost and to establish a high level of customer satisfaction by offering the right product at the right time and location.
1.1 Two business processes
The value chain in any manufacturing oriented industry comprises two major business processes - manufacturing and design - which are highly interrelated [1]. These business processes are constrained by the socio-economic environment, in particular, the market, the legislation and the available process technologies (Fig. 1).
Value creation happens in the manufacturing process (Fig. 1, top), which is part of a supply chain including warehouses, distribution and procurement in addition to the production plants. Excellence in manufacturing is not possible without explicit consideration of the constraints and potentials resulting from the interaction between the plant and the supply chain it is embedded into. The influencing factors from the supply chain on plant operation have to be exploited rather than rejected by model-based plant management considering all the manufacturing business processes across the whole supply chain [2]. The changing business environment can be addressed on a short time scale by adapting supply chain management and plant operation strategies for a fixed design.

Fig. 1. The two major business processes in the process industries: manufacturing and design.

The manufacturing process is largely determined by the second business process, the design process, which comprises all the activities related to the design of a new product and the associated production plant, including the process and control equipment as well as all operation and management support systems (Fig. 1, bottom). This business process starts with an idea for a new product and subsequent product design. Conceptual design, basic and detail engineering of the production plant are the major activities which follow, before the plant can be built and commissioned. Excellence in design requires consideration of the complete design lifecycle [3]. In particular, the interactions between different design lifecycle phases focusing on different aspects such as the chemical product, the process concept, equipment design, plant layout, or control structure selection need to be exploited. Only an integrated consideration facilitates the realization of synergies and the achievement of the true economical potential. The plant and the supply chain have to be continuously reengineered during their lifetime in order to adjust manufacturing to major changes in the market conditions and legislation, to adopt new process technologies and to profit from accumulated operational experience. Plant reengineering is only possible on a longer time scale as compared to an adaptation of the manufacturing process for a given plant and supply chain design.
1.2 Value creation
The economic performance of an enterprise heavily relies on the quality of the products of these two business processes. Typically, the major focus is on the product of the manufacturing process, namely the chemicals, which are sold to customers and therefore are considered to generate the revenue of the enterprise. The manufacturing process and its associated supply chain, however, are considered as the cost generators. Profit can be increased on the short time scale with limited investment if the manufacturing cost can be reduced by optimized strategies for plant operation and supply chain management. It is therefore not surprising that the current industrial focus is on the reduction of manufacturing cost in order to counteract decreasing profit margins. This strategy does not seem to be sustainable in the long run, since cost reduction by means of better supply chain management and plant operation using existing assets is largely independent of a certain product portfolio and does not contribute to a fundamental understanding of the processing technology and its impact on chemical product characteristics. The employed operations research techniques apply to many businesses and may therefore evolve into a technological commodity. After a transition period during which these technologies are adopted, the differentiation between competitors with respect to manufacturing excellence vanishes. Hence, at least at this point in time, there is no adequate appreciation of the contribution of design excellence to the overall success of an enterprise. It is the design process which determines the design of a manufacturing plant. This design is largely responsible for the achievable quality of the chemical product and for the order of magnitude of the production cost. The design also constrains the operational envelope and hence the flexibility to react to changing market conditions. Ideally, an integrated consideration of plant and supply chain design on the one hand and supply chain and plant management on the other hand should be addressed [2]. However, such an approach would have to generalize and extend the problem of an integrated design and control of a single plant, which itself has not yet been solved satisfactorily. We hypothesize that design excellence is becoming a major differentiating asset in the future which, to a large extent, will decide on the economic success of an enterprise. Of course, for this hypothesis to be true, design has to be interpreted in a broader sense than the traditional one. In particular, not only the process flowsheet and equipment, but also the operation support system as well as the chemical product itself have to be considered part of the integrated design business process. The quality of the design process depends strongly on the available knowledge about the chemical process and products and its long-term management. However, we claim that design process excellence in addition requires a profound understanding of the integrated design process itself. Such an understanding forms the basis for identifying shortcomings in available knowledge and established work processes. It is therefore a prerequisite for design process reengineering to establish better process design practices as well as for the implementation of a suitable software environment to support the activities in the design process in an integrated manner.
1.3 Overview of the paper
In the following, we focus in this paper only on a part of the design process, namely the early phases of the chemical process design lifecycle, the conceptual design and front-end engineering, for pragmatic reasons to avoid excessive complexity. Further, we believe that
many of our findings will carry over to the more complicated integrated design and manufacturing problem - not only in chemical engineering but in many other related fields. The next section continues with a discussion of the character of the chemical process design process. The key research questions are formulated and the interdisciplinary research center IMPROVE is introduced subsequently. The major results of the research work together with a discussion of our experience constitute the last part of this contribution.
2. THE CHARACTER OF CHEMICAL PROCESS DESIGN PROCESSES
The plant lifecycle can be subdivided into six major phases which comprise conceptual design, basic engineering, detail engineering, construction and commissioning as well as maintenance and continuous reengineering (Fig. 1). Conceptual design and front-end engineering (the early phase of basic engineering) constitute those parts of the lifecycle with the most significant impact on the lifecycle cost. In this early design phase, almost all of the conceptual decisions on the raw materials and the reactions, the process, the equipment, the plant and even on control and plant operation are taken. Though only a small fraction of the total investment cost of the plant is spent in these early lifecycle phases, the consequences on the total cost of ownership of the plant are most significant. The results of this early lifecycle phase form the basis for the subsequent refinement during basic and detail engineering. These early phases of the design lifecycle constitute the focus of this contribution due to their significance for the whole plant lifecycle.
2.1 Status of industrial design processes
The design process is carried out by a team of multidisciplinary experts from different organizational units within the same or different companies. The team is formed to carry out a dedicated project and is directed by a project manager. Usually, a number of consultants contribute to the design activities in addition to the team members. All team members are typically part of more than one team at the same time. Often, the team operates at different, geographically distributed sites. The duration of a single project may range from weeks to years with varying levels of activity at a certain point in time. Hence, the team and the status and assignments of its members may change with time, in particular in case of long project duration. Inevitably, there is no common understanding about the design problem in the beginning of the project. Such a common understanding, called shared memory in [4], has to evolve during collaborative work.
The design process consists of all the interrelated activities carried out by the team members while they work on the design problem [5]. This multi-disciplinary process shows an immense complexity. It has to deal with the culture and paradigms of different domains. Complicated multi-objective decision making processes are incorporated in the design. They rely on the information produced in the current and previous design activities. In particular, conceptual design processes show a high degree of creativity; they are of an inventive nature and do not just apply existing solutions. Creative conceptual design processes are hardly predictable and can therefore only be preplanned on a coarse-grained level. A work process definition - even coarse-grained - is mandatory to establish simultaneous and concurrent engineering to reduce the total time spent on a design. The lack of precise planning on a medium-grained level inevitably results in highly dynamic work processes. They show branches to deal with the assessment of alternatives and to allow for simultaneous work on only loosely related subtasks. Iterations occur to deal with the necessary revision of previous decisions and solutions. They are due to new insight or due to evolving design requirements. A revision may either address a problem
which has instantly been recognized or it may serve to exploit an identified potential. A strict definition of the work process in conceptual design (as accomplished in many administrative business processes [6]) is not only impossible but also highly undesirable. It would largely constrain the creativity of the designer with obviously undesirable consequences for the quality of the resulting design.
The team of experts typically uses a multitude of resources in the various phases of the design process. For example, web-based text retrieval and browsing systems are used to search the scientific and patent literature or internal archives for information on the materials or processing technologies. Lab-scale or pilot-scale experiments allow the investigation of specific questions related to physical properties, kinetics, scale-up of equipment or the accumulation of impurities in recycles and their impact on the process behavior. All kinds of software tools with diverse and often overlapping functionality have been increasingly used in the last two decades to support different design activities. First, there are standard software tools such as word processing, spreadsheet or groupware systems, which are completely independent of a specific application domain and hence are established in all industrial segments. Second, there are domain specific tools which support specific chemical process design activities. Such tools include, for example, block or equation oriented process modeling environments, equipment rating and design or cost estimation software. Often, different tools are in use for the same or similar tasks within a typically globally acting enterprise. This diversity and heterogeneity of software tools may even show up in a geographically distributed design team. Often, these tools rely on some mathematical model of the chemical process to perform a synthesis or analysis step in a model-based fashion. These models are of differing coverage and rigor, but contain a lot of process knowledge in a formalized and structured manner.
In the course of the design process, a complex configuration of different types of information is created. This information appears in multiple ways. There are, for example, standardized documents including equipment specification sheets or design reports, informal texts like e-mail or telephone notes, or input or output files of certain software tools containing problem specifications or result summaries in a formal syntax. This information is typically held in a decentralized manner in the local data stores of the individual software tools, in document management systems or in project databases. Typically, the relationship between the various information units is not explicitly held in the data stores. Information is exchanged in the design team by means of documents, which aggregate selected data relevant to a certain work process context. Though a large amount of information is created and archived in some data store during the design process, there is typically no complete documentation of all the alternatives considered during the design. However, a full documentation of the final conceptual design has to be compiled from the information created during the design process. Typically, this documentation is handed over to an engineering contractor and to the operating company.
The contractor employs this design documentation to continue the design process during basic and detail engineering, whereas the operating company uses the conceptual design package to prepare maintenance and asset management procedures.
2.2 Analysis of current design practice and supporting software tools
The analysis of current design practice reveals a number of weaknesses which have to be overcome to successfully establish design process excellence. The most important issues are the following:
- There is no common understanding and terminology related to the design process and its results.
- Creative design processes are not properly understood. There is no systematic reengineering and continuous improvement process in place.
- Design processes and their results are not sufficiently well documented. This lack of documentation prevents the tracing (i) of ideas which have not been pursued further for one reason or another, (ii) of all the alternatives studied, (iii) of the decision making processes and (iv) of the design rationale.
- Reuse of previous solutions and experiences at a later time in the same or similar design projects is not supported.
- The creation of knowledge through learning from previous experience is not systematically supported by information technologies.
- There is no systematic evolution of requirements and no assessment of design objectives with respect to the requirements.
- A coherent configuration of all the design data in the context of the work process is not available. Time spent for searching and interpreting information on a certain design in the course of the plant lifecycle is enormous. Often, it is less effort to repeat a task. There is no systematic management of conflicts between design information or change propagation mechanism between design documents.
- There is no systematic integration of design methodologies based on mathematical models of the chemical processes with the overall design work process.
In addition to these work process oriented deficiencies, there are also serious shortcomings with respect to the software tools supporting the design process. Some important considerations are the following:
- Tools are determining the design practice significantly, because there has been largely a technology push and not a market pull situation in the past. Tool functionality has been constrained by technology, often preventing a proper tailoring to the requirements of the design process. Usually, the tools are providing support functionality for only a part of a design task or a set of design tasks.
- There is a limited integration between tools, largely focusing on those of a single vendor or its collaborating partners. The integration of legacy tools into such an environment or the integration of the software infrastructure of a company is costly.
- The heterogeneity of the software environment impedes cooperation between organizations.
- Design data are represented differently in the various tools. There are not only technical, but also syntactic and semantic mismatches which prevent integration.
- There is a lack of managing relations between data and documents produced by different tools in different design activities.
- Project management and administration software is not at all integrated with engineering design support software. Hence, proper planning and controlling of creative design processes is difficult.
- Tool integration is largely accomplished by data transfer or data integration via a central data store, neglecting the requirements of the work processes.
- Communication in the design team is only supported by generic tools like e-mail, video conferences, etc., which are not integrated with engineering design tools.
- The management of creative design processes is not supported by means of domain specific tools.
These two lists clearly reveal the high correlation of the work processes themselves and the supporting software tools. Both have to be synergistically improved and tailored to reflect the needs of the design process in a holistic manner. We believe that a work process oriented
view on design and the required information technology support is a major prerequisite to achieve design process excellence. In addition, a further development of model-based chemical process design methodologies, algorithms and tools has to take place.
3. THE COLLABORATIVE RESEARCH CENTER IMPROVE
About six years ago, the interdisciplinary collaborative research center (CRC) 476 (IMPROVE) was established at RWTH Aachen University. It is funded by Deutsche Forschungsgemeinschaft (DFG, the German science foundation) to address some of the issues identified in the last section. Computer scientists and engineers from six disciplines are collaborating with substantial financial and human resources in this long term research effort. The focus is on new concepts and software engineering solutions to support collaborative engineering design processes [7]. Research is concentrated on the early phases of the design lifecycle due to their significant impact on total cost of ownership and due to the challenges resulting from the creative and highly dynamic nature of the work process.
A scenario-based research approach has been used in IMPROVE in order to identify the requirements based on a concrete chemical process design case study. The selected scenario comprises the conceptual design of a polymerization process for the production of polyamide-6 from caprolactam [8]. This process is well documented in the literature and of significant industrial relevance. The polymerization domain has been chosen because there are far fewer mature design support tools than for petrochemical processes. Therefore, tool integration and work process support are of considerable interest in the end user as well as software vendor industry. The process consists of a number of polymerization reactors followed by a number of units to separate water and monomer from the reaction products and a compounding extruder. The extruder is not only used for compounding but also for degassing of the monomer remaining in the melt. Typically, polymerization, separation, and extrusion are designed in different organizational units of the same or even different corporations using different approaches and supporting software tools. An integrated solution of this problem has to overcome the traditional gap between polymer reaction engineering and polymer processing with their different cultures as well as the problem of incompatible data and software tools. Hence, the scenario poses a challenge for any integrated conceptual design process and its supporting software environment.
The design support software tools employed in the scenario are of a completely different nature. They include commercial as well as legacy tools. Examples are Microsoft Excel, various simulators such as Polymers Plus from Aspen Technology, gPROMS from PSE, Morex, BEMflow and BEMview from Institut für Kunststoffverarbeitung at RWTH Aachen, the project database Comos PT from Innotec, the document management system Documentum as well as the platform Cheops for run-time integration of heterogeneous simulators, the repository ROME for archiving mathematical models and the modeling tool ModKit, all of Lehrstuhl für Prozesstechnik at RWTH Aachen.
The major research issues considered in IMPROVE include
- the improvement of design processes by either integrating yet largely isolated design activities or by defining innovative design processes,
- the development of an integrated information model of the complete design process in the sense of an ontology,
- the development of novel computer science concepts and their prototypical implementation for information and collaborative work process management in engineering design processes,
- the implementation of a demonstrator of an integrated design support system to illustrate the synergy of integration and to prove the additional benefit to the end user by means of an industrially relevant and realistic design scenario, and
- the development of software technologies for the a-posteriori integration of existing tools and their functional extensions with an emphasis on the automatic generation of wrappers to homogenize interfaces.
Some results of IMPROVE will be presented in the remainder of this contribution. More detailed information with numerous references to publications originating from IMPROVE can be found at http://www-i3.informatik.rwth-aachen.de/research/sfb476/.
4. MODELING OF DESIGN WORK PROCESSES AND THEIR PRODUCTS
A major objective of our research in IMPROVE is the development of an integrated information model which covers the work processes, the resources employed, and the resulting design (or product) data, which are typically organized in documents reflecting the context of a certain activity during the design process. Such a modeling activity is not self-sufficient. The resulting model can be used in a number of ways. For example, deficiencies of established design processes may be identified as a prerequisite for their improvement and reengineering. Further, new innovative work processes may be developed from an analysis of existing approaches in order to better integrate traditionally separated activities. Examples include the tighter integration of mathematical modeling and cost estimation with the increasing refinement of the design in a continuous manner, despite the constraints imposed by current tool functionality. Another example relates to the improved integration of different design domains such as polymer reaction, monomer separation and polymer processing.
Besides these engineering related use cases, the information model is the basis for a model-based top-down design of new software tools with innovative functionality and for the integration of these new and of existing tools into a design support environment. The envisioned information model not only has to cover work processes and the information generated and used, but has also to describe the design process and the associated information from various perspectives with differing levels of detail. Fig. 2 shows some relevant perspectives on the information managed during the design process on various levels of detail and with various degrees of formalism [9]. First of all, the major process design concepts have to be represented on a conceptual level (Fig. 2, top) to address the needs of the designers in the team. For example, such a conceptual model facilitates a common understanding of the design process and its results, a prerequisite for improving the design process or for formulating requirements on appropriate design support software tools. The conceptual information model can be transformed into a design model (Fig. 2, middle). It serves the needs of the software engineer during software development and also determines the user interface of tools. Finally, the design model is implemented by means of some technology, resulting in the implementation model of the design support software (Fig. 2, bottom). In addition to these levels of detail and degrees of formalization, we also distinguish between the data itself (Fig. 2, left) and the documents as carriers of data related by a certain design context (Fig. 2, right). Hence, documents link contextual design data to the work process. In the sequel, we will discuss some of the information models developed and their relations. For the sake of clarity, the focus will be largely on the conceptual level. Besides such a conceptual model, various more refined and strongly formalized implementation models have been derived from or related to the conceptual model in IMPROVE.

Fig. 2: Different perspectives on an integrated information model for the representation of design process information.
4.1 Work process modeling during empirical studies

The investigation of existing work processes and the definition of recommended work processes is supported by means of the work process model C3. It is based on the Unified Modeling Language (UML), but includes a number of specific extensions [10, 11]. C3 supports work process modeling in a hierarchical manner on an arbitrary level of granularity. It covers the roles of the members of the design team, the order of activities carried out in a certain role, the information used, modified or generated, as well as the resources (software tools, in particular) employed during an activity. C3, implemented by the Workflow Modeling System (WOMS), facilitates the acquisition and documentation of actual work processes by industrial designers with little extra effort due to its easily accessible and illustrative graphical notation [12]. The weak degree of formalization is considered a strength of C3: it keeps the modeling effort to a minimum, which is essential for acceptance by always time-constrained industrial designers. The C3 work process model can form the starting point for further extension and refinement into a conceptual work process model, which itself can be transformed further in the sense of Fig. 2 in order to assist the development of software supporting the design process in geographically and institutionally distributed teams [13].

4.2 The conceptual information model CLiP and its applications

The conceptual information model CLiP has been developed to clarify the most important concepts and their relations for the description of chemical process design processes in the sense of an ontology [14]. The design of CLiP is based on ideas from general systems theory [15], which have been successfully applied to represent complex structured systems in various domains. Its design philosophy is detailed in [16].
Fig. 3: The conceptual information model CLiP: meta model and partial model structures.

The development of CLiP aims at a well-structured and therefore extensible information model, which ultimately covers all the design data produced during the design process, the mathematical models used in the various model-based design activities, the documents for archiving and exchanging data between designers, collaborating institutions, or software tools, as well as the design activities with the resources they use. CLiP is not planned as an information model which fixes all the details of the universe of chemical process design in a comprehensive manner. Rather, it is understood as a modeling framework in the first place, providing a coarse structure for the very diverse types of data occurring in the design process. Such a model framework has to be open for extensions driven by the requirements of a certain application. Further, it has to be designed to allow for an integration of already existing data models. CLiP integrates the representation of work processes and the resulting design information. In particular, design documents are explicitly distinguished from design data. Fig. 3 gives an overview of the structure of CLiP. A more detailed description can be found elsewhere [9, 17, 18]. Meta modeling has been used as a first structuring mechanism, in order to allow for an efficient representation of symmetric and recurrent model structures. This way the coarse structure of the information model can be fixed and a simple categorization of the most important modeling concepts becomes feasible. We distinguish the meta meta class level, which only defines the concept of a general system, the meta class level, which holds the major categories of concepts for our domain, and the simple class level, which defines concepts related to different tasks in the design process. The meta class level comprises a technical system with its constituents device and connection, the material, the social system consisting of the members of the design team, the activities carried out during the design process and the documents associated with the various activities. The open model structure of CLiP is achieved by grouping the concepts on the simple class level into related logical units. The resulting partial models relate to design tasks which
are typically addressed independently, in parallel or in sequence, during the design process. The concepts in the partial models can be introduced and maintained largely independently from each other. However, since the same real object is often referred to in different design tasks from different perspectives with differing degrees of detail, overlap, partial redundancy, conflicts, and even inconsistency can hardly be avoided. Existing relationships between concepts are explicitly captured by means of association links. These links are defined by means of integration classes (or documents) to specify relations not only between concepts in different partial models but also between the associated data. To reduce the specification effort and the complexity of the resulting information model, only those relations are represented which are of relevance in the course of the design process. This principle of systematic, task-oriented decomposition and subsequent selective reintegration is considered an essential prerequisite to successfully deal with the inherent complexity of an integrated information model covering the whole design lifecycle. CLiP is implemented by means of different modeling formalisms. The meta model and some of the concepts of the simple class level have been implemented in ConceptBase [19]. This system nicely supports meta modeling and offers a sound logical foundation with basic deductive reasoning capabilities to assist schema development and maintenance. All the partial models of the simple class level are represented by means of UML [10]. This formalism is well-suited for large data models due to its graphical notation. The contents of documents are represented by means of XML [20]. The information units within documents are linked to CLiP classes and their attributes by means of associations. This link is the prerequisite for explicitly relating information stored in a project database to that contained in design documents, typically stored in a document management system. Currently, CLiP is being enhanced by additional formal semantics for various reasons. First, the associations between partial models can only be specified if a precise meaning of the concepts and attributes is established. Second, model development is facilitated, and third, the model can be directly used by a reasoner based on description logics. This way, new data and concepts can be classified and introduced in an existing database. Also, browsing and retrieval of data can be assisted across heterogeneous data sources, if the semantically enriched data model is used as a homogenization layer. Still, a coarse conceptualization by means of UML is accomplished first, before the refinement and further formalization of the UML concepts is addressed by means of some ontology language such as DAML+OIL [21].
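The structuring principles just described - largely independent partial models on the simple class level, selectively related by explicit association links - can be illustrated by the following minimal Python sketch. The partial-model and concept names are invented for illustration and do not reproduce the CLiP schema.

# Hypothetical illustration of partial models related by explicit association links.
# Concept and partial-model names are invented; they do not reproduce CLiP itself.

from dataclasses import dataclass

# Two partial models, maintained largely independently of each other.
behaviour_model = {"Reactor": {"conversion": float}}            # e.g. model-based analysis
equipment_model = {"StirredTankReactor": {"volume_m3": float}}  # e.g. equipment design

@dataclass
class AssociationLink:
    """Explicit link between concepts of different partial models.

    Only relations that are relevant in the design process are represented,
    following the principle of selective reintegration."""
    source_model: str
    source_concept: str
    target_model: str
    target_concept: str
    meaning: str

links = [
    AssociationLink("behaviour", "Reactor",
                    "equipment", "StirredTankReactor",
                    "is realized by"),
]

def related_concepts(concept: str):
    """Return concepts in other partial models that are associated with `concept`."""
    return [(l.target_model, l.target_concept, l.meaning)
            for l in links if l.source_concept == concept]

if __name__ == "__main__":
    print(related_concepts("Reactor"))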
4.3 Application of CLiP - From conceptualization to implementation

The software implementation of design support functionality requires a refinement and transformation of this conceptual information model according to Fig. 2. This refinement may be organized by means of various horizontal layers on the simple class level. Such layers serve as an additional structuring mechanism to maintain transparency and to support extensibility. The specific refinement of the model is determined by the envisioned application and the target software platform. There may be more than one refined model, if different tools for the same or similar tasks are being used in an integrated software environment. Often, available data models are subject to reuse and integration. These data models can either be those used in the tools to be integrated, or some standardized data model such as the application protocols of STEP [22], which have been developed for data exchange between the software environments of different organizations. Different ways of integrating existing data models with the information model framework CLiP have been discussed in [23]. There have been a number of data modeling activities in the process engineering domain (see [24] for a critical review) without a proper validation. Information model validation is difficult in principle since there is very little theoretical foundation to decide
upon the quality of a certain information model. Validation is only feasible if such a data model is implemented in a variety of different ways. An information model may be considered valid if such implementations are possible without major effort and if the resulting software matches the cognitive model of the user to facilitate the use of the tool. For this purpose, CLiP has been used in various software development projects in IMPROVE. CLiP has been used, for example, to extend the database schema of the project database Comos PT of Innotec to also cover conceptual design data [9]. Originally, the database schema of Comos PT had focused on detail engineering and maintenance only. The case study revealed the versatility of CLiP and its simple integration with existing data models. Another case study carried out in IMPROVE is related to the integration of different software tools [25]. CLiP is refined into the data model for the specification of so-called integration documents which explicitly model the relations between the schema and the data of the implementation models of different tools. This way, an integration of tools is facilitated by a selective data homogenization approach without the need for defining and implementing a centralized data store (see Section 5). Such an approach avoids the problems of data-centered tool integration, in particular, the maintenance and implementation of the necessarily complex data model of the complete design process. In contrast to this tool-to-tool data integration, CLiP is also being used in IMPROVE as a basis for the implementation of a data warehouse to integrate heterogeneous data sources such as tool-specific file systems or databases which inevitably occur in an integrated design support environment. Such a process data warehouse not only archives all data generated, but also the work processes operating on these data (see Section 5). Besides the application of the product data model of CLiP for the implementation of information management functionality, e. g. for archiving of the design data generated during a design project and for the exchange of data between tools, the integrated data model can also be used as a starting point for the implementation of tools which support the execution of work processes during design. Such tools can be considered generalized workflow systems which, in contrast to existing workflow systems, satisfy the needs of highly dynamic and creative engineering design processes. At least in the medium term, such systems are considered of high industrial relevance. The focus will shift from information management to an efficient support of the execution of high-quality design processes. Two work process support approaches are pursued in IMPROVE (see Section 6). They aim, on the one hand, at the guidance of an individual designer during the execution of unstructured and not properly planned personal work processes and, on the other hand, at the administration and management of the complete design process carried out by the design team.

4.4 Some lessons learned and future challenges in information modeling

Four major and largely independent issues will be briefly sketched in the sequel. They relate to empirical studies of design processes, the integrated modeling of data, documents and work processes, the structuring of an integrated information model, and its application. Work process oriented information modeling has to rely at least in part on empirical studies of real industrial design processes.
These empirical studies, however, should not be confined to clarifying the social context of a design process [26]. Rather, they should be related to the concrete engineering domain and to the information technology support of design, either desired or actually used. According to our experience, empirical studies have to be goal-oriented towards an in-depth understanding of the design process relating organization, management, resources, requirements, tasks, and results produced. This understanding is best cast into an information model. Since it is impossible to completely formalize (on a fine-
grained level) creative conceptual process design, the information model has to remain coarse-grained (and hence vague) in parts. Such a focus on understanding and modeling is comparable to inductive (empirical) mathematical modeling of chemical processes. Acquisition of real work process data is most effective if it is carried out by the designers themselves. WOMS has proved to be a useful tool to support such work process data acquisition. As in mathematical modeling, this bottom-up approach of empirical studies has to be complemented by some deductive component, as in fundamental chemical process modeling. Obviously, such a top-down component of modeling a design process requires a "design theory" or, more pragmatically, a good understanding of current design practice or preferred design processes. A meaningful combination of both approaches remains a challenge for future research [27]. As soon as an information model of the existing design process is available, techniques from business process engineering may be applied to improve the design process and to formulate requirements for computer-aided support [28]. The integrated consideration of data, documents and work processes together with the resources used and the organizational structures involved seems to be appropriate. Still, a lot of conceptual as well as technical issues of developing and validating such an integrated information model have to be addressed by future research work. A much better capturing of the real design process seems to be possible if documents of all kinds are systematically considered to link design data and work processes. Documents are not only used to archive a design, but they also play a dominant role in the exchange of design results between people across organizational boundaries. Hence, they are closely related to the human part of the work process. Further, documents can be interpreted as input files to, and as results of, the execution of software tools. Therefore, documents relate to the computer-assisted part of the work process. Documents always define the context of a work process and provide a situated view on the design data. Separated documents, however, do not allow a comprehensive and consistent presentation of the whole configuration of the design data. Hence, more work has to be done to clarify the conceptual relations between different types of documents and their data items. An integrated information model of the design process lifecycle has an immense inherent complexity. An appropriate structure of a multi-faceted information model is crucial to facilitate transparency, extensibility, and maintainability. The evolving formalisms and languages for information model representation are further complicating the problem. Last but not least, the collaborative work process of information modeling has to be properly defined, managed, and supported by suitable tools. The resulting information model not only provides a common understanding of the domain of interest within the design team. It is also mandatory for a fully model-based top-down design of design support software systems. There are many applications which can benefit from the same integrated information model, such as tool development, integration of existing tools, data exchange between tools and organizations, homogenization of heterogeneous data sources, or the realization of the semantic web to create the knowledge base of an enterprise.
Ideally, all this software should be generated automatically from a formal specification. There is obviously a long way to go due to the complexity of the design domain.
5. ARCHITECTURE OF A FUTURE INTEGRATED DESIGN ENVIRONMENT

The information models introduced in the previous section are indispensable for a top-down design and for the implementation of integrated design environments. Before we discuss
advanced cooperative design support under development in IMPROVE in Section 6, we present and discuss a coarse software architecture which is suitable for the work process oriented integration of existing and novel software tools.

5.1 An exemplary architecture

Fig. 4 depicts a sketch of a software architecture of a future design support environment. A prototype of such an environment with partial functionality has been implemented and evaluated in IMPROVE. The environment comprises existing tools typically employed in industrial practice which stem from different sources, either commercial or in-house. Tool integration is, and will remain, of substantial interest for the operating companies despite the substantial effort of major vendors to integrate their own tools with each other and with those of selected collaborating partners. The end users in the operating companies are interested in customizing their design support environments by integrating additional tools and databases provided by other vendors or in-house development groups in order to differentiate their technology from that of their competitors. The software to be integrated can therefore be either "complete" design environments from some major vendor or highly specialized tools or databases from niche providers. The tools are wrapped by thin software layers to provide standardized interfaces for data exchange and method invocation, employing state-of-the-art middleware technology [29]. The interface definition is guided by the conceptual information model of the design process discussed in the previous section. The design documents and their evolution during the work processes determine the interface definition to a large extent, since they provide the context for tool interoperation in a natural manner. The architecture in Fig. 4 suggests interoperation of very different types of software modules in an integrated design support environment. There are, for example, general purpose process modeling environments (e. g. Aspen Plus from Aspen Technology or gPROMS from Process Systems Enterprise) as well as dedicated simulation tools (e. g. Morex for the simulation of extrusion processes). In addition to the various simulation capabilities, various databases need to be integrated. For example, a project database (e. g. Comos PT from Innotec) is required to store the major product data during a design project. Such a project
Fig. 4: A coarse sketch of a software architecture of a future integrated design environment, as partially implemented in the CRC IMPROVE.
database may offer a flowsheet-centered, graphically supported portal to access the design data stored as well as interfaces to a limited number of design tools. Alternatively, a separate flowsheet tool with extended functionality [30] could be integrated in order to serve the needs of other tools integrated in the environment. In addition to the project database, a physical property database (e. g. DIPPR) with raw experimental data as well as parameters for physical property correlations and a repository for storing mathematical models of different kinds (such as ROME [31]) are part of the integrated environment. A commercial document management system is used to serve as an archive for all design documents. A process data warehouse captures the design data in the context of the work process [32]. In order to support the execution of distributed design processes, the management system AHEAD [33] of Informatik III at RWTH Aachen is integrated. It assists the project manager in allocating and monitoring the resources (e.g. the members of the design team and the tools they use), in providing a consistent set of documents produced during the design project, and in keeping track of all the activities carried out during the design process on a medium-grained level. An extended middleware platform developed as part of CRC IMPROVE provides load balancing, error handling and service management for the integrated design environment, which is typically operated in a distributed wide area network.
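To indicate the role of the thin wrapper layers mentioned above, a minimal, purely hypothetical sketch is given below; the standardized interface, the legacy tool and all method names are assumptions for illustration, and a real implementation would rely on middleware technology, which is omitted here.

# Minimal, hypothetical sketch of wrapping an existing tool behind a
# standardized interface for data exchange and method invocation.
# Real implementations would use middleware (omitted here).

from abc import ABC, abstractmethod

class DesignToolInterface(ABC):
    """Standardized interface assumed by the integrated environment."""

    @abstractmethod
    def export_document(self, name: str) -> dict: ...

    @abstractmethod
    def invoke(self, method: str, **kwargs) -> dict: ...

class LegacySimulator:
    """Stand-in for an existing tool with its own native interface."""
    def run_case(self, case_id):
        return {"case": case_id, "converged": True}
    def dump_flowsheet(self):
        return "<flowsheet units='2' streams='3'/>"

class LegacySimulatorWrapper(DesignToolInterface):
    """Thin layer mapping the standardized interface onto the native one."""
    def __init__(self, tool: LegacySimulator):
        self._tool = tool

    def export_document(self, name: str) -> dict:
        if name == "flowsheet":
            return {"name": name, "content": self._tool.dump_flowsheet()}
        raise KeyError(name)

    def invoke(self, method: str, **kwargs) -> dict:
        if method == "simulate":
            return self._tool.run_case(kwargs.get("case_id", "base"))
        raise NotImplementedError(method)

if __name__ == "__main__":
    wrapped = LegacySimulatorWrapper(LegacySimulator())
    print(wrapped.export_document("flowsheet"))
    print(wrapped.invoke("simulate", case_id="polyamide-6"))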
5.2 Integration approach

The software integration approach chosen is driven by the characteristics of actual design processes, the resulting product data distributed in documents of various kinds, and the relations between those documents or the data items they contain. It is not intended to extract the design data, completely or in part, from the native data stores of tools in order to duplicate them, for example, in a central data warehouse and store them together with the relevant associations existing across the various tools. Rather, in contrast to such a data-centered integration approach followed by all commercial integration solutions, we preserve the native data stores of the tools to be integrated. Hence, integration is achieved by means of a-posteriori homogenization of heterogeneous data sources. For this purpose, the data and communication layer of the architecture (see Fig. 4) is equipped with dedicated mediators [34] which map the data instances between data sources and sinks. The process data warehouse stores the meta data which are required to trace work processes and the resulting product data for documentation purposes and to facilitate later reuse in the same or in a different project [32]. Such an integration approach has been advocated by a requirements analysis of a number of German operating companies [35]. If integration considers both the work processes and the data handled in a particular design context, the implementation and maintenance effort of integrated solutions remains limited.
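The mediator idea can be sketched as follows; the record layouts, attribute names and unit conventions are invented for illustration, and only the mapping principle - data remain in the native store and are translated on demand - is intended to carry over.

# Hypothetical sketch of a mediator that maps data instances between the
# native stores of two tools (source and sink) without central duplication.

def project_db_stream(stream_id: str) -> dict:
    """Stand-in for a query against the native store of tool A."""
    return {"id": stream_id, "massflow_kg_h": 1200.0, "temp_C": 85.0}

def mediator_to_simulator(record: dict) -> dict:
    """Map the source schema/units onto those expected by tool B."""
    return {
        "name": record["id"],
        "flow": record["massflow_kg_h"] / 3600.0,   # kg/h -> kg/s
        "temperature": record["temp_C"] + 273.15,   # degC -> K
    }

if __name__ == "__main__":
    # Data stay in the native store; only the mapped view is handed to the sink.
    print(mediator_to_simulator(project_db_stream("monomer-recycle")))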
5.3 Providing new functionality for collaborative design

New design support functionality has to be provided by means of a functional extension of existing software tools (e. g. a simulator or a project database). These extensions have to be accomplished without reengineering the existing tools, since reengineering is typically not feasible because of commercial as well as technological constraints. Hence, the functional extensions of existing tools are implemented as separate and self-contained software components. These software components are subsequently wrapped by a thin software layer to implement logically as well as technically matching interfaces, to facilitate integration with existing tools. Examples of such new functionality under development in IMPROVE will be discussed in Section 6. In many cases, some desired functionality is already available as part of an existing tool. Often, the level of sophistication of the available implementation is too limited in order
to apply it for a related purpose for which it was not originally designed. In such cases, it would be desirable to isolate and extract the available generic functionality from the existing tool in order to offer its service to other tools in the integrated environment after the required extensions and modifications. For example, most computer-aided process engineering tools include some software module for the specification, representation and visualization of flowsheets. Typically, the level of abstraction and the information content covered is determined by the specific task addressed by the tool in the design process. It is obviously preferable from a usability as well as from a maintenance point of view to centralize all the flowsheet functionality in a single dedicated tool. Such an advanced flowsheet tool (see Fig. 4) is designed to fulfill all the requirements for managing flowsheet representations on various levels of granularity and for browsing and retrieving flowsheet-related design data [30]. In practice, the extraction of some functionality from existing code may not be possible. There are at least two reasons: the source code may not be available, or the functionality to be extracted may be tightly linked to other tool functions such that the extraction is impossible without complete reimplementation of the tool. In those cases, the functionality is not extracted, but it is bypassed instead. Extended functionality superseding the existing capabilities is provided by a new dedicated tool as part of the integrated design support environment.

5.4 Some lessons learned and future challenges in tool integration

A number of challenging issues have come up during our studies on the development of integrated design support environments. Some of them are briefly sketched in the sequel. The a-posteriori integration of existing tools into an open integrated design support environment meets the expectations of the end users but is, at least to some extent, contradicting the objectives of the software vendors. The latter want to offer their own integrated solutions to extend coverage and market share. In particular, their tools do not offer transparent interfaces which easily allow tool integration. The data structures may not be documented or the data cannot be exported. Existing tools often combine too much functionality in a single software system due to historical reasons. Typically, the tools have not been designed for integration. Rather, they have been created in an evolutionary extension process which steadily extended the functionality of a monolithic tool. Obviously, a redesign and modularization of the tools would not only facilitate integration into open environments but would also reduce software maintenance cost. Both issues, the lack of transparent interfaces and of appropriate modularization, are hard problems for tool integration. Middleware and wrapper technology has come a long way and nicely supports the control and platform integration aspects of tool integration [36] on a technical level. However, the interfaces are only standardized on a syntactic level, which is not sufficient for tool integration. Rather, standardization on a semantic level is required to ensure proper function and meaningful data exchange between tools. Such a semantic standard may be accomplished by ontologies, which are being pushed strongly by semantic web approaches [37].
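A minimal, hypothetical sketch of the difference between syntactic and semantic interface standardization is given below: two tools expose a mass flow under different names and units, and a shared mapping (standing in for an ontology) is used to harmonize them before exchange. All names and factors are assumptions for illustration.

# Illustrative sketch: syntactically identical interfaces can still disagree
# semantically. A shared ontology (here a plain mapping, purely hypothetical)
# is used to translate tool-specific terms and units before data exchange.

SHARED_ONTOLOGY = {
    # tool-specific term        -> (shared concept, unit)
    ("toolA", "feedrate"):        ("mass_flow", "kg/s"),
    ("toolB", "throughput_kg_h"): ("mass_flow", "kg/h"),
}

UNIT_FACTORS_TO_SI = {"kg/s": 1.0, "kg/h": 1.0 / 3600.0}

def harmonize(tool: str, term: str, value: float):
    """Map a tool-specific quantity onto the shared concept in SI units."""
    concept, unit = SHARED_ONTOLOGY[(tool, term)]
    return concept, value * UNIT_FACTORS_TO_SI[unit]

if __name__ == "__main__":
    print(harmonize("toolA", "feedrate", 0.5))          # ('mass_flow', 0.5)
    print(harmonize("toolB", "throughput_kg_h", 1800))  # ('mass_flow', 0.5)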
Ultimately, the classical tool integration dimensions [36] have to be extended by a work process dimension to provide context to the integration exercise. If such a work process orientation is lacking, tool integration is unnecessarily complex and costly to develop and maintain. Hardware and software platforms are rapidly changing. The technological progress in information technology is driven by the mass consumer markets and not by the requirements of engineering design applications. The level of sophistication and functionality of the service layer on top of traditional operating systems is steadily increasing. Improved services simplify the implementation of integrated design environments and allow more advanced functionality. For example, multimedia services can be used for advanced communication between design
team members. However, a careful modularization of the application becomes crucial to allow the absorption of consolidated new software technologies. In summary, the integration of tools into useful design support environments at reasonable cost requires careful architectural considerations. Both the integration of existing commercial and in-house legacy software and the absorption of evolving software technologies have to be accommodated. Vendors have to design their tools systematically for a-posteriori integration to satisfy the needs of their customers and to reduce their own development and maintenance cost.

6. NEW DESIGN SUPPORT FUNCTIONALITY

A work process oriented integration of existing design support software tools requires novel functionality if a new quality of support for collaborative design is aimed at. Subsequently, a selection of such novel support functions is discussed.
6.1 Semantic support of individual designers

A designer has accumulated a substantial amount of experience during previous design projects. The quality of the design processes can be improved tremendously if this implicit knowledge can be converted into explicit knowledge which is amenable to later reuse by the designer or by a colleague in a completely different context, either within this or another design process. There have been numerous attempts to acquire implicit knowledge from experts by means of formal techniques in artificial intelligence. These techniques typically require a basic understanding of the business processes of interest. Since creative design processes are, at least in part, not sufficiently well understood to effectively guide such knowledge acquisition processes, and since experts do not always cooperate well, an approach originally suggested in the context of requirements engineering has been adapted to engineering design processes in IMPROVE. We briefly sketch the idea in the following and refer for details to [38]. Instead of acquiring knowledge a-posteriori by means of structured interviews, reviews of past design processes etc., the design process is recorded automatically during its execution. The recording results in so-called process traces which capture all the major steps carried out during the design process together with the data and documents which have been handled. These traces are stored in the process data warehouse of the integrated design support environment (see Fig. 4). The traces are not only used to document the work processes in detail. Rather, they provide the basis for interactively extracting repetitively occurring process chunks applicable in a certain design context. As in the area of mathematical process modeling, such an identification task can be supported if the purely data-driven identification is complemented by some a-priori knowledge. While in mathematical modeling such knowledge is embodied in model structures derived from the fundamental laws of physics, it is less obvious what kind of a-priori knowledge can assist the discovery of design process chunks. We are currently investigating to what extent specific parts of a design process can be modeled on an abstract level in order to provide parameterized chunks which could guide the discovery process based on process traces. The process chunks are supposed to be implemented in the design process guidance and enactment tool of the PRIME environment [38], which assists the individual designer during repetitive activities. For this purpose, the guidance and enactment tool has to analyze the current context of the design process first. Next, it has to match it with similar contexts stored in the process data warehouse. If a matching context has been found, applicable
process chunks are retrieved from the process data warehouse and suggested to the designer. After the designer's approval and after missing context data have been provided, the process chunk is enacted.

6.2 Administration and coordination of the complete design process

Individual designers are typically contributing to different design processes simultaneously. All these processes are administered and coordinated by a chief design engineer, the manager for short. Obviously, the individual design processes are not independent but highly interrelated by the documents they work with and by the resources they share. The resources include time and budget, team members, experimental facilities and available software tools. Inevitably, the inherent complexity of the design processes requires a management support tool to effectively monitor and coordinate the design processes and the associated activities, to keep track of the resulting design documents and their relationships, and to administer and allocate the available resources. The strong relation between resources, activities, and documents has to be taken into account for a proper allocation of resources to specific design tasks as well as for consistency management of the documents. AHEAD, a software tool to support the management of cooperative design processes and their interdependencies on a coarse-grained level, provides functionality for two different kinds of users, the manager and the designer [33]. The manager is supported by three fully integrated tool sets. Dynamic task networks with control and data flow interrelations are provided to implement activity management. Version control, configuration management and an explicit notion of the dependencies between documents are provided to facilitate management of the products of the design process. The resource management allows for the definition of the organizational structure of the design teams working on the various design processes. The designer is supported by an agenda tool, which displays the upcoming tasks to be carried out by the design team members, and by a work context tool to manage the documents and the software tools required to carry out a certain design task. The design support offered by AHEAD is purposely limited to coarse-grained activities in order to facilitate the link between the actual design work carried out by the design teams and the management of related design processes. Hence, it differs in scope from the work process support offered by the PRIME environment, which focuses on guiding and supporting activities of an individual designer on a fine-grained level. The implementation of AHEAD directly addresses the inherent dynamics of a design process. In particular, the task networks can be modified at any time during project execution to reflect changes in the design process as a consequence of emerging insight into the design problem or of the handling of problems and mistakes. Further, an adaptation of the functionality to the peculiarities of a given domain of application is possible by means of a modeling environment which facilitates the representation of domain-specific knowledge, for example, related to the capabilities of the tools employed. Domain-specific code is generated to customize the management tool to the domain.
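A minimal sketch of a dynamic task network of this kind is given below; it is purely illustrative, uses invented task names, and does not reproduce AHEAD's data model.

# Hypothetical sketch of a dynamic task network on a coarse-grained level:
# tasks with control-flow dependencies that can be modified during execution.

from collections import defaultdict

class TaskNetwork:
    def __init__(self):
        self.tasks = {}                      # task name -> state
        self.depends_on = defaultdict(set)   # task -> prerequisite tasks

    def add_task(self, name, after=()):
        self.tasks[name] = "planned"
        self.depends_on[name].update(after)

    def ready_tasks(self):
        """Tasks whose prerequisites are all done -> the designers' agenda."""
        return [t for t, s in self.tasks.items()
                if s == "planned"
                and all(self.tasks[d] == "done" for d in self.depends_on[t])]

    def finish(self, name):
        self.tasks[name] = "done"

if __name__ == "__main__":
    net = TaskNetwork()
    net.add_task("define reaction section")
    net.add_task("simulate separation", after={"define reaction section"})
    # The network may be modified at any time, e.g. when new insight emerges:
    net.add_task("assess monomer removal in extruder", after={"simulate separation"})
    net.finish("define reaction section")
    print(net.ready_tasks())   # ['simulate separation']

Because the network can be extended while the project is running, the agenda of ready tasks changes as insight into the design problem emerges.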
6.3 Multimedia communication in distributed design teams

Geographically distributed design teams already use a multitude of services including e-mail, groupware systems, joint workspaces or even video conference systems in order to facilitate synchronous and asynchronous communication. Typically, these services are neither integrated with each other nor, more importantly, with the engineering design tools of a given domain. Hence, the available communication support systems do not offer sufficient functionality to effectively assist the members of distributed engineering design teams. For example, during the design of an extruder as part of a polymer production process, the potential separation of remaining monomer from the polymer melt during polymer
processing in the extruder has to be assessed in order to decide on the degree of monomer separation in the evaporation unit following the polymer reactors. This question can only be resolved effectively if the chief engineer, the extrusion expert and the separation expert - all working at different locations and in part in different institutions - can easily communicate via multimedia services which are seamlessly integrated with the design support environment. Only then do all the team members have access to the same set of currently valid design documents and to all the required software tools to jointly carry out the necessary design studies during their virtual conference. For example, they may carry out a CFD simulation of the degassing melt flow in the extruder and a process simulation to study the effect of shifting the monomer separation partly from the evaporator to the extruder. The results of the simulations have to be discussed immediately to decide on the required equipment design modifications of the extruder, given the multiple domain-specific requirements. In order to support such a scenario effectively, the system KomPaKT has been developed in the CRC IMPROVE and evaluated on the basis of the polyamide-6 design case study [39]. KomPaKT offers a set of modular services in a homogeneous environment to support the needs of multimedia conferencing in engineering design applications. Communication is supported asynchronously, for example by e-mail and audio messages, and synchronously by means of a whiteboard and video streams. Floor control and conference management functions are also provided. KomPaKT is integrated with AHEAD in order to support spontaneous as well as planned conferences. AHEAD provides information on the organizational data of the project, the tools and the documents of a design context of interest. Communication on design issues is supported by application and event sharing mechanisms. In application sharing, the output of a design tool residing on the computer of one designer is presented to all participants of a multimedia conference. Often, communication bandwidth is not sufficient if 3D images or movies have to be transmitted. In those cases, event sharing is more appropriate. An instance of the design tool is then available on every team member's computer and only control information is communicated to synchronize the different instances of the tool during communication in the multimedia conference.
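The difference between application sharing and event sharing can be sketched as follows; the classes and event contents are hypothetical and only illustrate that, with event sharing, merely small control events cross the network while each participant runs a local tool instance.

# Hypothetical sketch of event sharing: every participant runs a local tool
# instance, and only small control events are distributed to keep the
# instances synchronized (instead of streaming rendered output).

class LocalToolInstance:
    """Stand-in for a design tool running on each team member's computer."""
    def __init__(self, owner):
        self.owner = owner
        self.view = {"zoom": 1.0, "selected": None}

    def apply(self, event):
        self.view.update(event)

class EventSharingSession:
    def __init__(self, participants):
        self.instances = [LocalToolInstance(p) for p in participants]

    def broadcast(self, event):
        # Only the event (a few bytes) crosses the network, not 3D images or video.
        for inst in self.instances:
            inst.apply(event)

if __name__ == "__main__":
    session = EventSharingSession(["chief engineer", "extrusion expert", "separation expert"])
    session.broadcast({"selected": "degassing zone", "zoom": 2.5})
    print([inst.view for inst in session.instances])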
6.4 Document-oriented tool integration

Tool integration is always possible via input and output data - which form a certain configuration of the product data, denoted as documents - if the data contained in the documents of two different tools can be mapped onto each other in a consistent manner at any time during the design process. Despite the independent creation and incremental revision of such documents by individual design tools, there exist a large number of fine-grained dependencies between the data contained in different documents. For example, the abstraction of the process flowsheet used to define the steady-state simulation problem has to match the real flowsheet stored in the project database. Inconsistencies between the various documents are unavoidable. However, a certain level of consistency has to be established as soon as two tools of a design support environment are used in a cooperative manner. The manual reconciliation of the content of associated documents is time-consuming and error-prone. Hence, integration tools are preferable which automate such a reconciliation process to the extent possible. It should be noted that a fully automated integration is not feasible in many cases because of a potential semantic mismatch between the data models employed by the tools to be integrated. This mismatch can only be resolved manually. Obviously, document-oriented integration tools are crucial for the implementation of design support environments (as suggested in Fig. 4) which do not rely on integration via a centralized design data store.
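As a purely illustrative sketch (in the spirit of the integration documents introduced in Section 4.3), the fragment below links the data items of two hypothetical tool documents and reports the links whose values disagree, so that a reconciliation step could be triggered; all identifiers and values are invented.

# Hypothetical sketch of an integration document: it stores links between
# data items of two tool documents and reports items whose linked values
# no longer agree, so that reconciliation can be triggered.

project_db_doc = {"R1/volume": 12.0, "R1/temperature": 358.0}   # tool A document
simulation_doc = {"REACTOR-1/V": 12.0, "REACTOR-1/T": 353.0}    # tool B document

# Each link relates one item in document A to one item in document B.
integration_document = [
    ("R1/volume", "REACTOR-1/V"),
    ("R1/temperature", "REACTOR-1/T"),
]

def inconsistent_links(doc_a, doc_b, links, tol=1e-6):
    """Return the links whose endpoint values disagree."""
    return [(a, b) for a, b in links
            if abs(doc_a[a] - doc_b[b]) > tol]

if __name__ == "__main__":
    # The temperature link is inconsistent and would be proposed for reconciliation.
    print(inconsistent_links(project_db_doc, simulation_doc, integration_document))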
Document-oriented integration functionality is subject to research and development in IMPROVE [30]. The integration tools developed assist the user in consistency analysis of two documents, in browsing document content and in the necessary transformations between documents. They operate in an incremental manner and propagate only the increments between documents in a bi-directional manner. They are used interactively by the designer in order to control the transformation process. The reconciliation of the documents is automatic if possible; it can also be assisted by manual interaction of the designer in those cases where the integration mechanisms fail. The reconciliation is rule-based. The rules build on an information model of the documents to be integrated. The objects of the two models are related to each other by means of an integration document, which holds the links between the data items in the two documents. These links are derived by refining the associations between concepts in different parts of the conceptual information model defined in CLiP. Because of the model-based design, the integration tool can be customized to the peculiarities of the tool documents to be reconciled, if the conceptual information model covers the data objects in the documents semantically. Various integrators between different tools have been developed and tested as part of the activities in IMPROVE by employing a common reference architecture.

6.5 Advanced tools for mathematical model development, maintenance and reuse
Chemical process design has been quickly moving towards solutions which heavily rely on mathematical models. Process simulation is used on a routine basis during conceptual design today, assisting the analysis of design alternatives. Tomorrow, the generation of a design alternative itself will routinely be supported by short-cut methods and partly automated by rigorous structural optimization employing a multitude of tailored mathematical models. The variety of mathematical models requires their management across the design process lifecycle [3]. Two objectives can be distinguished: integration across the mathematical modeling process, to reuse existing model knowledge downstream in the design process, and integration of existing models at runtime, to facilitate multi-model, multi-method and multi-platform integration of simulation and optimization tools. Until recently, traditional heuristics and experience-based design have been largely separated from model-based design. Consequently, the software environments used in both areas are integrated neither conceptually nor technically. Both issues - the management and integration of mathematical models across the lifecycle as well as the integration of design data, mathematical models and the results produced during simulation experiments - are addressed as part of the IMPROVE project. For the support of mathematical modeling, three complementary software systems are under development. ModKit [40] supports the generation of tailored mathematical models which cannot be found in the library of a simulator. The model can be exported either into the proprietary format of a commercial process modeling environment or into a neutral format derived from Modelica [41] to facilitate model exchange between applications. Models generated by either ModKit or any other commercial modeling environment can be stored in their native form in the model repository ROME [31]. Hence, ROME stores symbolic models in a neutral format or in any proprietary format of a commercial simulator, declarative equation-based models as well as executable block-oriented models. Links between models in a flowsheet or between models from different sources are kept at this point on a coarse-grained level only, in the database schema which derives from the appropriate partial model in CLiP. Models can be checked out in their native form to be processed by the appropriate simulation tool. However, models from different sources can be linked to a single flowsheet and integrated during runtime by means of Cheops [42]. Cheops allows steady-state as well
as dynamic equation-oriented and modular simulation using existing dedicated simulators which have been developed for specific parts of a process. For example, in the polyamide-6 case study, Polymers Plus may have been used for polymer reactor modeling, gPROMS for monomer separation from the polymer melt in a wiped-film evaporator, and the legacy tool Morex for the modeling of the extrusion process. These simulators are wrapped by standard interfaces and integrated with a configurable simulation strategy (modular, simultaneous, or mixtures thereof) to form a simulator of the complete flowsheet including the recycle of the unconverted monomer. This reuse of individual models is possible without the need for a costly and error-prone reimplementation in a single process modeling environment. Mathematical models and their results have to be related to the design process and in particular to the design data. However, mathematical models and design data are kept in different tools without explicitly accounting for relations between them. Obviously, there is a significant overlap and the risk of inconsistencies in these two data sets. Further, tracing of the design process and its rationale requires an explicit relation between design data and mathematical models [43]. Such an integration is currently being carried out using ROME as a model repository to archive models from various simulators in a coherent manner, and Comos PT as the project database storing the relevant design data [44]. This kind of integration may be considered a special case of the homogenization of related data from different sources as discussed above.

6.6 Discussion

The advanced functionality discussed in the previous subsections is not meant to be the only functionality necessary to effectively upgrade current design environments for collaborative and geographically as well as organizationally distributed conceptual design processes in the process industries. Many other support functions to improve the efficiency of collaborative design are conceivable. We have limited our attention to those activities which are currently being studied in IMPROVE. There is as yet very little experience with those functionalities which impact the way a designer works. This is not just a matter of human-computer interaction, which is essential for both acceptance and high productivity. An interesting question also concerns the social implications of such extended design functionality (see [45] for a general discussion). As more and more activities become computer-based, the interaction between humans is changing in quality, with unforeseen consequences for both the quality of the design and the satisfaction of the designer. Further, the full transparency of the design process results in an almost complete assessment of the performance of a designer. Any inefficiency or any mistake is documented. Obviously, such transparency has to be handled with care by management.

7. CONCLUSIONS

This contribution has attempted to show that information technology support of engineering design processes (not only in the chemical process domain) is a complex and far-reaching problem. It goes well beyond the classical problem of data exchange or of data-centered integration of tools into a design environment. IMPROVE addresses this problem area in a long-term project. The objective of the research work is the clarification of work process oriented support of engineering design by means of information technologies.
This objective is considered to be the guiding paradigm of the research work and determines the concrete research projects in the center to a large extent.
Some of these research issues together with results obtained and experience gained have been summarized in this contribution. Despite the long-term and fundamental research focus of IMPROVE, some of the concepts and technologies have already reached a level of maturity which is sufficient to start transfer into industrial practice in focused joint research and development work with the software and end-user industries.

ACKNOWLEDGEMENTS

The authors thank the German Research Foundation (DFG) for the financial support of the Collaborative Research Center CRC 476 (Sonderforschungsbereich SFB 476) and all members of the CRC for their fruitful collaboration, without which the results presented in this paper would not have been possible.

REFERENCES
[1] H. Schuler, Chem.-Ing.-Tech., 70 (1998) 1249.
[2] T. Backx, O. Bosgra and W. Marquardt, FOCAPO '98, Snowbird, Utah, 1998. Available from http://www.lfpt.rwth-aachen.de/Publication/Techreport/1998/LPT-1998-25.html
[3] W. Marquardt, L. v. Wedel and B. Bayer, in: M.F. Malone, J.A. Trainham, B. Carnahan (Eds.): "Foundations of Computer-Aided Process Design", AIChE Symp. Ser. 323, Vol. 96 (2000) 192.
[4] S. Konda, I. Monarch, P. Sargent and E. Subrahmanian, Research in Engineering Design 4 (1992) 23.
[5] A. W. Westerberg, E. Subrahmanian, Y. Reich, S. Konda and the n-dim group, Computers & Chemical Engineering 21 (Suppl.) (1997) S1.
[6] L. Fisher (Ed.), Workflow Handbook 2001, Future Strategies Inc., Lighthouse Point, 2000.
[7] M. Nagl and B. Westfechtel (Eds.), Integration von Entwicklungsumgebungen in Ingenieuranwendungen, Springer, Berlin, 1999.
[8] M. Eggersmann, J. Hackenberg, W. Marquardt and I. Cameron, in: B. Braunschweig and R. Gani (Eds.): "Software Architectures and Tools for Computer Aided Process Engineering", Elsevier, Amsterdam (2002) 335.
[9] B. Bayer, Conceptual Information Modeling for Computer-Aided Support of Chemical Process Design, PhD Thesis, RWTH Aachen, 2003.
[10] J. Rumbaugh, I. Jacobson and G. Booch, The Unified Modeling Language Reference Manual, Addison Wesley, Reading, Massachusetts, 1999.
[11] C. Foltz, S. Killich, M. Wolf, L. Schmidt and H. Luczak, in: M. J. Smith and G. Salvendy (Eds.): "Systems, Social and Internationalization Design Aspects of Human-Computer Interaction", Proceedings of HCI International, Vol. 2, Lawrence Erlbaum Associates, Mahwah (2001) 172.
[12] R. Schneider and S. Gerhards, in: M. Nagl and B. Westfechtel (Eds.): Modelle, Werkzeuge und Infrastrukturen zur Unterstützung von Entwicklungsprozessen, Wiley-VCH, Weinheim (2003) 375.
[13] M. Eggersmann, R. Schneider and W. Marquardt, in: J. Grievink and J. v. Schijndel (Eds.): "European Symposium on Computer Aided Process Engineering-12", Elsevier, Amsterdam (2002) 871.
[14] M. Uschold and T. R. Gruber, The Knowledge Engineering Review 11 (1996) 93.
[15] J. P. van Gigch, System Design Modeling and Metamodeling, Plenum Press, New York, 1991.
[16] B. Bayer and W. Marquardt, Computers & Chemical Engineering, submitted for publication (2002).
[17] B. Bayer and W. Marquardt, in: M. Jarke, M. A. Jeusfeld and J. Mylopoulos (Eds.): "Meta Modeling and Method Engineering", MIT Press, in preparation.
[18] M. Eggersmann, S. Gonnet, G. Henning, C. Krobb, H. Leone and W. Marquardt, Latin American Applied Research 33 (2003) 167.
[19] M. A. Jeusfeld, M. Jarke, H. W. Nissen and M. Staudt, in: P. Bernus and G. Schmidt (Eds.): "Handbook on Architectures of Information Systems", Springer, Berlin (1998) 265.
[20] W3C, Extensible Markup Language (XML). Available from http://www.w3.org/XML/.
[21] A. Gomez-Perez and O. Corcho, IEEE Intelligent Systems, January/February 2002, 54.
[22] X. Yang and C. McGreavy, Computers & Chemical Engineering 20 (Suppl.) (1996) S363.
[23] B. Bayer, R. Schneider and W. Marquardt, Computers & Chemical Engineering 24 (2000) 599.
[24] B. Bayer and W. Marquardt, Concurrent Engineering: Research and Applications, submitted for publication (2002).
[25] B. Bayer, S. Becker and M. Nagl, 8th Int. Symposium on Process Systems Engineering, Kunming, China (2003).
[26] L. L. Bucciarelli, Designing Engineers, MIT Press, Cambridge, 2000.
[27] B. A. Foss, B. Lohmann and W. Marquardt, Journal of Process Control 8 (1998) 325.
[28] M. Hammer and J. Champy, Reengineering the Corporation: A Manifesto for Business Revolution, Harper, New York, 1993.
[29] R. M. Adler, IEEE Computer 28, 3 (1993) 68.
[30] B. Bayer, K. Weidenhaupt, M. Jarke and W. Marquardt, in: R. Gani and S. B. Jorgensen (Eds.): "European Symposium on Computer Aided Process Engineering-11", Elsevier, Amsterdam (2001) 345.
[31] L. v. Wedel and W. Marquardt, in: S. Pierucci (Ed.): "European Symposium on Computer Aided Process Engineering-10", Elsevier (2000) 535.
[32] M. Jarke, T. List and J. Köller, Proceedings of the 26th International Conference on Very Large Data Bases, VLDB 2000, 473.
[33] M. Nagl, B. Westfechtel and R. Schneider, Computers & Chemical Engineering 27 (2003) 175.
[34] G. Wiederhold and M. Genesereth, IEEE Expert 9/10 (1997) 38.
[35] R. Klein, F. Anhäuser, M. Burmeister and J. Lamers, Automatisierungstechnische Praxis 44, 1 (2002) 46.
[36] A. Wasserman, in: F. Long (Ed.): Software Engineering Environments, Proc. Int. Workshop on Environments, LNCS 467, Springer, Berlin (1990) 137.
[37] D. Fensel, J. Hendler, H. Lieberman and W. Wahlster (Eds.), Creating the Semantic Web, MIT Press, Cambridge, 2002.
[38] K. Pohl, K. Weidenhaupt, R. Dömges, P. Haumer, M. Jarke and R. Klamma, ACM Transactions on Software Engineering and Methodology 8, 4 (1999) 343.
[39] A. Schüppen, D. Trossen and M. Wallbaum, Annals of Cases on Information Technology IV (2001) 119.
[40] R. Bogusch, B. Lohmann and W. Marquardt, Computers & Chemical Engineering 25 (2001) 963.
[41] S. E. Mattsson, H. Elmqvist and M. Otter, Control Engineering Practice 6, 4 (1998) 501.
[42] L. v. Wedel and W. Marquardt, in: M. F. Malone, J. A. Trainham and B. Carnahan (Eds.): "Foundations of Computer-Aided Process Design", AIChE Symp. Ser. 323, Vol. 96 (2000) 494.
[43] R. Banares-Alcantara and J. M. P. King, Computers & Chemical Engineering 19 (1995) 267.
[44] B. Bayer, L. von Wedel and W. Marquardt, ESCAPE 13, Lappeenranta, Finland (2003).
[45] J. S. Brown and P. Duguid, The Social Life of Information, Harvard Business School Press, Boston, MA, 2000.
CHALLENGES FOR THE PSE COMMUNITY IN FORMULATIONS

J.L. Cordiner
Syngenta, Global Specialist Technology, T&P, Earls Road, Grangemouth, Stirlingshire, Scotland FK3 8XG
1. INTRODUCTION

Process systems and computer-aided process engineering have over the last few decades very successfully tackled many of the issues arising from chemical processes. These range from modelling reactors and distillation columns to whole refineries and chemical manufacturing plants. Meanwhile, to support this, physical property models for the process fluids have been developed, from SRK in the seventies through UNIFAC for predictions to the Chen and Pitzer models for electrolytes. Recently, with increasing computer power and improved mathematical optimisation techniques [1,2], these methods have been applied to more complex problems. Rational solvent selection over whole processes [3,4,5] and computer-aided molecular design [6,7] have been very successfully applied to industry problems. In addition, more complex fluids, such as associating fluids [8], polymers [9], surfactants [10] and electrolytes [11,12], have been modelled by advanced property methods. Molecular dynamics, QSAR, data-mining and mathematical techniques taken from "biology", e.g. neural networks and genetic algorithms, have also been used extensively in modelling complex systems. With this expanded "toolkit" the process systems engineer can begin to tackle some very different types of problems. In recent years there has been much encouragement to broaden the chemical engineering discipline to meet the needs of new industries, e.g. microprocessors, biochemical, biomedical, food etc. Along with this, academic funding is focussing on partnerships and cross-functional work [13]. Therefore the time is right to exploit the "toolkit" - the modelling capability and thinking - of the process systems community in these new fields.
2. FORMULATIONS IN INDUSTRY

The chemical process industry produces many millions of products from relatively few raw materials. Currently the PSE community is focussed on the few thousand active ingredients and bulk commodities. Final products that people use are formulated from these active ingredients and are present in all walks of life, from personal care, hygiene and pharmaceuticals to agrochemicals. Formulated products use specifically chosen mechanisms to serve customer needs by accurately delivering their desired features, which can be performance-related (such as crop protection or surface protection) and/or convenience-related (such as controlled release or ease of handling). These formulations are designed for different markets and purposes. The successful design of such formulations can have a huge impact on the sales and profitability of a company. With markets tightening and competition growing, many of these products need to be made more efficiently and with reduced costs. Therefore the optimisation and design of these formulations is critical to business success. This presentation will focus on the challenges for the PSE community in agrochemical formulations, though many of the issues are directly relevant to drug delivery, personal care, hygiene products and speciality/effect chemicals.
2.1 Agrochemical Market

The agrochemicals market is estimated to be worth over $40bn/year and is expected to grow as the world population, and with it the pressure on food production, grows, as shown in figure 1.
Figure 1.
The pressure on the agrochemical industry is increasing with advancing competition from Asia, and much consolidation has taken place, as shown in figure 2.
Figure 2.
Industry is focused on finding new products and formulations that will expand the market and increase market share. It typically takes at least 10 years of intense research and development from the discovery (first synthesis) of an active ingredient until its market introduction as an agrochemical product. A new formulation can be on the market much more quickly. In addition, it has become increasingly difficult to find better actives that are more user friendly, safer, cheaper to manufacture and more efficacious. Therefore the design of the formulation becomes more business critical.
3. CHALLENGES IN MODELLING FORMULATION MANUFACTURING PROCESSES
We start with unit operations, as these are familiar to the PSE community. The unit operations for formulations, shown in figure 3, are in some part very different to those of active ingredient or bulk commodity manufacturing; typical examples are bead mills, coaters and high shear agitators, as shown in figure 4. Some items, however, are familiar to the PSE community, e.g. fluid bed dryers, spray dryers, mixing tanks and agitators. The characterisation of even the familiar equipment is more difficult due to high viscosity, solids, slurries and fluids such as surfactants, wetters, dispersants, polymers and complex active ingredients. Formulation manufacturing is typically spread more globally over more numerous, smaller sites than active manufacture, as can be seen in figures 5-6. This brings added complications of scheduling and differentiation for each market. The number of slightly different formulations may be large, and therefore a system for designing formulations and their manufacture, and for optimising them easily for each market, becomes important.
Figure 3: Unit operations in formulation processes. Provided by Paul Bonnett, Syngenta.
Figure 4: Coater, Dynomill bead mill and high shear emulsion agitator, from left to right.
Figure 5
Figure 6
4. FORMULATION TYPES AND ISSUES
Pesticide products are split into herbicides, insecticides and fungicides. Herbicides have to penetrate deeply into the plant to kill it. Many insecticides need to sit on the surface of the plant leaf to contact insects attacking the plant; contact may be by direct pick-up or ingestion. Protectant fungicides in general form a protective layer on the wax of the leaf and therefore require slow uptake into the leaf cuticle but rapid uptake into the surface wax [14], although systemic fungicides require reasonable uptake for redistribution in the xylem (for example azoxystrobin). These products are split into a number of different formulation types, as shown in table 1, and the characteristics of some of these formulations are shown below.
Table 1: Formulation types
Solid formulations: Wettable Powders (WP), Wettable Granules (WG), Soluble Granules (SG), Granules (GR), Tablets (TB)
Liquid formulations: Suspension concentrates (SC), Emulsion concentrates (EC), Soluble liquids (SL), Emulsions in water (EW), Microemulsion concentrates (MEC), Microemulsions (ME), Capsule suspensions (CS), Suspoemulsions (SE)

4.1 Solid Formulations
Wettable Powders (WP) have a solid active ingredient, or a liquid active ingredient coated on an inert carrier, that is mixed with all other formulation inerts (dispersants, surfactants, fillers). This is then dry milled (i.e. jet-stream milling) to reduce the particle size to about 2-5 microns. Redispersion upon dilution with water by the farmer results in a suspension of the active ingredient/filler particles as the spray solution. Dispersants and surfactants ensure fast redispersion to single particles upon dilution in water to form the spray solution and prevent the single particles from agglomerating and/or sedimenting in the spray solution.
4.2 Liquid formulations
Suspension concentrate (SC): A solid active ingredient is suspended in water with the help of added dispersant and thickener and then wet-milled to reduce the particle size to about 2 microns. Surfactants in this case protect the particles from crystal growth and agglomeration. They also ensure fast redispersion to single particles upon dilution in water to form the spray solution and prevent agglomeration and/or sedimentation of particles in the spray solution. The addition of a thickener (for rheology and structure adjustment) prevents the sedimentation of particles during storage.
Emulsion concentrates (EC): The active ingredient, if not already liquid, is dissolved in a water-immiscible organic solvent, with the addition of a co-solvent if required. Emulsifiers are added to ensure spontaneous emulsification of the formulation upon addition to water to form the spray solution. Emulsifiers also prevent emulsion droplets in the spray solution from droplet growth, agglomeration, creaming and sedimentation, and prevent recrystallisation of the active ingredient upon dilution in water to form the spray solution.
Capsule suspension (CS): The liquid or solid active ingredient is dissolved in a water-immiscible organic solvent and emulsified in water. The oil droplets contain an (at least bifunctional) purely oil-soluble monomer; addition of an (at least bifunctional) purely water-soluble monomer starts an interfacial polymerisation (figure 7). This reaction results in polymer capsules filled with active ingredient, a "controlled release" formulation. Emulsifiers stabilise the emulsion droplets before polymerisation and prevent sedimentation of the polymer capsules in the suspension. They also ensure fast redispersion to single particles upon dilution in water to form the spray solution and prevent the polymer capsules from agglomeration and sedimentation in the spray solution. The polymer chosen needs to provide a polymer wall that is mechanically stable upon drying of the spray solution, where applicable, and that allows release of the active ingredient at the desired rate and amount. A cut-out section of a capsule is shown in figure 8. Careful selection of the solvent system can adjust the strength and structure of the wall, changing the release rates.
Figure 7: Emulsion before polymerisation and capsule suspension after polymerisation.
Figure 8: Cut-out section of a capsule showing the wall structure, taken with Scanning Transmission X-Ray Microscopy. Originally presented by Daryl W. Vanbesien and Harald D.H. Stöver, McMaster University, Hamilton, Canada, in an internal Syngenta presentation. Used with permission.
Emulsions in water (EW): These are essentially the first stage of an encapsulation. They are complex multiphase systems in which the surfactant typically creates a third phase (micelles) or changes the interfacial tension sufficiently to increase the solubility of the active and increase bio-availability. The control of the water/oil or oil/water emulsion, and the path taken to achieve the emulsion, will change the droplet size distribution and the ability to form a stable emulsion, as shown in figure 9. The HLD (hydrophilic-lipophilic deviation) scale as described in [15] is written as a sum of the contributions of effects and therefore would lend itself to a group contribution method.
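As a purely illustrative sketch of that additive idea, the snippet below evaluates a generic HLD-style expression as a sum of surfactant, brine, oil and temperature contributions. The functional form and every coefficient are hypothetical placeholders chosen only to show the structure of such a group-contribution calculation; they are not the published HLD correlation of [15] nor Syngenta data.

import math

def hld(salinity_wtpct, acn, surfactant_term, delta_T, k=0.17, a_T=0.01):
    # generic additive form: surfactant contribution + brine + oil + temperature
    # (coefficients k and a_T are invented for illustration)
    return surfactant_term + math.log(salinity_wtpct) - k * acn - a_T * delta_T

# scan oil chain length (ACN) for a hypothetical surfactant term of 2.0 in 1 wt% brine
for acn in (8, 10, 12, 14):
    print(acn, round(hld(1.0, acn, 2.0, delta_T=5.0), 2))

In HLD-based formulation work, compositions near HLD = 0 are usually taken to indicate the balanced (optimum) formulation region, which is why an additive, contribution-by-contribution estimate of each term is attractive.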
Figure 9: Two-dimensional formulation-composition (water-to-oil ratio) map showing the (shaded) zones where smaller drop size is attained when varying formulation or composition at constant stirring energy; inserted plots show the aspect of the drop size distribution at different points of the map. Reprinted from [16]. Used with permission.
4.3 Generic issues with formulations
These formulations typically need to have a shelf-life of at least 3 years, which means no (or minimal, within regulatory limits) chemical degradation of the active ingredient and/or formulation inerts and no change in the physical appearance of the formulation, i.e. no lump formation in WPs, no sedimentation or creaming of SCs and no recrystallisation of active ingredient in ECs. There must be no change in redispersability upon dilution in water to form the spray solution, and finally no agglomeration or sedimentation in the spray solution. Farmers often use mixtures of formulations, and therefore compatibility between these typical mixtures is also important. A formulation design can therefore include wetters, emulsifiers, dispersants, polymers and surfactants as well as the active ingredient, and the potential number of mixtures is vast given the choice of all possible formulants. As mentioned before, the design of a new formulation can be business critical, and formulations are therefore often tuned to the needs of specific markets, allowing differentiation of the products to maximise sales potential. A well-designed formulation can increase the activity by a number of means, for example increased uptake through careful selection of surfactants. UV protection, reduced environmental impact and reduced phytotoxicity can make the product more attractive. Increased retention and spreading characteristics can reduce usage rates. Any of these can also make the product sufficiently beneficial over a competitor's to increase sales. In addition, a novel formulation can be used to extend patent coverage, preventing generic competition from taking over the market when the active ingredient patent runs out.
The current practice in designing formulations is to employ a trial-and-error approach based on past knowledge and expertise. Therefore, although the needs may be satisfied, there is no guarantee that the solution is optimal.
5. PLANT STRUCTURE AND EFFECT ON UPTAKE
An important step for the efficacy of an agrochemical is the uptake of the active ingredient into the target organism. Building the right mixture into the formulation can therefore enhance the speed and/or amount of uptake of the active ingredient into the target organism and so enhance the activity. The right mixture can enhance the chemical stability of the active ingredient, i.e. protect it against photodegradation by UV radiation, and weaken the negative impact of environmental factors such as heavy rainfall on the efficacy of the agrochemical. The formulation can also positively influence the toxicological profile of the agrochemical, for example by reducing leaching, skin irritation or inhalation. Often something simple like a pH change can substantially improve the suitability of a given formulation. The pesticide, for example, needs to travel from the surface through the epicuticular wax and through the cutin and pectin layers before reaching the target cells, as shown in figure 10.
Figure 10: Reprinted from [14] with permission.
This epicuticular wax is a complicated mixture which can be homogeneous or show varying levels of crystallinity [ ] (figure 11) dependent on species, which slows the diffusion of the pesticide [17]. The intracuticular wax layer is generally accepted to be the main barrier to pesticide uptake [18,19,20]. Young leaves have more wax crystals, possibly because their fragile cuticles are more permeable to water. The waxes differ between plant species; two examples of the composition of the waxes are shown in figure 12.
Figure 11: Pea leaf epicuticular wax, provided by Dr C. Hart, Syngenta. Further information is presented in [21].
This wax layer forms a 2 micron barrier that the active ingredient has to travel through. Diffusion through the leaf wax can be improved significantly by careful selection of the surfactant [22-29]. The HLB scale [30] is used to classify and select surfactants, much as the solvent classifications of Parker [31] and Chastrette [32] are used for reaction solvents [33,34]. There are several postulations as to why the surfactant aids the passage of the active ingredient; for example, the surfactant could be solubilising the wax, and in particular its crystalline structure, reducing the tortuosity of the active's path through to the target cells.
[Figure 12 legend: wax constituent classes, including n-, iso- and anteiso-alkanes and alkenes, n-alkanals, n-alkanoic acids (saturated, unsaturated and branched), ketones, n-, secondary and branched alkanols, diols, alkyl, methyl and glycerine esters, alkyl coumarates, triterpene esters, triterpenes and unidentified constituents.]
Figure 12: The chemical composition and chain length of the components of the cuticular wax of sunflower leaves (upper chart) and rape (lower chart), given in %.
5.2 Cutin Composition
The cutin layer is made up of an insoluble polymer matrix. This generally consists of ω-hydroxy fatty acids of chain length C16-C18 [35-37] and is 1-2 microns in depth.
6. MODELLING UPTAKE IN PESTICIDES
Some modelling has been attempted. Essentially the problem is a familiar one: the solubility of the active chemical in the water droplet and the formulation mixture on the leaf, and then a set of membranes to cross and diffuse through before reaching the target cells. Fickian diffusion can be used to model this transport. Foliar uptake of a pesticide has been modelled by using partition coefficients and diffusion coefficients across the plant cuticle [38,39]. This has been extended to account for the tortuosity imposed on the active by the crystallinity of the epicuticular wax [14]. This needs estimates of partition coefficients, diffusion coefficients, cuticle thickness, solubility of leaf wax in the droplet and the molar volume of the active ingredient.
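To make the structure of such a model concrete, the following sketch implements a deliberately simplified first-order compartment approximation of foliar uptake using a partition coefficient and Fickian diffusion through a single wax layer. It is not the model of [14,38,39]; the equations are a textbook quasi-steady approximation, and every parameter value is a hypothetical placeholder rather than measured data.

import numpy as np

def uptake_fraction(t, D, K, L, A, V):
    # Fraction of the applied dose taken up after time t, assuming a well-mixed
    # droplet of volume V over a cuticle of area A and thickness L, quasi-steady
    # Fickian diffusion through the wax and a wax/droplet partition coefficient K.
    k = D * K * A / (L * V)          # first-order uptake rate constant, 1/s
    return 1.0 - np.exp(-k * t)

# hypothetical numbers purely for illustration
D = 1e-16     # diffusion coefficient in wax, m^2/s
K = 50.0      # wax/droplet partition coefficient
L = 2e-6      # wax layer thickness (about 2 microns, as above), m
A = 1e-6      # wetted leaf area under the droplet, m^2
V = 1e-10     # droplet volume, m^3 (0.1 microlitre)

for hours in (1, 6, 24):
    f = uptake_fraction(hours * 3600.0, D, K, L, A, V)
    print(f"uptake after {hours:>2} h: {100 * f:5.1f} %")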
Whole plant models have been developed [23] which consider the cuticular membrane solubility, solute mobility and tortuosity, and the driving force. Further models have been developed for adhesion onto the plant leaf and for retention [23]; the latter is especially important for insecticides. Clearly the formulants selected need to aid faster diffusion through the wax in the case of fungicides and herbicides, whereas insecticides should ideally diffuse very slowly into the plant and also be well retained on the surface wax. Encapsulated formulations, like microcapsules for drug delivery, are being used and present some different modelling and design tasks. The solubility of the active ingredient in the polymer and its diffusivity through it become important. The ability to select and optimise the polymers used, for efficiency, environmental and cost reasons, becomes the objective; this is very familiar to the PSE community, being similar to solvent selection and design as used in the CAMD approaches. Careful selection of the solvent for encapsulation can change the release rate and structure of the capsule, as a result of the solubility of the monomer in the solvent and therefore the amount available for the polymerisation. A new generation of models and model-based PSE tools would be needed. Formulation design problems could, in principle, be formulated as computer aided mixture design problems; the limitation at this moment is the lack of models to predict the properties that determine the mixtures.
7. CURRENT RESEARCH BY FORMULATORS, PLANT SCIENCE AND RELATED DISCIPLINES
Formulation research is focused on the areas shown in figure 13, using new advances such as high throughput screening and combinatorial chemistry. Robots are used to rapidly generate large datasets [40] to facilitate a more fundamental understanding of the processes that occur as a herbicide, insecticide or fungicide comes into contact with its target organism. A mixture of methods is used to study the transport processes and mechanisms. Surfactant- or oil/solvent-enhanced uptake is studied using reconstituted or isolated waxes, along with measurements of wax composition versus uptake and of cuticle architecture. Clearly genomics is being exploited to build significantly new markets beyond traditional formulations of classical active ingredients. Some of these "tools" could potentially be useful for building PSE-type models, with high throughput screening able to generate the vast quantities of data that may be needed to build more fundamental understanding of the systems or to generate group contribution type methods.
Figure 13
8. CHALLENGES FROM FORMULATION DESIGN
As formulations can be a complicated mixture of surfactants, dispersants, emulsifiers, polymers, buffer agents, antifoams, oil concentrates and inorganic salts, models for these will need to be further developed. Many of the formulants, such as solvents and surfactants, are themselves mixtures, and models therefore need to be capable of handling these appropriately. Models for surfactants need improvement. An understanding and representation of concentrated surfactant phase behaviour is required [23], e.g. liquid crystals where deposits on leaf surfaces dry, or gel phases in waxes. Models for the solubility of active ingredients in these phases are required; the solubility can change rapidly as the deposit dries. A typical surfactant structure is shown in figure 14.
Figure 14: Typical surfactant, the "Akypo RML 100" surfactant molecule: C12, EO10, COOH. The alkyl chain length ranges, and the EO number typically follows a Gaussian distribution with a mean at 10. Below pH 3.65 it is effectively non-ionic, but it is anionic above the pKa of the acid group.
Control of the rheology and interfacial properties can be the key to size control of particles, stability of emulsions and processability. As mentioned above, the solubility of complicated actives in polymers, along with the solubility of the polymers in solvents, is also required. Solubilities of the complex actives in the surfactant and in the leaf wax are required. Even the solubility of the complex actives in water is important and is at the bounds of
what is currently possible. Improved models for the solubility of complex multi-functional molecules would be very beneficial in the development of active ingredient purifications and separations as well as in formulation design. Many of the systems are electrolytic, as are many active ingredient processes, yet no reasonable predictive models exist for such systems; such models would be extremely useful in active ingredient process optimisation [34]. Predictive models are needed for all the systems mentioned; current models are unable to handle interfacial phenomena and the properties related to them. Models need to consider the phenomena as well as the physics and chemistry (that is, thermodynamic properties and molecular structural properties as well as mass transport and interfacial phenomena, with or without reactions; in the case of electrolytes, there are reactions). Without suitable models, development of PSE or CAPE tools would not be possible. These tools can contribute by providing the means to investigate solutions that would otherwise not be accessible, but they require models which are not currently available.
An understanding of the interactions between the species, and a fundamental understanding of how they affect the active ingredient's journey to the target organism, is required. How do the surfactant and other additives really affect uptake? The PSE toolkit allows large datasets to be rationalised and models to be fitted to the data; perhaps this can be used to generate more fundamental understanding of the systems in a formulation or of their effect on the target organism. The ability to model the effect of the different formulants is necessary in order to optimise, and to generate and handle the large amount of data required. Models are required for partition coefficients and diffusivity from the drop into the wax and from the wax into the cutin, through the pectin. Models are also required for retention on the leaf surface, spreading and loss, to allow optimisation of formulations and conditions to maximise active ingredient uptake and beneficial bio-availability. If such models can be developed, or approximated from analysis of the large data sets potentially available from high throughput experiments, then it would be feasible to optimise formulation design in a manner analogous to the CAMD approach for solvents. What is needed in this case, however, is computer aided mixture design, which has a much higher potential benefit than CAMD, as the composition space is much larger than for solvent selection. Global optimisation techniques may well be appropriate to rapidly assess such large composition spaces for optimal solutions. Empirical models and geometrical techniques [41,42 and 34] have proven very useful in active ingredient plant design; perhaps there is an opportunity to use these types of models to develop our understanding and selection of formulation mixtures. Clearly, developing an understanding of the effect of formulants in the mixture, and the ability to select them as required in a computer aided formulation mixture design tool, would also lead to a better understanding of the impact of active ingredient design and manufacture. Typically, active ingredients are made to a specification and designed around the best process for the active. However, this artificial boundary between active and formulation manufacture can lead to less than optimal designs. The boundary must be removed, and any designs need to take into account the formulation ability of the product and optimise across this wider, whole process. An understanding of the properties that are important in an active to ensure successful formulation is required.
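As a purely illustrative sketch of what such a computer aided mixture design step might look like once predictive property models exist, the snippet below poses formulant selection as a small global optimisation over composition space. The property surrogates, coefficients and bounds are all invented placeholders; only the overall structure (predict properties from composition, penalise constraint violations, search the composition space globally) reflects the approach discussed above.

import numpy as np
from scipy.optimize import differential_evolution

def predicted_uptake(x):
    # hypothetical surrogate: uptake as a function of the mass fractions of
    # surfactant (x[0]), dispersant (x[1]) and solvent (x[2])
    return 0.6 * x[0] + 0.3 * x[1] + 0.2 * x[2] - 0.8 * x[0] * x[1]

def stability_penalty(x):
    # hypothetical soft constraint standing in for shelf-life requirements
    return max(0.0, 0.15 - x[1]) ** 2

def objective(x):
    # maximise predicted uptake subject to the soft stability constraint
    return -predicted_uptake(x) + 100.0 * stability_penalty(x)

bounds = [(0.0, 0.3), (0.0, 0.2), (0.0, 0.4)]   # formulant mass fraction ranges
result = differential_evolution(objective, bounds, seed=1)
print("composition (surfactant, dispersant, solvent):", np.round(result.x, 3))
print("predicted uptake:", round(predicted_uptake(result.x), 3))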
9. CONCLUSIONS
A series of challenges for the PSE community has been presented, showing the need for a more fundamental understanding of the impact of each formulant on uptake and efficacy, and for the ability to model these systems. The critical need to develop physical property models able to handle the complex mixtures was highlighted. The aim is to be able to use PSE models to design and optimise formulations and to invent new formulation types, with a rational selection tool or computer aided formulation mixture design tool as the goal to radically improve formulation design. Such tools, and the ability to tune formulations specifically and optimise their design, will reduce costs, reduce environmental impact and allow product differentiation, giving potential for sales growth. Given the increasing difficulty of finding new agrochemicals that show advantages over those currently available, the ability to design new, improved and cost effective formulations will be the key to business growth and is therefore critically important. The PSE community needs to work in partnership with plant scientists, surface scientists and those working in formulation development to build understanding of these systems, bringing their toolkits and thinking together.
REFERENCES
1. Adjiman C.S., Androulakis I.P., Maranas C.D. and Floudas C.A., Computers and Chemical Engineering 20: S419-S424, 1996.
2. Pantelides C.C., Proc. 2nd Conf. on Foundations of Computer Aided Process Operations, Rippin D.W.T. and Hale J. (Eds.), Cache Publications, 253-274, 1994 and 1997.
3. Odele O. and Macchietto S., Fluid Phase Equilibria, Vol. 82, pp 57-54, 1993.
4. Pistikopoulos E.N., Stefanis S.K., Computers Chem. Eng., Vol. 22, pp 717-733, 1998.
5. Giovanoglou A., Barlarier J., Adjiman C.S., Pistikopoulos E.N., Cordiner J., submitted to AIChE Journal and presented at the Fall AIChE Meeting 2002.
6. Hostrup M., Harper P.M., Gani R., Comput. Chem. Eng., 23, pp 1394-1405, 1999.
7. Gani R., Achenie L.E.K., Venkatasubramanian V. (Editors), Computer Aided Molecular Design: Theory and Practice, Elsevier Science, Amsterdam.
8. Chapman W.G., Gubbins K.E., Jackson G. and Radosz M., Industrial and Engineering Chemistry Research, 29, 1709-1721, 1990.
9. Arlt W., Distillation and Absorption Conference, Sept 2002.
10. Galindo A., Jackson G. and Photinos D., Chemical Physics Letters, 325, 631-638, 2000.
11. Galindo A., Gil-Villegas A., Jackson G. and Burgess A.N., Journal of Physical Chemistry B, 103, 10272-10281, 1999.
12. Gil-Villegas A., Galindo A., Whitehead P.J., Mills S.J., Jackson G. and Burgess A.N., Journal of Chemical Physics, 106, 4168-4186, 1997.
13. Howell J., Cordiner J.L. et al., Report to EPSRC, "Strategy and Structure of Chemical Engineering Research".
14. Sune N.M., Gani R., Bell G. and Cordiner J., AIChE Fall Meeting, 2002.
15. Salager J.L., Forgiarini A., Marquez L., Pena A., 3rd World Congress on Emulsion, Lyon, Sept 2002.
16. Perez M., Zambrano N., Ramirez M., Tyrode E., Salager J.L., J. Dispersion Science and Technology, 23(1-3), 55-63, 2002.
17. Merk S., Blume A., et al., Planta 204(1): 44-53, 1997.
18. Schönherr J., 1976.
19. Schönherr J., 1984.
20. Schreiber L. and Riederer M., Plant, Cell Environ. 19: 1075-1082, 1996.
21. Friedmann A., Bell G., Hart C., IUPAC Int. Congress on the Chemistry of Crop Protection, Basel, 2002.
22. Friedmann A., Taylor P., Bell G., Stock D., SCI Conference, 3.12.2002.
23. Zabkiewicz J.A., IUPAC Int. Congress on the Chemistry of Crop Protection, Basel, 2002.
24. Holloway P.J., Edgerton B.M., Weed Res. 32: 183-95, 1992.
25. Holloway P.J., Silcox D., Proc. Br. Crop Prot. Conf.-Weeds: 297-302, 1985.
26. Holloway P.J., Wong W.W.C., et al., Pestic. Sci. 34: 109-18, 1992.
27. Stock D., Edgerton B.M., et al., Pestic. Sci. 34: 233-42, 1992.
28. Van Toor R.F., Hayes A.L., et al., FRI Bull. 193: 279-284, 1996.
29. Burghardt M., Schreiber L., Riederer M., J. Agric. Food Chem. 46, 1593-1602, 1998.
30. Holmberg K. (Editor), Handbook of Applied Surface and Colloid Chemistry, John Wiley and Sons, Ltd, 2001.
31. Parker A.J., 1973, J. Am. Chem. Soc., 95, 408.
32. Chastrette M., JACS, Vol. 107, No. 1, 1-11, 1985.
33. Cox B.G., 1994, Modern Liquid Phase Kinetics, Oxford Chemistry Primer Series 21, Oxford University Press.
34. Cordiner J.L., Comput. Chem. Eng., Volume 9, Proceedings of ESCAPE 11.
35. Baker, 1982.
36. Holloway, 1982.
37. Holloway, 1993.
38. Lamb et al., 2001, Syngenta internal report.
39. Stock D., Holloway P.J., et al., Pestic. Sci. 37: 233-45, 1993.
40. Baker E., Hayes A., Butler R., Pest. Sci. 34, 167-182, 1992.
41. Doherty M. and Perkins J., Chemical Engineering Science, 1982, Vol. 37, 381-392.
42. Bek-Pedersen E., Gani R., Levaux O., Comput. Chem. Eng. 24 (2-7), pp 253-259, 2000.
ACKNOWLEDGEMENTS
Permission to publish from Syngenta is gratefully acknowledged. Thanks to a great many friends and colleagues for advice and information, especially Dr Gordon Bell, Dr Alan Hall, Kenneth McDonald, Brian Lauder, Paul Bonnett, Dr Adrian Friedmann, Dr Cliff Hart and Dr Stefan Haas of Syngenta; Dr Claire Adjiman of Imperial College; and Prof Rafiqul Gani of the Technical University of Denmark.
A Summary of PSE2003 and the Impact of Business Decision Making on the Future of PSE
Christodoulos A. Floudas(a), Jeffrey J. Siirola(b)
(a) Department of Chemical Engineering, Princeton University, Princeton, NJ 08544-5263, USA
(b) Eastman Chemical Company, PO Box 1972, Kingsport, TN 37662-5150, USA
The authors of this paper are to prepare it based on these proceedings and the presentations at PSE2003. Thus a copy cannot be included here.
Multi-Scale and Multi-Dimensional Formalism for Enterprise Modeling
Atsushi Aoyama, Yuji Naka
Chemical Resources Laboratory, Tokyo Institute of Technology, Nagatsuta, Midori-ku, Yokohama 226-8503, Japan
Abstract
This paper proposes a modeling formalism that integrates various phenomena with different levels of accuracy, temporal and spatial scale, and dynamics. A multi-scale and multi-dimensional formalism (MSMDF), in which a model is expressed as a combination of hierarchically and/or horizontally structured modules and their links, is proposed. The MDF concept defines a model as an interaction of structural, behavioral, and operational dimensions (modules). The MSF concept further enables a hierarchical/horizontal combination of MDF modules. The multi-scale and multi-dimensional formalism (MSMDF) integrates high-level business models with day-to-day operation models, and enables business decision making (BDM) based on a more precise understanding of the state of the business.
Keywords: multi-scale modeling, object oriented modeling, lifecycle engineering, concurrent engineering
1 INTRODUCTION
The business environments surrounding global chemical industries are undergoing tremendous changes. One trend is a heightening concern over environmental impacts and global sustainability issues. Immediate attention has to be paid to the development and improvement of product lifecycles that reduce environmental impact and energy consumption. These activities for sustainability require modeling and simulation of the product lifecycle to evaluate environmental impacts (e.g. global warming, ozone layer depletion). Another trend in the process industries is the shift to manufacturing of low-volume, higher added value products. In commodity chemical industries, the improvement of process efficiency, such as energy saving, cost saving, controllability and operability during product manufacturing, is the most important issue. However, in fine chemical industries these issues are marginal, and how quickly to manage a
business flow from research and development to manufacturing and marketing becomes crucial. Concurrent engineering is a solution to this paradigm shift from process centered manufacturing to product centered manufacturing. The performance of concurrent engineering depends on an accurate business flow model that can express multidisciplinary processes with different levels of accuracy, temporal and spatial scale, and modes. The above-described tasks require a modeling scheme that can integrate phenomena ranging from government policy, business decision-making and supply chain management to individual process equipment and molecular interactions. The purpose of this paper is to present a multi-scale modeling formalism to integrate models with not only different ranges of characteristic times and lengths but also vast ranges of dynamics and business and engineering activities. The proposed scheme is based on the multi-dimensional formalism (MDF) [1]. The multi-scale and multi-dimensional formalism (MSMDF) is a conceptual expansion of MDF to enable multi-scale modeling, where a lower layer MDF module is defined to be the behavioral dimension of an upper layer MDF module. The next section introduces the concepts of multi-scale modeling. Section 3 describes the multi-dimensional formalism (MDF). Section 4 describes the conceptual expansion of MDF to form the multi-scale and multi-dimensional formalism (MSMDF) for multi-scale hierarchical modeling. Section 5 briefly summarizes the results.
2 MULTI-SCALE MODELING
Physical and chemical behaviors arise from the interaction of atoms, molecules and nanometer-scale structures. Chemical engineering's traditional focus, the behavior of unit operations, arises from the combination of physical and chemical behaviors, and the behavior of plants and business enterprises is the combined behavior of unit operations. A multi-scale model is a composite mathematical model formed by combining modeling modules with different characteristic length and time scales. Each modeling module expresses some aspect or part of the overall model, and links are defined between modeling modules to exchange information. The multi-scale modeling framework describes the way in which modeling modules are integrated, or linked, to form an overall model. This section looks into the classification of various multi-scale modeling frameworks and the conceptual issues related to them. Ingram [2] divides multi-scale modeling frameworks into five classes, serial, simultaneous, hierarchical, parallel and multi-domain, following the classification proposed by Pantelides [3]. The notion of "micro-scale" and "macro-scale" in the following description is based on the usage in Ingram's paper.
1. Serial: The micro-scale model is used to generate some parameters, data or a simple relationship that is later used by, or as, the macro-scale model.
2. Simultaneous: The micro-scale model is used to simulate the system in its entirety. The macro-scale model is simply some averaging or integrating procedure that is used to summarize the detailed micro-scale results.
3. Hierarchical: The micro-scale model is 'formally embedded' in the macro-scale model. That is, the micro-scale model exists within the macro-scale model and provides some relationship between macro-scale model quantities. The chief advantages of hierarchical integration are micro-scale realism coupled with a reduced computational burden, and the 'natural appeal' of this style of modeling.
4. Multi-domain: The micro-scale and macro-scale models describe the processes in a small number of distinct but adjoining spatial regions of the system; there is an interface between them.
5. Parallel: Both the micro-scale and macro-scale models span the entire spatial domain. The micro-scale model treats some phenomena thoroughly, while other phenomena are treated in a minimal way; it is complementary to the macro-scale model in the thoroughness with which it treats the various phenomena.
We consider the serial framework to be essentially equivalent to the hierarchical framework from the viewpoint of model structure; it is reasonable to suppose that the serial framework is proposed only to avoid the computational burden of simultaneous micro-scale and macro-scale computation. The simultaneous framework can be considered a special case of the hierarchical framework in which the macro-scale model is very thin. The multi-domain and parallel frameworks amount to a rather simple division of the model. We therefore chose the hierarchical framework as the main focus of our research on multi-scale modeling; our proposal also supports the multi-domain and parallel frameworks.
3 MULTI-DIMENSIONAL FORMALISM (MDF)
The multi-dimensional formalism (MDF) expresses a model as a combination of structural, behavioral, and operational dimensions. The structural dimension defines the boundary of the model and has attributes that define the characteristics of the model (e.g. size, capacity and connectivity). The behavioral dimension comprises objects and activities with properties that emerge as an interaction between structural dimension parameters and internal states. The behavior is not what the object is supposed to do, but refers to the internal states as processed, contained, or conveyed in the structure. The operational dimension includes knowledge and activities for managing, controlling, operating and manipulating parameters of the structural dimension. In MDF, the structural, behavioral, and operational dimensions are configured to perform roles similar to those played by the equipment, controllers, and processed material and energy in a real plant. Figure 1 shows an example of an MDF model, a chemical company. As shown in Figure 1, the operational dimension does not directly exchange information with the behavioral
dimension. Because MDF supports highly structured modeling, the resulting model has a high level of modularity and hence makes the management of change very simple. MDF allows the presence of more than one kind of behavioral and operational dimension associated with a particular structural dimension, and enables the development of simulation environments in which different management schemes can be explored with the same simulation models. MDF introduces the concept of the meta-model to enable a hierarchical structure in the behavioral dimension. The behavioral dimension is formulated with a behavior entity called a meta-model. Meta-models have the properties of aggregation and hierarchical decomposition: hierarchical decomposition allows a meta-model to be decomposed into a number of other meta-models, and aggregation is the property of combining and linking meta-models.
Figure 1 MDF Model
4 MULTI-SCALE AND MULTI-DIMENSIONAL FORMALISM (MSMDF)
As described in the previous section, MDF does not clearly show the framework for a hierarchical structure in the operational and structural dimensions, or for a heterogeneous hierarchical structure of all three dimensions. This paper proposes a framework in which all three kinds of modeling dimensions (modules) are linked to form an overall model; the framework also specifies how information propagates between modeling modules. The basic concept of MSMDF is created from the observation that it is possible to define an internal structure, with structural, behavioral and operational dimensions, for the behavioral dimension of an MDF model.
Since the behavioral dimension of an MDF model can itself be an MDF model, defining the structural and operational dimensions of the MDF model as the upper layer module and the behavioral dimension as the lower layer module constructs a layered structure for the overall model. The concept of a module is developed so as to support the creation of libraries of reusable modules that can be exchanged between simulation tools. In addition, a module can be replaced by another module of different fidelity or coarseness without modifying the overall model structure. MSMDF allows the presence of more than one lower layer module associated with a particular upper layer module, so as to support the multi-domain and parallel frameworks of multi-scale modeling.
Figure 2: Layered Structure of MSMDF Model
Three kinds of module are defined: structural-operational modules, structural modules and behavioral modules. A structural-operational module has a structural dimension and an operational dimension; the operational dimension can change predefined parameter values of the corresponding structural dimension. A structural module is used to model a component that cannot be directly operated, manipulated or controlled. Modules that do not have a lower layer module are called behavioral modules. Modules are interconnected via ports, and the following eight links are defined for transferring information, energy and mass between modules (a small code sketch of this module/port structure is given after the list):
- between an upper layer structural module and a lower layer structural module
- between an upper layer structural module and a lower layer behavioral module
- between an upper layer structural module and a lower layer operational module
- between a behavioral module and a structural module in the same layer
- between a behavioral module and a behavioral module in the same layer
- between a behavioral module and an operational module in the same layer
- between a structural module and a structural module in the same layer
- between an operational module and an operational module in the same layer
The port ensures the consistency of information exchanged between modules and transforms the fidelity and/or coarseness of the information if necessary.
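The sketch below is an illustrative toy rendering of these ideas in code: modules of the three kinds, nested lower layer modules, and ports that convert fidelity or coarseness when information crosses a link. The class and port names are invented for illustration and are not part of the published formalism.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Port:
    name: str
    transform: Callable[[float], float] = lambda v: v   # fidelity/coarseness conversion

@dataclass
class Module:
    name: str
    kind: str                                            # 'structural-operational', 'structural' or 'behavioral'
    ports: Dict[str, Port] = field(default_factory=dict)
    children: List["Module"] = field(default_factory=list)  # lower layer modules

    def add_child(self, child: "Module") -> None:
        self.children.append(child)

    def send(self, port_name: str, value: float, target: "Module", target_port: str) -> float:
        # pass a value through both ports, applying their transformations
        v = self.ports[port_name].transform(value)
        return target.ports[target_port].transform(v)

# an enterprise module whose behavior is refined by a lower layer plant module
enterprise = Module("enterprise", "structural-operational",
                    {"production_target": Port("production_target")})
plant = Module("plant", "behavioral",
               {"throughput": Port("throughput", transform=lambda v: v / 24.0)})  # t/day -> t/h
enterprise.add_child(plant)
print(enterprise.send("production_target", 240.0, plant, "throughput"))  # 10.0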
Figure 3: Modules and Links
5 CONCLUSIONS
This paper proposed a novel modeling formalism, the multi-scale and multi-dimensional formalism (MSMDF). MSMDF not only supports simple management of change and multiple modelling but also enables multi-scale hierarchical modelling, and can be used to model product lifecycles and concurrent engineering activities for better business decision making (BDM).
REFERENCES
[1] R. Batres, M. L. Lu and Y. Naka, Concurrent Engineering Research and Application,
No. 7 (1996) 43.
[2] G. D. Ingram and I. T. Cameron, The Proceedings of the 9th APCChE Congress and CHEMECA 2002 (2002).
[3] C. C. Pantelides, European Symposium on Computer Aided Process Engineering-11 (2001), Elsevier, Amsterdam, 15.
An Evaluation Strategy for Optimal Operation of Batch Processes under Uncertainties by Chance Constrained Programming H. Arellano-Garcia, W. Martini, M. Wendt, P. Li and G. Wozny Institute of Process and Plant Technology, Technical University of Berlin, KWT 9 10625 Berlin, Germany
Abstract: Previous studies have applied deterministic approaches for simulation and optimization of batch processes with constant parameters treated as known. Nevertheless, the existing uncertainties may have a significant impact on the operation developed by a deterministic approach. In this work, chance constrained programming is used to cope with those uncertainties for the development of both optimal and robust operation policies. We hereby focus on a reactive semi-batch distillation process with purity constraints, which is known as a complex dynamic nonlinear process. For this purpose, a new approach is proposed to compute single and joint probabilities and their gradients. The relationship between the probability level and the corresponding values of the objective function is used as a measurement for evaluating and selecting operation policies. Keywords: Uncertainty, chance constraints, single and joint probabilities, probabilistic programming, batch distillation. 1. INTRODUCTION Robust decision making under uncertainty is considered to be of fundamental importance in numerous discipline and application areas. Specifically in complex dynamic processes there are parameters which are usually uncertain, but may have a large impact on the targets like the objective value and the constrained outputs. Deterministic optimization approaches based on constant model parameters have mostly been used in the past. One way of handling uncertainties in optimization approaches is the definition of chance constraints. For this purpose, efficient approaches for linear systems had been proposed (Pr6kopa, 1995) and applied for linear process optimization and control (Scharm and Nikolaou, 1999; Li et al. 2002). An approach to nonlinear chance constraint problems for steady state process optimization under uncertainty was proposed (Wendt et al. 2002). Efficient approaches to chance constrained programming for nonlinear dynamic processes are not available, although it is required for developing and evaluating operation policies for nonlinear dynamic processes, e.g. batch processes. In this work, we focus on chance constrained optimization of a reactive semibatch distillation process of industrial scale with product purity constraints. The kinetic parameters of the reaction rate and the tray efficiency are considered as uncertain parameters. In this work, they are assumed to be multivariate normally distributed involving correlations to each other. The probability levels can be regarded as a measurement of the robustness of the optimized control strategies. With a higher probability level, the optimized value of the objective function will be degraded. Therefore, a trade-off decision has to be made between robustness and profit concerning the objective function value. In order to achieve this target, one important step for complex dynamic systems is the development of an efficient numerical method to compute the probabilities and their gradients. Since there is a monotone relation between the tray efficiency as the uncertain input and all constrained outputs, the method can
be generally based on the approach proposed by Wendt et al. (2002), but modified with an efficient search algorithm for the bounds of the uncertain input.
2. CHANCE CONSTRAINED OPTIMIZATION PROBLEMS
Generally, a nonlinear dynamic minimization problem under uncertainty can be formulated as follows:

\min f(x, u, \xi)
s.t. \; g(\dot{x}, x, u, \xi) = 0
\quad h(\dot{x}, x, u, \xi) \le 0
\quad x(0) = x_0, \quad t \in [0, t_f]
\quad x \in X, \; u \in U, \; \xi \in \Xi     (1)

where f is the objective function, g and h are the vectors of equality and inequality constraints, and x \in \Re^n, u \in \Re^m and \xi \in \Re^S are the vectors of state, decision and uncertain variables, respectively; x_0 is the initial state of the process. One way of handling the uncertainties is to transform the inequality constraints into chance constraints with a user-defined probability level. If there is more than one inequality constraint to be included in the chance constrained formulation, it is a critical question (and the user's choice) whether to define single or joint chance constraints. These two possibilities of defining a chance constrained optimization problem are shown in the formulations below:

\min E[f(u)] = \int_{\Xi} f(u, \xi)\, p(\xi)\, d\xi
s.t. \; P_i\{ y_i(u, \xi) \le y_i^{SP} \} \ge \alpha_i, \quad i = 1, \ldots, I \qquad (single chance constraints)     (2)
or \; P\{ y_i(u, \xi) \le y_i^{SP}, \; i = 1, \ldots, I \} \ge \alpha \qquad (joint chance constraints)

where E is the expectation operator. It should be noted that the equalities and the state variables in (1) are eliminated if an integration step is used (i.e. a sequential solution strategy). The probability level \alpha directly indicates the robustness of the operation policy to be optimized. It is therefore worthwhile to investigate the relation between \alpha and the optimized value of the objective function; the user may decide on a solution where the optimal objective value is least sensitive to the probability level.
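As a purely illustrative aside, single and joint chance constraints of the form in (2) can be checked for a fixed policy u by simple Monte Carlo sampling. The toy output functions, set points and distribution below are invented placeholders (not the semi-batch distillation model of this paper); the sketch only shows how single and joint probabilities differ for the same samples.

import numpy as np

rng = np.random.default_rng(0)

def y1(u, xi):   # e.g. a product purity that increases with xi[0] and u
    return 0.90 + 0.05 * xi[0] + 0.02 * u

def y2(u, xi):   # e.g. an impurity that decreases with xi[0] and u
    return 0.08 - 0.03 * xi[0] + 0.01 * xi[1] - 0.01 * u

u = 0.5
xi_samples = rng.multivariate_normal(mean=[0.0, 0.0],
                                     cov=[[1.0, 0.3], [0.3, 1.0]],
                                     size=20000)          # correlated uncertain parameters

ok1 = np.array([y1(u, xi) >= 0.92 for xi in xi_samples])  # P{y1 >= y1_SP}
ok2 = np.array([y2(u, xi) <= 0.07 for xi in xi_samples])  # P{y2 <= y2_SP}
print("single probabilities:", ok1.mean(), ok2.mean())
print("joint probability  :", (ok1 & ok2).mean())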
3. THE SOLUTION APPROACH
The basic requirement of this approach is that at least one uncertain input variable has a monotone relation to all constrained output variables. Due to this monotonicity, a point of the constrained output y_i corresponds to a unique value of the uncertain input \xi_S through the reverse projection \xi_S = F^{-1}(y_i). This leads to the conclusion that if y_i \uparrow \Rightarrow \xi_S \uparrow, the chance constraint P\{y_i \le y_i^{SP}\} \ge \alpha can be mapped to P\{\xi_S \le \xi_S^L\} \ge \alpha, while if y_i \uparrow \Rightarrow \xi_S \downarrow the equivalent representation is P\{\xi_S \ge \xi_S^L\} \ge \alpha. Based on given values of the other uncertain variables \xi_s (s = 1, \ldots, S-1), the bound of the constrained output y_i^{SP} and the control variables u, the bound \xi_S^L can be computed as follows:

\xi_S^L = F^{-1}(\xi_1, \ldots, \xi_{S-1}, y_i^{SP}, u)     (3)

and this leads to the following representation:

P\{y_i \le y_i^{SP}\} = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \int_{-\infty}^{\xi_S^L} \rho(\xi_1, \ldots, \xi_{S-1}, \xi_S)\, d\xi_S\, d\xi_{S-1} \cdots d\xi_1     (4)

Furthermore, the computation of the gradients needed for an NLP framework can be implemented based on the following representation:

\frac{\partial P\{y_i \le y_i^{SP}\}}{\partial u} = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \rho(\xi_1, \ldots, \xi_{S-1}, \xi_S^L)\, \frac{\partial \xi_S^L}{\partial u}\, d\xi_{S-1} \cdots d\xi_1     (5)
The numerical integration of (4) and (5) can be carried out simultaneously by orthogonal collocation on finite elements. For the case of multivariate normal PDF, it is found that the most efficient implementation is to use the 5-point-collocation. For solving dynamic problems with a constraint variable ySP (t I ) at a fixed time point t i , and uncertain parameters occurring throughout the entire operation time, a more general and efficient dynamic solver is required. The proposed dynamic solver can be divided into two steps: 1) Determination of the reverse projection of the feasible region by the bisectional method and 2) Computation of the gradients. The method is based on formulation of the total differential of the model equations
g(x, u, \xi) = 0, \qquad dg = \frac{\partial g}{\partial x}\frac{\partial x}{\partial u}\, du + \frac{\partial g}{\partial \xi_S}\frac{\partial \xi_S}{\partial u}\, du + \frac{\partial g}{\partial u}\, du = 0     (6)

Therefore a large-scale system of equations is generated as follows:

\begin{bmatrix} J_1 & & & & C_1 \\ A_2 & J_2 & & & C_2 \\ & \ddots & \ddots & & \vdots \\ & & A_m & \tilde{J}_m & C_m \end{bmatrix} \begin{bmatrix} x_{u,1} \\ x_{u,2} \\ \vdots \\ x_{u,m} \\ \partial \xi_S^L / \partial u \end{bmatrix} = - \begin{bmatrix} F_1 \\ F_2 \\ \vdots \\ F_m \end{bmatrix}     (7)

where J_i denotes the Jacobian matrix (\partial g / \partial x)|_i at time interval i and m is the number of time intervals, C_i is the gradient (\partial g / \partial \xi_S)|_i, A_i is (\partial g_i / \partial x_{i-1}) and F_i signifies (\partial g / \partial u)|_i. The Jacobian at the last time interval, \tilde{J}_m, is adjusted by replacing the column of the constrained variable with that of \xi_S, so that the desired gradient (\partial \xi_S^L / \partial u) is included in the last block of unknowns, which otherwise denote the gradients (\partial x / \partial u). The unknowns in this equation system, the values of x_u, are computed using Gauss elimination.

4. JOINT CHANCE CONSTRAINTS IN NONLINEAR SYSTEMS
To compute a joint constraint, one uncertain variable \xi_S must be monotone with respect to all constrained output variables. This uncertain variable has to be defined as an upper or lower bound according to the bounds of the constrained outputs and the character of the monotonicity. If there are several constrained outputs inducing several upper or lower bounds, then for the integration over \xi_S the lowest possible value of the upper bound and the highest
possible value of the lower bound are chosen. Thus, the joint probability concerning the output constraints is formulated as

P\{y_i \le y_i^{SP}, \; i = 1, \ldots, I\} = P\{\xi_S^l \le \xi_S \le \xi_S^u\}     (8)

where \xi_S^u and \xi_S^l are the upper and lower bounds of the feasible region of the uncertain input, respectively. The joint probability can then be computed by

P\{y_i \le y_i^{SP}, \; i = 1, \ldots, I\} = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \int_{\xi_S^l}^{\xi_S^u} \rho(\xi_1, \ldots, \xi_{S-1}, \xi_S)\, d\xi_S\, d\xi_{S-1} \cdots d\xi_1     (9)
This formulation is applicable to all steady state problems, and also to dynamic processes where the uncertain variables are constant throughout the process.

5. COMPUTATION RESULTS
An industrial reactive semi-batch distillation process is described by a rigorous model, which has been validated against a conventional batch run on the industrial site (Li et al. 1998). It is assumed that the uncertainties are in the kinetic parameters (the activation energy and the frequency factor in the Arrhenius equation) and the tray efficiency \eta. In the main cut period the product alcohol is accumulated in the first distillate accumulator with a given purity specification. For the end of the batch there is a desired purity for the remaining mixture in the reboiler, described by an upper bound on the educt alcohol. The aim of the optimization is to minimize the batch operation time; the independent variables of the problem are thus the feed flow rate F and the reflux ratio R_v. The nonlinear dynamic optimization problem under single chance constraints is formulated as follows:

\min \; t_f(F(t), R_v(t), t_u, t_f)
s.t. the model equation system and
\quad D_1 \ge D_1^{Min}
\quad P\{ x_{D,1}(t_u) \ge x_{D,1}^{SP} \} \ge \alpha_1
\quad P\{ x_{A,NST}(t_f) \le x_{A,NST}^{SP} \} \ge \alpha_2     (10)
\quad \int_{0}^{t_f} F(t)\, dt = M_F, \qquad F^{min} \le F(t) \le F^{max}

with x_{D,1} and x_{A,NST} as the average distillate composition at the end of the main cut period and the purity in the bottom, respectively. In order to handle the fraction switching time t_u and the total batch time t_f conveniently, the lengths of the different time intervals are additionally regarded as independent variables. D_1 and D_1^{Min} are the total amount of the distillate product and its predefined lower bound, respectively. One uncertain variable that is monotone with respect to the restricted state variables in the probabilistic constraints in all cases is the tray efficiency \eta: \eta \uparrow \Rightarrow x_{D,1} \uparrow and \eta \uparrow \Rightarrow x_{A,NST} \downarrow. According to (1)-(5), \eta^L can be used as the upper bound for the uncertain variable \eta in the numerical integration of both probabilities \bar{P} of the complementary events of the original constraints in (10); the actually desired probability is then found from P = 1 - \bar{P}.
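To illustrate the reverse projection step used by the dynamic solver of Section 3, the sketch below finds the bound \eta^L by bisection for a toy, monotonically increasing purity model standing in for the rigorous column model. The function and all numbers are hypothetical placeholders; only the bisection logic reflects the approach described above.

def purity(eta):
    # toy monotone map: distillate purity increases with tray efficiency eta
    return 0.90 + 0.09 * eta

def reverse_projection(y_sp, lo=0.0, hi=1.0, tol=1e-6):
    # find eta_L with purity(eta_L) = y_sp, assuming purity() is increasing on [lo, hi]
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if purity(mid) < y_sp:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

eta_L = reverse_projection(0.95)
print(round(eta_L, 4))   # about 0.5556 for this toy model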
Fig. 1: Objective value versus the probability limit (x-axis: probability \alpha_2, 30% to 100%)
t
XD"(t")>XD" SP
X A, NST
(tf) < XA, NST
}
>
Due to the relations 7/1" ~
a'
(11)
XD,~ 1" and r/1" ~ XA.NSr ,1,, both purity restrictions induce an
upper bound of r/ in the integral for computing the probability of violation (or a lower bound for the probability of being feasible). Follwing the principles in eq. (3) and (8), we have also the convenient case, i.e. there is only an upper bound but no lower bound. This means that in each step, the reverse function is computed for each purity restriction according to eq. (4) so that we have for each one a corresoponding upper bound for r/:
~L,1 = F-,(~,...,~s_,, yslP,u)=rl L
(12a)
~:~.2
(12b)
= F-,
(~: , . . . , 4 S _ l
,Y2s , ,u)=rl~
Then the higher one will be taken as the upper bound for the integration: ~'s = max{q~, q~}
(13)
It is worthwhile to note different values of r/f and r/~ generated through reverse projection at different values of the other uncertain parameters. Taking the last point from the curve in Fig. 1 (i.e. a, = 99%, a 2 = 93%), some corresponding curves of r/( and r/~ are illustrated in Fig. 2. It can be seen that the higher value switches between r/f and r/~ in different situations.
Fig. 2a-b: Tray efficiency over the frequency factors (panels at frequency factor 1 = 37266 and 47064; x-axis: frequency factor 2; curves labelled Educt and Product)
Fig. 3: Single and joint probability
Due to this changing behaviour, it can be concluded that the joint probability resulting from a given operation policy is always significantly lower than both single constraint probabilities. This is confirmed in Fig. 3, which shows the results for the two purities under the optimal operation policy for 1000 samples of the uncertain parameters obtained through Monte Carlo simulation. Moreover, it can be seen that the optimal policy results in a higher reliability for holding the product alcohol purity than for the educt ester purity.
6. CONCLUSIONS
In this work, a new approach has been developed for chance constrained optimization problems of batch processes. The novelty of this approach lies in the efficient computation of single and joint constraints and their gradients. It has been applied to a reactive semi-batch distillation process, taking uncertainties in model parameters such as the kinetic parameters and the tray efficiency into consideration. This leads to a dynamic nonlinear chance constrained optimization problem. For performance evaluation, this problem is solved for different probability levels and the corresponding objective values are obtained. Furthermore, a comparison between the effects of single and joint constraints has been made. These results can be used for a trade-off decision between robustness and profitability to select optimal and robust operation policies. We thank the Deutsche Forschungsgemeinschaft (DFG) for financial support under contract WO 565/12-1.
REFERENCES
[1] A. Prékopa, Stochastic Programming, Kluwer, Dordrecht, The Netherlands (1995).
[2] A. T. Schwarm, M. Nikolaou, AIChE J., 45 (1999) 1743.
[3] P. Li, M. Wendt, H. Arellano-Garcia and G. Wozny, AIChE J., 48 (2002) 1198.
[4] M. Wendt, P. Li, G. Wozny, Ind. Eng. Chem. Res., 41 (2002) 3621.
[5] P. Li, H. Arellano-Garcia, G. Wozny, E. Reuter, Ind. Eng. Chem. Res., 37 (1998) 1341.
Scheduling Multistage Flowshops with Parallel Units - An Alternative Approach to Optimization under Uncertainty
J. Balasubramanian and I. E. Grossmann
Department of Chemical Engineering, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15217, U.S.A.
Abstract
The prevalent probabilistic approaches to modeling processing time uncertainties in production scheduling problems result in mathematical programming models that are computationally expensive to solve. We present a non-probabilistic treatment of scheduling optimization under processing time uncertainty, where the latter is described using fuzzy set theory, and derive Mixed Integer Linear Programming (MILP) models for the scheduling of multistage flowshops with parallel units. Results indicate that these models are computationally tractable for reasonably sized problems. We also describe tabu search implementations for larger problems.
1. Introduction
A number of papers in recent years have addressed scheduling in the face of uncertainties in different parameters, e.g. demands [1-3] and processing times [4,5]. The prevalent approach to the treatment of uncertainties is through probabilistic models that describe the uncertain parameters in terms of probability distributions. However, the evaluation and optimization of these models is computationally expensive, either due to the large number of scenarios resulting from a discrete representation of the uncertainty [6], or due to the complicated multiple integration techniques required when the uncertainty is represented by continuous distributions [7]. In this work, we draw upon concepts from fuzzy set theory to describe the imprecision and uncertainties in the durations of batch processing tasks. Indeed, this approach has been receiving increasing attention recently (see Ref. 8 for an excellent overview). However, most of the work applying fuzzy set theory to scheduling optimization has focussed on using heuristic search techniques, such as simulated annealing and genetic algorithms, to obtain near-optimal solutions. We show how it is possible to develop MILP models for the scheduling of multistage flowshop plants with parallel units when the processing times of the tasks are modeled using a fuzzy-set representation of uncertainty. The value of the MILP approach is that it can be used to rigorously obtain optimal solutions, or at least provide bounds on the best possible solution, as well as to predict the most likely, optimistic and pessimistic values of metrics such as the makespan and total lateness. We show that the MILP models can be solved to optimality with reasonable computational effort, and discuss heuristic search algorithms for larger problems.
2. Overview of Fuzzy Sets and Numbers
Here we review key concepts from the theory of fuzzy sets relevant to scheduling models.
2.1. Definitions
Zadeh [9] introduced the concept of a fuzzy set in which an element's membership of a set need not be binary-valued (i.e., 0 or 1), but may take any value in the interval [0,1] depending on the degree to which the element belongs to the set. The higher the degree of belonging, the higher the membership. A fuzzy set Ã of the universe X is specified by a membership function μ_Ã(x) which takes its values in [0,1]. For each element x of X, the quantity μ_Ã(x) specifies the degree to which x belongs to Ã. Thus, Ã is completely characterized by the set of ordered pairs shown in Eq. (1). The α-level set (α-cut) of a fuzzy set Ã is a crisp subset of X given by Eq. (2). A fuzzy set is said to be convex if all its α-level sets are convex. A fuzzy number is a convex normalized fuzzy set with a piecewise-continuous membership function [10]. Triangular Fuzzy Numbers (TFNs) and Trapezoidal Fuzzy Numbers (TrFNs) are commonly used fuzzy numbers.
\[ \tilde{A} = \{\, (x, \mu_{\tilde{A}}(x)) \mid x \in X \,\} \quad (1) \]
\[ A_\alpha = \{\, x \in X \mid \mu_{\tilde{A}}(x) \ge \alpha \,\}, \qquad \forall\, \alpha \in (0,1] \quad (2) \]
2.2. Operations on Fuzzy Numbers
Arithmetic operations on fuzzy numbers are defined through the extension principle [9], which extends operations on real numbers to fuzzy numbers. In scheduling, the principal arithmetic operations involved are addition (computing the fuzzy end-time of a task given its fuzzy start time and duration) and maximization (computing the start time of a task as the maximum of the end times of the preceding tasks). We also need to compare metrics such as the makespan and total lateness, which are fuzzy numbers. We summarize the relevant operators below.
a. Addition: If X̃ and Ỹ are two fuzzy numbers, their addition can be accomplished by using the α-level sets X_α = [x_α^L, x_α^R] and Y_α = [y_α^L, y_α^R]. The sum Z̃ of X̃ and Ỹ is obtained as in Eq. (3), which shows that the lower bound of Z̃ is the sum of the lower bounds of X̃ and Ỹ and, similarly, the upper bound of Z̃ is the sum of the upper bounds of X̃ and Ỹ.
\[ Z_\alpha = X_\alpha (+)\, Y_\alpha = [\, x_\alpha^L + y_\alpha^L,\; x_\alpha^R + y_\alpha^R \,], \qquad \forall\, \alpha \in (0,1] \quad (3) \]
b. Maximum: The maximum of X̃ and Ỹ can also be obtained by using α-level sets as:
\[ Z_\alpha = \max(X_\alpha, Y_\alpha) = [\, \max(x_\alpha^L, y_\alpha^L),\; \max(x_\alpha^R, y_\alpha^R) \,], \qquad \forall\, \alpha \in (0,1] \quad (4) \]
In general, the maximum operation requires infinitely many computations, i.e., evaluating the maxima for every α ∈ (0,1]. However, good approximations can be obtained by performing these computations at specific values of α rather than at all values. Of course, the number of α-levels at which the computations are performed affects the quality of the approximation.
c. Area Compensation Operator: If we want to select a schedule with the minimum makespan, we have to compare the fuzzy makespans of potential schedules. We use the area compensation integral [11] for comparing the fuzzy makespans. When minimizing the makespan, a schedule with fuzzy makespan X̃_1 is preferred over a schedule with fuzzy makespan X̃_2 if AC(X̃_1) < AC(X̃_2). The optimization models use a discretization approximation for the calculation of the one-dimensional integral; see Eq. (5).
\[ AC(\tilde{X}) = 0.5 \int_0^1 \big( x_\alpha^L + x_\alpha^R \big)\, d\alpha \;\approx\; (SS/3)\cdot(1/2)\cdot \sum_{\alpha} SC_\alpha \,\big( x_\alpha^L + x_\alpha^R \big) \quad (5) \]
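To make the α-cut operations concrete, the following sketch implements Eqs. (3)-(5) for fuzzy numbers stored as lists of α-level intervals. It is illustrative only: the function names, the uniform α-grid with Simpson coefficients (the discretization described after Figure 2) and the triangular-number helper are our own choices, not part of the original formulation, and Python is used here purely for exposition.

```python
# Sketch of alpha-cut arithmetic for fuzzy numbers (Eqs. 3-5).
# A fuzzy number is stored as a list of intervals (xL, xR), one per
# alpha level on a uniform grid over [0, 1].

def fuzzy_add(X, Y):
    """Eq. (3): interval addition at every alpha level."""
    return [(xL + yL, xR + yR) for (xL, xR), (yL, yR) in zip(X, Y)]

def fuzzy_max(X, Y):
    """Eq. (4): interval-wise maximum at every alpha level."""
    return [(max(xL, yL), max(xR, yR)) for (xL, xR), (yL, yR) in zip(X, Y)]

def simpson_coeffs(A):
    """Simpson coefficients 1, 4, 2, 4, ..., 4, 1 for A (odd) grid points."""
    assert A >= 3 and A % 2 == 1, "Simpson's rule needs an odd number of points"
    return [1] + [4 if k % 2 == 1 else 2 for k in range(1, A - 1)] + [1]

def area_compensation(X):
    """Eq. (5): discretized area-compensation integral of a fuzzy number."""
    A = len(X)
    SS = 1.0 / (A - 1)                        # step size of the alpha grid
    SC = simpson_coeffs(A)
    return (SS / 3.0) * 0.5 * sum(c * (xL + xR) for c, (xL, xR) in zip(SC, X))

def tfn(a, b, c, A=5):
    """Alpha-cuts of a triangular fuzzy number (a, b, c) on A grid points."""
    return [(a + (b - a) * k / (A - 1), c - (c - b) * k / (A - 1)) for k in range(A)]

# Two fuzzy task durations: fuzzy end time of two sequential tasks and the
# fuzzy start time of a successor that also waits for another task.
T1, T2 = tfn(2.0, 3.0, 5.0), tfn(1.0, 2.0, 4.0)
end_time = fuzzy_add(T1, T2)                           # Eq. (3)
start_next = fuzzy_max(end_time, tfn(4.0, 4.5, 6.0))   # Eq. (4)
print(area_compensation(end_time), area_compensation(start_next))  # Eq. (5)
```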
Figure 1. Multistage flowshop
Figure 2. Membership functions of total lateness of two schedules
Thus, the interval (0,1] is discretized into points {α_1, α_2, ..., α_A} with step size SS = 1/(A-1), and the integral in Eq. (5) is approximated by a summation of the function values at these specific values of α. The SC_α denote the Simpson coefficients [12] used in approximating the integral. By choosing a sufficiently fine discretization, the errors in the integral approximation do not affect the comparison between different solutions (a post-optimization evaluation of a few solutions with a finer discretization can be used as a check on the validity of the obtained solutions). With the discretization approximation, the optimization models are formulated as MILPs, with timing-related continuous variables and sequencing- or assignment-related binaries.
3. Multistage Flowshop with Parallel Units
We address the problem of optimizing the scheduling of multistage flowshop plants which have parallel units in several stages (see Fig. 1), an extension of two classical scheduling problems (the flowshop and parallel-machines problems). Given a production horizon with due dates for the demands of several products, the key decisions to be made are the assignment of the products to the units as well as the sequencing of the products that are assigned to the same unit. Each product is to be processed only once by exactly one unit of every stage that it goes through. We present an MILP model (M) for obtaining the schedule with minimum total lateness when the processing times of the orders are given by fuzzy numbers. The assumptions involved in the model are that (i) due dates are deterministic, (ii) there is unlimited intermediate storage between the stages, and (iii) transition times are deterministic and equipment dependent. Much work has been done on deterministic versions of this problem of scheduling flowshop plants with parallel units [13-15]. These models have used tetra-indexed (stage-order-slot-unit) or tri-indexed (order-order-stage) binary variables. Model (M) is a generalization of the tri-indexed model [15]; in it, the timing computations are performed for different α-levels, with the objective being a discrete approximation of the area compensation integral of the total lateness.
3.1. Tri-indexed model (M)
Given are products (orders) i ∈ I and processing stages l ∈ L with processing units j ∈ J_l. Each product i must be processed on stages L_i, and can be processed only on units J_i. The parameters of interest are the processing times T_ijae, transition times SU_j and due dates DD_i. In the parameters T_ijae, the a subscript refers to the specific α-level and the e subscript refers to the
left (L) or right (R) end-points of the interval at α-level a. These T_ijae can be derived from the fuzzy-number specification for T̃_ij. The model presented in [15] uses tri-indexed binary variables x_{i1,i2,l} to indicate whether product i1 is processed before product i2 in stage l. Binaries s_ij are used to represent the assignment of product i to the first processing position in unit j, while continuous variables w_ij are used to represent the assignment of product i to unit j. Here we present the model for total lateness minimization. Continuous positive variables ts_ilae and te_ilae respectively denote the start and end times of product i in stage l at α-level a. The lateness at different α-levels is computed through positive slack variables slp_iae and sln_iae.
Objective Function: The objective is to minimize the area compensation of the total lateness.
\[ \min \; Z_{TotLate} = (SS/3)\cdot 0.5 \cdot \sum_{i} \sum_{a} SC_a \,\big( sln_{iaL} + sln_{iaR} \big) \quad (6) \]
Assignment Constraints: Eq. (7) states that every order has to be processed by a unique unit in each stage, while Eq. (8) states that each unit processes at most one starting order. Eq. (9) specifies that if orders i and i' are consecutive orders and order i is assigned to unit j, then order i' is also assigned to unit j, while Eq. (10) specifies the relation between the assignment variables and the s variables. Note that with Eqs. (9) and (10), the assignment variables w_ij can be relaxed as continuous variables with 0 and 1 as lower and upper bounds (for a proof, see [15]).
\[ \sum_{j \in (J_i \cap J_l)} w_{ij} = 1 \qquad \forall\, (i \in I,\; l \in L) \quad (7) \]
\[ \sum_{i \in I} s_{ij} \le 1 \qquad \forall\, (j \in J_l) \quad (8) \]
\[ w_{ij} + \sum_{j' \in (J_{i'} \cap J_l),\, j' \ne j} w_{i'j'} + x_{ii'l} \le 2 \qquad \forall\, (i, i' \in I,\, i \ne i';\; j \in J_i \cap J_{i'} \cap J_l;\; l \in L) \quad (9) \]
\[ w_{ij} \ge s_{ij} \qquad \forall\, (i \in I;\; j \in J_i) \quad (10) \]
Sequencing Constraints: Eqs. (11) and (12) ensure that each order i has at most one successor and one predecessor, respectively, in each stage of processing.
\[ \sum_{i' \in I_l,\, i' \ne i} x_{ii'l} \le 1 \qquad \forall\, (i \in I;\; l \in L) \quad (11) \]
\[ \sum_{i' \in I_l,\, i' \ne i} x_{i'il} + \sum_{j \in (J_i \cap J_l)} s_{ij} = 1 \qquad \forall\, (i \in I;\; l \in L) \quad (12) \]
Timing Constraints: Eq. (13) relates the end time of order i in stage l to its start time through the processing time in the unit j to which it has been assigned. Eqs. (14) and (15) specify the relationships between the starting times of an order in successive stages and between the starting times of successive orders in the same stage, respectively. Finally, the lateness of the orders at different α-levels is computed through Eq. (16), with slack variables slp_iae and sln_iae.
\[ te_{ilae} = ts_{ilae} + \sum_{j \in (J_i \cap J_l)} w_{ij}\,\big( T_{ijae} + SU_j \big) \qquad \forall\, (i \in I;\; l \in L;\; a \in A;\; e \in E) \quad (13) \]
\[ te_{ilae} \le ts_{il'ae} \qquad \forall\, (l, l' \in L,\; l' > l;\; i \in I;\; a \in A;\; e \in E) \quad (14) \]
\[ te_{ilae} \le (1 - x_{ii'l})\,U + ts_{i'lae} \qquad \forall\, (i, i' \in I,\; i \ne i';\; l \in L;\; a \in A;\; e \in E) \quad (15) \]
\[ te_{ilae} + slp_{iae} - sln_{iae} = DD_i \qquad \forall\, (i \in I;\; a \in A;\; e \in E) \quad (16) \]
Note that although model (M) resembles the scenario approach used in probabilistic models, a crucial difference is that the number of α-points at which the discretization is performed can be chosen independently of the number of uncertain parameters (with 10-point discretizations, accurate approximations of the AC value, within a few percent, can be obtained). With scenario-based approaches, the number of scenarios one must consider grows exponentially with the number of uncertain parameters (assuming independence of the parameters).
3.2. Computational Results
MILP model (M) can also be modified to reflect different objectives. Thus, rather than minimizing the area compensation of the total lateness, we can minimize the maximum lateness among all orders. Fig. 2 displays the membership functions of the total lateness of two schedules for a 5-order, 3-stage, 7-unit (2-2-3) problem where the processing times were given by TFNs, i.e. triplets (a, b, c), with a representing the lower bound (optimistic estimate), b the most likely value, and c the upper bound (pessimistic estimate). Schedule 1 is optimal with respect to the area compensation of the total lateness (Z_TotLate = 57.6), but the maximum lateness among all orders is 31.3. On the other hand, Schedule 2 has a considerably higher area compensation of the total lateness (Z_TotLate = 67.7), but it is the schedule with the minimum maximum lateness among all orders (25.3). We can also see from Fig. 2 that Schedule 1 has a most likely total lateness of 54, with an optimistic estimate of 48.8 and a pessimistic estimate of 58.3. These models were solved in under 2 minutes of CPU time with GAMS/CPLEX 7.5 on a Pentium III/930 MHz machine running Linux. Thus, model (M) can easily be adapted to optimize schedules with respect to different objectives in the face of processing time uncertainties. Results from more examples for total lateness minimization are presented in Table 1. These examples were solved with a resource limit of 5000 CPU secs. The number of binary variables required in model (M) increases quadratically with the number of orders; thus, larger problems are very hard to solve to optimality. Computational difficulties with model (M) arise more from the NP-hard nature of the underlying scheduling problem than from the number of discretization points. For larger problems, we implemented a local search algorithm called Reactive Tabu Search (RTS) [16] in Java [17]. Since the evaluation of a given solution (schedule) under a fuzzy-set description of uncertainty can be performed very easily, local search algorithms like RTS (which explore a large number of feasible solutions in every iteration) are particularly relevant. Other attractive features of the RTS algorithm are that (i) the CPU time for one iteration of the algorithm scales polynomially in the size of the problem, and (ii) the termination criteria can be modified as necessary (e.g., terminate after a fixed number of iterations or when there has not been a change in the best solution for a given number of iterations). Although the algorithm finds very good solutions quickly, it cannot guarantee the quality of the solution. However, this can be verified with the use of model (M). With a limit of 5000 iterations (approximately 1000-2500 CPU secs), the RTS algorithm found the optimal solution for the smaller problems in Table 1 and, for the larger problems, found better solutions than the MILP model. Since the RTS algorithm is computationally less expensive, a two-step approach can be conceived: RTS is used to provide a few high-quality solutions, which are then used as bounds for improving the branch-and-bound algorithm for model (M).
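To illustrate why evaluating a fixed schedule under fuzzy processing times is cheap, the sketch below performs a forward pass in the spirit of Eqs. (13)-(16) for a given sequence, reusing the fuzzy_add, fuzzy_max, area_compensation and tfn helpers sketched after Eq. (5) (assumed to be in scope). The flowshop is simplified to one unit per stage, transition times are omitted, and the instance data are invented for illustration; the actual model also handles parallel units.

```python
# Illustrative forward pass: fuzzy end times and defuzzified total lateness of
# a fixed schedule (one unit per stage, all orders in the given sequence).

def evaluate_schedule(sequence, proc_times, due_dates, A=5):
    """Return the area compensation of the total lateness of a schedule.

    sequence   : list of order ids in processing order (same on all stages)
    proc_times : proc_times[order][stage] -> fuzzy duration (list of alpha-cuts)
    due_dates  : due_dates[order] -> crisp due date
    """
    n_stages = len(next(iter(proc_times.values())))
    zero = [(0.0, 0.0)] * A
    unit_free = [zero] * n_stages          # fuzzy time each stage becomes free
    total_lateness = 0.0
    for i in sequence:
        prev_end = zero
        for l in range(n_stages):
            # Start after the previous stage of this order and after the
            # previous order on this unit (cf. Eqs. 14-15); add the duration
            # to get the end time (cf. Eq. 13).
            start = fuzzy_max(prev_end, unit_free[l])
            end = fuzzy_add(start, proc_times[i][l])
            unit_free[l] = end
            prev_end = end
        # Positive lateness against the crisp due date (cf. Eq. 16), kept per
        # alpha-cut and then defuzzified with the area compensation operator.
        lateness = [(max(eL - due_dates[i], 0.0), max(eR - due_dates[i], 0.0))
                    for (eL, eR) in prev_end]
        total_lateness += area_compensation(lateness)
    return total_lateness

# Hypothetical 3-order, 2-stage instance with triangular processing times.
proc = {1: [tfn(2, 3, 5), tfn(1, 2, 3)],
        2: [tfn(4, 5, 7), tfn(2, 3, 4)],
        3: [tfn(1, 2, 2), tfn(3, 4, 6)]}
due = {1: 6.0, 2: 12.0, 3: 14.0}
print(evaluate_schedule([1, 2, 3], proc, due))
```

A local search such as RTS only needs this kind of evaluation for each candidate move, which is why it can explore many schedules per iteration.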
Table 1. Characteristics of model (M) for parallel flowshop problems

N-L-(Units in Stages)   Binaries   LP Relaxation   Best Solution Obtained   Best Possible
5-3-(2,2,3)                  95          30.91                  57.63             57.63
10-3-(10,8,7)               520         158.78                 165.77             159.2
15-3-(10,8,7)              1005              0                   6.84                 0
4. Conclusions
We have addressed the problem of optimizing the schedules of multistage flowshop plants with parallel units under processing-time uncertainty. Using a fuzzy-set description of the uncertainty, MILP models were formulated for minimizing the total or maximum lateness of orders. Results indicate that these models are computationally tractable for problems with a reasonably large number of uncertain parameters. Furthermore, owing to the ease of evaluating given solutions, local search techniques can be used for larger problems.
Acknowledgement
We gratefully acknowledge financial support from the NSF under Grant CTS-9810182. REFERENCES
1. M.G. Ierapetritou and E.N. Pistikopoulos, Ind. Eng. Chem. Res. 35 (1996), 772.
2. S.B. Petkov and C.D. Maranas, Ind. Eng. Chem. Res. 36 (1997), 4864.
3. G. Sand, S. Engell, A. Märkert, R. Schultz and C. Schulz, Comput. Chem. Eng. 24 (2000), 361.
4. S.J. Honkomp, L. Mockus and G.V. Reklaitis, Comput. Chem. Eng. 23 (1999), 595.
5. J. Balasubramanian and I.E. Grossmann, Comput. Chem. Eng. 26 (2002), 41.
6. R.J.-B. Wets, SIAM Rev., 16 (1974), 309.
7. C. Schmidt and I.E. Grossmann, Eur. J. Oper. Res. 3 (2000), 614.
8. R. Slowinski and M. Hapke (eds.), Scheduling under Fuzziness, Physica Verlag, Heidelberg, 2000.
9. L. Zadeh, Info. Contr., 8 (1965), 338.
10. H.-J. Zimmermann, Fuzzy Set Theory and its Applications, Kluwer Academic Publishers, Norwell, 1990.
11. P. Fortemps and M. Roubens, Fuzzy Sets Syst. 82 (1996), 319.
12. M. Abramowitz and I.A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Dover Publications, New York, p. 886, 1972.
13. J. Pinto and I.E. Grossmann, Ind. Eng. Chem. Res. 34 (1995), 3037.
14. C.M. McDonald and I. Karimi, Ind. Eng. Chem. Res. 36 (1997), 2701.
15. C.-W. Hui, A. Gupta and H.A.J. van der Meulen, Comput. Chem. Eng. 24 (2000), 2705.
16. R. Battiti and G. Tecchiolli, ORSA J. Comput. 6 (1994), 126.
17. K. Arnold and J. Gosling, The Java Programming Language, Addison-Wesley, Reading, 1998.
Process Systems Engineering 2003, B. Chen and A.W. Westerberg (editors), © 2003 Published by Elsevier Science B.V.
CONCURRENT PROCESS ENGINEERING AND INTEGRATED DECISION MAKING
Rafael Batres (a), Ming L. Lu (b) and Xue Z. Wang (c)
(a) Tokyo Institute of Technology, R1 Bldg., Midori-ku, 226-8503 Yokohama, Japan
(b) Aspen Technology Inc., 10 Canal Park, Cambridge, MA 02141, USA
(c) Department of Chemical Engineering, Leeds University, Leeds, LS2 9JT, UK
Abstract
Concurrent Engineering (CE) is a systematic methodology based on multi-disciplinary teams that collaborate in parallel processing activities to continuously consider all product, process, and plant related aspects from their individual domain perspectives. This paper reviews the main developments of concurrent process engineering and highlights its contribution to integrated decision making.
1. INTRODUCTION
To compete in the ever-expanding global market, as well as to meet increasingly tight safety and environmental constraints, the process industries are being compelled to develop safer, operable and reliable plants and processes that result in safer, high-quality products in shorter time and at lower cost. Therefore, different approaches are needed that address all of these requirements from the very beginning. To achieve this, it is important to consider a number of aspects simultaneously. For example, analyses of market demand and raw material forecasts are needed during product development. Similarly, the use of the performance history of existing unit operations and control strategies in safety and process design leads to the synthesis of inherently safer process structures. In the same way, bringing operational issues such as the simulation of plant startup to the design stage, and design modification during the operation stage, are ways to make sure that the plant is easier to operate. Once the plant is built and the process is in operation, it is also important to observe all these aspects simultaneously in order to make timely and correct business and operational decisions. In product, process and plant development, it is often necessary to evaluate and screen a number of alternatives. Unfortunately, engineers frequently face a number of uncertain factors that complicate the evaluation [1]. Life-cycle considerations can be integrated at early stages of the design, when later potential problems can be attenuated or eliminated by proposing better decision alternatives. Traditionally, however, the assessment of life-cycle issues, including safety and operability analysis, tends to be performed at relatively late stages of the design and development process. Other factors, such as environmental analysis of the value chain, are rarely considered, and computational support is almost non-existent. On the other hand, significant progress has been made in Concurrent Engineering (CE), a systematic methodology based on multi-disciplinary teams that collaborate in parallel processing activities to continuously consider all product, process, and plant related aspects from their individual domain perspectives. Customer involvement and a computer support environment are
additional elements that are essential to Concurrent Engineering, allowing customers and team members to have access to relevant information and tools to make engineering decisions collaboratively. During the last ten to fifteen years, Concurrent Engineering has been under continuous development and has been successfully applied to several different industries and engineering fields. The core concepts of CE are integration and concurrency. Integration means simultaneously considering issues of different life cycle stages and solving problems consistently, while concurrency highlights the interactions between system components of the value chain and different life cycle stages. The underlying assumption of this paper is that these concepts facilitate the decision making process at different levels of abstraction, ranging from government-based material recycling strategies and virtual enterprises down to plant operation. The structure of the paper is as follows. The concept of concurrent process engineering will be defined and a systems approach to the assessment of life cycle issues will be presented. Then a concurrent engineering approach for the design of chemical plants and integrated production will be described, followed by examples and a critical review of the challenges.
2. CONCURRENT PROCESS ENGINEERING
The application of CE in chemical engineering started with chemical process and plant design, such as heat exchanger network synthesis and heat exchanger design [2], FCC process design [3] and control system design [4]. CE was then extended to cover integration between several pairs of different life-cycle activities, such as between design and control, and between operation and maintenance [5]. The complexity of chemical process systems and their operation has attracted many researchers to apply and extend CE and to systematically develop more domain-specific methodologies and models, which has resulted in the emergence of Concurrent Process Engineering (CPE) as a new area of research. The holistic nature of CPE can be explained by the diverse (and often interrelated) value chains that are characteristic of the process industries. A value chain describes the full range of activities that are required to bring a product from discovery through the intermediate phases of production (involving combinations of physical and chemical processes), delivery to final consumers, recycling of materials or products and disposal after use. Extended value chains are networks of a number of interacting value chains. Holistic approaches have a profound impact in ensuring sustainable development(1), as the material, energy, information and economic flows of the value chains are interrelated. CPE takes into account aspects that concern society, such as safety, energy and the environment. CPE involves cross-functional teams (including representatives from key suppliers and customers) that work jointly on activities around products, processes, and plants. Cross-functional teams are multidisciplinary in nature, which implies sharing a common, high-level goal to achieve a defined objective. Teams are composed of experts from chemistry, engineering, production, marketing, and other domains that interact and perform designated tasks to achieve common goals. In addition, CPE has a cross-organizational structure, as dynamic networks of companies collaborate by creating synergies of computational resources and people. Simultaneity in CPE is needed in order to reduce the time required to bring high quality products to the market. Contrary to sequential approaches, process-engineering activities are allowed to occur in parallel and to exchange information continuously. It can be expected that CPE will be accompanied by information that flows more rapidly, with more complex interactions than those in traditional engineering approaches.
(1) Sustainable development encompasses activities that combine environmental protection and resource conservation with social responsibility and value creation.
As a result, CPE is expected to bring designs closer to the optimum: with CPE-enabled tools, more alternatives can be analyzed and evaluated, so that better (if not the best) design alternatives in terms of the life-cycle aspects can be obtained.
3. LIFE-CYCLE SYSTEMS
Following a systems theory approach based on a multi-dimensional object-oriented model [6,7], integration between product, process, and plant life cycles can be seen as interacting material, process, value chain and human systems. Each system is composed of other subsystems that are defined based on their structural, behavioral, and operational aspects. Structural descriptions of a system are characterized by the interconnections between system components. For example, the plant structure system describes the equipment and the equipment topology in a plant network. The structural description of a value chain would represent a network of suppliers, producers, consumers, research institutions, government and recycling facilities. The behavior of a system indicates how the system reacts in its relation to other systems. The behavioral description of an industrial process system, which defines physicochemical behaviors that are influenced by exogenous variables, illustrates this concept. Similarly, the behavior of a value chain refers to the changes in the flows of material, energy, emissions to the environment, and money. Operational aspects define the activities (intentional processes) aimed at satisfying specific goals of a system. For example, the plant operation system defines actions that manipulate valves and other devices to satisfy production and safety requirements. The operational description of the value chain includes the decision making processes that define the flows of material, energy, emissions to the environment, and money. A typical example is the selection of recycling technologies and the design of material flows along value chains. Based on this methodology, information, activities and software can be defined that support concurrent engineering. Tables 1 and 2 show examples of information and activities organized into the three aspects.

Table 1. Information models and knowledge

System      | Behavior                                            | Structure                                                                          | Management / Operation
Value Chain | Logistics, production, inventory and demand models  | Topology in terms of possible flows of information, material, energy and currency | Business models, negotiation models
Plant       | Corrosion models                                    | Facility, equipment                                                                | Operating procedures, batch recipes
Process     | Process behavior models                             | Process structure                                                                  | Synthesis methods, design rationale
Material    | Physico-chemical property models                    | Composition, molecular structure                                                   | Recycling policies, design rationale
A key concept is that process behavior refers to the physicochemical phenomena that are manifested through changes in the properties of the material that is processed in the plant. In other words, process behavior is defined from a material perspective. This definition implies the existence of physicochemical product behavior models that can be defined independently of where the modeled phenomena occur. For a given piece of equipment, engineers can combine multiple process behavior models to represent proposed, assumed, or actual behaviors as well as functional information (roles or expected behaviors).
The process management-and-operation subsystem defines concepts for equipment-independent activities that describe the chemical process. Plant behavior subsystems include corrosion behaviors, mechanical stress or thermal expansion. On the other hand, the plant structure refers to the description of the components of which the plant is built, as well as its topological (component connections), mereological (part-whole) and spatial representation. The plant management and operation subsystems cover activities that range from planning and scheduling, and plant-wide control, through to local advanced and regulatory control. Knowledge and activities are defined for controlling and operating actionable devices (such as control valves) from the plant structural dimension. In the product life cycle, structure subsystems include the molecular structure and its properties, such as sterilization capability or transparency. In the management and operation dimension of the product, processing constraints can be defined. For example, for a material to be used in the food and beverages industry, a processing constraint may read 'sterilize with dry heat, do not use toxic chemicals.' Activities include the design of experiments. In the product behavior dimension, models and measurement containers describe material properties in terms of variables of the process (such as temperature, pressure, etc.). In addition to the definition of subsystems, the relationships between subsystems should be identified. For example, process behavior (a behavior subsystem in the process life cycle) takes place in a piece of equipment (a plant structure subsystem). Simulation models based on this system classification have a number of advantages over traditional approaches, including the possibility of modeling matter or energy flows that do not pass through defined pipes or channels, such as a vessel leakage or sunshine heating the equipment [7].

Table 2. Examples of activities based on the three subsystem categories

System      | Structure                                                                    | Behavior                                                                          | Management / Operation
Value Chain | Selection of a production route, design of the topology of the value chain  | Changes in consumer preferences, changes in the flow of requested raw materials  | Definition of governmental policies and regulations
Plant       | Develop piping and instrumentation diagram                                  | Simulate equipment deterioration, predict mechanical stress                      | Control and manipulate valves, maintain plant
Process     | Develop process flow diagram, develop process design networks               | Simulate process, monitor process variables                                      | Determine ramping trajectories for startup
Material    | Determine product specifications                                             | Estimate physical properties                                                     | Plan experimental work
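As a minimal illustration of this three-aspect decomposition, the sketch below represents a life-cycle system through separate structure, behavior and management/operation records, and links a process-behavior model to the plant-structure element in which it takes place. The class names, attributes and example data are our own simplification and are not taken from the MDOOM testbed [6].

```python
# Minimal sketch of the multi-dimensional system model: each life-cycle system
# (value chain, plant, process, material) is described by three subsystems.

from dataclasses import dataclass, field

@dataclass
class Subsystems:
    structure: dict = field(default_factory=dict)   # components and topology
    behavior: dict = field(default_factory=dict)    # models of how it reacts
    operation: dict = field(default_factory=dict)   # goal-directed activities

@dataclass
class LifeCycleSystem:
    name: str
    aspects: Subsystems = field(default_factory=Subsystems)

# Hypothetical plant and process systems, loosely following Tables 1 and 2.
plant = LifeCycleSystem("plant")
plant.aspects.structure["R-101"] = {"type": "jacketed reactor"}
plant.aspects.behavior["corrosion"] = "corrosion-rate model"
plant.aspects.operation["startup"] = "ramping procedure"

process = LifeCycleSystem("process")
process.aspects.behavior["esterification"] = {"kinetic model": "rate = k*cA*cB"}

# Relationship between subsystems: process behavior takes place in a piece of
# equipment belonging to the plant-structure subsystem.
takes_place_in = {("process", "esterification"): ("plant", "R-101")}
print(takes_place_in)
```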
4. DECISION MAKING TECHNIQUES
Decision making is the selection of a course of action from among alternatives. Approaches are needed in concurrent engineering that manage alternatives proposed from different life-cycle perspectives and evaluation criteria. Design rationale methods have been developed to assist engineers in solving decision-making problems by capturing the argumentation (pros and cons) of alternatives that are proposed as potential solutions to an engineering problem. IBIS (Issue Based Information Systems) [8] is probably the most popular design rationale technique. IBIS has a methodology and a knowledge
model. The methodology starts with problems that are formulated, for which solutions are proposed and then evaluated with supporting and refuting arguments. The knowledge model defines a graph composed of three kinds of nodes, namely issues (problems), positions (potential solutions), and arguments. Eight types of edges are defined to add semantic content to the graph, namely supports, objects-to, replaces, responds-to, generalizes, specializes, questions, and suggested-by. Changes in the assumptions or objectives of a design artifact have an effect on part or all of the constraints taken into account along the design. Management of change consists of consequence analysis and reconfiguration. Consequence analysis identifies the assumptions that became invalidated after the change. Reconfiguration refers to changes in the design of the artifact to adjust to the new assumptions or objectives. Design rationale tools can be developed to support consequence analysis. For example, the design support system Égide [9] implements a management-of-change functionality based on dependency-directed backtracking. Égide verifies that all the issues have their best positions selected, which in turn identifies the segment of the design rationale that became invalidated. For the invalidated design rationale segment, the tool evaluates alternative arguments that provide a new set of active positions. Figure 1 illustrates an application of dependency-directed backtracking in the design of a safety protection system for a pressure vessel. The IBIS network shows a segment of a design rationale record that is to be reused in the UK in a fictitious scenario. Argument A-4, which objects to position P-2, became invalidated, as a result of which position P-2 is now preferred over P-1.
Figure 1. A fragment of an IBIS network of the design of a safety protection system
The life-cycle systems approach discussed in Section 3 can be used to guide the development of IBIS networks. Firstly, positions can be proposed at different levels among the value chain, plant, process or material systems. For each level, positions can be proposed based on structural, behavioral or management aspects. Similarly, arguments can be proposed following the different levels and the three system aspects.
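The sketch below shows one possible way to encode such an IBIS graph and the re-evaluation triggered when an argument is invalidated. The class and method names are ours, the position texts are hypothetical, and the selection rule (a position wins on the balance of valid supporting versus objecting arguments) is a deliberately simple stand-in for Égide's actual dependency-directed backtracking.

```python
# Minimal sketch of an IBIS-style design rationale graph with re-evaluation
# of the preferred position after an argument is invalidated.

class Argument:
    def __init__(self, name, supports=None, objects_to=None):
        self.name, self.valid = name, True
        self.supports, self.objects_to = supports, objects_to  # a position or None

class Issue:
    def __init__(self, name, positions):
        self.name, self.positions = name, positions

    def best_position(self, arguments):
        """Pick the position with the highest (support minus objection) score,
        counting only arguments that are still valid."""
        def score(pos):
            return (sum(a.valid and a.supports == pos for a in arguments)
                    - sum(a.valid and a.objects_to == pos for a in arguments))
        return max(self.positions, key=score)

# Hypothetical fragment: how to protect a pressure vessel.
P1, P2 = "P-1: relief valve only", "P-2: relief valve plus rupture disc"
issue = Issue("I-1: how to protect the vessel?", [P1, P2])
args = [Argument("A-1", supports=P1),
        Argument("A-2", supports=P2),
        Argument("A-3", supports=P2),
        Argument("A-4", objects_to=P2)]   # e.g. an objection based on a regulation

print(issue.best_position(args))   # with A-4 valid, P-1 is still selected
args[3].valid = False              # A-4 invalidated (the regulation changed)
print(issue.best_position(args))   # re-evaluation now prefers P-2
```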
5. CHALLENGES AND FUTURE DIRECTIONS
The field is huge and much more effort is needed, which requires both time and funds. Unfortunately, research in CE needs more funding, and industry does not see an immediate benefit, not to mention the constraints imposed by existing investment and legacy systems. In addition, the technical challenges are: 1) the development of a consistent and complete engineering activity model that allows dynamic change and management of the engineering workflow; 2) the development of consistent and inherent data models that can be used to manage past, current, and future prediction data; and 3) integrated value chains that integrate R&D chains with production chains, suppliers' chains, demand chains, recycling subsystems, EPC contractor/operator organizations, academic institutions, etc.
6. CONCLUSIONS
Extensive research effort has been made in applying CE to different activities of the process and plant life cycle. Integration and concurrency between different product, process and plant life cycle activities, together with a multi-dimensional framework, have built the foundation for the creation of a domain-specific CE, namely CPE. More than a dozen leading research laboratories are now conducting research in this area. Several standardization efforts have been promoting the development of common data models that form the basis for sharing and exchanging information across tools (a critical issue in concurrent engineering). However, these efforts, while effective for data exchange as snapshots, such as between design and production, offer little support for the interoperability of software components that share and exchange data concurrently in time. On the other hand, efforts that aim at realizing the interoperability of simulation components have produced success stories, but still little has been done to integrate such components with CAD tools, equipment rating software, safety design packages and control systems. Additionally, none of these integration efforts addresses the need for a paradigm shift in the way the engineering activities are carried out, which may bring new requirements for the information used in design and operations as well as for the integration of tools.
REFERENCES
[1] Herder, P. M. & Weijnen, M. P. C. 2000. A concurrent engineering approach to chemical process design. Int. J. Production Economics, Vol. 64, pp. 311-318.
[2] McGreavy, C., M. L. Lu and E.K.T. Kam. 1993. A Concurrent Design Procedure for Heat Exchangers in Integrated Systems, Heat Exchange Engineering, Vol. IV, Chapter 17, Ellis Horwood Ltd, Oxford, UK.
[3] Wang, X.Z., M.L. Lu and C. McGreavy. 1994. Concurrent Engineering Application in Industrial Fluid Catalytic Cracking Process Design. Proc. of the 5th International Symposium on Process Systems Engineering, PSE'94, pp. 381-386, Korea.
[4] Yang, S.A., Y. Hashimoto, M.L. Lu, X.Z. Wang and C. McGreavy. 1994. Concurrent Design of Control System with Associated Process. Proc. of the 5th International Symposium on Advanced Multi-Variable Systems and Technologies (AMST'94), pp. 41-47, UK.
[5] Zhao, Y., Lu, M. L., Yuan, Y. 2000. Operation and maintenance integration to improve safety. Proceedings of the Process Systems Engineering conference.
[6] Lu, M. L., Batres, R., Li, H. S., Naka, Y. 1997. A G2 Based MDOOM Testbed for Concurrent Process Engineering. Comput. Chem. Engng., Vol. 21, Suppl., pp. S11-S16.
[7] Batres, R., Lu, M. L. and Naka, Y. 1999. A Multidimensional Design Framework and Its Implementation in an Engineering Design Environment. Journal of Concurrent Engineering, 7(1).
[8] Conklin, J. and Burgess-Yakemovic, K. 1995. A Process-Oriented Approach to Design Rationale. In Design Rationale: Concepts, Techniques, and Use; T. Moran and J. Carroll (eds), Lawrence Erlbaum Associates, Mahwah, NJ, pp. 293-428.
[9] Bañares-Alcántara, R., King, J.M.P. and Ballinger, G. H. (1995). Égide: A Design Support System for Conceptual Chemical Process Design. AI System Support for Conceptual Design, Springer-Verlag, Edited by John E. E. Sharpe. Presented at the University of Lancaster, UK, 1995.
Process Systems Engineering 2003, B. Chen and A.W. Westerberg (editors), © 2003 Published by Elsevier Science B.V.
An Object-Oriented Approach to Hybrid CFD/Multizonal Modelling
F. Bezzo (a), S. Macchietto (b), C.C. Pantelides (b)
(a) DIPIC "I. Sorgato", University of Padua, via Marzolo 9, 35131 Padova, Italy
(b) CPSE, Imperial College of Science, Technology and Medicine, London SW7 2BY, UK
Abstract. Process simulation and Computational Fluid Dynamics (CFD) are well-established tools in the process industry. The two technologies are largely complementary and their combined application can lead to significant industrial benefits. In view of the advantages and limitations of process simulation and CFD modelling, it is natural to consider hybrid approaches that attempt to combine the two. This may bring great advantages in process and product design, in equipment scale-up and scale-down, and in the capability of optimising the process and delivering solid technical criteria for business decision making. A few works have recently appeared demonstrating the feasibility of a combined approach where critical parameters are exchanged between a CFD and a process simulation model. In this paper a novel design for hybrid CFD/multizonal modelling is considered in terms of an object-oriented approach. This generic modelling approach allows an easy-to-use and effective representation of the process by a synergistic use of available technologies.
1 Introduction
One of the key challenges facing process modelling today is the need to describe in a quantitative manner the interactions between mixing, fluid flow and other phenomena such as chemical reaction, heterogeneous and homogeneous mass transfer and phase equilibrium. This is particularly important in complex operations (such as polymerisation and crystallisation) because it is often these interactions that determine the quality of the product (e.g. in terms of the distributions of molecular weights or crystal sizes and shapes) and, eventually, its profitability. Both process modelling tools and CFD techniques, if used independently, cannot grasp the complex interactions between hydrodynamics and the other physical and chemical phenomena (Bezzo, 2002). Some of the problems can be addressed by hybrid approaches that attempt to combine process simulation models with CFD calculations (e.g. Bauer and Eigenberger, 1999, 2001 in their study of gas-liquid bubble columns, Urban and Liberis, 1999 for the model of an industrial crystallisation process). Bezzo (2002) presented a first formal framework to address the above difficulties by the definition of a compact and generic structure which can largely be standardised within currently available software. The entire domain is mapped by means of a multizonal model which is independent of the specific model for each zone and which is topologically defined by means of a set of interfaces establishing the connectivity between zones. The coupling between the
multizonal and CFD models is achieved by characterising the fluxes of material between zones and the properties affected by fluid-mechanical mixing processes by means of CFD calculations, while the properties of the fluid needed by the CFD model are, in turn, computed by the multizonal model. In this paper we place the above methodology within a more generic description of the overall framework for the integration of process simulation and CFD, aiming at a better approach to simulation design. The definition of an object-oriented description will demonstrate the usefulness of a clear subdivision of modelling specifications and the possibility of an integrated use of existing modelling tools.
2 Spatial partitioning
The first target of a declarative process for achieving a good hybrid model representation is the definition of a more comprehensive modelling design by assessing the best modelling tool for the description of each piece of equipment and operation. We call this spatial partitioning. This idea moves away from the current push towards multiscale integration, which aims at considering all relevant phenomena by means of a single modelling tool. Here we consider a solution which exploits the present effort of modelling tools towards open architecture and standardisation (Braunschweig et al., 2000), as we believe that in the near future major benefits may be obtained in product design and business decision making in the process industry by "properly" using the simulation tools which are already available. Some CFD and process simulation companies have already moved together towards the definition of a more comprehensive design allowing unit operation coupling. Process simulation and CFD unit operation models are used separately and the only connection is obtained through common inlets/outlets. This idea can be pushed even further once we start viewing process models as objects that expose a set of inputs/outputs. From this perspective, an object does not necessarily describe a whole unit operation. For instance, consider a jacketed reactor: on the one side we may use a CFD model of the mixing tank, and on the other side a process simulation model may take care of the jacket, other ancillary equipment and the operating and control procedures (Bezzo et al., 2000). Also, we may refer to membrane operations and/or fuel cells, where there is a clear separation between the phenomena occurring in the fluid and on the membrane: a spatial partitioning approach may be used to define a process simulation model for the membrane and a CFD model for the fluid phenomena.
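As an illustration of this object view of a spatial partition, the sketch below defines a minimal common interface that either a process-simulation or a CFD model could implement for the jacketed-reactor example. The interface, the class names and the placeholder relations are our own assumptions, not part of any standard or of the original software.

```python
# Minimal sketch of "spatial partitions as objects": each partition exposes
# only inputs/outputs, hiding which modelling technology is used inside.

from abc import ABC, abstractmethod

class Partition(ABC):
    @abstractmethod
    def set_inputs(self, **inputs): ...
    @abstractmethod
    def solve(self): ...
    @abstractmethod
    def get_outputs(self) -> dict: ...

class JacketModel(Partition):
    """Process-simulation side: jacket, ancillaries, control (lumped model)."""
    def set_inputs(self, wall_heat_flux=0.0, **_):
        self.q_wall = wall_heat_flux
    def solve(self):
        # placeholder energy balance on the cooling medium
        self.t_jacket = 300.0 + 0.01 * self.q_wall
    def get_outputs(self):
        return {"jacket_temperature": self.t_jacket}

class TankCFDModel(Partition):
    """CFD side: mixing tank; here replaced by a trivial stand-in."""
    def set_inputs(self, jacket_temperature=300.0, **_):
        self.t_jacket = jacket_temperature
    def solve(self):
        self.q_wall = 500.0 * (350.0 - self.t_jacket)   # fake wall heat flux
    def get_outputs(self):
        return {"wall_heat_flux": self.q_wall}

# The two partitions are connected only through their public inputs/outputs.
tank, jacket = TankCFDModel(), JacketModel()
tank.set_inputs(); tank.solve()
jacket.set_inputs(**tank.get_outputs()); jacket.solve()
print(jacket.get_outputs())
```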
3 Model partitioning
Each spatial partition is an object defined by a set of public variables and parameters (inputs/outputs), internal variables and modelling equations and, finally, suitable numerical methods to solve the model equations. This is not very different from the modular approach to process simulation. However, in this case, we clearly state that a) different pieces of software and numerics will be used and that b) spatial partitions may occur within a single piece of equipment. Furthermore, we assume that a spatial partition may itself be defined through a hybrid use of modelling techniques dealing with the same spatial domain, but taking care of different phenomena. We define this approach as model
partitioning, according to which we split the constituent equations of a single process system into two submodels. Here we will refer to the methodology for hybrid multizonal/CFD models defined by Bezzo (2002), which was briefly described above. A generic approach is to define two different objects representing the CFD model (CFD object) and the multizonal model (MZ object). These two models are contained within the hybrid object defining the spatial partitioning. According to the object-oriented approach, the CFD and MZ objects define a set of public variables representing the critical variables exchanged between the two independent packages. For instance, we may consider a stirred tank bioreactor. The CFD object would solve the fluid flow equations and expose as public properties the mass flowrates between zones and the non-Newtonian viscosity field. The MZ object would use the viscosity values to calculate the mass transfer coefficient and return the non-Newtonian law parameters based on the composition in the tank.
3.1 The MZ object
In order to construct our multizonal model, the spatial domain of interest is divided into a number of internal zones representing spatial regions in the process equipment. Each single zone is considered to have a fixed volume and to be well-mixed and homogeneous. Two zones can interact with each other via an interface that connects a port of one zone with a port of the other. The flow of material and/or energy across each interface is assumed to be bi-directional, with both directions potentially being active simultaneously. The MZ object provides a number of public properties. These are:
Inputs: zone network topology (to be commented on in Section 3.3), mass flowrates between internal and environment zones, zone volumes, a set of fluid-flow-dependent properties.
Outputs: a set of zone intensive properties such as physical properties, temperature, composition.
The internal structure is not public and constitutes the encapsulated part of the model. In fact, the internal model deals with the public properties, but in general it will also contain a number of variables and parameters required to define the state of the system. The MZ object is defined by means of other lower-level entities establishing the zone network structure. These are the internal zones and the interfaces. An internal zone model is a self-contained object (IZ object). Each internal zone represents a portion of the physical domain and is represented by the same encapsulated model. The set of public properties is defined by:
Inputs: number of ports, mass fluxes, zone volume.
Outputs: records of intensive quantities to be exchanged with other zones, intensive properties required by the CFD calculations.
For instance, in the bioreactor example mentioned above, the set of intensive properties required by the CFD calculations is represented by the parameters of the non-Newtonian law, while the set of properties exchanged with other zones comprises the biomass, oxygen and product concentrations. At the same level as the zone objects, we define the last set of objects within the multizonal model, i.e. the interfaces. The interface objects (II objects) do not perform
any calculations but are used to link internal zones through their ports. There are no encapsulated properties. After defining the two sides of the interface as 0 and 1, the public properties are:
Inputs: internal zone on side 0, internal zone on side 1, port on side 0, port on side 1, mass flowrates between zones.
Outputs: none.
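A minimal sketch of these three entities is given below; the attribute names and the dictionary-based records are our own illustration of the public inputs/outputs listed above, not the actual software design.

```python
# Sketch of the multizonal-model entities: internal zones (IZ), interfaces (II)
# and the multizonal model (MZ) that aggregates them.

class InternalZone:
    """IZ object: well-mixed, fixed-volume region of the spatial domain."""
    def __init__(self, n_ports, volume):
        self.n_ports, self.volume = n_ports, volume
        self.mass_fluxes = [0.0] * n_ports           # inputs, provided via the MZ
        self.intensive = {"T": 298.0, "x": {}}        # outputs: temperature, composition, ...

class Interface:
    """II object: links a port of one zone to a port of another; no model."""
    def __init__(self, zone0, port0, zone1, port1):
        self.zone0, self.port0, self.zone1, self.port1 = zone0, port0, zone1, port1
        self.flow_01 = self.flow_10 = 0.0             # bi-directional mass flowrates

class MultizonalModel:
    """MZ object: owns the zone network and exposes only aggregate properties."""
    def __init__(self, zones, interfaces):
        self.zones, self.interfaces = zones, interfaces
    def set_flow_inputs(self, flows):                 # typically supplied by the CFD object
        for itf, (f01, f10) in zip(self.interfaces, flows):
            itf.flow_01, itf.flow_10 = f01, f10
    def zone_properties(self):                        # passed back to the CFD object
        return [z.intensive for z in self.zones]

# Hypothetical two-zone network with a single connecting interface.
z1, z2 = InternalZone(n_ports=1, volume=0.5), InternalZone(n_ports=1, volume=1.5)
mz = MultizonalModel([z1, z2], [Interface(z1, 0, z2, 0)])
mz.set_flow_inputs([(0.2, 0.05)])
print(mz.zone_properties())
```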
3.2 The CFD object
The CFD object is responsible for determining the fluid flow behaviour within the physical domain by solving the mass and momentum conservation equations. The CFD object public properties include:
Inputs: cell-to-zone map (to be commented on in Section 3.3), set of cell intensive properties, set of hydrodynamic parameters (e.g. agitation speed in a stirred tank reactor).
Outputs: mass flowrates between zones, zone volumes, set of hydrodynamics-related properties.
The above protocol essentially defines the behaviour of a general class of computational fluid dynamics foreign objects. Specific instances of this class may correspond to individual items of equipment or spatial regions, each modelled separately. Each instance is characterised by its own geometry and also by the way this is represented within the CFD package (e.g. the discretisation grid). The definition of an instance in terms of this information must be declared before the combined simulation is initiated.
3.3 Topology declaration
The IZ and II objects within the MZ object require the definition of the zone network topology. We assume that this is achieved by means of a third object (TD object) delivering the required data. The TD object makes the number of zones and interfaces, as well as their connectivity, available. This information is used to set the right values of the mass flowrates, properties and number of ports in the IZ objects, and the side-0 and side-1 zones in the II objects. The TD object is also used to define the map between cells and zones required by the CFD object to determine the correspondence between the two geometric representations (each internal zone is constituted by a subset of CFD computational cells). The calculation of the mass flowrates between zones and of the zone mixing-related parameters needs the map between cells and zones. Similarly, fluid properties calculated within each internal zone have to be passed to the cells belonging to that zone. The encapsulated part of the TD object may be empty, or it may be constituted by properties and models capable of identifying homogeneous and well-mixed regions (i.e. internal zones) from the results of the CFD calculations. For example, the distribution of the turbulent energy dissipation may be used to define homogeneous and well-mixed zones. More details on these autozoning methods can be found in Bezzo (2002). After including autozoning procedures, the TD object contains the following public properties:
Inputs: results of the CFD calculations.
Outputs: multizonal model topology, cell-to-zone map.

Figure 1: Spatial partitioning can be used to model several unit operations according to different techniques (e.g. process simulation for a distillation column, CFD for a tank reactor, hybrid multizonal/CFD for a bioreactor). The same approach is used to divide the representation of a fuel cell: three physical domains are identified within the same unit. Each partition is defined as an object showing a set of public properties. The definition of a multizonal/CFD model for a bioreactor implies a model partitioning approach: the same spatial domain is simultaneously described by CFD and process simulation to take into account different phenomena. Once again an object representation is adopted.
Figure 1 illustrates the main ideas concerning the use of an object-oriented approach to modelling.
4 Calculation flow
A few words are dedicated here to the computational process. This is a complex issue and we will just mention what is required for a solution to be obtained. First of all, a master program is required to manage the overall flow of information between objects. In the case of a steady-state analysis this may appear to be rather similar to the approach used in sequential modular simulators. However, we cannot forget that the use of different numerical approaches within each object requires special attention to ensure the robustness of the solution, since the level of approximation and the numerical fragility may vary among different tools (the solution scheme may need filtering procedures).
Furthermore, the efficiency of the solution becomes a critical issue, since the iterative solution of CFD models may become unfeasible even for present computational capabilities. All these issues become even more stringent whenever a dynamic simulation (or optimisation) is needed. In that case, special assumptions should be made to separate the time scales of the different phenomena. For instance, fluid flow dynamics are usually instantaneous compared to polymerisation or crystallisation processes and may be treated as a sequence of steady-state simulations. Some of these issues have been considered and solved for the dynamic simulation of hybrid multizonal/CFD models in the work by Bezzo (2002).
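As an illustration of such a master program, the sketch below alternates between a CFD object and an MZ object until the exchanged properties stop changing, in the spirit of a sequential-modular steady-state calculation. It assumes objects offering set_inputs/solve/get_outputs-style methods like those sketched in Section 3; the convergence test, the damping factor and the property names are our own assumptions and not the scheme actually used in the cited work.

```python
# Sketch of a master program coordinating a CFD object and an MZ object for a
# steady-state hybrid calculation (successive substitution with damping).

def run_hybrid_steady_state(cfd, mz, td, max_iter=50, tol=1e-6, damping=0.5):
    """cfd, mz, td: objects exposing the public properties described in Section 3."""
    zone_props = mz.zone_properties()                 # initial guess from the MZ side
    for it in range(max_iter):
        # 1. The CFD object solves the flow field using the current zone properties.
        cfd.set_inputs(cell_to_zone=td.cell_to_zone, zone_properties=zone_props)
        cfd.solve()
        flows = cfd.get_outputs()["zone_flowrates"]
        # 2. The MZ object solves the zone balances using the CFD flowrates.
        mz.set_flow_inputs(flows)
        mz.solve()
        new_props = mz.zone_properties()
        # 3. Convergence check on the exchanged properties, with damping to
        #    mitigate the numerical fragility mentioned above.
        change = max(abs(n["T"] - o["T"]) for n, o in zip(new_props, zone_props))
        zone_props = [{**o, "T": o["T"] + damping * (n["T"] - o["T"])}
                      for n, o in zip(new_props, zone_props)]
        if change < tol:
            return it, zone_props
    raise RuntimeError("hybrid CFD/MZ iteration did not converge")
```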
5 Concluding remarks
Business decisions in the process industry are often related to the capability of predicting and optimising process behaviour and product design. This has always been one of the principal aims of process simulation. Nowadays, tighter competition and the complexity of products require a more precise understanding of processing dynamics. This paper has identified a different approach to simulation design applied to the integration of CFD and process simulation. An object-oriented approach has been used to show that open architectures and standardised software for modelling purposes may greatly improve current modelling capabilities.
References
[1] M. Bauer and G. Eigenberger. A concept for multi-scale modeling of bubble columns and loop reactors. Chem. Eng. Sci., 54:5109-5117, (1999).
[2] M. Bauer and G. Eigenberger. Multiscale modeling of hydrodynamics, mass transfer and reaction in bubble column reactors. Chem. Eng. Sci., 56:1067-1074, (2001).
[3] F. Bezzo. Design of a general architecture for the integration of process engineering simulation and computational fluid dynamics. PhD thesis, University of London, United Kingdom, (2002).
[4] F. Bezzo, S. Macchietto, and C.C. Pantelides. A general framework for the integration of computational fluid dynamics and process simulation. Comp. Chem. Engng., 24:653-658, (2000).
[5] B.L. Braunschweig, C.C. Pantelides, H.I. Britt, and S. Sama. Process modeling: the promise of open software architectures. Chem. Eng. Progr., 96:65-76, (2000).
[6] Z. Urban and L. Liberis. Hybrid gPROMS-CFD modelling of an industrial scale crystalliser with rigorous crystal nucleation and growth kinetics and a full population balance. Proc. Chemputers 1999 Conference, Düsseldorf, Germany, (1999).
Process Systems Engineering 2003, B. Chen and A.W. Westerberg (editors), © 2003 Published by Elsevier Science B.V.
Integration of Decision Tasks in Chemical Operation Process
Huanong Cheng, Yu Qian(1), Xiuxi Li
Chemical Engineering School, South China University of Technology, 510640, P. R. China
Abstract: In real industrial processes, operation decisions conflict easily due to different domain knowledge and different time response scales, which makes it difficult to integrate the different decision tasks for optimal process operation. In this paper, a multi-agent approach to decision integration is presented, and an experimental platform is developed. A case study, the TE process, is used to demonstrate the decision integration approach.
Keywords: process operation, decision task, agent, integration
(1) To whom correspondence should be addressed, [email protected]
1 INTRODUCTION
In the chemical industries, many computer aided process operations (CAPO) have been developed for operation decisions, such as fault diagnosis, advanced control, on-line optimization and scheduling. These CAPOs are each designed for a particular operation domain and lack collaboration. Consequently, it is necessary to integrate the different CAPOs into a computer integrated process operation system (CIPOS) for optimal operation. The main challenge in CIPOS is task integration. Decision tasks conflict easily due to differences in domain knowledge and response time scales, which makes it difficult to coordinate the different decisions efficiently. Recently, more attention has been focused on agent-based approaches to decision collaboration in the process system, especially in process design. The agent-based approaches are used to eliminate decision conflicts among multi-disciplinary teams and to achieve the aim of concurrent engineering [1,2,3]. Compared with the prominent progress in process design, the complexity of the operation system means that agent-based approaches in process operations are still applied within single domains, such as the modeling of supply chains [4], cooperation in fault diagnosis [5], abnormal situation management [6] and on-line product and financial scheduling [7].
2 SIMPLIFICATION OF PROCESS OPERATION SYSTEM
It is still impossible, in the current situation, to implement the integration of all process operations in one step [8]. One approach is to divide the process operation system into several sub-systems. Integration is implemented in these sub-systems respectively, and the sub-systems are then integrated into a whole system. One sub-system is the integration of data reconciliation, advanced control, steady-state simulation and online optimization [9]. The online optimization is executed by the advanced control (such as MPC). These operations interact closely, and their time response is from about half an hour to a day or more, so they are treated as one category. Another sub-system combines data reconciliation, monitoring and diagnosis with the control
systems [10]. Abnormal features are caught by process monitoring from the process data and are then analysed by fault diagnosis to find the cause. According to the causes, the control parameters are modified and the process is brought back to the normal state. These operation tasks respond on a time scale of seconds to minutes. Usually, process monitoring and fault diagnosis are correlated in domain knowledge. Thus, process monitoring and fault diagnosis are placed in one category. Another sub-system is the scheduling system, whose time scale is weeks or months. We define the scheduling system as another category [11]. The above sub-systems can be found in real industries, so we use this idea in multi-agent based decision integration. First, the process operation system is divided into three sub-systems. Second, an FDD (Fault Detection and Diagnosis) agent, a CO (Control and Optimization) agent and an SC (Scheduling) agent are built respectively. The system division approach is illustrated in Figure 1. The plant and the basic control system are passive elements, which are modeled as objects.
Figure 1 Dividing and simplifying the process operation system 3 INTERACTION DIAGRAM OF AGENTS Cooperation of agents is a procedure in which agents dynamically distribute resources, knowledge and information, and negotiate and cooperate with each other to eliminate conflicts among different operation decisions. For this collaboration, it is essential for an agent to know when to inform or request other agents and what the content of that message should be. A basic prerequisite for an operation agent model is therefore to have enough knowledge and information about the status of the related agents and the common objective. This knowledge and information is stored in the internal database of the agent; the construction of the internal knowledge base is thus the key to integrating the operation tasks. In this work, the performance of the operation system is determined by the individual agents and by the activities that emerge from their interactions. We depict the activities of agents in an interaction diagram using an extended Unified Modeling Language (UML) [12], in which the agent is added as a new component alongside the object. How to construct the interaction diagram is illustrated in the following sections. When constructing the interaction diagram of agents in a process operation system, one more issue must be addressed: the many kinds of agents and objects would exhibit overly complex behavior under different outside conditions, such as market, environment and society. To handle this, we use a few main events as triggering messages. For example, the occurrence of a fault can be a triggering message, and the interaction diagram is then used to represent the cooperative
behavior of agents and objects in the operation system. 4 IMPLEMENTATION OF MULTI-AGENTS
In this work, a multi-agent system comprises [13] the standard agent architecture, the agent communication platform, the communication language and protocol, and the data format of the communication language. The standard agent consists of an internal knowledge base (KB), an executor, a set of function modules and a communication module. The internal KB represents the states of the agent itself, the environment and the neighboring agents. The executor invokes the different function modules under different conditions. The function modules exchange messages through the executor to perform the activities, decisions, communication and learning of the agent. The function modules are compiled executable files, which may be implemented in different programming languages; the only prerequisite is that a compiled function module must support the application interface protocol. The communication module receives and sends messages from/to the other agents. The Common Object Request Broker Architecture (CORBA) is used as the middleware for information exchange between agents [14]. CORBA connects the different functional agents across heterogeneous computer platforms and operating systems. On the CORBA platform, agents are physically independent from each other and easy to remove or add, so the multi-agent system with CORBA is an open architecture, which is essential for process operation system integration. The Knowledge Query and Manipulation Language (KQML) has been widely accepted as the language and protocol for information exchange between agents. Similar to Wang and co-workers [4], we use the idea and philosophy provided by the STandard for the Exchange of Product data (STEP) as the data standard for the internal content of KQML messages. The internal content is expressed in EXPRESS, an information modeling language. 5 CASE STUDY 5.1 Experimental platform of the TE process In this work, the TE process [15] with a PI control strategy [16] is used as a case study. The raw materials and the flowsheet architecture are fixed; the factors to be considered are the product prices and process faults. In the case study, we omit the SC agent and add the price factor to the CO agent model, so the decision integration for TE process operation focuses on the cooperation of the CO agent and the FDD agent. The TE process and its basic control system are simulated with a TE process simulator. The TE process simulator, CO agent and FDD agent are located on three different computers connected by a local network. The fault diagnosis function module in the FDD agent is based on a PCA algorithm [17]. The optimization function module in the CO agent uses MINOS 5.1 [18]. The TE process simulator and CO agent are developed with Borland C++ Builder 5, and the FDD agent is built with Delphi 6. The information among the three components is exchanged through CORBA middleware developed with the Borland VisiBroker tool. 5.2 Interaction diagram of the operation agents in the TE process The behavior of the CO agent and the FDD agent is depicted in the interaction diagram of Fig. 2: the process data are transferred to the CO agent and the FDD agent. The FDD agent analyzes the process data and abstracts features. If an abnormality is detected and the cause of the fault
is found, the fault information is sent to the CO agent. At the same time, the control action is sent to the process operation to eliminate the fault. In addition, the optimization result is transferred to the fault detection and diagnosis to avoid false warnings. The CO agent receives the fault information, unit optimization results and market data to make decisions on production modes and optimal set points. When the CO agent and FDD agent cannot reach a decision on their own, they consult the operators.
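To make the message exchange concrete, the sketch below mimics the fault notification from the FDD agent to the CO agent in a KQML-style structure. It is an illustration only: the Python classes, the toy message bus standing in for the CORBA middleware, and the content fields are assumptions, not the code of the actual experimental platform.

```python
# Hypothetical sketch of the FDD -> CO fault notification (not the actual platform code).
from dataclasses import dataclass, field


@dataclass
class KQMLMessage:
    performative: str          # e.g. "tell", "ask-one"
    sender: str
    receiver: str
    content: dict              # in the real system the content is an EXPRESS/STEP model
    language: str = "EXPRESS"
    ontology: str = "TE-process-operation"


@dataclass
class OperationAgent:
    name: str
    knowledge_base: dict = field(default_factory=dict)   # internal KB: own, environment, neighbour states

    def send(self, msg: KQMLMessage, network):
        network.setdefault(msg.receiver, []).append(msg)  # stand-in for the CORBA middleware

    def receive(self, network):
        for msg in network.pop(self.name, []):
            self.knowledge_base[msg.sender] = msg.content  # update internal KB with neighbour state


network = {}                                   # toy substitute for the CORBA object bus
fdd = OperationAgent("FDD-agent")
co = OperationAgent("CO-agent")

# The FDD agent has diagnosed a fault and informs the CO agent so optimization is suspended.
fault_report = {"state": "fault", "cause": "loss of feed A", "action": "stop optimization"}
fdd.send(KQMLMessage("tell", fdd.name, co.name, fault_report), network)
co.receive(network)
print(co.knowledge_base["FDD-agent"])
```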
Fig. 2 The interaction diagram of the process operation system 5.3 Decision integration of the FDD agent and CO agent According to the interaction diagram, the internal knowledge database is constructed. We investigate process operation in two situations: one using the multi-agent approach and one without it. A step disturbance, loss of flow A, is applied to the TE process. Figures 3-8 show the results of process operation in the two situations. The process optimization objective is the minimum operating cost. We use scores and score contributions to detect the abnormal feature and diagnose the fault. Figures 3-5 describe the operation with the proposed multi-agent cooperation approach. When the disturbance is applied to the TE process at t=1.8 h, the pressure increase is observed in Figure 3, and the decrease of the operating cost in Figure 4. At this time, the FDD agent detects the abnormal feature, identifies that the process is in a fault state, and informs the CO agent to stop running. The FDD agent uses its function module to diagnose the cause of the fault: the score contribution of variable 1 is the maximum (variable 1 in Figure 5 corresponds to input flow A). The FDD agent then makes decisions to eliminate the fault before the upper limit of the reactor pressure (3000 kPa) is reached. From t=4 h to 6 h the fault is diagnosed and eliminated; in Figure 3 the process operation returns to the normal state. From this time, the CO agent is invoked and executed. From t=9 h to 11 h, the process operation changes from the normal state to the transition state, and from t=12 h the process is steady at the optimal set point.
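A minimal sketch of the score and score-contribution calculation used for detection and diagnosis is given below. It assumes a PCA model fitted on data from normal operation; the data, control limit and variable numbering are illustrative stand-ins, not those of the FDD agent's actual function module.

```python
# Illustrative PCA-based detection: scores flag the abnormality, score contributions
# point to the variable (e.g. variable 1 = feed A flow) most responsible for it.
import numpy as np

rng = np.random.default_rng(0)
X_normal = rng.normal(size=(500, 10))                 # training data from normal operation
mean, std = X_normal.mean(axis=0), X_normal.std(axis=0)
Xs = (X_normal - mean) / std

# Principal directions from the SVD of the scaled data.
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
P = Vt[:2].T                                          # loadings of the first two components

def scores(x):
    return ((x - mean) / std) @ P

def score_contributions(x, component=0):
    xs = (x - mean) / std
    return xs * P[:, component]                       # contribution of each variable to the score

x_new = X_normal[0].copy()
x_new[0] += 8.0                                       # simulate a step fault on variable 1
t = scores(x_new)
limit = 3.0 * scores(X_normal).std(axis=0)            # simple control limit on the scores
if np.any(np.abs(t) > limit):
    worst = int(np.argmax(np.abs(score_contributions(x_new))))
    print(f"abnormal feature detected; largest contribution from variable {worst + 1}")
```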
Fig. 3 Reactor pressure
Fig. 4 Operation cost of TE process
Fig. 5 Scores contribution plot of TE process
Fig. 6 Reactor pressure
Fig. 7 Operation cost of TE process
Fig. 8 Scores contribution plot of TE process
Figures 6 and 7 show the reactor pressure and operating cost when the CO agent and FDD agent run independently. The same disturbance is applied at t=0.8 h. Although the process operation is in an abnormal state, the CO agent still controls the process and executes its optimization function without cooperating with the FDD agent. The FDD agent cannot diagnose the cause because the CO agent is modifying the control parameters of the process. This is illustrated in Figure 8: the FDD agent cannot find the variable causing the fault in the scores
contribution diagram. From t=1 h, the process pressure increases to the upper limit (3000 kPa) and the accident occurs. 6 CONCLUSIONS An agent-based approach is presented for decision integration in process operation systems. According to the relations of domain knowledge and time response, the process operation system is divided into three sub-systems: fault detection and diagnosis; optimization and control; and scheduling. For the three sub-systems, corresponding FDD, CO and SC agents are built. An interaction diagram in extended UML is used to illustrate the behavior of the agents in the operation system. The results of a case study show that different operation decisions can be coordinated efficiently using the proposed approach. REFERENCES
[1] Batres, R., M. L. Lu and Y. Naka, Comp. Chem. Eng., 21 (1997) S71.
[2] Batres, R., S. P. Asprey and Y. Naka, Comp. Chem. Eng., 23 (1999) S653.
[3] Han, S. Y., Y. S. Kim, T. Y. Lee and T. S. Yoon, Comp. Chem. Eng., 24 (2000) S1673.
[4] Garcia-Flores, R., X. Z. Wang and G. E. Goltz, Comp. Chem. Eng., 24 (2000) S1135.
[5] Eo, S. Y., T. S. Chang, D. Shin and E. S. Yoon, Comp. Chem. Eng., 24, S729.
[6] Yang, A. and M. L. Lu, Comp. Chem. Eng., 24 (2000) S39.
[7] Badell, M., J. M. Nougues and L. Puigjaner, Comp. Chem. Eng., 22 (1998) S271.
[8] Pekny, J., Venkatasubramanian, V. and Reklaitis, G. V., Computer-Oriented Process Engineering, Espuña, Elsevier Science, Amsterdam, (1991) 435.
[9] Yang, S. F. and Canfield, F. B., Proceedings of the 4th International Symposium on Process Systems Engineering, Montebello, (1991).
[10] Musliner, D. J. and K. D. Krebsbach, Proceedings FOCAPO'98, (1998) 366.
[11] AspenTech, http://www.aspentech.com, (2001).
[12] Booch, G., Rumbaugh, J. and Jacobson, I., The Unified Modeling Language User Guide, Addison-Wesley, Reading, MA, (1997).
[13] Cheng, H. N., Dissertation, South China University of Technology, (2002).
[14] Object Management Group, OMG Unified Modeling Language Specification, (1999).
[15] Downs, J. J. and E. F. Vogel, Comp. Chem. Eng., 17 (1993) 245.
[16] McAvoy, T. J. and Ye, N., Comp. Chem. Eng., 18 (1994) 383-414.
[17] Wang, J. F., Dissertation, South China University of Technology, (2002).
[18] Ricker, N. L., Comp. Chem. Eng., 19 (1995) 949.
Marginal Values Analysis for Chemical Industry Kwok-Yuen Cheung a, Chi-Wai Hui a,*, Haruo Sakamoto b and Kentaro Hirata b
a Chemical Engineering Department, Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong. b Safety Engineering and Environmental Integrity Lab., Process Systems Engineering and Production Technologies Field, MCC-Group Science & Technology Research Center, Mitsubishi Chemical Corporation, 1, Toho-cho, Yokkaichi Mie 510-8530, Japan. Abstract
The paper proposes a systematic method called "Marginal Values Analysis" (MVA) to calculate marginal values of streams and process sections in a chemical production site. In the MVA, the marginal values are defined as three types representing the marginal profit, production cost and product value of a stream. With these precise definitions, the marginal values can be used for locating process bottlenecks and for pricing utilities, intermediate materials and energy flows. They can also be applied to developing site-wide management strategies and assisting business decisions. A general linear programming model, the site-model, which includes all the heat and mass balances and interactions of a chemical production site, is adopted for generating the marginal values. With proper formulations in the site-model, all three marginal values of a stream can be obtained simultaneously. A case study of investment decision-making on a utility system is presented to illustrate the capability of MVA for debottlenecking. Keywords: Marginal Value, Marginal Cost, Site-Modeling, Utility System, Optimization 1. INTRODUCTION Marginal costs have various applications and definitions in different business areas. In the chemical production industry, utility costing is one of the major applications of marginal costs. Conventionally, the marginal costs are calculated to reflect the production cost of a utility stream by tracing its corresponding production path [1-2]. This approach takes into account only the localized effects of the utility plant on utility generation and may not provide an accurate result, since a utility can have more than one generation path and the production plants affect the utility balances together with the utility plant. Instead of calculating only a marginal cost that reflects a stream's production cost, this paper adopts two other marginal values representing the marginal profit and the product value of a stream [3]. The site-model is employed to calculate the marginal values. As the site-model includes all utility and material balances and interconnections inside the chemical production site, it overcomes the localized view of the traditional approach. By implementing proper formulations in the site-model, all three marginal values of a stream can be obtained simultaneously from a linear programming solution.
Author to whom all correspondence should be addressed.
Email:
[email protected]
Marginal values are not limited to utility pricing. The concept can be extended to intermediate material pricing and bottleneck determination. In this paper, an example of investment decision-making on equipment installation is worked through using the site-model to provide some insight into the applications of marginal values. 2. DEFINITIONS OF MARGINAL VALUES Marginal values are already known as an important aid in interpreting model results [4]. To make the marginal values analysis more systematic, the three marginal value definitions proposed by Hui [3] are adopted: marginal profit (MP), marginal cost as feed (MCF) and marginal cost as product (MCp). The MP of a stream is the additional profit caused by a unit increment in the stream flow; this is one of the traditional definitions of marginal value. The MCF and MCp of a stream are defined as the additional profit from, respectively, adding or removing a unit flow of the stream without any charge. Usually, MCF represents the market value of the stream and MCp indicates its production cost. With proper formulations in the site-model, the three marginal values can be calculated easily. The interpretation of the marginal values is summarized in Table 1. 3. CASE STUDY: INVESTMENT DECISION MAKING To improve the profitability of a production site, investments in new equipment items are regularly required to modify the site infrastructure. For a comprehensive investment decision, site-wide effects should also be considered. The site-model, which contains all site-wide mass and energy balances, is therefore a suitable tool for determining the best decision together with the marginal values analysis.

Table 1: Interpretations of marginal values.
Marginal values                                         Possible meaning
MP = 0                                                  Stream flow at optimal value.
MP > 0 or MP < 0                                        Bottleneck at the stream. Increasing or decreasing the stream flow, respectively, can obtain additional profit.
MCp = MCF, MCF > 0, final product value >> MCF          Indicates the stream's production cost. Bottleneck is in downstream processes.
MCp = MCF, MCF > 0, final product value ~ MCF           Indicates the stream's product value. Bottleneck is in upstream processes.
MCp = MCF and MCF < 0                                   Indicates surplus or waste. Introducing new users or even purging can obtain additional profit.
MCp != MCF, where MP = MCp - MCF                        Bottleneck at the stream, same as MP != 0.
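To make the three definitions concrete, the small example below estimates MP, MCF and MCp by perturbing a toy profit-maximisation LP and re-solving (scipy is assumed to be available). The two-plant steam system and its numbers are invented for illustration and are not taken from the site-model of the case study.

```python
# Toy illustration of MP / MCF / MCp by perturb-and-resolve (invented numbers).
from scipy.optimize import linprog

def solve(balance_rhs=0.0, fix_B=None):
    # Variables x = [A, B, S]: product flows of two plants and the steam they consume.
    c = [-5.0, -3.0, 2.0]                        # negated revenues / fuel cost: linprog minimises
    A_eq = [[1.0, 1.0, -1.0]]                    # steam balance  A + B - S = rhs
    bounds = [(0, 60), (0, 80) if fix_B is None else (fix_B, fix_B), (0, 100)]
    res = linprog(c, A_eq=A_eq, b_eq=[balance_rhs], bounds=bounds, method="highs")
    return -res.fun, res.x

base_profit, x = solve()
mcf = solve(balance_rhs=1.0)[0] - base_profit    # one free unit of steam added to the balance
mcp = base_profit - solve(balance_rhs=-1.0)[0]   # one unit of steam removed without charge
mp_B = solve(fix_B=x[1] + 1.0)[0] - base_profit  # force stream B one unit above its optimum
print(f"base profit {base_profit:.1f}, MCF {mcf:.1f}, MCp {mcp:.1f}, MP(B) {mp_B:.1f}")
```

In this toy site the steam capacity is binding, so MCp = MCF > 0, matching the downstream-bottleneck row of Table 1, while forcing stream B away from its optimum gives a non-zero MP.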
3.1. Problem Definitions The example production site shown in Figure 1 is employed in the case study. The site originally consists of five production plants (ETY, VCM, PVC, PP, PE). After three new plants are built (Plants A, B and C), the total electricity demand of the production site increases greatly. This reduces the cogeneration ability of the utility plant as well as the total site profitability. To restore the capability of the utility plant, new turbines and/or boilers may be required.
Currently, the utility plant contains two boilers (B1 and B2), two back-pressure turbines (T1 and T2) and a condensing turbine (T3). Options for new turbines and a new boiler (TX1 to TX6 and BX1, shaded in Figure 2) are suggested for installation. A site-model, which includes all this information, is then applied to identify the best investment option.
Figure 1: Sample production site.
Figure 2: Utility plant configurations.
3.2. Site-Model Definitions The site-model is a linear programming model which contains all the mass balances, energy balances and plant unit interactions in the chemical production site(s). The basic components of the site-model are defined as follows.
Indices: p - plant or unit; m - material (includes utilities); a - alternative (variable properties); t - time period (month); s - shift in time period t; r - balance equation index.
Sets: P - set of p; M - set of m; A - set of a; T - set of t; S - set of s; R - set of r.
Parameters:
$E_{r,p,m,a,t,s}$ - coefficient of the variable at plant p with material m and alternative a in period t, shift s, in balance equation r;
$L_{p,m,a,t,s}$ - lower bound of the variable at plant p with material m and alternative a in period t, shift s;
$U_{p,m,a,t,s}$ - upper bound of the variable at plant p with material m and alternative a in period t, shift s;
$C_{p,m,a,t,s}$ - cost/price of material m with alternative a at plant p in period t, shift s;
$SL_{t,s}$ - length of shift s in period t.
Positive continuous variables: $F_{p,m,a,t,s}$ - variable of material m with alternative a at plant p in period t, shift s.
Continuous variables: $Profit_{t,s}$ - profit in period t, shift s; $TProfit$ - total profit of the production site over the planning period.
Mass and energy balance equations: all mass and energy balance equations are arranged by the index r in the site-model. With a predefined set of (p,m,a), the equations can be represented by (1):
$\sum_{(p,m,a) \in r} F_{p,m,a,t,s} \, E_{r,p,m,a,t,s} = 0, \qquad r \in R,\ t \in T,\ s \in S.$  (1)
Bounds of variables: the upper and lower bounds of the variables are imposed by two simple constraints:
$F_{p,m,a,t,s} \le U_{p,m,a,t,s}, \qquad p \in P,\ m \in M,\ a \in A,\ t \in T,\ s \in S.$  (2)
$F_{p,m,a,t,s} \ge L_{p,m,a,t,s}, \qquad p \in P,\ m \in M,\ a \in A,\ t \in T,\ s \in S.$  (3)
Profit calculation: the following equation calculates the site profit in period t, shift s:
$Profit_{t,s} = \sum_{p,m,a} F_{p,m,a,t,s} \, C_{p,m,a,t,s} \, SL_{t,s}, \qquad t \in T,\ s \in S.$  (4)
Objective function: the site-model objective is to maximize the total profit over the planning period:
$\max\ TProfit = \sum_{t \in T} \sum_{s \in S} Profit_{t,s}.$  (5)
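A minimal skeleton of the site-model of Eqs. (1)-(5) is sketched below using an off-the-shelf LP modelling library (PuLP is assumed to be available). All index sets, coefficients and stream names are placeholders rather than data from the example site.

```python
# Skeleton of the site-model LP (Eqs. 1-5); sets and coefficients are illustrative placeholders.
from itertools import product
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, PULP_CBC_CMD

P, M, A, T, S = ["boiler"], ["steam", "fuel"], ["base"], [1, 2], ["day", "night"]
R = ["steam_balance"]
IDX = list(product(P, M, A, T, S))

E = {("steam_balance", p, m, a, t, s): (1.0 if m == "steam" else -0.8)
     for (p, m, a, t, s) in IDX}                            # balance coefficients E[r,p,m,a,t,s]
L = {i: 0.0 for i in IDX}                                   # lower bounds
U = {i: 100.0 for i in IDX}                                 # upper bounds
C = {i: (3.0 if i[1] == "steam" else -2.0) for i in IDX}    # prices / costs
SL = {(t, s): 12.0 for t in T for s in S}                   # shift lengths

model = LpProblem("site_model", LpMaximize)
F = LpVariable.dicts("F", IDX, lowBound=0)                  # F[p,m,a,t,s] >= 0

# Eq. (1): mass/energy balance for every balance equation r, period t and shift s.
for r in R:
    for t in T:
        for s in S:
            model += lpSum(F[(p, m, a, t, s)] * E[(r, p, m, a, t, s)]
                           for (p, m, a) in product(P, M, A)) == 0

# Eqs. (2)-(3): bounds on every flow variable.
for i in IDX:
    model += F[i] <= U[i]
    model += F[i] >= L[i]

# Eqs. (4)-(5): profit per shift summed over the planning period as the objective.
model += lpSum(F[i] * C[i] * SL[(i[3], i[4])] for i in IDX)

model.solve(PULP_CBC_CMD(msg=False))
print("total profit:", model.objective.value())
```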
The site-model is a general model, so it can easily be adapted to investment decision making as well as to other applications. 3.3. Base Case
The base case shows the production site's conditions with no new equipment installed. The annual total site profit is 11,134.24 million Yen. The utility marginal values in this situation are studied and shown in Figure 3.
The MCF values of all utilities vary regularly throughout the year, except in the maintenance periods (April and September). During normal operation periods, the electricity MCF has relatively high values compared with the importation prices (MCF: day 21,000 Yen/MW, night 20,000 Yen/MW, mid-night 6,000 Yen/MW; price: day 21,000 Yen/MW, night 12,000 Yen/MW, mid-night 3,000 Yen/MW). The higher MCF values suggest that additional production, and hence profit, could be induced by an extra electricity supply. In addition, the low pressure (LP) steam MCF value in the day shift equals zero, which indicates purging of LP steam. Excess steam is produced because the electricity generation of the existing turbines is maximized. However, steam purging is a form of energy loss and should be avoided.
Figure 3: Utility MCF in base case.
Figure 4: Utility MCF in case 1.
3.4. Case 1: Low Pressure Condensing Turbine To improve the base-case situation, a low pressure condensing (LP-CT) turbine, TX2, is proposed to utilize the LP steam surplus. From the site-model calculation, the generation capacity of turbine TX2 is determined as 4.374 MW, requiring a capital cost of 84.99 million Yen. The total-site profit increases to 11,278.20 million Yen, about 1.3% more than the base case. The electricity MCF in this case is similar to the base case (Figure 4). However, the steam MCF values increase about 2.5 times, to about 2,500 Yen/ton, much higher than the production fuel cost. This suggests that both electricity and steam are limiting in the site. After the new turbine TX2 is added, the existing boilers already operate at their maximum loading; there is no more steam for extra electricity generation, so the contribution of the new turbine is restricted. 3.5. Case 2: Low Pressure Condensing Turbine and Boiler To overcome this limitation, a new boiler BX1 is proposed to operate together with the LP-CT turbine TX2. The resulting capacity of turbine TX2 is 14.336 MW and the capacity of boiler BX1 is 180.55 T/H. Although the total capital cost increases greatly, to 274.97 million Yen, the overall profit still increases to 11,456.05 million Yen (about 2.9% more than the base case). This is because the new boiler BX1 can provide more steam for the new turbine TX2 to generate electricity. Turbine TX2 can then have a larger generation capacity, supporting additional production as well as reducing the amount of electricity imported. Combining all these effects, a net profit increase is obtained. The utility MCF is studied again to explore further improvement opportunities (Figure 5). The electricity MCF remains at a high level and the electricity supply still acts as a pinch in the site. Figure 5 also shows that the MCF values of the different pressure steams are the same in the day and night shifts, which indicates an improper steam balance in the utility system: large amounts of VHP and HP steam are let down to LP level to feed TX2 without further electricity generation. Such an inefficient generation path increases the electricity production cost and makes additional production unprofitable. 3.6. Case 3: Very High Pressure Condensing Turbine and Boiler To prevent the improper steam balance, a very high pressure condensing (VHP-CT) turbine, TX6, is suggested as a substitute for the LP-CT turbine TX2; the boiler BX1 is kept unchanged. With this new combination, the VHP steam is well utilized for electricity generation in turbine TX6 without letdown. The resulting turbine capacity is 72.837 MW and the boiler capacity is 290.87 T/H. This combination provides a large self-generation capacity for electricity and can greatly reduce the electricity importation cost while maximizing the production level. The total profit is therefore increased to 13,260.55
Figure 5: Utility MCF in case 2.
Figure 6: Utility MCF in case 3.
million Yen, about 19% more than the base case. In this case, the MCF values of the different kinds of steam are no longer the same (Figure 6), and the electricity MCF also decreases to a low level. These two facts indicate that the pinch has been removed from the utility system. 3.7. Case 4: Mixed Integer Linear Programming Model Conventionally, engineers apply mixed integer programming to obtain the optimum solution. To compare the conventional method with the marginal values analysis, the linear programming site-model is modified to include binary variables for optimizing the design options as a mixed integer linear programming (MILP) model. After optimization, the MILP site-model gives the same result as Case 3: it selects the new turbine TX6 and the new boiler BX1. This shows that the MVA can also reach the optimal solution. In fact, the MVA requires less computational effort than the MILP model, since increasing the number of design options slows the MILP solution down, which is especially unfavorable in industrial-scale problems. In addition, the marginal values analysis gives a clear explanation of the utility plant configuration changes, making the results much more meaningful than merely providing the optimum solution. 4. CONCLUSIONS The marginal values analysis (MVA) introduced in this paper can be used to price utilities and intermediate materials, make appropriate business decisions, and locate or remove bottlenecks in a chemical production site. A case study of investment decision making is performed by adopting MVA, and the MVA solution is compared with the MILP optimal solution. It is believed that the MVA can obtain a near-optimal or even the optimal solution without requiring large computational effort. ACKNOWLEDGMENTS The authors would like to acknowledge financial support from the RGC (Hong Kong) and the Major State Basic Research Development Program (G2000026308), and technical support from Mitsubishi Chemical Corporation. REFERENCES [1] A.P. Rossiter and S.M. Ranade, IChemE Symposium Series, 109, (1988), 283-301. [2] S.M. Ranade, S.C. Shreck and D.H. Jones, Hydrocarbon Processing, 68(9), (1989), 81-84. [3] C.W. Hui, Computers and Chemical Engineering, 24, (2000), 1023-1029. [4] J.C.M. Hartmann, Hydrocarbon Processing, 78(2), (1999), 64-68.
An Integrated Decision Support Framework for Managing and Interpreting Information in Process Diagnosis Michael Elsass (Ohio State University), Saravanarajan (UCLA), James F. Davis (UCLA)*, Dinkar Mylaraswamy (Honeywell Labs), Dal Vernon Reising (Honeywell Labs) and John Josephson (Ohio State University) Abstract In this paper we describe and demonstrate a comprehensive decision support framework for rapid operator understanding of abnormal plant situations. The operational objective is to manage fault situations that are on a trajectory to exceed the capabilities of distributed control (or optimization), but have not yet reached alarm limits. This is accomplished through early detection of an abnormal situation, assimilation of relevant information for quick understanding, rapid assessment and diagnostic localization. The early detection, assimilation and assessment components are briefly described in the context of the operator Graphical User Interface (GUI). The paper focuses on diagnostic localization, diagnosis to a level of explanatory detail that is just sufficient for operator action.
* James F. Davis (
[email protected]), author to whom all correspondence should be made 1. INTRODUCTION Management of abnormal situations is a challenge in the chemical industry, accounting for $10 billion in lost revenue in the US alone [1]. The Abnormal Situation Management® Consortium (ASM®) was formed in 1992 to create solutions that reduce the number and severity of abnormal situations†. One area pursued by the ASM Consortium is operator assistance with early event detection and situation assessment to avoid prolonged events, alarms, and potential safety hazards. The consortium perspective is holistic, encompassing the plant and the people that operate it. Decision support therefore includes tools that help the operator explore and quickly localize abnormal situations. 2. THE OPERATOR GUI AND THE DECISION SUPPORT SYSTEM In any abnormal situation, the role of the operator is to accurately detect, diagnose and take proper corrective action. The operators' interface is, therefore, a key system element for rapid understanding and is also a key motivator for an integrated approach [2]. Figure 1 shows a prototype interface for a decision support system developed for a demethanizer, the industrial case study used in this work. The interface shows four distinct areas: the polar star GUI, key variable trends for rapid assessment, a process flow sheet showing system and device localization, and malfunction hypotheses for diagnostic localization.
† Honeywell, Celanese, ChevronTexaco, ConocoPhillips, ExxonMobil, NOVA Chemicals, Shell, Brad Adams Walker Architecture, P.C., TTS Performance Systems, User Centered Design Services, LLC and UCLA
Figure 1. Elements of a decision support system interface In the top left panel, a polar star is shown as one GUI for presenting plant functionality by displaying the status of functions at each point on the star. Early detection and rapid assimilation of information are facilitated by a functional view of the plant operation based on a definition of plant objectives and how they are achieved. Each plant function aggregates key sensors into a multivariate detector called a state estimator (SE) [3]. This functional assessment provides a first-level diagnosis by virtue of the distributed functional organization itself and the associated SEs. Key trends are shown in the right-hand panel. ASM studies place high value on relevant trend plots that are readily available to help operators better understand a given abnormal situation. Upon detection, operators can view and manipulate sensor displays grouped according to functional abnormality. Diagnostic localization is a form of diagnosis that evaluates process behavior to narrow the focus to those sub-systems and devices that are in fault or failed modes. While the SEs run continuously, diagnostic localization is triggered and runs only during an abnormal event. The upper right-hand panel in Figure 1 displays a list of possible localized fault and failure hypotheses that are updated throughout the duration of the event. The lower left-hand panel displays the device and system level contributors. 3. DIAGNOSTIC LOCALIZATION The task of diagnostic localization involves aggregating evidence from multiple sources, such as SE output and sensor measurements, and applying the information to a process model. Using these input sources, a localization algorithm operates on a causal process model to assimilate data into possible process behaviors and generate diagnostic hypotheses.
3.1 Functional Representation A causal process model is built as a Functional Representation (FR), a modular, device-centered formalism that models structure, mode of operation, function and behavior [4]. Function and behavioral knowledge is stored within the device models, and is expressed in a process sense when the models are connected to reflect process topology. Process behavior is modeled as a highly distributed set of device level Causal State Digraphs (CSDs) and Causal Process Diagrams (CPDs). Figure 2 shows the FR model for a control valve. The CSD is shown at the bottom of the diagram. It is composed of a set of connected device port states (inlet, outlet, and an internal state) with each state consisting of qualitative process variable descriptions. Process variables can be material, energy or information. Material refers to any process fluid characteristics such as flow, temperature, phase, etc. CSD states are causally connected to model the range of device states for an instant in time, giving a 'snapshot' of the device behavior. Behavior modeled with this static perspective significantly decreases the possible causal linkages a device can exhibit in a CSD. In the valve example, a typical representation would have a behavior describing high inlet flow and low outlet flow (caused by the valve closing), a behavior occurring over a period of time. However, for an instant in time, the valve inlet flow must equal the outlet flow. While the CSD represents all device behaviors, a Causal Process Diagram (CPD) represents a single device behavior. Each device has a distributed set of CPDs, each of which models a single consistent path through the CSD. Only a few of the valve CPDs are shown in the figure for the purposes of clarity. A function groups CPDs (i.e., individual device behaviors) based on various user-defined categories relating to device operation or the internal variable transformations. In Figure 2, control valve functions model behavior around valve aperture since this directly influences the flow through the valve. CPDs can be linked to multiple functions, a particularly useful capability when categorizing the very large number of possible behaviors associated with complex devices, such as a reactor.
Figure 2. FR representation of control valve (partial)
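One possible rendering of these constructs (port states, CPDs, functions and modes) for the control-valve example of Figure 2 is sketched below. The class layout and the particular states are illustrative guesses, not the authors' implementation.

```python
# Illustrative data structures for the FR constructs: CSD port states, CPDs, functions and modes.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class PortState:
    port: str          # "inlet", "outlet" or "internal"
    variable: str      # material / energy / information variable, e.g. "flow"
    value: str         # qualitative description: "low", "normal", "high"


@dataclass
class CPD:
    """A single consistent causal path through the device's CSD (one device behaviour)."""
    name: str
    states: tuple          # ordered PortStates for one instant in time
    functions: tuple = ()  # user-defined function groups this behaviour belongs to
    mode: str = "normal"   # "normal", "fault" (abnormal input) or "failure" (internal malfunction)


@dataclass
class DeviceModel:
    name: str
    cpds: list = field(default_factory=list)

    def behaviours_matching(self, inlet: PortState):
        """All CPDs whose states are consistent with the given inlet condition."""
        return [c for c in self.cpds if inlet in c.states]


# Static snapshot: inlet flow equals outlet flow; behaviours differ in the internal aperture state.
valve = DeviceModel("control_valve", cpds=[
    CPD("aperture_tracking_signal", (PortState("inlet", "flow", "normal"),
                                     PortState("internal", "aperture", "normal"),
                                     PortState("outlet", "flow", "normal")),
        functions=("regulate_flow",), mode="normal"),
    CPD("stuck_open",               (PortState("inlet", "flow", "normal"),
                                     PortState("internal", "aperture", "high"),
                                     PortState("outlet", "flow", "normal")),
        functions=("regulate_flow",), mode="failure"),
    CPD("driven_by_bad_signal",     (PortState("inlet", "flow", "normal"),
                                     PortState("internal", "aperture", "low"),
                                     PortState("outlet", "flow", "normal")),
        functions=("regulate_flow",), mode="fault"),
])

print([c.name for c in valve.behaviours_matching(PortState("inlet", "flow", "normal"))])
```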
Modes are located at the top of the FR construct (see Figure 2) and organize functions and CPDs. In this work, modes characterize failure, fault and normal behaviors. Normal refers to a behavior in which all pertinent process variables are within a normal range. Failure refers to abnormal behaviors associated with an internal malfunction. Fault refers to an abnormal behavior resulting from abnormal input; a normal behavior can be restored when a deviating input is returned to normal. Each device model is composed of a linked set of the above constructs. By connecting the device models, these constructs are brought together into a highly distributed process model that decomposes behavior into failure, fault and normal categories. This kind of decomposition allows us to operate on the model in a highly selective manner. 3.2 Causal Link Assessment (CLA) Because of the modularity of the FR process model, the information system can explore all feasible behavioral links based upon sensor measurements. Diagnostic localization is achieved by applying the CLA algorithm to the FR process model to first generate hypotheses and then discriminate among them at a point in time and over time. 3.2.1 Hypothesis Generation Hypothesis generation produces a set of possible static state descriptions using abstracted sensor readings as input. An abstracted sensor reading can have the values of low, normal, or high. The resulting process states comprise the list of individual device behaviors (CPDs) that are linked. The CLA algorithm produces process states through an exhaustive device-by-device analysis of the entire process model that assembles feasible device behaviors into an overall process behavior. Figure 3 shows a simple example process containing a temperature sensor and a valve with an initial boundary state of [high temperature: normal flow]. The boundary state is applied to the temperature sensor inlet, and matched to the inlet states of the temperature sensor's CSD.
Figure 3. Process state generation
Since only temperature is modeled in the temperature sensor device, the [temperature high] state is matched to 'high inlet temperature' in the CSD, while the [flow normal] description is stored and passed on to the valve device. Every CPD containing the 'high inlet temperature' state is tagged as a possible behavior for the sensor, e.g. [high temperature, high signal], [high temperature, normal signal] and [high temperature, low signal]. For these CPDs, signal refers to the sensor measurement. Even though the process fluid temperature is high for all these states (due to the inlet condition), a sensor malfunction can result in a low or normal measurement. Sensor behaviors are then matched against the actual sensor measurement. Process states are built by accumulating device states starting from the initial boundary state. In the figure, the [high temperature, normal signal] CPD is added to the [high temperature, normal flow] boundary state. The outlet state of the sensor CPD [high temperature] is combined with the stored variable description [flow normal], and used as the inlet to the control valve ([high temperature, flow normal] state). Since flow is the only variable of concern in a control valve, the [flow normal] state is matched to the inlet states of the control valve CSD. Three CPDs match the normal inlet flow state as shown in the figure. Because there are three possible behaviors, the system branches into three different process states, each corresponding to a different valve behavior (with the sensor behaviors the same for all). The control valve example demonstrates how the procedure branches as it progresses from device to device and how multiple unique process states develop. Consequently, many process states will be generated that show multiple malfunctioning devices, and in the extreme, there will be process states indicating that a majority of devices are malfunctioning. Note that CLA is not causal propagation. In a general sense, causal propagation uses upstream conditions to predict downstream behavior, a procedure that examines a process over space and time. CLA uses current sensor values to examine the plant at a static point in time. A hypothesis is established by analysis of repeated static views throughout the duration of an abnormal event. Branching in CLA is not about exploring feasible propagation paths, but about exploring feasible paths of static causal links. It is in this context that the branching process needs to be managed. The static basis of the state descriptions lets the branching be effectively managed by ordering the investigation and matching against snapshot sensor information. The order of investigation is a strong factor in managing process state branching: by constraining the number of inlet conditions considered, it constrains the number of possible behaviors a device can exhibit. A relatively simple device such as a valve will have a small number of possible behaviors whereas a complex device will have a large number. By setting the states of the simpler devices around complex devices, the complex device inlet possibilities will be constrained, thus constraining the number of possible behaviors the complex device can exhibit. Matching sensor devices with actual sensor measurements constrains the signal inlet and reduces the possible sensor behaviors (as shown in Figure 3).
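The branching procedure can be pictured with the short, self-contained sketch below: starting from the boundary state, each device contributes the behaviours consistent with it, sensor behaviours are filtered against the actual measurement, and every combination becomes one candidate process state. It is a schematic reconstruction of the described algorithm with invented device tables, not the actual CLA code; the ranking of hypotheses by the number of simultaneous malfunctions anticipates the discrimination step described next.

```python
# Schematic reconstruction of CLA hypothesis generation: device-by-device matching and branching.
from itertools import product

# Each device maps an abstracted inlet value to the behaviours (CPDs) consistent with it,
# together with the mode of each behaviour.
temperature_sensor = {
    "high": [("high temp, high signal", "normal"),
             ("high temp, normal signal", "failure"),
             ("high temp, low signal", "failure")],
}
control_valve = {
    "normal": [("normal flow, aperture tracking", "normal"),
               ("normal flow, stuck open", "failure"),
               ("normal flow, driven by bad signal", "fault")],
}

def generate_process_states(boundary, measured_signal):
    # Keep only sensor CPDs that agree with the actual measurement (this constrains branching).
    sensor_cpds = [c for c in temperature_sensor[boundary["temperature"]]
                   if measured_signal in c[0]]
    valve_cpds = control_valve[boundary["flow"]]
    # Every combination of consistent device behaviours is one candidate process state.
    return list(product(sensor_cpds, valve_cpds))

def malfunction_hypotheses(states, max_malfunctions=3):
    hypotheses = []
    for state in states:
        bad = [name for name, mode in state if mode in ("fault", "failure")]
        if len(bad) <= max_malfunctions:
            hypotheses.append(tuple(bad))
    # Hypotheses with fewer simultaneous malfunctions are ranked as more probable.
    return sorted(set(hypotheses), key=len)

states = generate_process_states({"temperature": "high", "flow": "normal"}, "normal signal")
for h in malfunction_hypotheses(states):
    print(h or ("no malfunction",))
```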
3.2.2 Hypothesis Discrimination Finally, CLA generates diagnostic hypotheses by accessing modes related to each device CPD in the set of process states. Any device CPD linked to a failure mode is considered a feasible hypothesis in that the failure device is the malfunction hypothesis. The set of feasible malfunction hypotheses is then evaluated to determine the most probable
hypotheses using three criteria: the number of simultaneously malfunctioning devices, the number of time-steps a malfunction persists, and comparison with SE outputs from the rapid assessment module. Malfunction hypotheses that contain a single malfunctioning device are considered more probable than hypotheses with two or more malfunctioning devices, and are therefore ranked higher. The CLA algorithm is constructed so the user can specify the number of simultaneous malfunctions to consider; for the case study described in this work, the threshold was set at three. Those malfunction hypotheses that persist over time are ranked higher than those appearing sporadically throughout the duration of the abnormal event. Finally, each malfunction hypothesis is compared to the state estimator output, with unmatched hypotheses rejected. While both the state estimator and FR systems use sensor data as input, they constitute independent models that should be in agreement. 4. CASE STUDY An industrial demethanizer unit was used to evaluate prototype performance. The unit had several hundred sensors that recorded readings once per minute. In this work, a subset of 30 key sensors was used to drive both the state estimators and the CLA in the information system. A case study involved an increasingly erratic condenser level sensor caused by an oil leak that slowly upset the process over a period of two days. Operators initially misdiagnosed this situation and took improper corrective action that further exacerbated the upset. Applying plant data to state estimation and diagnostic localization demonstrated that the SE detected an abnormal situation within seven hours of the initial malfunction. Once a functional abnormality was detected, the CLA system was triggered. Initially, the hypotheses concerned several unrelated malfunctions. As the malfunction further manifested itself, the CLA algorithm localized it to the overhead section. Soon after, the malfunction was narrowed to the condenser control system and finally to the condenser level sensor. This was a blind case study analyzed by the decision support system. REFERENCES
[1] Nimmo, I., "Adequately Address Abnormal Operations," Chemical Engineering Progress, 91(9), pp. 36-45 (September 1995). [2] Cochran, E., C. Miller and P. Bullemer, "Abnormal Situation Management in Petrochemical Plants," NAECON (1996). [3] Bullemer, P., D. Mylaraswamy and K. Emigholz, "Fielding a Multiple State Estimator Platform," NPRA conference (2000). [4] Elsass, M. J., "Multipurpose Sharable Engineering Knowledge Repository," PhD thesis, The Ohio State University (June 2001).
Neural Networks Applied to Multivariable Nonlinear Control Strategies Laércio Ender a, Rubens Maciel Filho b
a Department of Chemical Engineering, Regional University of Blumenau, Brazil; b Laboratory of Optimization, Design and Advanced Process Control (LOPCA), Faculty of Chemical Engineering, State University of Campinas (UNICAMP), Brazil. Abstract Artificial Neural Networks (ANN) are computational tools with a great number of applications in modeling and process control. Neural networks can learn sufficiently accurate models and give good nonlinear control when model equations are unknown or only partial state information is available. The neural network approach allows the non-linearities of the process as well as variable interactions to be taken into account. The objective of this work is to explore the use of neural networks in multivariable control strategies in which ANNs are used as dynamic models for generating predictions, as well as in the definition of adaptive control strategies based on neural networks. Keywords: neural networks, control strategies, on-line learning, catalytic reactor, soft sensor.
1. INTRODUCTION The ability of neural networks to represent nonlinear behaviour and handle noisy data has attracted the process control community. Bhat and McAvoy [1] were among the first to use ANNs to model nonlinear chemical processes. The ability of the networks to represent the dynamic behavior of processes has been used to solve engineering problems in the generation of predictive dynamic models as well as in the definition of control strategies [2]. Multilayered feedforward neural networks are a special form of connectionist model that performs a mapping from an input space to an output space. They consist of massively interconnected simple processing elements arranged in a layered structure. Bearing this in mind, the objective of this work is to discuss some applications of multilayered feedforward artificial neural networks in adaptive control strategies. 2. NEURAL NETWORKS APPLIED IN CONTROL PROCESSES Process control has been by far the most popular area of neural network applications in chemical engineering. The following characteristics and properties of neural networks are important [3]: a) Nonlinear systems. Neural networks hold greatest promise in the realm of nonlinear control problems, stemming from their ability to approximate arbitrary nonlinear mappings; b) Parallel distributed processing. Neural networks have a highly parallel structure which lends itself immediately to parallel implementation; c) Learning and adaptation. Networks are trained using past data records from the system under study. A suitably trained
network then has the ability to generalize when presented with inputs not appearing in the training data. Networks can also be adapted on-line; d) Data fusion. Neural networks can operate simultaneously on both quantitative and qualitative data; e) Multivariable systems. Neural networks naturally process many inputs and have many outputs. Baughman and Liu [3] classify the various approaches into three categories: a) direct network control: training a neural network as the controller and determining the controller output directly; b) inverse network control: training a neural network as an inverse model of the process, predicting the process inputs necessary to produce the desired process outputs; c) indirect network control: training a neural network to serve as a model of the process, or to determine a local controller. Psichogios and Ungar [2] classified the applications of neural networks into direct and indirect methods. In the direct method, a neural network is trained with observed input-output data from the system to represent its inverse dynamics. In the indirect method, the neural network is trained with input-output data from the dynamic system to represent the forward dynamics. This work considers only two classifications: direct control, whenever the neural network acts as the controller, and indirect control, when the neural network is used as a model to predict the states or the future dynamic behavior. 3. ON-LINE LEARNING OF THE NEURAL NETWORK On-line learning is limited by the number of iterations needed to meet the adopted error criterion and by the requirement that learning occur in real time. A maximum number of iterations is allowed in order to circumvent these limitations. Two vectors formed from the most recent inputs/outputs of the process compose the patterns for on-line learning: the vector containing the older information is used for training the neural networks, and the vector containing the more recent information is used to evaluate the resulting networks. To guarantee a good representation of the process by the neural networks, a strategy formed by three networks acting in parallel is adopted. The first is formed by the weights of the off-line learning, here called the standard weights; the second is initialized with the standard weights and is submitted to on-line learning, and whenever the standard weights show better performance, this network has its weights replaced by the standard weights; the third is initialized with the standard weights and is continually submitted to on-line learning at each sampling time. The on-line learning procedure is carried out with the data set provided by the reactor model emulating a real plant. This structure of three neural networks is used to represent the dynamic behavior in the control strategies, as well as in the control strategy where the network acts as a controller. The neural network that presents the smallest quadratic error in representing the vector containing the most recent inputs/outputs (patterns) of the process is used in the control strategy at that sampling time. This procedure is repeated at each sampling time. 4. CONTROL STRATEGY - DIRECT ADAPTIVE NETWORK CONTROL The proposed control strategy is formed by two optimization procedures, named controller optimization and setpoint optimization, based on neural networks [4, 5], as shown in Fig. 1.
Historical input-output data were used to train the two dynamical neural networks of the control strategy [5] and a stationary neural network to estimate the concentration and temperature in the soft sensor. The first dynamical network is trained to represent the forward process dynamics; its inputs are the current and past values of the controlled and manipulated variables and its outputs are the one-step-ahead
prediction of the process outputs. The second dynamical neural network is trained to represent the inverse process dynamics and acts as the controller of the strategy. Its inputs are the setpoints of the closed loop for the next sampling time and past controlled and manipulated variables, and its outputs are the manipulated variables for the next sampling instant. The stationary neural model is used as a soft sensor to predict the concentration of the reactor in the setpoint optimization procedure. The neural networks used in the proposed control strategy are trained on-line through the methodology described in what follows.
Fig. 1 - Control Strategy - Direct Adaptive Network Control
The controller is based on a neural network that represents the inverse dynamics of the system and is trained on-line through an optimization routine. The controller optimization routine adjusts the weights of the neural controller using the estimated global error of the closed loop at each sampling time, based on a dynamic model of the process represented by a neural network with on-line learning [5]. The dynamic model of the process is trained on-line with the most recent input/output data of the process, stored in a vector. The controller design uses the same inputs as the neural controller at each sampling time. The optimization routine adjusts the weights of the controller's neural network so as to minimize the estimated global error (e = y_set - y_pred). Since the estimated error is based on a neural model, it is necessary to have a model that faithfully represents the dynamic behavior of the process: when the quadratic error of the neural model outputs is smaller than the desired tolerance, this model is used in the optimization routine; if the quadratic error becomes larger than a predetermined value, the controller uses the standard weights (the weights from off-line learning) to generate the control action for that sampling instant. The global error (e) cannot be backpropagated directly because of the location of the process information (only process exit data are available), so it is propagated back through the plant using the Jacobian matrix of the process [4, 5]. The resulting control-action deviations are used in the on-line learning of the controller's neural network. The proposed setpoint optimization strategy is based on a neural network representing the stationary model of the process and an optimization procedure using a Sequential Quadratic Programming (SQP) algorithm. As a case study, a fixed bed catalytic reactor for the production of acetaldehyde by ethyl alcohol oxidation over an Fe-Mo catalyst, proposed by McGreavy and Maciel Filho [6] and Toledo [7], was used. The neural model that represents the stationary behavior of the process is used to predict the concentration and temperatures of the reactor, acting as a soft sensor. The inputs of this neural network are state variables with a strong influence on the process and the outputs are the concentration and temperatures of the reactor.
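A schematic sketch of this controller-optimization step is given below: the global closed-loop error is propagated back through an assumed plant Jacobian to obtain control-action deviations, which are then used as targets for a gradient update of the neural controller. The network sizes, learning rate and Jacobian values are invented for illustration and do not correspond to the reactor case study.

```python
# Schematic on-line update of the neural controller: the global error e = y_set - y_pred
# is propagated back through the plant Jacobian to give control-action corrections.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_u = 6, 8, 2            # controller inputs, hidden units, manipulated variables
W1, W2 = rng.normal(0, 0.1, (n_hidden, n_in)), rng.normal(0, 0.1, (n_u, n_hidden))

def controller(x):
    h = np.tanh(W1 @ x)
    return W2 @ h, h                      # manipulated variables for the next sampling instant

def online_update(x, y_set, y_pred, plant_jacobian, lr=0.05):
    global W1, W2
    u, h = controller(x)
    e = y_set - y_pred                                    # estimated global closed-loop error
    du = np.linalg.pinv(plant_jacobian) @ e               # error propagated back through the plant
    u_target = u + du                                     # corrected control action for this error
    delta_u = u_target - u
    # One gradient step of the controller network towards the corrected action.
    W2 += lr * np.outer(delta_u, h)
    W1 += lr * np.outer((W2.T @ delta_u) * (1 - h**2), x)
    return u_target

x = rng.normal(size=n_in)                      # setpoints and past controlled/manipulated variables
J_plant = np.array([[0.8, 0.1], [0.2, 0.6]])   # assumed dy/du of the process (2 outputs, 2 inputs)
print(online_update(x, y_set=np.array([1.0, 0.5]), y_pred=np.array([0.8, 0.6]),
                    plant_jacobian=J_plant))
```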
The output variables are used in the objective function of the optimization routine and the obtained temperatures are the setpoints for the control algorithm. A sensitivity analysis through a complete factorial design is performed to estimate the influence of the process parameters on the output variables; the parameters so determined are used as inputs and outputs of the neural network. The inputs to the ANN are the feed mass flow (GMo), feed temperature (Tfo), entrance cooling fluid temperature (Tg0) and air/ethanol relationship (R). The outputs of the neural network are the predicted concentrations at the third (C(3)) and seventh (C(7)) axial orthogonal collocation points and the temperatures at the first (Tpred(1)) and third (Tpred(3)) axial orthogonal collocation points. In the control strategy the manipulated variables are the feed mass flow (GMo) and the air/ethanol relationship (R), and the controlled variables are the temperatures at the first and third axial orthogonal collocation points. The setpoint optimization procedure is accomplished by SQP, which manipulates GMo and R to drive C(3) and C(7) to the concentration setpoints. The obtained values of GMo and R are not the control actions of the strategy; they are the stationary values of these variables that lead the process to the desired concentrations under the current conditions. The temperatures obtained in the optimization procedure (Tpred(1) and Tpred(3)) are the temperature setpoints for the control strategy. The optimization procedure is carried out at a time interval larger than the sampling time of the control strategy, or is executed when a significant disturbance is detected; this avoids frequent changes in the setpoint of the closed loop. The objective function of the optimization procedure is given by
$J(k) = \sum_{i=1}^{N} \left[ C_{sp}(i) - C(i,k) \right]^{3} \, T(i,k)^{2} \times 10^{3}$   (1)
subject to restrictions (2) and (3), where N is the number of control loops; T(i,k) = Tpred(j) for Tpred(j) >= 1 and T(i,k) = 1 for Tpred(j) < 1; C(i,k) = C(j), where j is the orthogonal collocation point; and C_sp is the concentration setpoint.
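For clarity, Eq. (1) together with the clipping of T(i,k) can be written as the short function below; the setpoint, prediction and temperature values are placeholders only.

```python
# Direct transcription of the setpoint-optimization objective (Eq. 1) with placeholder data.
import numpy as np

def objective_J(c_sp, c_pred, t_pred):
    # T(i,k) = Tpred(j) when Tpred(j) >= 1, otherwise 1 (dimensionless temperatures).
    t = np.where(t_pred >= 1.0, t_pred, 1.0)
    return np.sum((c_sp - c_pred) ** 3 * t ** 2 * 1.0e3)

print(objective_J(np.array([18.0, 12.0]),      # concentration setpoints at collocation points 3 and 7
                  np.array([17.5, 11.8]),      # predicted concentrations from the soft sensor
                  np.array([1.02, 1.01])))     # predicted dimensionless temperatures
```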
The temperature deviations are included in the objective function to force the system to converge with smaller process deviations. The stationary model is submitted to on-line learning with real process data. The real process data used to train the stationary neural network must truly represent stationary states of the process. These data are selected by checking the behavior of the output vector in relation to the input vector parameters to be used in training: if the derivatives of the outputs are larger than a predetermined value, the parameters of that sampling time are discarded. 5. CONTROL STRATEGY - DIRECT ADAPTIVE FEEDBACK CONTROL Artificial neural networks can also be used directly for the generation of control actions, representing a feedback control structure. Given an error in the response of the process, the
feedback controller produces a control action to minimize this error. This simple configuration has the difficulty of teaching the network: it is not easy to tell what the proper controller (network) output is that cancels the plant output error. To address this, an adaptive strategy is proposed to accommodate the network learning procedure, shown in Fig. 2. Initially, the neural controller is trained off-line with a set of input/output data from a classical controller; this initial learning ensures that the neural controller maintains the stability of the closed loop. The Controller Optimization module in this control strategy is similar to that of the other strategy presented in this work.
Fig. 2 - Control Strategy - Direct Adaptive Feedback Control 6. RESULTS The potential of the multivariable Direct Adaptive Network Control strategy was verified by applying perturbations to the concentration setpoints of the process, as shown in Figs. 3 and 4.
Fig. 3 - Servo Problem - Closed loop response and optimized setpoints for the temperatures
Fig. 4 - Servo Problem - Concentration closed loop response
The manipulated variables of the control strategy were the feed mass flow rate and the air/ethanol relationship and the observed variables were the temperature of the first and third orthogonal collocation points in the reactor. The objectives were the control of the
concentration at the third and seventh orthogonal collocation points. The proposed control strategy uses different sampling times for generating the control action, updating the temperature setpoints and reading the real concentration of the process. The following sampling times were used: 0.09 min for the closed loop; 2 min for updating the temperature setpoints; and 10 min for sampling the real concentration data of the reactor. Figs. 3 and 4 show the servo problem. The results of the Direct Adaptive Feedback Control are shown in Fig. 5.
[Fig. 5 plots the temperatures T(1) and T(3) and their setpoints against time (hours).]
Fig. 5 - Servo Problem - Direct Adaptive Feedback Control
According to the results, excellent performance of the proposed control strategies is observed even though the process presents a complex dynamic behaviour.
7. CONCLUSIONS
Neural networks offer a feasible alternative as soft sensor or controller design tools when there are no good and reliable deterministic mathematical models valid over all operating conditions, or when only historical input-output data are available. The control of fixed bed catalytic reactors has been considered as a case study in this work. Such systems present a complex dynamic behavior, and even so the performance of the proposed control strategy has shown to be very good over a large range of operating conditions. Hence it can be concluded that the approach considered here has great potential to be applied to processes with complex and non-linear dynamics, even when severe operating conditions are required.
REFERENCES
[1] Bhat, N., McAvoy, T., Computers & Chemical Engineering, v. 14, n. 4/5, p. 573-582, 1990.
[2] Psichogios, D. C. and Ungar, L. H., Ind. Eng. Chem. Res., v. 30, n. 12, p. 2564-2573, 1991.
[3] Baughman, D. R., Liu, Y. A., Neural Networks in Bioprocessing and Chemical Engineering, London: Academic Press, 1995, 488 p.
[4] Ender, L., Maciel Filho, R. (2001a). 6th IFAC, Cheju Island, Korea.
[5] Ender, L., Maciel Filho, R., Comput. & Chemical Engineering, v. 24, p. 937-943, 2000.
[6] McGreavy, C., Maciel Filho, R., IFAC, Maastricht, The Netherlands, p. 119-124, 1989.
[7] Toledo, E. C. V. (1999). Ph.D. Thesis, FEQ/UNICAMP, Campinas, Brazil.
Decision support system for multi-layered negotiated management in supply chain networks
A. Espuña a, M.T. Rodrigues b, L. Gimeno b and L. Puigjaner a
a Departament d'Enginyeria Química, Universitat Politècnica de Catalunya, ETSEIB, Avda. Diagonal 647, pab. G-2, E-08028 Barcelona, Spain
b Departments of Electrical and Chemical Engineering, State University of Campinas, UNICAMP-FEEC/DCA, C.P. 6101, 13083 Campinas S.P., Brazil
Abstract The management of multi-site manufacturing networks and their co-operative
supply chain systems is addressed using different structures that are conceived as module components of a holistic framework. In this way, decision support is supplied at different levels (management time horizons) and for different functionalities (general manager, marketing, production, distribution, sales, etc.); thus the system may also be used for multi-objective assistance to decision making, applicable to the different supply chain layers, from the individual single sites to global Supply Chain optimisation, including multi-site (single company) competitive strategic decisions. The result is a configurable modular system provided with interfaces that strictly adhere to consolidated and emerging standards (ISA, CO, ...). Thus, software transparency and interoperability are guaranteed, allowing easy customisation and agile reusability of existing software.
Keywords Supply Chain Management, Planning and Scheduling, Decision Support Systems
1. INTRODUCTION
Supply chain (SC) management problems span a large scope of related aspects, but papers in the literature and commercial software normally concentrate on only some of them, for example transfer prices [1,2], investment strategies [3], etc. This work addresses only the planning problem in terms of selection and allocation of production, storage and transportation tasks, considering their associated costs (production, manipulation and transportation). Related works can be found in [4, 5, 6]. Supply chain (SC) planning problems force one to consider the whole set of planning and scheduling aspects that often are not considered all together in the planning and scheduling of single plants or production lines. Of course, the large dimension of the problem imposes some sort of aggregated models at higher planning levels, but nevertheless these models have to capture the main lower level aspects to be useful. For example, each partner will accept the SC
solution only after obtaining an acceptable scheduling solution for his site. In this case detailed scheduling models of the production resources will be utilized, but they do not need to burden higher level planning analysis; instead, some aggregated view of the corresponding low level constraints has to be taken into account by higher planning levels to ensure global feasibility. We consider three major decision making areas involved in SC networks: i) choice among alternative facilities for production, purchase and transportation, ii) choice among alternative resources in given facilities and iii) choice among possible task start times on given resources. These three areas can also be seen as recipe selection, resource assignment and task scheduling, and obviously all of them can be present in problems of smaller dimension, but often they are not treated all together, in order to reduce complexity. In this way, scheduling techniques and commercial scheduling tools often concentrate on the third area, considering recipe selection and equipment unit assignment as input data. This is the case for a great number of metaheuristic solutions, constraint-based techniques and heuristic approaches for scheduling applications, like APS (Advanced Planning and Scheduling). In some cases, alternative assignments are considered but without changing the production effort in terms of number of batches or throughput for each task, meaning that recipes are not substantially altered. With respect to planning techniques, one of the main objectives is the determination of what has to be produced and when, in order to constrain the scheduling system so that production objectives will be attained. To determine the number of batches or runtime durations, current solutions often do not consider alternative recipes, since mass balance calculations require a fixed recipe. This is the case, for example, of MRP approaches in production planning or DRP approaches in transportation planning. Supply Chain techniques cannot avoid site/recipe/transportation selection problems since they are the central question to be addressed, and this leads to a huge problem if the three areas are considered all together. Some sort of decomposition seems necessary, and this paper discusses a proposal following this approach.
2. USUAL SEQUENTIAL APPROACH
A simplified usual sequential approach to SC problems is illustrated in Fig. 1. Production objectives and partners' possibilities are utilized to determine possible scenarios, selecting the sites that will be used and the production duties for each site to fulfill the demand. Then, each site can proceed to resource planning and scheduling, in order to obtain its best solution, always constrained by the production commitments given by its duties. In order to obtain feasible situations, site selection and production duties at the SC level will certainly be established taking into account, in some way, the sites' capabilities. Nevertheless, capabilities can be time variable (e.g. scheduled maintenance tasks, manpower level variations, etc.), which means that the times at which site duties are allocated have to be considered. In the same way, sites can have equipment unit assignment alternatives and different costs which have an impact on the global solution. As a consequence, the SC level should consider in some way these types of information to avoid excessive backtracking.
To discuss how the SC level can represent relevant information concerning the other two levels, it is worth considering how they work. Planning with fixed resources of a single site determines batch or task throughputs, and MRP systems are a typical example. In this case it must work with a fixed bill of materials so that mass balances can determine what has to be produced to fulfill product demands. Aggregated information at the SC level must allow selecting sites, their respective duties and the time interval for production so that a feasible solution can be obtained at the site planning level. In the proposal discussed in the next section, this information is introduced at the SC level as site production rates for the different possible operation modes, and the consequent consumption rates. Production and consumption rates have often been utilized in the literature [5]. Each operation mode corresponds to a specific assignment of equipment units to tasks.
Fig 1. Sequential approach for integration of the different decision making processes
Scheduling establishes start and finishing times for the tasks/batches to be processed. For instance, an optimization technique can be used to minimize (maximize) some operational objective, or heuristic procedures, like APS, can obtain some good compromise on user criteria. In both cases the user will surely be interested in comparing and evaluating the different solutions obtained by changing weights or heuristic procedures. The SC level will be concerned with obtaining a global solution that fulfills time commitments for end products, so this level has to estimate this time in some way. In the proposal discussed in the next section, completion times for all the selected sites are estimated in two limit situations: i) transportation of intermediates at the end of production, which is a worst case in terms of completion time but with the lowest possible burden on transportation scheduling, and ii) free transportation of intermediates from the time instant at which some minimum quantity has been produced, which corresponds to the best situation from the point of view of completion time, but which will imply frequent transportation tasks between sites.
3. SUPPLY CHAIN PLANNING LEVEL
The proposed SC planning level is mainly concerned with determining the production and storage sites selected, their throughputs and the transportation needs; the objective is to fulfill a specific demand of end products, characterized by quantities and due dates, with acceptable costs for the whole SC as well as for the partners in the SC network. This multi-objective situation is addressed through a multi-agent implementation where agents' actions are constrained by a coordinating agent which establishes the span for the agents' decisions.
3.1. Supply chain representation
Every single actor in the Supply Chain may be modeled through an individual autonomous agent, capable of making any local decisions affecting the links between this actor and the rest of the system (SC). Usually, a site will be composed of several actors with coherent objectives, so one agent may be used to model the whole site (production site, storage site, etc.) or even a multi-site organization, although in the general case different agents may be used to model different functionalities (production, financial, commercial, etc.) even within a single site. Each agent, taking into account the specific scenario in which it is working (production capacities and costs, resource availability, confirmed arrival of raw materials, etc.), will make decisions regarding the specific planning and scheduling of the site. Note that these decisions may include demand acceptance/rejection (or even fixing selling prices to fulfill future demands), emission of calls to raw material suppliers (and acceptance/rejection of their offers), etc. In order to make decisions in all these scenarios, several procedures have been tested, ranging from greedy expert systems to combinatorial optimization procedures. Taking into account the characteristics of the multi-agent structure described below, stochastic metaheuristic optimization systems have appeared as the most advisable, especially in terms of the capacity to always be able to offer reasonable solutions in reasonable times.
3.2. Multi-agent structure
Since these agents try to accurately reproduce the actions and interactions that occur in a real SC, it is unrealistic to try to address the optimization of these interactions directly; instead, it is proposed to explore scenarios through simulation techniques [7]. So the proposed approach is based on an evolutionary procedure, where each agent is allowed to reconsider its decisions while the rest of the agents also evolve. This leads to a continuously changing scenario for each agent that, on the one hand, might not converge to a final state and, on the other hand, in the case of convergence, will not necessarily lead to a desired global optimum (if such an optimum can even be defined). In order to ensure the evolution of the system towards a compromise between individual agent objectives and global objectives, two additional controlling agents have been introduced: a coordinating agent and a smoothing agent. The coordinating agent calculates the range of global costs (upper and lower bounds) by looking at the SC as a global supplier that should satisfy different internal constraints, defined through aggregation of the individual characteristics of each site. This may lead to a significant reduction in the decision making range of the individual agents. The smoothing agent enforces convergence in a similar way, but based on the assessment of the evolution of the system.
3.3. Coordinating agent
The coordinating agent selects the sites to be utilized and determines runtimes for production sites, throughput at storage sites and quantities transported between sites. The cost function minimized involves: i) production costs as a function of runtimes or quantities produced, ii) transportation costs between sites, which depend on the quantities of each state transported, and iii) manipulation costs at storage sites as a function of the manipulated mass of each state (throughput). Production sites are characterized by state production and consumption rates in each possible site operating mode. Storage sites can also operate under different operating modes that determine which states can be stored. During the time horizon each site can utilize only one mode of operation. A MILP formulation related to those presented in [4,5] has been developed [8], where a binary variable is defined for each site operational mode. Once the solver has determined the selected sites, runtimes and quantities transported, it is possible to build a kind of Gantt chart in two limit situations: i) "continuous" transportation of site output products, that is, transportation can occur immediately after site start times, and ii) transportation only after total production. Any real situation will lie between these two limits. The above formulation does not consider due date constraints. Cost minimization can lead to unacceptable runtimes since the implied completion times can prevent fulfilling due dates. If completion time estimates are not acceptable, other supply chain configurations, utilizing alternative suppliers and/or producer sites, can be preferable in spite of higher production or transportation costs. Instead of combining these two aspects in a single cost function, it seems preferable to allow the user to suggest some acceptable cost degradation and to minimize a completion-time based objective function. This reformulation includes the previous model, where the cost objective has been transformed into a constraint, plus conditions modeling completion times. This reformulation implies additional binary variables to represent transportation between sites; this increase is nevertheless small, as only one binary variable has to be introduced for each pair of sites linked by possible transportation.
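A minimal sketch of the kind of MILP the coordinating agent solves is given below, written with the open-source PuLP modeller; the paper does not specify a tool, and the sites, operating modes, rates and costs used here are invented for illustration only. One binary variable selects an operating mode per site, runtimes are continuous, and an aggregated demand for a single end product must be met at minimum production plus transportation cost.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

sites = ["S1", "S2"]
modes = {"S1": ["m1", "m2"], "S2": ["m1"]}                        # hypothetical operating modes
rate = {("S1","m1"): 10.0, ("S1","m2"): 6.0, ("S2","m1"): 8.0}    # t/h of end product
prod_cost = {("S1","m1"): 40.0, ("S1","m2"): 25.0, ("S2","m1"): 30.0}  # cost per hour of runtime
transp_cost = {"S1": 5.0, "S2": 2.0}                              # cost per tonne shipped to market
demand, horizon = 400.0, 72.0                                     # t of product, h available

prob = LpProblem("coordinating_agent", LpMinimize)
y = {(s, m): LpVariable(f"y_{s}_{m}", cat=LpBinary) for s in sites for m in modes[s]}
run = {(s, m): LpVariable(f"run_{s}_{m}", lowBound=0.0) for s in sites for m in modes[s]}

for s in sites:
    prob += lpSum(y[s, m] for m in modes[s]) <= 1                 # at most one mode per site
    for m in modes[s]:
        prob += run[s, m] <= horizon * y[s, m]                    # runtime only if mode selected

produced = {s: lpSum(rate[s, m] * run[s, m] for m in modes[s]) for s in sites}
prob += lpSum(produced[s] for s in sites) >= demand               # meet the aggregated demand
prob += lpSum(prod_cost[s, m] * run[s, m] for s in sites for m in modes[s]) \
      + lpSum(transp_cost[s] * produced[s] for s in sites)        # cost objective

prob.solve()
print({f"{s}/{m}": (int(value(y[s, m])), round(value(run[s, m]), 1))
       for s in sites for m in modes[s]}, "cost =", round(value(prob.objective), 1))
```

The due-date reformulation discussed above would then fix this cost as a constraint (with a user-specified degradation) and minimize a completion-time based objective instead.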
Fig 2. Representation of SC alternatives: a) minimum cost, b) 50% cost degradation
The kind of results obtained is shown in Fig. 2. On the left side, the result of the first step shows the selected sites, runtimes and completion times for the best and worst situations. Acceptance of a degradation of about 50% in the cost function leads to the situation shown on the right side, where other sites have been additionally selected, leading to a noticeable reduction in the time to fulfill the demand. The whole SC structure and the required active sites for each solution are also shown in the figure.
3.4. Smoothing agent and agent behavior
Since the global problem can be formulated as the simultaneous (multi-objective) optimization of each of the individual systems, the reduction of the agent activity can be progressively performed by a second mechanism based on the estimation of the Pareto optimal space of solutions for this problem. By continuously assessing the evolution of the agents' behavior, it is possible to identify patterns of good performance and also patterns of poor performance. These patterns are used to reduce the degrees of freedom of the individual agents, superimposing a kind of "tabu search" system to enforce convergence.
4. CONCLUSIONS
In order to cope with the large scope of related aspects and the multi-objective nature of SC planning, a layered approach is proposed, characterized by a multi-agent structure to represent the partners' individual interests. A coordinating agent using an aggregated global view establishes bounds on the agents' degrees of freedom, and an evolutionary procedure is proposed to ensure the evolution of the system towards a compromise between individual agent objectives and global objectives.
REFERENCES
[1] J. Gjerdrum, N. Shah and L. Papageorgiou, Industrial & Engineering Chemistry Research, 40 (2001) 1650.
[2] M. Goetschalckx, C.J. Vidal and K. Dogan, European Journal of Operational Research, 143 (2002) 1.
[3] L.G. Papageorgiou, G.E. Rotstein and N. Shah, Industrial & Engineering Chemistry Research, 40 (2001) 275.
[4] A.M. Geoffrion and G.W. Graves, Management Science, 20 (1974) 822.
[5] C.H. Timpe and J. Kallrath, European Journal of Operational Research, 126 (2000) 422.
[6] P. Tsiakis, N. Shah and C.C. Pantelides, Industrial & Engineering Chemistry Research, 40 (2001) 3585.
[7] F.D. Mele, A. Espuña and L. Puigjaner, AIChE Annual Meeting, Nov. 3-8, Indianapolis, USA (2002). (Accepted).
[8] L. Gimeno and M.T. Rodrigues, 15th IFAC World Congress, Barcelona, Spain (2002), CD.
Optimal operation strategy and production planning of multipurpose batch plants with batch distillation process
Jin-Kuk Ha a, Euy Soo Lee a and Gyeongbeom Yi b
a Department of Chemical & Biochemical Engineering, Dongguk University, Seoul 100-715, Korea
b Department of Chemical Engineering, Pukyong National University, Busan 608-739, Korea
Abstract
Fine chemical production must assure high-standard product quality and is characterized by multi-product production in small volumes. Installing high-precision batch distillation is one of the common elements in the successful manufacturing of fine chemicals, and the importance of a process operation strategy with quality assurance cannot be overemphasized. In this study, we investigate the optimal operation strategy and production planning of sequential multi-purpose plants consisting of batch processes and batch distillation with unlimited intermediate storage for the manufacturing of fine chemical products. Illustrative examples show the effectiveness of the approach.
Keywords
batch process, batch distillation, optimization, scheduling
1. INTRODUCTION
Manufacturing technology for the production of high value-added fine chemical products is receiving more and more attention as the interests of customers diversify and the demand for high quality products keeps growing. Thus, the development of advanced batch processes, which are the preferred and most appropriate way of producing these types of products, and of the related technologies is becoming more important. Fine chemical production must assure high-standard product quality, and multi-product production in small volumes is one of its characteristics. In particular, establishing raw material requirements that satisfy high quality standards, together with advanced quality control throughout the whole production process, is especially required. Therefore, high-precision batch distillation is one of the
important elements in the successful manufacturing of fine chemicals, and the importance of a process operation strategy with quality assurance cannot be overemphasized. In the last decade, many theories were proposed for the design and operation of batch processes and many papers on the production planning and scheduling of batch processes have been published [1,2,3]. However, the optimal operation strategy and production planning of processes consisting of batch processes and batch distillation has not been reported. Therefore, proposing a process structure description and operation strategy for such processes, including batch processes and batch distillation, would be of great value. In this study, we investigate the optimal operation strategy and production planning of multi-purpose plants consisting of batch processes and batch distillation for the manufacturing of fine chemical products. Multi-purpose batch plants are divided into sequential and non-sequential plants [4,5]. In a sequential multipurpose plant, the production paths of all of the products follow the same order in the sequence of units in the recipe but do not necessarily include all of the same units. In a non-sequential plant, the production paths of all of the products do not follow the same order in the sequence of units in the recipe. Among these, sequential multi-purpose plants are the most common in industry. Therefore, we propose a mathematical model for the short-term scheduling of a sequential multi-purpose batch plant containing batch distillation in a mixed product campaign. We also consider that the waste product produced in the batch distillation is recycled to the batch distillation unit to save raw materials.
2. ANALYSIS OF BATCH DISTILLATION WITH TIME SLOT APPLICATION
In the scheduling of process plants, the fundamental issue in assignment and sequencing decisions concerns the time domain representation. The definition of the time slots, i.e., the time intervals for unit allocation, is determined based on how the production sequence is defined and how the production path is selected for each batch of product. Every batch of the production is assigned to a time slot; namely, there are an equal number of time slots and required production batches. We propose a mathematical model for the short-term scheduling of a sequential multi-purpose batch plant containing batch distillation in a mixed product campaign (MPC) under the unlimited intermediate storage (UIS) policy. First of all, as every production batch occurs exactly once in the processing sequence, the following constraints must be satisfied for the binary variable X_ik:
$$\sum_{k \in K} X_{ik} = n_i \qquad \forall i \in I \qquad (1)$$
$$\sum_{i \in I} X_{ik} = 1 \qquad \forall k \in K \qquad (2)$$
where n_i is the required number of batches for product i and X_ik is defined as follows: X_ik = 1 if product i is manufactured in time slot k, and 0 otherwise. We introduce the necessary assumptions for the batch distillation processes: 1) binary processes; 2) after one batch processing of the raw material (P1), the recycle product (P4) is recycled; 3) every product produced in the batch distillation is considered as an independent batch in each unit.
Figure 1. A schematic diagram of conceptually converting a batch distillation process into an equivalent batch process
Suppose that when product P1 is processed in the batch distillation, products P2, P3 and P4 (recycle product) are produced. Products P2 and P3 are transferred to the next units, but product P4 is recycled to the batch distillation, from which further products P5 and P6 are produced. Figure 1 is a schematic diagram of conceptually converting the batch distillation process into equivalent batch processes. In order to start processing on each unit, every material must have been prepared in advance. When product P1 finishes processing on the batch distillation and products P2 and P3 are produced, although units 2 and 3 are
ready, products P2 and P3 do not start processing at the same time as product P1. Therefore, in order to assign products P1, P2 and P3 to time slots, product P1 must precede products P2, P3 and P4 in the optimal scheduling sequence. In the same way, since products P5 and P6 are products of the batch distillation of product P4, product P4 must precede products P5 and P6. When such conditions are satisfied, products P2, P3, P4, P5 and P6 can start processing on their respective first units. Products P2 and P3 must also precede products P5 and P6. Accordingly, the optimal scheduling sequence must satisfy the following precedence constraint.
$$X_{i,k} + \sum_{i' \in S_i} X_{i',k'} \le 1 \qquad \forall i \in I_s,\; k \in K,\; k' \in K \setminus \{\bar{k}\},\; k' < k \qquad (3)$$
where I_s and S_i are product sets, K is the set of time slots and \bar{k} is the last time slot.
The product pairs in the example of Fig. 1 that require this precedence via Eq. (3) are (P1,P2), (P1,P3), (P1,P4), (P2,P5), (P2,P6), (P3,P5), (P3,P6), (P4,P5) and (P4,P6). I_s denotes the set of products assigned to the first position of the product pairs in time slot k; I_s = {P1, P2, P3, P4}. S_i denotes the set of products assigned to the second position of the product pairs whose first element is product i, where i ∈ I_s; namely, S_P1 = {P2, P3, P4}, S_P2 = {P5, P6}, S_P3 = {P5, P6}, S_P4 = {P5, P6}. We also define nonnegative continuous variables ST_kj and ET_kj as the start and end times of unit j in time slot k. Then, the starting, ending and processing times have the following relationship.
$$ET_{kj} - ST_{kj} = \sum_{i \in I} P_{ij} X_{ik} \qquad \forall k \in K,\; j \in J \qquad (4)$$
where P_ij is the processing time of product i on unit j. The timing between units j and j' included in a given path is expressed in equation (5).
$$ST_{kj'} - ET_{kj} \ge -U\Big(1 - \sum_{i \in I} X_{ik}\Big) \qquad \forall k \in K,\; j \in J \setminus \{\bar{j}\} \qquad (5)$$
Equation (6) establishes the relationship between consecutive time slots k and k+1.
$$ST_{k+1,j} - ET_{kj} \ge 0 \qquad \forall k \in K \setminus \{\bar{k}\},\; j \in J \qquad (6)$$
where U is a sufficiently large positive number. For the scheduling of a sequential multi-purpose batch plant containing batch distillation under the MPC and UIS policies, the objective function is set to minimize the makespan.
$$\min \; \text{Makespan}, \qquad \text{Makespan} \ge ET_{\bar{k}j} \quad \forall j \in J \qquad (7)$$
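To make the time-slot formulation concrete, the sketch below assembles constraints (1)-(7) into a small PuLP model for a toy instance (three products, two units, invented processing times and a single precedence pair). PuLP and the data are illustrative assumptions; the paper itself solves the full example of Section 3 with LINGO.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

products = ["P1", "P2", "P3"]
units = ["U1", "U2"]
n_batches = {"P1": 1, "P2": 1, "P3": 1}
proc = {("P1","U1"): 4, ("P1","U2"): 0, ("P2","U1"): 0, ("P2","U2"): 3,
        ("P3","U1"): 2, ("P3","U2"): 2}            # hours; 0 = unit not in the product's path
succ = {"P1": ["P2"]}                               # P2 may only start after P1 (Eq. 3)
slots = list(range(sum(n_batches.values())))        # one slot per required batch
U = 1000.0                                          # big-M constant

m = LpProblem("time_slot_scheduling", LpMinimize)
X = {(i, k): LpVariable(f"X_{i}_{k}", cat=LpBinary) for i in products for k in slots}
ST = {(k, j): LpVariable(f"ST_{k}_{j}", lowBound=0) for k in slots for j in units}
ET = {(k, j): LpVariable(f"ET_{k}_{j}", lowBound=0) for k in slots for j in units}
makespan = LpVariable("makespan", lowBound=0)

for i in products:                                              # Eq. (1)
    m += lpSum(X[i, k] for k in slots) == n_batches[i]
for k in slots:                                                 # Eq. (2)
    m += lpSum(X[i, k] for i in products) == 1
for i, followers in succ.items():                               # Eq. (3): i precedes its followers
    for k in slots:
        for kp in slots:
            if kp < k:
                m += X[i, k] + lpSum(X[ip, kp] for ip in followers) <= 1
for k in slots:
    for j in units:                                             # Eq. (4): slot duration per unit
        m += ET[k, j] - ST[k, j] == lpSum(proc[i, j] * X[i, k] for i in products)
    for j1, j2 in zip(units[:-1], units[1:]):                   # Eq. (5): unit-to-unit timing
        m += ST[k, j2] - ET[k, j1] >= -U * (1 - lpSum(X[i, k] for i in products))
for k in slots[:-1]:
    for j in units:                                             # Eq. (6): consecutive slots
        m += ST[k + 1, j] - ET[k, j] >= 0
for j in units:                                                 # Eq. (7): makespan definition
    m += makespan >= ET[slots[-1], j]
m += makespan

m.solve()
order = sorted((k, i) for i in products for k in slots if value(X[i, k]) > 0.5)
print("sequence:", [i for _, i in order], "makespan =", value(makespan), "h")
```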
Therefore, the optimization problem consists of minimizing the makespan for the sequential multi-purpose batch plant containing batch distillation under the MPC and UIS policies, as represented by equation (7), subject to constraints (1)-(6). We formulated this problem as an MILP model and have tested it against several examples to show its effectiveness.
3. EXAMPLE
The scheduling problem in this paper is solved on a PC (Intel Pentium III, 800 MHz) using the solver Hyper LINGO 6.0. The structure of the example process is shown in Figure 1. The processing time, production path and number of batches of each product are given in Table 1.
Table 1. Data for the example
[Table 1 lists, for products P1-P8, the processing times (in hours) on Units 1-5, the production path of each product, and the required number of batches.]
When the proposed model was implemented, the scheduling problem of the example consisted of 80 binary variables, 181 continuous variables, and 252 constraints. The computation time was 16.0 seconds. The optimal scheduling sequence is P1-P8-P7-P4-P3-P8-P2-P7-P6-P5, as seen in Figure 2, with a minimum makespan of 45 h.
Figure 2. Optimal schedule of the example.
4. CONCLUSION
We investigated the production scheduling of multi-purpose plants consisting of batch processes and batch distillation for the manufacturing of fine chemical products. For the short-term scheduling of a sequential multi-purpose batch plant containing batch distillation under the MPC and UIS policies, we proposed an MILP model based on a priori time slot allocation. The developed methodology will be especially useful for the design and optimal operation of multi-purpose and multiproduct plants suitable for fine chemical production.
ACKNOWLEDGMENT
The authors would like to acknowledge the financial support from the Korea Science and Engineering Foundation (R01-2002-000-00007-0).
REFERENCES
[1] Pinto, J.M. & Grossmann, I.E., Ind. Eng. Chem. Res., 34 (1995) 3037
[2] Jung, J. H. & Lee, H., Comput. Chem. Engng., 20 (1996) 845
[3] Moon, S. & Hrymak, A. N., Ind. Eng. Chem. Res., 38 (1999) 2144
[4] Bok, J.-K. & Park, S., Ind. Eng. Chem. Res., 37 (1998) 3652
[5] S. B. Kim, H. K. Lee, I. B. Lee, E. S. Lee, B. Lee, Comput. Chem. Engng., 24 (2000) 1603
[6] Pinto, J.M. & Grossmann, I.E., Ind. Eng. Chem. Res., 35 (1996) 338
[7] Pinto, J.M. & Grossmann, I.E., Comput. Chem. Engng., 20 (1996) S1197
[8] Jung, J. H., Lee, H., Yang, D. R., & Lee, I., Comput. Chem. Engng., 18 (1994) 537
Dynamic Project and Workflow Management for Design Processes in Chemical Engineering
Markus Heller a, Bernhard Westfechtel a
a Computer Science III, RWTH Aachen, D-52056 Aachen, Germany
Abstract Design processes in chemical engineering are hard to support. The design process is highly creative, many design alternatives are explored, and both unexpected and planned feedback occurs frequently. Thus, it is inherently difficult to manage the workflow in design processes, i.e., to coordinate the effort of experts working on tasks such as creation of flow diagrams, steady-state and dynamic simulations, etc. Conventional project and workflow management systems support the management of design processes only to a limited extent. In contrast, the management system AHEAD is designed specifically for dynamic design processes.
Keywords process systems engineering, business decision making, workflow management 1. INTRODUCTION
Design processes in chemical engineering are hard to support. Since design processes are highly creative, they can rarely be planned completely in advance. Rather, planning and execution may have to be interleaved seamlessly. In the course of the design process, many design alternatives are explored which are mutually dependent. Furthermore, design proceeds iteratively, starting from sketchy, coarse-level designs to detailed designs which are eventually needed for building the respective chemical plant. Iterations may cause feedback to earlier steps of the design process; it may be necessary to revoke inadequate design decisions. Finally, design involves cooperation among team members from different disciplines and potentially multiple enterprises, causing additional difficulties concerning the coordination of the overall design process. Technical tools such as flowsheet editors, simulators for steady-state and dynamic simulations, etc. are crucial aids for effectively and efficiently performing design tasks. In addition, managerial tools are required which address the coordination of design processes. In fact, such tools are crucial for supporting business decision making. In the course of the design process, many decisions have to be made concerning the steps of the chemical process, the relationships among these steps, the realization of chemical process steps by devices, etc. To perform these decisions, design alternatives have to be identified and elaborated, and the respective design tasks have to be coordinated regarding their mutual interfaces and dependencies. To support business decision making, managerial tools must provide chief designers with accurate views of the design process at an adequate level of granularity, offer tools for planning, controlling, and coordinating design tasks, thereby taking care of the dynamics of design processes. Currently, many management tools are available which, unfortunately, often have not been developed for supporting complex and dynamic design processes. In fact, most of these tools are generic in the sense that they can be applied to any kind of business process (there are a few process support tools designed for chemical engineering such as e.g. n-dim [ 1] and KBDS [2]). For example, project management systems [3] assist managers in project planning and control, assuming that the project can be represented by a partially ordered set of activities. However, iteration and feedback cannot be represented in project plans. Furthermore, important
information is missing concerning, e.g., the products of design tasks such as different kinds of flow sheets and simulation models. To some extent, project management systems are useful for high-level planning and control at the level of milestones. But to support decision making effectively, other sources of information have to be exploited as well. Workflow management systems [4] have been developed for supporting routine processes performed, e.g., in banks, insurance companies, administrations, etc. A workflow management system manages the flow of work between participants, according to a defined procedure consisting of a number of tasks. It coordinates user and system participants to achieve defined objectives by set deadlines. To this end, tasks and documents are passed from participant to participant in the correct order. Moreover, a workflow management system may offer an interface to invoke a tool on a document either interactively or automatically. Workflow management systems differ from project management systems since they address detailed execution support rather than only high-level planning. Their most important restriction is limited support for dynamic design processes. Many workflow management systems assume a statically defined workflow that cannot be changed during execution. This assumption does not match the characteristics of design processes, which are highly creative, dynamic, and iterative, and therefore cannot be controlled by a static workflow defined in advance. Though this problem has been recognized in the workflow community [5], so-called adaptive workflow management systems provide at best partial solutions (e.g., based on exceptions, which, however, have to be pre-defined). Still, many of the commercial tools hardly address the problem of evolving workflows. In this paper, we present a management system which has been developed to support engineering design processes. This system is called AHEAD (Adaptable and Human-Centered Environment for the MAnagement of Design Processes [6, 7]). It has been developed in the context of the long-term research project IMPROVE [8], which is concerned with models and tools for design processes in chemical engineering. AHEAD equally covers products, activities, and resources and therefore offers more comprehensive support than project or workflow management systems. Moreover, AHEAD supports seamless interleaving of planning and execution, a crucial requirement which workflow management systems usually do not meet. Design processes are represented by dynamic task nets, which may evolve continuously throughout the execution of a design process. Dynamic task nets include modeling elements specifically introduced for design processes, e.g., feedback relationships for iterations in the design process which cannot be represented in project plans. This way, AHEAD improves business decision making since it offers a more natural, realistic, and adequate representation of design processes.
2. SYSTEM DESCRIPTION
Figure 1 gives an overview of the AHEAD system. AHEAD offers environments for different kinds of users, which are called modeler, manager, and designer. Please note that "modeler", "manager", and "designer" denote roles rather than persons. In particular, one person may play multiple roles. For example, the chief designer of a project may act both as a manager and as a designer. Similarly, management and modeling may be performed by the same person. The management environment supports project managers in planning, analyzing, monitoring, and controlling design processes. In particular, planning and execution may be interleaved seamlessly so that the dynamics of design processes may be taken into account. Since the management environment is coupled with the work environments for designers, the manager is provided with accurate and current information about the state of his project.
Figure 1. Architecture of the AHEAD system.
Management comprises activities (the design tasks and their relationships), products (the design documents and their dependencies), and resources (both human resources, i.e., the design team, and technical resources, i.e., the software tools used by the team). The management environment offers different kinds of graphical representations, e.g., trees, diagrams, and tables. The work environment supports designers in their work as far as the coordination aspects are concerned. Management and work environments are integrated through a common database which stores a graph representation of the respective design process. The work environment provides an agenda tool which displays the tasks assigned to a designer in a table containing information about state, deadline, expected duration, etc. The designer may perform operations such as starting, suspending, finishing, or aborting a task. Furthermore, he may open a work context which manages the documents and tools required for executing a certain task. The designer is supplied with a workspace of versioned documents. He may work on a document by starting a tool such as a flow diagram editor, a simulation tool, etc. The workspace may change dynamically during task execution (which may take hours, days, or weeks rather than minutes); e.g., a new version of some input document may arrive during task execution. Management and work environments are used at project run-time to execute design processes. The modeling environment allows incorporation of domain-specific knowledge about design processes into the AHEAD system. Domain knowledge is expressed in the Unified Modeling Language. With the help of class diagrams, design processes may be modeled on the type level. For example, task classes may be defined for designing flow sheets, performing steady-state or dynamic simulations, etc. In addition, recurring patterns of task instances may be defined with the help of collaboration diagrams. Using the UML model, process support may be tailored towards the respective domain. These models may evolve during execution, as well [9].
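The notions used by AHEAD (tasks with inputs and outputs, control flows, feedback flows, and hierarchical refinement of tasks) can be made concrete with a small data model. The sketch below is an invented illustration of such a structure, not AHEAD's actual graph-based implementation; the task names anticipate the Polyamide6 case discussed in the next section.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Task:
    name: str
    state: str = "Defined"                 # e.g. Defined, Active, Suspended, Done
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    subtasks: List["Task"] = field(default_factory=list)   # hierarchical refinement

@dataclass
class Flow:
    source: str
    target: str
    kind: str                              # "control", "feedback" or "data"

class TaskNet:
    """A dynamic task net: tasks and flows may be added while execution is running."""
    def __init__(self):
        self.tasks = {}
        self.flows: List[Flow] = []
    def add_task(self, task: Task, parent: Optional[str] = None):
        self.tasks[task.name] = task
        if parent:
            self.tasks[parent].subtasks.append(task)
    def add_flow(self, source, target, kind):
        self.flows.append(Flow(source, target, kind))
    def ready_tasks(self):
        # a task is ready when all of its control-flow predecessors are done
        done = {n for n, t in self.tasks.items() if t.state == "Done"}
        preds = {}
        for f in self.flows:
            if f.kind == "control":
                preds.setdefault(f.target, set()).add(f.source)
        return [n for n, t in self.tasks.items()
                if t.state == "Defined" and preds.get(n, set()) <= done]

net = TaskNet()
net.add_task(Task("Preparation", state="Done", outputs=["abstract flow diagram"]))
net.add_task(Task("DesignReaction", inputs=["abstract flow diagram"]))
net.add_task(Task("DesignSeparation"))
net.add_task(Task("DesignExtruder"))
net.add_flow("Preparation", "DesignReaction", "control")
net.add_flow("Preparation", "DesignSeparation", "control")
net.add_flow("DesignExtruder", "DesignSeparation", "feedback")  # planned iteration
print(net.ready_tasks())   # -> ['DesignReaction', 'DesignSeparation', 'DesignExtruder']
```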
3. APPLICATION
In the IMPROVE project, we are studying the design of a chemical plant for Polyamide6 as a reference process against which the developed tools (e.g., AHEAD) are evaluated. The process focuses on the early lifecycle phases, namely basic engineering and conceptual design. It is based on interviews conducted with an industrial partner, study of the literature, and our own research. To some extent, the reference process reflects current industrial practice, but it also contributes innovative aspects such as simultaneous engineering and inter-organizational cooperation. For a comprehensive description of the reference process, see [10]. Here, we focus on the managerial parts of this process, which are supported by the AHEAD system. We illustrate this process support by discussing the task net displayed in Figure 2. This task net evolves during execution, but this evolution is not shown graphically due to the lack of space. Task nets are shown to the user of the management environment of AHEAD in roughly the same way as in the figure. Tasks are represented by rectangles. Each task may have inputs and outputs shown as white and black circles, respectively. Tasks are connected by control flows which resemble precedence relationships in project plans. Feedback flows represent feedback in the design process and are oriented oppositely to control flows. Data flows connect task outputs and inputs. Hierarchical relationships are not shown explicitly in the figure; rather, they are expressed by placing subtasks below their supertasks. At the start of the design process, the manager creates an initial task net comprising only the task Preparation and its refinements. In the preparation phase, the requirements for the chemical plant are defined, a literature search is carried out, and an initial abstract flow diagram is created. Based on this information, the alternatives batch and continuous operation are compared, and a decision is performed towards continuous operation. The result of the preparation phase is an abstract flow diagram, according to which the design process may be detailed further. To this end, the manager extends the task net with tasks for designing the reaction, the separation, and the compounding (using an extruder), which are the major steps of the respective chemical process. To design the reaction, a process flow diagram is created which defines alternatives for this part of the chemical process. Only then may the task net be extended with design tasks for elaborating these alternatives (a CSTR reactor, a PFR reactor, or two reactor cascades). Simulations are carried out to evaluate these alternatives; if necessary, laboratory experiments are performed to validate the simulation models. Finally, a decision is performed with respect to the selected alternative (task Compare). The subnet for designing the separation is structured in a similar way; here, the alternatives extraction and evaporation are considered. To start the investigation of the separation step as soon as possible, an initial estimate of the output of the reaction step is made (simultaneous engineering). The subnet for designing the extruder is structured in a different way; special-purpose simulation tools are used to this end. Please note the feedback flow from DesignExtruder to DesignSeparation: it is not clear from the beginning to what extent the extruder can be used for the separation of remaining input substances.
This requires negotiation between the designer of the separation step and the designer of the extruder. After having designed all parts of the chemical process, the overall concept is jointly discussed in a final decision step. 4. CONCLUSION The sample process sketched above shows how managers and designers are supported in making business decisions concerning the plant design. It shows the typical structure of the design process: elaborate alternatives, evaluate them by simulations, laboratory experiments, and cost
calculations, select an alternative, and proceed by iterative refinement of the design.
Figure 2. Task net for the polyamide6 design process.
Decision steps are explicitly modeled as tasks. The overall task net evolves during execution. Different aspects of dynamics are taken into account, such as product-dependent task nets, simultaneous engineering, and feedback. Please note that managerial and technical activities are tightly integrated; this is necessary to provide effective decision support. Business decision making of this kind cannot be provided by conventional project and workflow management systems. In a project management system, the manager may define milestones based on a simple conceptual model of partially ordered activities. But project plans do not reflect well the characteristics of highly dynamic, iterative, and creative design processes. Furthermore, the link to the actual design activities is missing. Workflow management systems would typically constrain the design process very tightly and tend to automate the design process too far. The approach realized in the AHEAD system provides the required flexibility and promises more adequate decision support at the managerial level.
REFERENCES
[1] Arthur W. Westerberg, Eswaran Subrahmanian, Yoram Reich, Suresh Konda, and the n-dim group. Designing the process design process. Computers & Chemical Engineering, 21(S):S1-S9, 1997.
[2] R. Bañares-Alcántara and H.M.S. Lababidi. Design support systems for process engineering - II. KBDS: An experimental prototype. Computers & Chemical Engineering, 19(3):279-301, 1995.
[3] Harold Kerzner. Project Management: A Systems Approach to Planning, Scheduling, and Controlling. John Wiley & Sons, New York, 1998.
[4] Peter Lawrence, editor. Workflow Handbook. John Wiley & Sons, Chichester, UK, 1997.
[5] Dimitrios Georgakopoulos, Wolfgang Prinz, and Alexander L. Wolf, editors. Proceedings of the International Joint Conference on Work Activities Coordination and Collaboration (WACC-99), volume 24-2 of ACM SIGSOFT Software Engineering Notes, San Francisco, CA, 1999. ACM Press.
[6] Dirk Jäger, Ansgar Schleicher, and Bernhard Westfechtel. AHEAD: A graph-based system for modeling and managing development processes. In Manfred Nagl, Andy Schürr, and Manfred Münch, editors, AGTIVE - Applications of Graph Transformations with Industrial Relevance, LNCS 1779, pages 325-339, Castle Rolduc, The Netherlands, 1999. Springer-Verlag.
[7] Manfred Nagl, Bernhard Westfechtel, and Ralph Schneider. Tool support for the management of design processes in chemical engineering. Computers & Chemical Engineering, 27(2):175-197, 2003.
[8] Manfred Nagl and Wolfgang Marquardt. SFB-476 IMPROVE: Informatische Unterstützung übergreifender Entwicklungsprozesse in der Verfahrenstechnik. In Matthias Jarke, Klaus Pasedach, and Klaus Pohl, editors, Informatik '97: Informatik als Innovationsmotor, Informatik aktuell, pages 143-154, Aachen, Germany, September 1997. Springer-Verlag.
[9] Ansgar Schleicher. Management of Development Processes - An Evolutionary Approach. Deutscher Universitäts-Verlag, Wiesbaden, Germany, 2002.
[10] B. Bayer, M. Eggersmann, R. Gani, and R. Schneider. Case studies in process design. In B. Braunschweig and R. Gani, editors, Software Architectures and Tools for Computer Aided Process Engineering. Elsevier Publishers, 2002.
Environmentally Conscious Planning and Design of Supply Chain Networks
A. Hugo and E.N. Pistikopoulos 1
Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College, London, SW7 2BY, UK
1 To whom correspondence should be addressed. Tel.: (44) (0) 20 75946620, Fax: (44) (0) 20 75946606, E-mail: [email protected]
Abstract In this paper, a methodology is presented for the explicit inclusion of environmental impact assessment criteria as part of strategic investment decisions related to supply chain networks. At this level, the earliest opportunities exist for a company to influence both its future market competitiveness and environmental performance. Giving consideration not only to the traditional economic criteria, but also to multiple environmental concerns, inevitably results in the design task being formulated as a multi-objective optimisation problem. In this way, the simultaneous consideration of multiple selection criteria provides an ideal platform for exploring the trade-offs upon which environmentally conscious business decision-making can be based.
Keywords Long-Range Planning, Life Cycle Assessment, Multi-Objective Optimisation
1 Introduction
Although sustainable development can be defined in a number of ways, the fundamental ideal that it pursues, whereby economic development, improved environmental quality and social prosperity are seen as complementary, has become widely accepted. For the chemical processing industry, this paradigm shift means that step-change improvements are demanded in the type of processes developed, the efficiency with which resources are utilised and the products manufactured. However, despite the consensus about the relevance and benefits of adopting more sustainable business practices, the greatest challenge still lies in the practical application of its principles in pursuit of technological innovations. With one of the main obstacles preventing its wide-scale implementation in the past being the difficulty in measuring the progress towards sustainability, a number of initiatives have responded by proposing environmental management guidelines and performance indicators (see for example [1]). The natural next step is the application of these management strategies, aimed at the operations level, as business decision-making tools during the early stages of conceptual process and product development. As such, this paper presents a methodology for the inclusion of environmental performance criteria as part of a model for the strategic investment planning and design of supply chain networks. During these stages a manufacturing company has the earliest opportunity to influence both its future market competitiveness and environmental performance.
2 Supply Chain Analysis and the Environment
In the past, the design and management of supply chains have largely concentrated on improving market competitiveness through the use of objectives such as (i) minimising costs, delivery delays, inventories and investment, or (ii) maximising profit, return on investment (ROI) and customer service level. Decisions are generally grouped either as operational (related to production schedules, inventory levels and transportation logistics) or strategic (related to, for example, the location of production facilities and allocation of suppliers to plants), with time horizons ranging between hours and years [2]. Increasingly, researchers and practitioners have started taking note of the impact that supply chains have on the natural environment. However, despite this growing awareness, challenging research problems still remain. In a review aimed at informing operational researchers and environmental scientists of the possibilities of exchanging skills, the challenges of linking life cycle assessment (LCA) and operations research (OR) models in pursuit of more responsible supply chain management are highlighted [3].
3 Problem Formulation
Taking these challenges into account, the problem that will be considered here for the inclusion of environmental concerns within the process selection and supply chain optimisation task can be stated as follows. Given:
• a set of markets (distributors or customers) and their demands for a set of chemicals over a given time period (planning horizon)
• a set of potential plants using known technologies to produce the desired products
• a set of potential sites for locating the plants, and the availabilities of the raw material and utility suppliers over the planning horizon
then the task is to:
• design the supply chain network of the integrated production facilities that would satisfy the demand over the entire planning horizon,
such that both:
1. the net present value of the capital investment evaluated at the end of the planning horizon is maximised, and
2. the impact that the entire network has on the environment is minimised.
Finding the optimum solution to this multiple criteria problem involves (a) selecting the most appropriate technologies and their production profiles over time (b) allocating the technologies to the potential sites, and (c) assigning the products produced by the selected plants to the demands at the markets. This formulation contains elements of multi-product, multi-site supply chain network design problems [4], while also resembling strategic decision-making models for technology selection [5] and the long-range investment planning within the process industry [6].
4 Multi-Objective Optimisation Model
The mathematical programming model developed here is based upon a supply chain network superstructure consisting of a set of NM existing markets - representing either distribution centres or customers - demanding a set of NI chemical products. Also given is information regarding the location and availability of a set of NR chemical feedstock suppliers. At the centre of the superstructure is a set of NJ known chemical processing technologies (plants) that can perform the conversion of the raw materials into final products. Strategic decisions included are the selection and allocation of the plants to a set of NS potential sites, as well as the assignment of transportation links between the selected sites and existing markets. These decisions are performed in terms of a finite number of time periods, NT, constituting the long-range planning horizon during which prices, demands and availabilities of
the chemicals, and the fixed investment and operating costs of the plants, can vary. At the operational level, optimal plant expansion capacities, production profiles and the flows of materials between the various components within the supply chain are determined over the entire planning horizon. Unlike most traditional approaches where only an economic criterion is considered, the model developed here also aims at finding the design that minimises the environmental impact of the entire supply chain. Consequently, the formulation results in a multi-objective mixed integer linear programming (MOMILP) problem, allowing the inherent trade-offs between the conflicting economic and environmental objectives to be explored. As a measure of the profitability of the network, the expected net present value (NPV) of the investment required to install and operate the plants is used as the economic objective function. In contrast, the environmental objective function is based upon the environmental impact resulting from the operation of the entire network over the entire planning horizon. This is achieved by adopting the principles of LCA and using a recently developed method of damage modeling, the Eco-Indicator 99 [7], to assess the environmental impact of the network. Guided by the scope and boundary definition step of LCA, the network boundaries are expanded to incorporate a set of NK life cycle stages that includes the (a) production plants within the network, (b) transportation of raw materials to sites and products to markets, (c) generation and supply of utilities (electricity, thermal energy, etc.), (d) production of raw materials and (e) acquisition of natural resources. For each site over the entire horizon, the environmental burdens of the life cycle stages related to its operations are characterised in terms of a set of NE impact categories/indicators representing various environmental concerns. Taken from the Eco-Indicator 99 methodology, the 9 indicators incorporated in the model are: (i) carcinogenic effects, (ii) respiratory illnesses caused by organic substances, (iii) respiratory illnesses caused by inorganic substances, (iv) climate change, (v) ozone layer depletion, (vi) ecotoxic emissions, (vii) acidification and eutrophication, (viii) extraction of minerals, and (ix) extraction of fossil fuels. Arriving at the final measure of environmental performance requires the normalisation of the impacts into the three main categories (Human Health, Ecosystem Quality and Resource Depletion) and then the aggregation of these normalised categories into a single Eco-Indicator 99 Score. These normalisation and valuation steps are performed using the weighting factors corresponding to one of the three "perspectives" (Hierarchist, Individualist or Egalitarian) provided. Mathematically, the resulting two-objective-function version of the MOMILP problem can be expressed in simplified form as:
$$\min_{x}\; U\{\, f_1(x) = -\text{Net Present Value},\; f_2(x) = \text{Eco-Indicator 99 Score} \,\} \quad \text{subject to: } x \in X \qquad (P1)$$
where U is the utility function and, for notational convenience, x represents the vector of both continuous and discrete variables belonging to the feasible region X defined by both equality and inequality constraints. Note that the minimisation of the negative NPV is equivalent to maximising its original formulation. Further details concerning the exact nature of the constraints, variables and parameters defined by x ∈ X are presented elsewhere [8]. Solving the multi-objective optimisation model results in the set of trade-off solutions commonly referred to as the efficient, non-inferior or Pareto set of solutions. By definition, a point is efficient (or Pareto optimal) if no other point within the feasible set improves the value of one objective function without compromising at least one other objective. One way of obtaining this set of efficient solutions is by reformulating the problem as a parametric programming problem where only one objective function remains as the utility function while the others enter as inequality constraints [9]. The advantage of such an approach is that it does not demand setting preference weights or goals before performing the optimisation. Instead, the decision-maker is
presented with not only one solution, but a set of alternatives from which he/she can then further explore interesting trade-offs.
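As a concrete illustration of the parametric reformulation described above, the ε-constraint form of (P1) keeps f1 as the objective and scans f2; the bounds on ε below are generic placeholders rather than values from this study:

\[
\min_{x \in X} \; f_1(x)
\quad \text{s.t.} \quad f_2(x) \le \varepsilon,
\qquad \varepsilon \in [\varepsilon^{\min}, \varepsilon^{\max}]
\]

Each value of ε for which the subproblem is solved yields one candidate point of the efficient set, and multi-parametric MILP algorithms such as [9] characterise the solution as a function of ε.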
5 Illustrative Example
5.1 Network Description
In the case study that will be used to illustrate the applicability of the proposed model, two chemical products, vinyl chloride monomer (VCM) and ethylene glycol (EG), are demanded at two markets. The possible raw materials from which these chemicals can be produced are ethylene and chlorine, via a total of six potential processing technologies. Details concerning the material and energy utilisation of each technology are taken from [5]. In addition, the problem considers three potential sites for the installation and expansion of the potential technologies, while the investment will be evaluated over a period of five years. Concerning the forecasted growth patterns of the chemicals, it is worth noting that three of the chemicals (VCM, EG and chlorine) appear on the Toxic Release Inventory (TRI) of the US EPA. It can, therefore, be expected that governmental legislation and pressure from the public will discourage their future industrial use. Accordingly, a forecast scenario is used in which the availability and demand of these toxic chemicals decrease by 2% annually.
5.2 Two Objective Functions
Solving the two objective function problem (P1) using the Hierarchist perspective results in the efficient set of solutions presented in Figure 1. Clearly a conflict exists between a network design that achieves minimum environmental damage and one that achieves maximum net present value. It shows that an improvement in the environmental performance is only possible if the decision-maker is willing to compromise the net present value of the investment. It is also worth highlighting that the discontinuities in the efficient set of solutions originate from the strategic decisions included in the model and correspond to the optimal solution switching from one network structure to another. This provides the opportunity to achieve a significant improvement in investment performance at a marginal increase in environmental damage by merely reconfiguring the supply chain network structure. For example, in the region of Figure 1 with an f2-value of approximately 1.709 × 10^9, compromising the environmental impact by only 0.5% can improve the net present value of the investment by 25%.
5.3 Weighting Factor Sensitivity and Disaggregation
A topic in LCA that continues to receive increasing attention is the step during which vastly different environmental impacts are weighted into an aggregated score. While this can facilitate the decision-making process, caution must be taken and the practical implications must be noted. Within this context, the Eco-Indicator 99 methodology relies upon the concepts of Cultural Theory to manage the subjectivity of valuation and provides three different sets of weighting factors instead of one. The network design can, therefore, be assessed using three different perspectives, allowing the sensitivity of the model to the various weighting factors to be analysed as shown in Table 1. From the results of solving Problem (P1) also with the Egalitarian and Individualist perspectives, it can be concluded that for this particular case study, the set of efficient solutions depends heavily on the chosen perspective. While the differences between the Egalitarian and Hierarchist perspectives are not that pronounced, using the Individualist perspective results in the region over which the solutions are efficient being significantly shifted.
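To make the normalisation and weighting step concrete, a minimal sketch of the aggregation into a single score is given below; the damage values, normalisation references and perspective weights are illustrative placeholders, not the actual Eco-Indicator 99 factors.

# Sketch of the normalisation/weighting step (illustrative numbers only).
def eco_indicator_score(damages, norm, weights):
    """Aggregate damage-category results into a single score.
    damages : damage category -> damage value
    norm    : damage category -> normalisation reference
    weights : damage category -> weighting factor for the chosen perspective
    """
    return sum(weights[c] * damages[c] / norm[c] for c in damages)

# Hypothetical data, for illustration only
damages = {"human_health": 1.2e-3, "ecosystem_quality": 45.0, "resources": 300.0}
norm    = {"human_health": 1.5e-2, "ecosystem_quality": 5.1e3, "resources": 8.4e3}
weights = {"human_health": 0.4, "ecosystem_quality": 0.4, "resources": 0.2}

print(eco_indicator_score(damages, norm, weights))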
[Plot of the efficient frontier: f2 - Eco-Indicator 99 Score [Points] (×10^9, approximately 1.72-1.81) on the horizontal axis against f1 (×10^7) on the vertical axis.]
Figure 1: Pareto Set of Solutions - Two Objective Functions

Table 1: Sensitivity of Eco-Indicator 99 Region to Different Weighting Perspectives
                 fL [10^9 Points]   fU [10^9 Points]
Hierarchist      1.706              1.805
Egalitarian      2.143              2.271
Individualist    67.868             71.180
One way of circumventing the model's sensitivity to the weighting factors is to interrupt the impact assessment procedure before the normalisation stage, thereby avoiding aggregation altogether. Consequently, the single score can be disaggregated and the MOMILP problem can be reformulated in terms of the individual impact indicators, which are neither normalised nor weighted. While recently developed multi-parametric programming algorithms can be used to solve the reformulation, the computational effort required by a 10 objective function problem (1 economic and 9 environmental) is still considerable. Selection of the sources that contribute most significantly to the overall impact can facilitate the reduction of the problem to a more solvable size [10]. Analysis of the results shows that the carcinogenic wastes and emissions emitted from the VCM and EG production facilities and the resource depletion resulting from the operation of the network carry the greatest contributions. This allows the MOMILP problem to be expressed accordingly in terms of these environmental concerns as Problem (P2), with the efficient set of solutions presented in Figure 2.

min U { f1(x) = -Net Present Value, f2(x) = Carcinogenic Plant Emissions, f3(x) = Network Resource Depletion }
subject to: x ∈ X                                                  (P2)

6 Conclusions
The foundations for incorporating multiple environmental impact criteria into a mathematical programming based model for the strategic investment planning of chemical supply chains are presented. The model provides an ideal platform for exploring the inherent conflict between the emerging environmental concerns and the traditional economic incentives. Through the application to a case study, the issues surrounding the aggregation and valuation of vastly different environmental concerns into a single score of performance were emphasised. It was proposed that these issues can be effectively addressed using recently developed multi-parametric optimisation
algorithms in conjunction with LCA contribution analysis techniques.

[Three-dimensional plot of the efficient solutions over f1, the carcinogenic plant emissions and the network resource depletion scores.]
Figure 2: Pareto Set of Solutions - Three Objective Functions
Acknowledgements
The authors are grateful for the financial and administrative support of the British Council, the Commonwealth Scholarship Commission and BP p.l.c.
References
[1] GRI. Sustainability reporting guidelines. Global Reporting Initiative, 2000.
[2] J. Kallrath. Planning and scheduling in the process industry. OR Spectrum, 24:219-250, 2002.
[3] J.M. Bloemhof-Ruwaard, P. van Beek, L. Hordijk, and L.N. van Wassenhove. Interactions between operational research and environmental management. European Journal of Operational Research, 85:229-243, 1995.
[4] P. Tsiakis, N. Shah, and C.C. Pantelides. Design of multi-echelon supply chain networks under demand uncertainty. Industrial & Engineering Chemistry Research, 40(16):3585-3604, 2001.
[5] D.F. Rudd, S. Fathi-Ashar, A.A. Trevino, and M.A. Stadtherr. Petrochemical Technology Assessment. John Wiley and Sons, 1981.
[6] N.V. Sahinidis, I.E. Grossmann, R.E. Fornari, and M. Chathrathi. Optimization model for long range planning in the chemical industry. Computers & Chemical Engineering, 13(9):1049-1063, 1989.
[7] Pré Consultants. The Eco-Indicator 99, A damage oriented method for life cycle impact assessment. Second Edition, Amersfoort, Netherlands, 2000.
[8] A. Hugo. Environmentally conscious investment planning and design of supply chain networks. Technical report, CPSE, Imperial College, May 2002.
[9] V. Dua and E.N. Pistikopoulos. An algorithm for the solution of multiparametric mixed integer linear programming problems. Annals of Operations Research, 99:123-139, 2000.
[10] R. Heijungs and S. Suh. The computational structure of life cycle assessment. Kluwer Academic Publishers, Dordrecht, Netherlands, 2002.
Process Systems Engineering 2003 B. Chen and A.W. Westerberg (editors) © 2003 Published by Elsevier Science B.V.
Challenge Problem Approach to Business Dynamics and Decision-Making for Process Engineers
R. S. Huss,a M. F. Malone,a M. F. Dohertyb and M. M. Algerc
a Department of Chemical Engineering, University of Massachusetts, Amherst MA 01003-9303 USA
b Department of Chemical Engineering, University of California, Santa Barbara, CA 93106-5080 USA
c GE Plastics, 1 Plastics Avenue, Pittsfield MA 01210-3662 USA
Abstract We describe challenge problems for process systems engineering to address uncertainties and business dynamics in an integrated approach to decision-making. Experience with the approach shows the value of a variety of process systems methods as well as attention to both the process system and the product. For example, simulations are essential to justify proposed process retrofits and conceptual design targets are important in estimating the maximum possible cost reductions. Investments in product innovation and market development are also vital. Keywords
uncertainty, business, dynamics, education
1. INTRODUCTION
Studies of flexibility in the design and control of process systems acknowledge the fact that many process parameters depart significantly from their nominal design values over time. Dynamic global economic, market, and environmental realities, as well as the related emphasis on the development of new technologies and products, will only increase the importance of seeking robust and flexible process systems. Great progress has been made on optimization approaches to improve robustness and many results have appeared, e.g., [1, 2, 3, 4, 5, 6]. We consider problems with uncertainties that are not readily quantified, especially where innovations in products as well as processes are important.
2. UNCERTAINTIES AND DYNAMICS
Business changes and uncertainties can be as important as those in physical and chemical quantities in a process system. In addition to the important consequences arising from the uncertainty around nominal values, several dynamic factors drive process operations away from initial design values and provide incentives to modify existing plants.
[Two plots of relative cost (capital, operating and total curves) against a decision variable, one at the design stage and one in operation.]
Fig. 1. Optimal design conditions (left) differ from optimal operating conditions (right). Optimal operation is at or near a constraint, possibly far from the optimal design conditions.
2.1 Process Conditions
Even for optimized designs based on precisely known parameters, design conditions are not optimal for operations [7]. Fig. 1 compares costs related to a decision variable involved in a capital-operating cost tradeoff at the design stage with the same variable in subsequent operations. There, operating costs decrease at lower values of the process variable, e.g., reactor temperature. By changing its set point, e.g., with real time optimization and process control, profits can be increased until a constraint becomes limiting. Additional opportunities range from incremental investments in additional equipment to the design and construction of a complete alternative process system if that provides sufficient return.
2.2 Price, Cost and Innovation
Uncertainty in product price and volume is well-recognized, but difficult to quantify a priori, except that a decrease in price for a given product is generally expected as volume and competition grow, cf. Fig. 2. (Occasional increases are seen in some circumstances, but margins erode as a product matures.) This offers incentives not only to alter process conditions or equipment, but also to develop and market new products using the same or similar material. In this respect, product and process innovation are coupled. The need for product innovation may diminish the value of process specialization, e.g., a high degree of heat or mass integration, except for large volume products with well-established characteristics. There, price and volume can vary because economies of scale require the construction of a very large plant. The result can be an excess supply which can drive down prices, at least for some time. Other uncertainties such as seasonal variations also arise. When estimates of these uncertainties are possible, several existing approaches can be used to assess or design for flexibility; those processes are not the main focus of this paper. It is the goal of innovation to increase A in Fig. 2. Typically, product innovation increases price or volume by improving quality, making or defining new products from the same material, redesigning products, or providing services to enhance the effective use of the product. Along with meeting increasing volume, innovation in process systems can also increase A by decreasing costs. Many process systems engineering technologies have potential: process control, process optimization, process retrofits, new process design, and conceptual design targeting; supply chain optimization can also reduce costs. Collections of similar products often result from successful innovations; eventually product rationalization can reduce costs by eliminating some similar products. We avoid an exclusive focus on either process or product innovation; although one can be dominant at different times in the product life cycle, the dynamics of each are generally important, e.g., Utterbeck [8].
[Plot of selling price and process costs ($/lb) against time, with the margin A between them.]
Fig. 2. Realities of selling price and pressure on process costs
2.3 Business Dynamics
The factors above introduce dynamics that are difficult to forecast, and relatively large displacements can occur in factors such as environmental policy and regulation, safety constraints, etc. For all of these reasons, a strong case can be made for a focus on "business dynamics," in general, e.g., Sterman [9]. The availability of databases and information technology for process systems engineering now allows an integrated approach to decision-making in response to these dynamics. Experience shows the value of the simultaneous consideration of new business development, marketing and advertising, process control and optimization, base production improvements, research and development, process equipment retrofits, or a completely new process system. It is vital to select the most important areas among these factors and to balance short-term and long-term commitments to each of them. Some technology activities and time scales are given in Table 1. This paper describes an integrated approach to such decision-making, which we have used to teach concepts of process systems engineering to undergraduate students, graduate students, and practicing engineers and scientists; also see Huss et al. [10].

Table 1 Process Systems Technology Decision-Making
                                    Relative Risk   Relative Opportunity   Commitment
Process Optimization & Control      Low             Low                    ~1 year
Process Retrofits                   Low-Medium      Medium                 ~2 years
New Process Design & Development    High            High                   ~5 years
3. CHALLENGE PROBLEM APPROACH
We provide groups a realistic description of a process system and a business scenario, and the challenge to meet an economic goal within a time horizon. The horizon comprises several budget cycles that each typically simulate one year of performance; each spans several days to a week in real time. The process system technology and the business factors are
described with models and parameters that are both uncertain and incomplete. Some, though not all, of these factors can be refined by making investments in technology or research. An important aspect is that the many alternative process system modifications compete not only with one another for these budget commitments but also with commitments to invest in product development, marketing, etc. For each cycle, performance is evaluated to determine the outcome of decisions, but only when the decisions are soundly justified by the accompanying technical or business basis.
3.1 Example Structure
A well-tested example is essential for the business challenge problem. Unlike traditional case studies, a problem can be used repeatedly since many alternatives are generated dynamically. The initial status of the process is typically a significant deficit in net income, and the challenge is to make and justify the decisions required to generate a significant profit. An income statement, Table 2, provides a financial framework for the evaluation (greater detail is used than we show here). This includes variable and base (fixed) costs; the variable costs scale with volume while base costs are independent of volume within a budget cycle. The contribution margin is the revenue available above variable costs to cover all other budget categories including base costs, taxes, and profit. In the long term, technology investments can alter base costs in return for a related benefit, such as a reduced variable cost or increased volume. The sensitivity of profitability to the sales volume illustrated in Table 2 results from the base and variable costs; though not complicated, the result is important and often unexpected.

Table 2. Income statement and impact of sales volume. A 33% increase in volume gives a 175% increase in net income.
                                                          Volume 600 MM   Volume 800 MM
Revenues                                                  600             800
Variable Costs (Direct Materials, Conversion,
  Distribution)                                           325             399
Contribution Margin                                       275             407
Base Costs (Management, Depreciation, Other Selling,
  General and Administrative Expenses)                    200             200
Operating Margin                                          75              207
Interest Expense                                          0               0
Income Before Taxes                                       75              207
Taxes                                                     30              83
Net Income                                                45              124
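The leverage shown in Table 2 follows directly from the cost structure; a minimal sketch of the calculation (the ~40% tax rate is inferred from the table, and there is no interest expense), working from the contribution margin so the rounding of the table's cost lines does not matter:

# Net income from the contribution margin, Table 2 structure
def net_income(contribution_margin, base_costs, interest=0.0, tax_rate=0.40):
    income_before_taxes = contribution_margin - base_costs - interest
    return income_before_taxes * (1.0 - tax_rate)

print(net_income(275, 200))   # -> 45.0   (volume 600 MM)
print(net_income(407, 200))   # -> ~124   (volume 800 MM)

Because the base costs are fixed, a 33% increase in volume lifts the small income-before-taxes figure far more than proportionally, giving the 175% increase in net income.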
A key change is away from a traditional view of profitability determined by revenue and cost to a view of profits that are set by external forces. Then, the difference between income and cost must be managed to achieve the desired goals, i.e., to maximize A defined in Figure 2, over the horizon. Key tasks include (1) analyzing and quantifying opportunities for cost reduction, (2) budgeting funds for business activities such as marketing to increase demand, (3) funding research to develop lower cost manufacturing technology or new products with higher average selling prices and/or new demand, and (4) purchasing additional materials to sell as product as an alternative to investing in increased production in the current process system.
Fig. 3. Economic performance by groups and instructor for three budget cycles
The problem statement typically includes: (1) A flowsheet for the production system with noisy and incomplete measurements of species and energy flows (2) the process stoichiometry and rate model for the major reactions (3) a reactor model that simulates the output of the existing reactor given the inputs and the conditions (we use an Excel worksheet including a random error) (4) an initial income statement from the previous year typically giving a significant deficit and (5) background information. The last item contains references to potential investment choices such as ongoing R&D projects in catalysis and process modeling, environmental compliance, sales, marketing and finance. This information is also incomplete, sometimes contradictory, and anticipates results of ongoing studies, e.g., in research and development. Only critical evaluation will lead to useful decisions on investments. The group decides a plan and budget for each cycle. This includes instructions for modifying the process system and estimates of the required investments not only in technology but also in other areas. In a typical example these are: (1) Manufacturing (Process Control, Optimization, or Base Production Improvements) (2) Research and Development (New Catalyst Development, Catalyst & Reaction Analysis, New Process Design, Other R&D) (3) New Business Development (4) Marketing and Advertising (5) Other Program Items. Different or expanded categories will be appropriate for other problems.
3.2 Evaluations and Results
Feedback on each budget cycle is not completely automated, but a database (Microsoft Access) is used to apply rules for evaluation. The database drives an Excel spreadsheet with an economic model of the process to determine the income for each cycle. The database includes rules on "hidden" investments. For instance, a detailed environmental evaluation of the process is mentioned in the background information, but not explicitly listed as a category on the budget sheet. Investment in that evaluation above a minimum level avoids a large expense for compliance in the subsequent cycle.
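As an illustration of how such an evaluation rule can be encoded, a small sketch is given below; the category name, threshold and penalty are invented for illustration only, since the actual rule base is not published in the paper.

# Hypothetical sketch of a "hidden investment" rule of the kind described above.
def compliance_penalty(budget, min_env_study=0.5, penalty=10.0):
    """Return the compliance expense charged in the next budget cycle.
    budget : dict mapping a budget category name to the amount committed.
    """
    if budget.get("environmental evaluation", 0.0) >= min_env_study:
        return 0.0      # the study was funded, no surprise expense
    return penalty      # large compliance expense in the subsequent cycle

print(compliance_penalty({"marketing": 3.0}))                     # -> 10.0
print(compliance_penalty({"environmental evaluation": 1.0}))      # -> 0.0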
Fig. 3 shows typical results from three cycles for groups and for the instructor (a best reasonable case, but not the absolute maximum). Most groups remain unprofitable in the first year. Maximizing the return in a single year does not necessarily lead to the best long-term results. The best-performing groups combined engineering judgment with a certain amount of luck and common sense. For example, if a group invests sufficiently in an optimization, instrumentation and a control system to improve control on the reactor flow rates, they will produce much closer to the demand. If they realize from conceptual design targeting that some of the raw materials were not being recycled sufficiently, they will significantly reduce the materials costs. Most groups do not meet demand and lose money in the first cycle, because their demand cannot be met with the current process or because they underspecified the flow rates to the reactor. Subsequently, groups often produce more than the demand. The poorest-performing groups either fail to invest in the most important areas or continue investing heavily in an area that has already given a great benefit, such as new catalyst development, or process control and optimization after most of the available benefit is realized.
4. SUMMARY AND CONCLUSIONS
A challenge problem approach for process systems engineering accommodates uncertainties and business dynamics in a comprehensive approach to decision-making. In this work we emphasized process technology, but other aspects such as environmental, supply chain, life cycle costs, and product aspects can also be emphasized. Experience with the approach shows the value of a variety of process systems methods and tools and attention to both the product and the process system in order to quantify the anticipated benefits of decisions. For example, simulations are useful to justify proposed process retrofits and conceptual design targets are important in estimating the maximum possible cost reductions.
REFERENCES
[1] K.P. Halemane and I.E. Grossmann, AIChE J., 29 (1983) 425.
[2] Q. Xu, B. Chen and X. He, Chinese J. Chem. Eng., 9 (2001) 51.
[3] R.E. Swaney and I.E. Grossmann, AIChE J., 31 (1985) 621 and 631.
[4] M.J. Mohideen, J.D. Perkins and E.N. Pistikopoulos, J. Process Control, 7 (1997) 371.
[5] S. Ahmed, N.V. Sahinidis and E.N. Pistikopoulos, Comput. Chem. Engng., 23 (2000) 1589.
[6] C.G. Raspanti, J.A. Bandoni and L.T. Biegler, Comput. Chem. Engng., 24 (2000) 2193.
[7] W.R. Fisher, M.F. Doherty and J.M. Douglas, Ind. Eng. Chem. Res., 27 (1988) 611.
[8] J.M. Utterbeck, Mastering the Dynamics of Innovation, Harvard Business School Press (1994).
[9] J.D. Sterman, Business Dynamics, Irwin McGraw-Hill (2000).
[10] R.S. Huss, M.F. Malone, M.F. Doherty, M.M. Alger, and B.A. Watson, in Proceedings Fifth International Conference on Foundations of Computer-Aided Process Design, M.F. Malone, J.A. Trainham, B. Carnahan (eds.), AIChE Symposium Series, no. 323, v. 96, AIChE, NY (2000) pp. 163-175.
Process Systems Engineering 2003 B. Chen and A.W. Westerberg (editors) © 2003 Published by Elsevier Science B.V.
Incorporation of Flexibility in Scheduling Decision-Making
Zhenya Jia, Marianthi Ierapetritou1
Department of Chemical and Biochemical Engineering, Rutgers University
Abstract Uncertainty is a very important factor in process operations. In this paper, a systematic framework is developed to address the problem of accounting for uncertainty in the scheduling decision-making process. The objectives are to increase the schedule flexibility prior to its execution and to identify the important parameters and their effects on the scheduling performance. Two approaches are proposed: robust optimization and inference-based sensitivity analysis. The proposed formulation incorporates the consideration of solution robustness and model robustness. Examples are presented to illustrate the applicability of the proposed approach in batch plant scheduling.
1. INTRODUCTION
In chemical process industries involving batch processes, the scheduling problem becomes of great importance given the inherent flexibility of the plant. It involves the determination of the order in which tasks use units and various resources and the detailed timing of the execution of all tasks so as to achieve desired performance. However, in real plants, parameters like raw material availability, processing times, and market requirements vary with respect to time and are often subject to unexpected deviations. A significant amount of work has been done to address the issue of robustness in process design and operations. Different approaches include: the scenario based techniques that account for possible future outcomes through the use of multiple scenarios as proposed by Reinhart and Rippin [1], Shah and Pantelides [2], and the stochastic optimization approaches that attempt to utilize the probability information by employing the Gaussian Quadrature integration techniques as proposed by Straub and Grossmann [3], Ierapetritou and Pistikopoulos [4,5], and Harding and Floudas [6], or the Monte Carlo integration employed by Liu and Sahinidis [7], Diwekar and Kalagnanam [8]. Mulvey et al. [9] presented a formulation, called robust optimization (RO), to handle the tradeoff associated with solution and model robustness, which is also used in this work. Daniels and Carrillo [10] addressed the problem of β-robust scheduling in single-stage environments with uncertain processing times. Vin and Ierapetritou [11] proposed a multiperiod programming model to improve the schedule performance of batch plants under demand uncertainty. A method of sensitivity analysis for mixed integer linear programming is presented by Hooker [12], based on the idea of inference duality. This method is utilized in this paper to examine how the schedule varies with parameters, such as prices and demands, in batch plants.
2. DETERMINISTIC MODEL
In this paper, the mathematical model for batch plant scheduling is based on the model proposed by Ierapetritou and Floudas [13]. It mainly involves the following constraints:
1 Author to whom correspondence should be addressed.
Email: [email protected]; Tel.: (732) 445-2971
maximize   Σ_s Σ_n price(s)·d(s,n)                                              (1)

subject to:

Σ_{i∈Ij} wv(i,n) = yv(j,n)                                                      (2)

st(s,n) = st(s,n-1) - d(s,n) + Σ_i p^P(s,i) Σ_{j∈Ji} b(i,j,n-1)
          + Σ_i p^C(s,i) Σ_{j∈Ji} b(i,j,n)                                      (3)

Vmin(i,j)·wv(i,n) ≤ b(i,j,n) ≤ Vmax(i,j)·wv(i,n)                                (4)

Σ_n d(s,n) ≥ r(s)                                                               (5)

Tf(i,j,n) = Ts(i,j,n) + α(i,j)·wv(i,n) + β(i,j)·b(i,j,n)                        (6)

Ts(i,j,n+1) ≥ Tf(i,j,n) - H(2 - wv(i,n) - yv(j,n))                              (7)

Ts(i,j,n) ≥ Tf(i',j,n) - H(2 - wv(i',n) - yv(j,n))                              (8)

Ts(i,j,n) ≥ Tf(i',j',n) - H(2 - wv(i',n) - yv(j',n))                            (9)
In general, the objective function is to maximize the total profit as shown in (1). Constraints (2) represent the allocation of the task to the unit, whereas constraints (3) represent the material balances for each material at each event point. The capacity limitations of production units are expressed by constraints (4). Constraints (5) to (9) represent time limitations due to task duration and sequence requirements in the same or different production units. Parameters α(i,j) and β(i,j) are defined as α(i,j) = 3T(i,j) and β(i,j) = 3T(i,j)/(Vmax(i,j) - Vmin(i,j)), where T(i,j) is the mean processing time of task (i) in unit (j).
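As an illustration of how constraints such as (2) and (4) translate into code, a minimal sketch using PuLP as a generic MILP front end is shown below; the sets, capacities and event-point horizon are placeholders, not the data of the examples discussed later in the paper.

import pulp

# Placeholder sets and data -- not the example data from the paper
tasks, units, events = ["t1", "t2"], ["u1"], range(3)
I = {"u1": ["t1", "t2"]}                         # tasks that can run in each unit
Vmin = {("t1", "u1"): 10, ("t2", "u1"): 10}
Vmax = {("t1", "u1"): 50, ("t2", "u1"): 80}

m = pulp.LpProblem("scheduling_sketch", pulp.LpMaximize)
wv = pulp.LpVariable.dicts("wv", (tasks, events), cat="Binary")
yv = pulp.LpVariable.dicts("yv", (units, events), cat="Binary")
b  = pulp.LpVariable.dicts("b", (tasks, units, events), lowBound=0)

for j in units:
    for n in events:
        # allocation constraint (2): tasks started in unit j at event n switch yv(j,n) on
        m += pulp.lpSum(wv[i][n] for i in I[j]) == yv[j][n]
        for i in I[j]:
            # capacity constraints (4): batch size bounded by unit capacity when active
            m += b[i][j][n] >= Vmin[i, j] * wv[i][n]
            m += b[i][j][n] <= Vmax[i, j] * wv[i][n]

The objective (1) and the timing constraints (5)-(9) would be added in the same way before calling the solver.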
3. PROPOSED METHODOLOGY
3.1 Robust Optimization
A solution to an optimization problem is considered to be solution robust if it remains close to optimal for all scenarios, and model robust if it remains almost feasible for all scenarios. When uncertainty arises in some of the parameters, the optimal schedule of the deterministic model could become infeasible. We introduce slack variables in the demand constraints to measure infeasibility, and penalize the unmet demand in the objective using a penalty coefficient ω. Moreover, we modify the objective function to minimize the expected makespan, since penalizing the slack variables will result in demand satisfaction.
minimize   Σ_k p^k H^k + ω Σ_k p^k Σ_s slack^k(s)

Σ_n d(s,n) + slack^k(s) ≥ r(s),   slack^k(s) ≥ 0
To incorporate the consideration of solution robustness, we use the idea of upper partial mean (UPM) introduced by Ahmed and Sahinidis [14], defining the variance measure δ as follows: δ = Σ_k p^k δ^k, δ^k = max(0, H^k - Σ_k p^k H^k)
where δ^k corresponds to the positive deviation of the makespan at scenario (k) from the expected value. We use δ to penalize the variability of the objective function utilizing a penalty coefficient λ. In summary, our proposed model has the following form:
minimize   Σ_k p^k H^k + λ Σ_k p^k δ^k + ω Σ_k p^k Σ_s slack^k(s)

subject to: constraints (2)-(6)

Ts(i,j,n+1) ≥ Tf(i,j,n) - U(2 - wv(i,n) - yv(j,n))
Ts(i,j,n) ≥ Tf(i',j,n) - U(2 - wv(i',n) - yv(j,n))
Ts(i,j,n) ≥ Tf(i',j',n) - U(2 - wv(i',n) - yv(j',n))
Σ_n d(s,n) + slack^k(s) ≥ r(s)
δ^k ≥ H^k - Σ_k p^k H^k
δ^k ≥ 0,   slack^k(s) ≥ 0,   H^k ≤ U
where U is an upper bound of the makespan.
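In the model above, the scenario makespans H^k and the slacks are decision variables determined by the MILP; the sketch below simply evaluates the three objective terms for a given set of scenario outcomes, with the scenario data made up for illustration.

def robust_objective(p, H, slack, lam, omega):
    """p[k]: scenario probability, H[k]: scenario makespan,
    slack[k]: dict of unmet demand per state in scenario k."""
    expected_H = sum(p[k] * H[k] for k in range(len(p)))
    # upper partial mean (solution robustness): positive deviations only
    upm = sum(p[k] * max(0.0, H[k] - expected_H) for k in range(len(p)))
    # model robustness term: expected total unmet demand
    infeas = sum(p[k] * sum(slack[k].values()) for k in range(len(p)))
    return expected_H + lam * upm + omega * infeas

# Illustrative data: three equiprobable scenarios
p = [1/3, 1/3, 1/3]
H = [7.0, 8.5, 10.0]
slack = [{"s1": 0.0}, {"s1": 0.5}, {"s1": 1.2}]
print(robust_objective(p, H, slack, lam=0.5, omega=5.0))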
3.2 Sensitivity Analysis
The inference-based sensitivity analysis consists of two parts: dual analysis determines how much the problem can be perturbed while keeping the objective function value in a certain range, while primal analysis gives an upper bound on how much the objective function value will increase if the problem is perturbed by a given amount. For the general mixed integer problem:
minimize   z = cx
subject to   Ax ≥ a,   0 ≤ x ≤ h,   x_j integer, j = 1, ..., k                  (10)
If there exist s_1^p, ..., s_n^p that satisfy the following set of inequalities, the constraint z ≥ z* - Δz remains valid:
- for the perturbations ΔA and Δa in the parameters involved in the left- and right-hand sides of the constraints:
λ^p Σ_{j=1}^n ΔA_ij u_j^p + Σ_{j=1}^n s_j^p (ū_j^p - u_j^p) - λ_i^p Δa_i ≤ r_p
s_j^p ≥ λ^p ΔA_ij,   s_j^p ≥ -q_j^p,   j = 1, ..., n
where r_p = -Σ_{j=1}^n q_j^p ū_j^p + λ^p a - z_p + Δz_p                         (11)
- for a perturbation Δc of the coefficients of the objective function:
E
- s,
-
>
-r,
j=l
.~ >_ - A c j , sjP ~_> - q )P, j = 1, ..., n
(12)
where ~ = /~iPAij - A~cj; p corresponds to the leaf node where the dual variable of the objective function (A~) equals to 1, while AP is the vector of dual multipliers at leaf node p;
229 u~ and ~-'Pu#denote the lower and upper bound of xj at node p, respectively, zp is the objective value at node p, and Azp -- z* -- Zp. The information that needs to be extracted from the Branch and Bound solution procedure of the deterministic problem is the optimal dual solution at each node for the case that sensitivity analysis is required for all the parameters involved. The computational requirements are reduced if the parameters for which sensitivity analysis is needed are known a priori. In this case only the dual information of the nodes that correspond to nonzero dual variables of the required parameters in needed. The information obtained from the sensitivity analysis is essential for understanding the effects of uncertainty in the scheduling problem and is obtained by exploiting the information that has already been obtained from the solution of the primal problem as it will be illustrated in the example that follows in the next section. Thus, no additional computational time is required. 4.CASE STUDIES 4.1 R o b u s t O p t i m i z a t i o n Example 2 described by Ierapetritou and Floudas [13] is considered here. Uncertainty arises in the processing times of reactions in reactor 1. The mean processing times of T ( r e a c t i o n l , reactorl ), T(reaction2, reactorl ), T(reaction3, reactorl ) are 2hr, 2hr and lhr, respectively. Assume that those parameters vary by 50% about their nominal values following a uniform probability distribution. Maximum, nominal and minimum values of each processing time are considered, thus 27(3 x 3 x 3) scenarios are generated. The proposed model is solved with different values of/~ and w. The assignments of tasks and units to event points are illustrated in Table 1, whereas the optimal values of expected makespan, solution robustness and model robustness are shown in Table 2. Different sequences of tasks executed in units are obtained, when the weights of model and solution robustness terms change, thus leading to more robust scheduling in terms of model and solution robustness. 4.2 S e n s i t i v i t y A n a l y s i s Example 1 in [13] considers a single production line consisting of a mixer, a reactor and a purificator. For this example the Branch and Bound tree is constructed and the dual information is stored at each node. Based on these data and the corresponding system of inequalities, (11), (12) the following interesting results are obtained. First, the problem is solved with the objective of maximizing the production within the time horizon of 8 time units. It is found that the most critical unit of the production line is the purificator. By increasing its capacity, production will increase accordingly. The same information can be used to determine the allowable purificator size reduction that will result in limiting production decrease. For example it is found that the purificator can decrease by up to 4.24 units without reducing the profit by more than 5%. Important sensitivity information can be obtained by a simple primal analysis of the MILP problem that results in an upper bound on the optimal value given a perturbation of the problem data. In this way it is determined that a 10% increase in purificator size from 50 to 55 units, will result in a maximum of 2.75% increase in production profit from 71.158 to 73.118 units (upper bound on the objective function). 
The results of the sensitivity analysis can be also used to analyze the effects of product price in the objective function in order to investigate the relative importance of different products as well as to take decisions regarding the production levels. For this example it is
Table 1 Values of binary variables of optimal schedules
[Binary assignments wv(i,n) of each (task, unit) pair - (heating, heater), (reaction 1, reactor 1), (reaction 1, reactor 2), (reaction 2, reactor 1), (reaction 2, reactor 2), (reaction 3, reactor 1), (reaction 3, reactor 2), (separation, still) - at event points n0-n5, for the deterministic model and for the robust model with (λ = 10, ω = 0.1), (λ = 0.5, ω = 0.1) and (λ = 0.5, ω = 5).]

Table 2 Objectives of deterministic model and proposed model with different weights of robustness
                      Deterministic Model   λ = 10, ω = 0.1   λ = 0.5, ω = 0.1   λ = 0.5, ω = 5
E(makespan)           8.40                  9.26              7.28               9.23
Solution Robustness   --                    0                 0.1                0.32
Model Robustness      --                    3.94              0.74               0
For this example it is found that there is a one-to-one correspondence between price increase and objective function increase. In particular, a 5% price increase would also result in a 5% increase in the objective function. The question of how demand fluctuations affect the profit function and the production time is also analyzed. In order to consider this factor, the required dual information involves the dual variables associated with the demand constraint at the nodes where this constraint is active. It is found that a demand change from 60 to 65 units (8.3%) can be accommodated within the production line, resulting in an equivalent increase of production profit but also in a makespan increased by 2.6%, from 11.291 to 11.593 time units.
5. SUMMARY AND FUTURE DIRECTIONS
In this paper the problem of scheduling under uncertainty is addressed through robust optimization and inference-based sensitivity analysis. The objectives of solution and model robustness are incorporated using the proposed robust optimization model, while the importance of the variables and constraints is determined through sensitivity analysis. Work is under progress to develop an integrated framework for the efficient consideration
of uncertainty at the scheduling stage in order to determine a more flexible schedule and provide a priori guidance of the optimal decision when uncertain parameters change.
ACKNOWLEDGMENT
The authors gratefully acknowledge financial support from NSF under the CAREER grant 9983406.
NOMENCLATURE
Indices: i = tasks; j = units; n = event points; s = states; k = scenarios
Parameters:
Vmin(i,j) = minimum capacity of unit j when processing task i
Vmax(i,j) = maximum capacity of unit j when processing task i
pP(s,i), pC(s,i) = proportion of state s produced, consumed by task i, respectively
r(s) = market requirement for state s at the end of the time horizon
price(s) = price of state s
H = time horizon
pk = probability of scenario k
Variables:
wv(i,n) = binary variables that assign the beginning of task i at event point n
yv(j,n) = binary variables that assign the utilization of unit j at event point n
b(i,j,n) = amount of material undertaking task i in unit j at event point n
d(s,n) = amount of state s being delivered to the market at event point n
st(s,n) = amount of state s at event point n
Ts(i,j,n) = starting time of task i in unit j at event point n
Tf(i,j,n) = finishing time of task i in unit j at event point n
REFERENCES
[1] H.J. Reinhart and D.W.T. Rippin. AIChE Annual Meeting, 1987.
[2] N. Shah and C.C. Pantelides. Ind. Eng. Chem. Res., 31 (1992) 1325.
[3] D.A. Straub and I.E. Grossmann. Comput. Chem. Engng., 17 (1993) 339.
[4] M.G. Ierapetritou and E.N. Pistikopoulos. Ind. Eng. Chem. Res., 18 (1994) 163.
[5] M.G. Ierapetritou and E.N. Pistikopoulos. Ind. Eng. Chem. Res., 35 (1996) 772.
[6] S.T. Harding and C.A. Floudas. Ind. Eng. Chem. Res., 36 (1997) 1644.
[7] M.L. Liu and N.V. Sahinidis. Ind. Eng. Chem. Res., 35 (1996) 4154.
[8] U.M. Diwekar and J.R. Kalagnanam. Comput. Chem. Engng., 20 (1996) S389.
[9] J.M. Mulvey, R.J. Vanderbei and S.A. Zenios. Oper. Res., 43 (1995) 264.
[10] R.L. Daniels and J.E. Carrillo. IIE Trans., 29 (1997) 977.
[11] J.P. Vin and M.G. Ierapetritou. Ind. Eng. Chem. Res., 40 (2001) 4543.
[12] J.N. Hooker. In Lecture Notes in Computer Science 1118, Springer, 1996, pp. 224-236.
[13] M.G. Ierapetritou and C.A. Floudas. Ind. Eng. Chem. Res., 37 (1998) 4341.
[14] S. Ahmed and N.V. Sahinidis. Ind. Eng. Chem. Res., 37 (1998) 1883.
Process Systems Engineering 2003 B. Chen and A.W. Westerberg (editors) © 2003 Published by Elsevier Science B.V.
Design and Optimization of Pressure Swing Adsorption Systems with Parallel Implementation
Ling Jiang,a Lorenz T. Biegler,a V. Grant Foxb
a Department of Chemical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
b Air Products and Chemicals, Allentown, PA, USA
Abstract We implement a Newton-based approach with accurate sensitivities to directly determine cyclic steady states with design constraints. We also design optimal PSA processes by means of state-of-the-art SQP-based optimization algorithms. The simultaneous tailored approach can incorporate large-scale and detailed adsorption models and is robust and flexible. In order to improve the computational efficiency, we apply parallel computing techniques with the Message Passing Interface (MPI). The implementation of parallel computing is based on the fact that sensitivity evaluation with respect to each parameter is independent and thus can be separated. Applications of several non-isothermal bulk gas separation processes are illustrated. Keywords
pressure swing adsorption, optimization, parallel computing, sensitivity
1. INTRODUCTION
Over the past twenty years, Pressure Swing Adsorption (PSA) processes have gained increasing commercial acceptance as an energy efficient separation technique. With extensive industry applications, there is significant interest in an efficient modeling, simulation and optimization strategy. PSA systems are distributed in nature, with spatial and temporal variations that are mathematically represented by Partial Differential Equations (PDEs). These include Langmuir equilibrium isotherms, mass and energy balances, as well as detailed thermodynamic and transport models. Fixed bed systems are typically operated in a cyclic manner with each bed repeatedly undergoing a sequence of steps such as pressurization, adsorption, pressure equalization, blowdown, desorption and pressure equalization. After a start-up time, the system reaches Cyclic Steady State (CSS), at which the conditions in each bed at the start and end of each cycle are identical, and lead to normal production. Due to the intrinsic dynamics of PSA systems, the convergence rate to CSS is very slow. The traditional way to approach the cyclic steady state is to simulate a series of complete cycles until the bed conditions change very little from cycle to cycle. This successive substitution method simulates the true operation of a real plant but can be time-consuming. Several studies have attempted to optimize PSA systems. Most of them can be classified either as Black
Box or Equation-Oriented approaches. The Black Box approach is robust and capable of dealing with complicated models but the repeated CSS convergence makes it very time consuming. The equation-based approach using gPROMS [1] is efficient for simple models but non-differentiable terms and steep fronts from more complex bed equations often lead to failure of the Newton solver. In our work we develop a flexible and reliable optimization strategy that incorporates realistic, detailed process models and rigorous solution procedures.
2. SOLUTION STRATEGY
2.1 PDE discretization
In order to ensure the accurate approximation of the adsorption models, the selection of the discretization method is essential. The finite volume method [2] is used to spatially discretize the PDEs into differential algebraic equations (DAEs). It is a conservative method, which preserves the mass and energy balance in the spatial direction. In order to mitigate the numerical noise and avoid the physically unrealistic numerical smearing or oscillation near steep adsorption fronts, we use the modified Van Leer flux limiter [3].
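For reference, the standard (unmodified) Van Leer limiter has the closed form sketched below; the modification used by the authors is described in [3], so this shows only the baseline limiter.

def van_leer_limiter(r):
    """Standard Van Leer flux limiter, phi(r) = (r + |r|) / (1 + |r|).
    r is the ratio of successive solution gradients; phi = 0 for r <= 0
    (first-order upwind near extrema), and phi -> 2 for smooth regions,
    which suppresses oscillations near steep adsorption fronts."""
    return (r + abs(r)) / (1.0 + abs(r))

print(van_leer_limiter(0.0), van_leer_limiter(1.0), van_leer_limiter(10.0))
# -> 0.0 1.0 ~1.82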
234 the constraints and objective function. Also accurate gradients with respect to parameters are obtained through 2.2. The CSS is not converged until the optimal solution is reached, thus the time consuming CSS convergence loop is eliminated. The optimization algorithm in the simultaneous tailored approach is reduced space Successive Quadratic Programming (rSQP) [8,9]. Since the number of decision variables is relatively small, rSQP is well suited because it exploits the problem structure. In rSQP, the variables are partitioned into independent and dependent variables and the search direction is a combination of these two directions, dk = Y~r + Zkpz. Here pr updates the dependent variables, which serve to improve the solution of the equality constraints, while pz updates independent variables, which act in the null space of the constraints and serve to optimize the objective function. 3. PARALLEL COMPUTING WITH MESSAGE PASSING INTERFACE (MPI) 3.1 Algorithm In our case studies, the sensitivity calculation is a very time-consuming process and is the dominant cost of design and optimization. However since the sensitivity variables are independent of each other, this makes it possible to use parallel computing. Zhu and Petzold [10] compare several parallel implementation schemes for sensitivity calculation and find the distributed parameter only (DPO) approach is the fastest. In DPO approach, the sensitivity parameters are divided into different sets and distributed to different processors, with each processor running a copy of the state equations and computing a subset of the sensitivity variables. In our implementation, we use the master-slave paradigm and give the master the maximum control over the calculation process. The master is responsible for function evaluation, and new point tests for line search or trust region. Whenever the master needs to do expensive computation such as sensitivity calculation, it broadcasts the variables and all other necessary information to the slaves and gathers the results from slaves when they are done. The slaves are mostly working independently, with very little synchronization and communication cost.
3.2 Message Passing Model The message-passing model [11] posits a set of processes that have only local memory but are able to communicate with other processes by sending and receiving messages. The data transfer from local memory of one process to the local memory of another requires operations to be performed by both processes. The message passing model has the advantages of universality, expressivity, ease of debugging and high performance. MPI addresses the message-passing model and remains a collection of processes communicating with messages. 3.3 Computer Facility The parallel computing work is performed on the Beowulf computer cluster, which is built and maintained by the Chemical Engineering department at Carnegie Mellon University. Currently, the Beowulf cluster has 32 computers, with 2 servers and 30 computing nodes. Most nodes have dual 1 GHz Pentium III processors with between 0.5 and 2 GB RAM.
235 4. CASE STUDY AND COMPUTATIONAL RESULTS Applications of a single-bed, 3-step 02 VSA cycle (Case 1) and two single-bed 6-step 02 VSA industrial cycles (Case 2, 3) are used for illustration. For details on adsorption models and design and optimization problem formulation, see Ref. 3. For Case 1, we choose to maximize 02 recovery (RC) at desired purity (YP) at cyclic steady state. Product tank pressure (Ptank), valve constant (CV), step time of step 2 and 3 (T2, T3) are chosen as decision variables. The optima achieve 18% more recovery than the base conditions and 27% more recovery than the design conditions while satisfying the purity constraints. For Case 2 and 3, we choose to minimize the specific work while maintaining 95% O2 purity at cyclic steady state, subject to some pressure constraints. Flow rates F1, F2, step times Tl and T4, valve constants CV2 and CVt are chosen to be the decision variables. Table 1 and 2 show the data under base, design and optimization conditions of Case 1-3.
Table 1 Optimization results for case 1 [3] Base Design Opt.
CV 1690 834.5 966.3
Ptank
T2 (s)
T3 (s)
1.124 1.124 2.03
15.0 15.0 100
5.0 5.0 98.26
YP (%) 30.55 35.00 35.00
RC (%) 71.94 62.24 89.83
Table 2 Optimization results for case 2-3 [3]
FI
F2
CV2
CVt
Tl (S)
T4 (s)
YP (%)
Work
Design, Case 2
9819
14354
244.5
41.66
25
10
95.00
1.965
Opt., Case 2 Design, Case 3 Opt., Case 3
8322 9426 8272
14610 14424 14610
257.5 238.1 242.2
36.6 32.29 29.48
25.6 25 26.40
4.97 10 7.25
95.00 95.00 95.00
1.808 2.346 2.286
Table 3 Wall time (speedups) with multiple processors (in hours) No .of processors Design, Casel
1 1.86
2 0.86(2.2)
4 0.38(4.9)
8 0.18(10.3)
16 0.11(17.1)
32 0.07(26.4)
Design, Case 2
2.91
1.61 (1.8)
0.81 (3.6)
0.50 (5.8)
0.32 (9.2)
0.23 (12.9)
Design, Case3
6.04
3.19(1.9)
1.95(3.1)
0.93(6.5)
0.52(11.6)
0.31(19.6)
Opt., Case 1
40.8
18.0 (2.1)
8.7 (4.8)
4.7 (9.6)
1.9 (17.0)
1.3 (31.2)
Opt., Case 2
38.0
21.2 (1.9)
8.4 (4.1)
3.7 (8.7)
2.6 (14.5)
1.6 (22.8)
Opt., Case 3
68.6
37.2 (2.1)
14.8 (4.6)
9.6 (7.7)
5.2 (14.9)
4.2 (24.8)
Table 4 No. of iterations with multiple processors for optimization
No. of processors   Case 1   Case 2   Case 3
1                   55       34       51
2                   52       36       57
4                   56       31       50
8                   61       29       55
16                  43       33       58
32                  54       33       77
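The speedup factors quoted in parentheses in Table 3 are simply the one-processor wall time divided by the N-processor wall time; a sketch for the Design, Case 1 row is shown below (small differences from the table come from rounding of the printed times).

# Speedup = t(1 processor) / t(N processors), Design, Case 1 row of Table 3
wall_time = {1: 1.86, 2: 0.86, 4: 0.38, 8: 0.18, 16: 0.11, 32: 0.07}
speedup = {n: round(wall_time[1] / t, 1) for n, t in wall_time.items()}
print(speedup)   # {1: 1.0, 2: 2.2, 4: 4.9, 8: 10.3, 16: 16.9, 32: 26.6}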
Table 3 shows the wall time (speedups) with multiple processors for design and for optimization. The speedups for optimization are calculated based on time per iteration. With varying numbers of processors, the step sizes for the integrator are slightly different, which results in different levels of accuracy of the sensitivities and leads to the variance in the number of rSQP iterations that the optimizer takes, as shown in Table 4. Design performance isn't affected that much, mainly because design is less sensitive towards the accuracy of the Jacobians as long as the constraints are converged well. We find that two factors affect the speedup factor simultaneously. With fewer variables to work with, each local integrator takes fewer time steps and nonlinear iterations, and larger step sizes. Thus, the sensitivity calculation exhibits super-linear speedups. On the other hand, the non-parallelized parts such as successive substitutions, Newton iteration and function evaluation limit the potential for parallelization. The percentages of non-parallelized parts in design and optimization are 1.72% and 0.57% for case 1, 4.86% and 1.03% for case 2 and 1.91% and 0.86% for case 3. For each case, the speedups for optimization are typically larger than for design, due to the smaller percentages for optimization. For the same reason, the speed-up factors among the case studies are: Case 1 > Case 3 > Case 2. All speedup factors are sub-linear when the number of processors is large enough, because the super-linear effect from the integrator is limited and the non-parallelized effect dominates. Since the integrator takes a different number of time steps with varying numbers of processors, Amdahl's law shouldn't be used here to estimate the potential speedup factor. Amdahl's law requires that the serial and parallel programs take the same number of total calculation steps for the same input [12].
5. CURRENT WORK
Currently we are incorporating an industrial HYCO process into the optimization framework. HYCO is a five-bed system and each bed undergoes eleven steps. There are five gas components: H2, CO, CO2, CH4 and N2. H2 is the primary product and all the others are traces. We use both uni-bed and multi-bed approaches to simulate the bed behavior. The uni-bed approach consists of only one set of bed equations but with different inputs corresponding to all of the steps in the cycle. To simulate the interactions between steps among the different beds, storage buffers are used. On the other hand, the multi-bed approach considers only a portion of the cycle with several beds solved simultaneously and matching conditions imposed among
exit and entrance conditions among beds. The multi-bed approach requires a larger work space but is easier for sensitivity purposes. For optimization, the objective is to maximize the H2 recovery within purity and pressure constraints. The decision variables include the bed geometry, adsorbent diameter, valves, flow rates and step times. Fig. 1 and Fig. 2 show the mole fraction profiles of H2 and CO at the end of each process step in the cycle at cyclic steady state.
Fig. 1. Mole fraction of H2 at CSS
Fig. 2. Mole fraction of CO at CSS
Acknowledgment The authors gratefully acknowledge the financial support of the NSF/GOALI program and Air Products and Chemicals Inc. The authors would also like to thank Prof. Steinar Hauan for the support on the computing facility and Dr. Shengtai Li for his inputs to this research.
REFERENCES
[1] S. Nilchan, Ph.D. dissertation, Imperial College, London (1997)
[2] P.A. Webley, J. He, Computers and Chemical Engineering, 23 (2000) 1701
[3] L. Jiang, L. Biegler, V. Fox, submitted to AIChE J. (2002)
[4] S. Li, L. Petzold, Design of New DASPK for Sensitivity Analysis (1999)
[5] S. Li, L. Petzold, W. Zhu, Applied Numerical Mathematics, 32 (2000) 161
[6] D. Croft, M. LeVan, Chemical Engineering Science, 49 (1994) 1821
[7] H. Kvamsdal, T. Hertzberg, Computers and Chemical Engineering, 21 (1997) 819
[8] P. Boggs, J. Tolle, Journal of Computational and Applied Mathematics (2000)
[9] D. Ternet, L. Biegler, Computers and Chemical Engineering, 22 (1998) 963
[10] W. Zhu, L. Petzold, Concurrency: practice and experience, 11 (1999) 571
[11] W. Gropp, E. Lusk, A. Skjellum, Using MPI: portable parallel programming with the message-passing interface, Cambridge, MIT press (1999)
[12] Y. Shi, Computer and information sciences department, Temple University (1996)
238
Process Systems Engineering 2003 B. Chen and A.W. Westerberg (editors) 9 2003 Published by Elsevier Science B.V.
Risk Analysis of Chemical Process Using Multi-distinction Equipment Screening Algorithm
Ku Hwoi Kim,1 Sung Joon Ahn,2 Myung Wook Shin3 and En Sup Yoon2
1 Institute of Chemical Processes, Seoul National University, Korea
2 School of Chemical Engineering, Seoul National University, Korea
3 KOGAS, Korea
ABSTRACT In this research, we try to suggest how to reduce both the factors behind an accident and the damages caused by the various kinds of accidents that might take place in a chemical process. In other words, by using process data such as materials, operating conditions, flow rates and failure rates, we suggest an algorithm that decides a ranking of the degree of danger of each device in the process. Ultimately, the hazard ranking of all devices in the process can be obtained by the algorithm mentioned above. The risk ranking obtained through this qualitative method can contribute to the following fields. Firstly, it can be used for repair and maintenance in the real industrial field. Secondly, it suggests several ways to distinguish the initially hazardous devices from all devices in the process, in order to perform both qualitative and quantitative hazard evaluation of industrial facilities. Thirdly, it reduces the total evaluation time by providing the right guideline for distinguishing risk. Finally, it contributes to establishing the foundation of the risk management system, that is MESA (Multi-distinction Equipment Screening Algorithm), which should be able to reduce to a minimum the danger from man-made disasters and natural calamities.
Keywords: MESA, QRA, Risk Ranking
1. Introduction
In this research we have focused on the following purposes. Firstly, by using items that distinguish the degree of risk in the process from the facilities point of view, such as material, operating condition, flow rate and failure rate (taking the equipment age into account), we identify which device carries the worst danger. Secondly, by combining all conditions and ranking the degree of danger of each device in the process, the method supplies good information for the user's decision making. Finally, by identifying the most dangerous device through the degree of risk according to accident rate, it helps the user prepare to cope with any situation. All of these purposes are of interest for real industrial plants and for the operation step. Here we assume that the danger caused by the following devices is not considered:
1. Devices which, even though they might carry only a little danger in themselves, may have a far-reaching influence on other equipment when the danger spreads (pump, pipe and compressor).
2. Devices which try to resolve danger by themselves as a kind of safeguard (control valve).
3. Devices which report the danger in advance (alarm, control-variable monitoring system, including sensors).
4. Devices which minimize the danger when an accident takes place (mitigation equipment: rupture disc, safety valve, safety barrier, emergency by-pass system).
5. Devices which prevent the source of an accident from happening (prevention devices: reaction restrainer and counteractive injection device).
6. Elements which might exert a dangerous influence from outside (operating air pressure of a compressor).
7. Natural calamities (earthquake, landslide and flood) and extraordinary dangers (terror, war and strike).
The main purpose of this research is to find the risk inherent in each device and, especially, to identify which devices could have a far-reaching influence on other equipment when the danger spreads; preparations for these and the other affected devices are not addressed here. This is the reason a screening algorithm is used to distinguish the devices with the largest latent degree of danger.
2. MESA (Multi-distinction Equipment Screening Algorithm)
The devices listed above were omitted from the screening process because the danger they carry is much less serious than that of the pumps and the other equipment we consider. Each item among material property, flow rate, operating condition and failure rate (age) is ranked on three levels (A: dangerous, B: cautious, C: safe), and each device then obtains its risk from a matrix describing the risk, namely the expert-system database. After sorting the results according to the risk standard in the analysis-algorithm list, we can find which equipment is the most dangerous as the final result and then input this data as the accident-propagation effect of the reasoning model (Fig 1. MESA (Multi-distinction Equipment Screening Algorithm) outline). The matrix relating the influence of the risk and the association of the devices is built on the basis of the above-mentioned MESA, and then, according to the association of the risk per device, the general risk-screening method deduces conclusions such as the following example from the matrix database based on the expert system (Table 1. The Matrix related to the influence of the risk).
2.1. Material Property
The judgment of the risk of a material property considers both Nf (flammability hazard rating) and Nr (reactivity hazard rating) in the NFPA (National Fire Protection Association) code and then builds a matrix ranking them as A, B and C, as follows.
Fig 1. MESA (Multi-distinction Equipment Screening Algorithm) outline
Table 1. The Matrix related to the influence of the risk
2.2. Flow Rate
The risk index for flow rate is calculated with the following steps:
1. Calculate an equivalent radius after converting the device volume into a spherical volume.
2. Calculate the corresponding circle area.
3. Calculate the Heat Propagation Flux, H.P.F. = (mass of storage + flow rate × 60 seconds) / circle area, where the standard for the Heat Propagation Flux is based on the KOSHA (Korea Occupational Safety and Health Agency) code.
The Heat Propagation Flux is calculated from the flow rate and the storage, so the amount of material affects the flow-rate risk index. The virtual spherical explosive Heat Propagation Flux deduced above is classified on the basis of the two dangerous situations prescribed by KOSHA: by the anatomical injury standard, exposure to a heat flux of 37.5 kW/m2 for 10 minutes results in death, and exposure to a heat flux of 4 kW/m2 for 10 minutes results in burns. Here, however, we use 1 minute for the calculation, following the SFPE (Society of Fire Protection Engineers) guidance.
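As a rough illustration of these three steps, the following sketch computes the flux and an A/B/C rank. The sphere-to-circle construction, the 60 s basis and the 37.5 and 4 kW/m2 thresholds come from the text above; the way the heat of combustion is used to close the units, the rank cut-offs and the example numbers are assumptions made here.

```python
import math

# Hedged sketch of the Section 2.2 flow-rate risk index.
def heat_propagation_flux(volume_m3, mass_stored_kg, flow_rate_kg_s,
                          heat_of_combustion_kj_kg):
    radius = (3.0 * volume_m3 / (4.0 * math.pi)) ** (1.0 / 3.0)   # step 1
    circle_area = math.pi * radius ** 2                           # step 2
    released_mass = mass_stored_kg + flow_rate_kg_s * 60.0        # step 3
    energy_kj = released_mass * heat_of_combustion_kj_kg
    return energy_kj / 60.0 / circle_area                         # kW/m2 over 1 minute

def flow_rate_rank(hpf_kw_m2):
    if hpf_kw_m2 >= 37.5:   # lethal exposure level in the KOSHA criterion
        return "A"
    if hpf_kw_m2 >= 4.0:    # burn-injury exposure level
        return "B"
    return "C"

# Example: a 5 m3 propane vessel holding 200 kg with a 0.5 kg/s throughput.
hpf = heat_propagation_flux(5.0, 200.0, 0.5, 46_350.0)
print(f"H.P.F. = {hpf:.0f} kW/m2, rank {flow_rate_rank(hpf)}")
```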
2.3 Operating Condition
With regard to the ranges of temperature and pressure, which are the operating conditions of a process, temperature is divided into three risk levels based on the FP (flash point) and BP (boiling point), and pressure is classified using 10 kg/cm2 and 0 kg/cm2 as standards. In distinguishing temperature in the risk matrix, the standard is based on the FP (flash point) and BP (boiling point), at which the phase is about to change. In distinguishing pressure, the standard is based on the Enforcement Ordinance of the High Pressure Gas Safety Management Act promulgated in 1984. For pressure, the vacuum range, -1 < P < 0, carries a risk of its own during operation depending on the material property and phase (a highly explosive reaction with air is possible if reactant suddenly leaks under vacuum conditions when Nf is 1 and Nr is 2-3), so this condition is labeled "caution needed". Since the risk is proportional to the ranges of temperature and pressure (the wider the range, the more dangerous), the corresponding matrix is constructed accordingly. (If the input is two-phase (gas-liquid), the liquid-phase matrix, which includes all of the outbreak dangers, has to be applied.)
2.4. Failure Rate (Age)
With regard to the standard for judging risk from the failure rate of each device, the failure rate is taken as data that include pressure durability (inner and outer), causticity and the stress durability against outside-air temperature changes. The accident expectation results from multiplying this failure rate by the age (Guidelines for Process Equipment Reliability Data, CCPS of AIChE). We assume that the risk is proportional to the failure rate of a device over time when setting the scope of the accident expectation. Considering that the failure rates per device over 10^6 hours (about 114 years) in the Guidelines for Process Equipment Reliability Data of the CCPS of AIChE range from 0.01 to 3000, the standard for main equipment is set as the expectation of exactly one failure over 10^6 hours, and the danger-index thresholds are then set in steps of one occurrence per 100. The matrix for the danger-index thresholds follows from this, and the formulation of the failure-rate index per device is:
FR = (failure rate per device) × (expected life span of device)
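A minimal sketch of this accident-expectation index follows, assuming CCPS-style failure rates per 10^6 operating hours; the A/B/C cut-offs used below are an interpretation of the "one occurrence per 100" wording, not values given in the text, and the example data are hypothetical.

```python
# Hedged sketch of the Section 2.4 accident-expectation index:
# FR = (failure rate per device) x (expected life span).
def accident_expectation(failures_per_1e6_h, expected_life_h):
    return failures_per_1e6_h * expected_life_h / 1.0e6

def failure_rate_rank(expected_failures):
    if expected_failures >= 1.0:      # one or more failures expected over the life
        return "A"
    if expected_failures >= 0.01:     # one failure per hundred lifetimes
        return "B"
    return "C"

# Example: a pump with 30 failures per 10^6 h and a 20-year (175,200 h) life.
fr = accident_expectation(30.0, 20 * 8760)
print(f"expected failures over life = {fr:.2f}, rank {failure_rate_rank(fr)}")
```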
2.5. Rule-based incentive or penalty
What we suggest in this research is basically a weight-factor based method, which first adds up all the points given according to the weight factors and then appends an incentive or penalty based on several rules. This method has the advantage of making the risk ranking easy and intuitive according to a rule base, instead of wasting time trying to obtain weight factors that can explain every case.
3. Case study (LPG underground storage and shipping)
Fig. 2 is the process map of a plant that receives LPG from an ocean-going vessel, keeps it in an underground storage space, and then ships it out through the ground tank. We evaluated the risk of each device on the basis of the theory introduced above; a small illustrative sketch of the scoring follows.
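The sketch below shows how such a weight-factor score with rule-based incentives and penalties might be computed. The A/B/C point values (25/15/5) follow the case study; the weights, the rules and the device ranks shown are purely hypothetical.

```python
# Hedged sketch of the Section 2.5 ranking scheme: A/B/C ranks per item are
# converted to points, weighted, summed, and adjusted by simple rules.
POINTS = {"A": 25, "B": 15, "C": 5}
WEIGHTS = {"material": 1.0, "flow_rate": 1.0, "operating": 1.0, "failure": 1.0}

def risk_score(ranks):
    """ranks: dict such as {'material': 'A', 'flow_rate': 'B', ...}."""
    score = sum(WEIGHTS[item] * POINTS[rank] for item, rank in ranks.items())
    # Rule-based incentive/penalty (hypothetical rules for illustration):
    if ranks["material"] == "A" and ranks["operating"] == "A":
        score += 10          # flammable material at severe operating conditions
    if all(rank == "C" for rank in ranks.values()):
        score -= 5           # uniformly safe device
    return score

devices = {
    "P-2 pump":     {"material": "A", "flow_rate": "B", "operating": "A", "failure": "B"},
    "T-2 dry tank": {"material": "A", "flow_rate": "A", "operating": "B", "failure": "B"},
    "V-1 valve":    {"material": "B", "flow_rate": "C", "operating": "C", "failure": "C"},
}
for name, score in sorted(((n, risk_score(r)) for n, r in devices.items()),
                          key=lambda pair: -pair[1]):
    print(f"{name:12s} {score:5.1f}")
```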
Fig 2. LPG underground storage and shipping processing outline and detail map
The materials used in this case study are propane (C3H8), nitrogen (N2), methanol (CH3OH), dimethylsulfide ((CH3)2S) and water (H2O). After examining the Nf and Nr of each one, we determine the hazard rank. For the flow rate, when a material is stored in equipment to some degree, this amount is calculated first and then the flow rate is multiplied by the heat of combustion. The devices used in the entire process are divided into pumps, heaters, tanks, T-2 (dry tank), valves and so on; the cut area of the volume or flux of each device, that is, the imaginary sphere radius and surface area, is calculated, and finally the flow rate multiplied by the heat of combustion is divided by this area. Then, after examining the temperature and pressure on the basis of the material conditions flowing in each device, the hazard rank is decided. Because there is no information on the failure rate of the devices in the LPG process, the failure-rate hazard rank was determined by assuming that the LPG process data are similar to LNG process data. Each rank was assigned points, A (25 points), B (15 points) and C (5 points), and the failure-rate hazard rank was then determined after giving a weight factor to each one according to a heuristic rule. Deciding the overall ranking in this way, it appears that the in-cavern pump, the T-2 (dry tank) and the T-3 (dry tank) are the most dangerous elements, and thus quantitative estimation of these devices should be performed first.
4. Conclusion
Applying the Multi-Distinction Equipment Screening Algorithm to the LPG storage facility shows that the risks of P-2 (pump), T-2 (dry tank) and T-3 (dry tank) are high. After analyzing the influence and the causes resulting from off-design operation of the special equipment mentioned in the case study, that information can be used in accident scenarios for quantitative risk analysis. Therefore, by considering this information at the process design step, savings in human and material resources could be achieved, and a scenario decision method based on equipment and material properties could be built into a quantitative process-safety
evaluation system; this scenario decision method will also be able to support influence evaluation outside the industrial zone (Table 2. The dangerous ranking per equipment).
Table 2. The hazard ranking per selected equipment
References
EPA (1996). RMP Offsite Consequence Analysis Guidance. EPA.
Murphy, J.F., & Zimmermann, K.A. (1998). Making RMP Hazard Assessment Meaningful. Process Safety Progress, 17(4), 238-242.
CCPS (1994). Guidelines for Evaluating the Characteristics of Vapor Cloud Explosions, Flash Fires, and BLEVEs. CCPS of the AIChE.
CCPS (1992). Guidelines for Hazard Evaluation Procedures (2nd ed.). CCPS of the AIChE.
AIChE (1994). Dow's Fire & Explosion Index Hazard Classification Guide (7th ed.). AIChE, New York.
Catino, C.A., & Ungar, L.H. (1995). Model-Based Approach to Automated Hazard Identification of Chemical Plants. AIChE J., 41, 97-109.
Durkin, J. (1994). Expert Systems - Design and Development. Macmillan Publishing Company, New York.
Oh, Y.S., Kim, K.H., Yoon, Y.H., & Yoon, E.S. (1997). The Changing Trend of Gas Safety Management System in Korea. Proceedings of the 1st International Symposium on Gas Safety, 1-9.
Society of Fire Protection Engineers (1995). The SFPE Handbook of Fire Protection Engineering (2nd ed.). National Fire Protection Association, 5, 111.
Process Systems Engineering 2003, B. Chen and A.W. Westerberg (editors) © 2003 Published by Elsevier Science B.V.
Quality Improvement in the Chemical Process Industry using Six Sigma Technique Minjin Kim, Young-Hak Lee, In-Su Han, Chonghun Han Department of Chemical Engineering, Pohang University of Science and Technology, Pohang, Kyungbuk 790-784, Korea ISYSTECH, Ltd, Pohang, Kyungbuk 790-784, Korea
Abstract Although Six Sigma has been so successful in many organizations, the successful applications of Six Sigma are rare for the chemical processes due to highly nonlinear causality. This paper provides a successful case for the chemical process industry in which degraded quality has been improved by performing the proposed Six Sigma strategy. The strategy consists of required techniques in each phase and the techniques include not only existing tools such as 5M1E approach, partial least squares (PLS), and design of experiment (DOE) but also a new methodology to discover exact key sources of quality degradation. This case study has been a great success; from 3.5 initial sigma to 5.5 sigma has been achieved by applying the proposed methodology to the quality degradation problem. Keywords Quality Improvement, Six Sigma, Multivariate Statistical Analysis, Design Of Experiment 1. INTRODUCTION In the last century, there has been a tremendous evolution of the ideas to improve quality in any organization. Conventional tools for product quality improvement, such as quality control (QC), total quality control (TQC), and total quality management (TQM), concentrate on reducing off-specialties of the final products rather than on removing defects during a process of production. It has cost a great deal grounded in fact that the amount of total produced products is two times or more the amount of a final shipping products. Six Sigma is so successful when other traditional methods often fail, since Six Sigma focuses on bottom line success and looks directly at the tools required to achieve this success [ 1]. The concept of implementation of Six Sigma methodology was pioneered at Motorola in the 1980s with the aim of reducing quality costs. It has since been adopted and generalized by a number of companies, such as Texas Instrument, Allied Signal, GE, Sony, ABB, Nokia, LG, and SKC [2]. It is a business decision making (BDM) strategy used to improve profitability, to drive out waste in business processes and to improve the efficiency of all operations that meet or exceed customers' needs and expectations. It is clear that Six Sigma is a great strategic framework for a quality innovation. However, it is poor to investigate the detail cause of defects and to guide the direct ways to improve process and quality. Actually, it is rare for Six Sigma to be applied to the large and complex chemical processes successfully because of its highly nonlinear causality. In the chemical process, many kinds of factors have been shown to influence quality of the final products,
245 but definition of quantitative relationships involving factor parameters has been elusive. Montgomery asserts that the next crucial goal of Six Sigma is to expand the base of problems and projects [3]. In the long run, to really accomplish the simultaneous objectives of Six Sigma, practitioners will not only have to gain a solid understanding of additional statistical tools, but knowledge of process engineering and operations research techniques [3]. The traditional quality improvement methods of chemical process can be enhanced by the use of Six Sigma strategy. Many of the chemical processes are already being used by systems engineers including cost modeling, multivariate statistical analysis, DOE, and design for manufacturing and assembly. Integrating these and other standard tools into an effective cost reduction discipline is a job best performed by systems engineers. The proposed methodology is also organized into five parts like a general Six Sigma, namely the definition phase, measurement phase, analysis phase, improvement phase, and control phase [4-5]. The method begins by defining that aspects of the process are "critical to quality (CTQ)". The second phase covers Six Sigma measurement using basic tools such as filtering, sensor reconciliation, normality test, and so on. The logical order within this phase is as follows: translate the problem into a measurable characteristic; and validate the measurement system. The analysis phase pursues two objectives: 1) identify variation sources through seasonality analysis, feed analysis, disturbance analysis, and alarm/shutdown analysis; and 2) screen potential causes that are taken along in the experiments in the next phase using MSPC, PCA, PLS, and correlation analysis. The order of the improvement phase is as follows: 1) discover variables relationships by DOE, plant implementation, analysis of variation (ANOVA), response surface method (RSM), and core group interviews; and 2) establish operating tolerances using the validated model. If the result does not meet the objectives, a return to a previous step is required. Finally, monitoring and control procedures are put in place to ensure continuing quality. The target process of this paper is a purified terephthalic acid (PTA) process of Samsung Petrochemical Corporation. Customers have complained about narrow and inconsistent particle size distribution (PSD) of PTA, and thus our goal specification is decided to increase the current PSD by six percent and to maintain that consistently. After improving the quality, the sigma level of the target quality has been changed from 3.5 sigma to 5.5 sigma. Finally, 92 percent of the goal has been achieved by applying the proposed quality improvement methodology. 2. THEORITICAL BACKGROUND 2.1. Six Sigma Six Sigma is a disciplined and highly quantitative approach to improving product or process quality. The original motivation for Six Sigma at Motorola was centered around manufacturing improvement, and this was also hos Six Sigma was originally introduced in GE. However, important as manufacturing cost reductions may be, they often do not readily translate into improvements that are transparent to the customer. GE management came to realize this [2]. It typically involves the four stages: Measure, Analyze, Improve, and Control, with and up-front stage (Define) sometimes added (DMAIC). In brief, these steps are as follows [2]: 1) Define the problem to be solved, including customer impact and potential benefits. 
2) Identify the critical-to-quality characteristics (CTQs) of the product or service, verify measurement capability, and baseline the current defect rate and set goals for improvement. 3) Understand root causes of why defects occur, identify key process variables that cause defects. 4) Quantify influences of key process variables on the CTQs, identify acceptable limits of these variables, and modify the process to stay within these
246 limits, thereby reducing defect levels in the CTQs. 5) Ensure that the modified process now keeps the key process variable within acceptable limits, in order to maintain the gains long term. 2.2. Partial Least Squares Methods Partial least squares method is a sort of the multivariate statistical projection methods, which are recognized as state-of-the-art analysis and modeling techniques and are quite popular in chemical industry [6-7]. In this study, PLS method will be mostly used among these multivariate statistical projection methods. PLS method has been widely used as a powerful tool for constructing empirical models from lab and field measurement data because the resulting models are typically more robust and reliable than those using other modeling tools, such as ordinary least squares method, particularly when the data are noisy and highly correlated with each other [7]. 3. PROCESS DESCRIPTIONS The PTA manufacturing process is used to purify terephthalic acid produced by the bromine-promoted air oxidation ofp-xylene. The main impurity in the oxidation product is 4-formylbenzoic acid (4-CBA) and the process removes this to less than 25 ppm. Metals and colored organic impurities are also almost completely removed by the purification. Fig. 1 shows a simplified process flow diagram of the process consisting of major five unit processes. Crude terephthalic acid and water are fed to a mixing tank to form a slurry of at least 15 wt% terephthalic acid. The slurry is pumped to heat exchangers, which raise the slurry temperature sufficiently for the terehpthalic acid to dissolve. The solution flows through a hydrogenation reactor that contains a palladiumon-carbon support catalyst. Reactor temperature is held above the partial pressure of steam to maintain a liquid phase. In the reactor, 4-CBA is hydrogenated to p-toluic acid, and various colored impurities are hydrogenated to colorless products. The catalyst is highly selective; the loss of terephthalic acid by carboxylic acid reduction or ring hydrogenation is less than 1%. The overall effect of the hydrogenation is conversion of impurities to forms which remain in the mother liquor during the subsequent crystallization step. The terephthalic acid is purified by crystallization in a series of vessels where the pressures and therefore the temperature is sequentially decreased. As was noted above, impurities remain in the mother liquor for the most part. The purified terephthalic acid crystals are recovered by centrifugation or filtration, followed by drying of the wet cake. Overall yield of a whit, free-flowing powder is greater than 98%.
Fig. 1. Simplified process flow diagram of the purified terephthalic acid process.
Fig. 2. Quality improvement procedure with Six Sigma techniques.
247 4. QUALITY IMPROVEMENT The process is defined by five major steps like a typical procedure of Six Sigma: Define, Measure, Analyze, Improve, and Control. These steps are set up by combining several techniques into an effective quality improvement tool (Fig. 2). 4.1. Definition Associated with the wider scope of application is customer focus. This is emphasized repeatedly in Six Sigma in terms of CTQ; improvements will make sense only if they are directly related to some CTQs. Defining what is required from the target process is the key to quality improvement. In this study, the CTQ is identified as the particle size distribution (PSD) which is one of the important quality variables and which affects the product quality in the down stream processes that manufacture poly ethylene terephthalate, polyester, filament, fiber or film from PTA. From the analysis of historical data, it has found that the performance of the current process is around 3.5 sigma. By investigating the voice of the customers, it was determined that we have to improve the product quality up to at least 5 sigma. 4.2. Measurement Various types of information such as qualitative data, observational quantitative data, and experimental quantitative data are exploited in the identification step. A useful tool for the collection of the qualitative data to identify possible sources of quality degradation (without any prejudice to the level of their effect on output) is to interview the relevant personnel for the possible inputs that may affect the output. A good approach for a chemical process is to use the 5M1E (Man, Materials, Manufacture, Machine, Measurements, and Environment) format. Interviews have been done with a total of fourteen people including operators and engineers. The real-time database (RTDB) system and the laboratory information management system (LIMS) have been running to collect the observational quantitative data on the process and quality variables for the whole process. 4.3. Analysis After completing the measurement phase, the key sources that significantly affect the degradation of PSD should be identified to improve PSD. First, core sources have been qualitatively identified using a fishbone diagram. The process is analyzed to generate clues about degradation sources, thereby exploiting results from the interview. Fig. 3 indicated the inputs which can potentially affect the output. Note that the nine variables (DP 1, DP2, TA, TS, P1, P2, DP, DP5, and TF) with large score values greater than 2 are selected as potential input variables. Second, a quantitative identification of degradation sources has been performed using multivariate statistical analysis methods such as regression, variable selection, discriminant analysis, correlation analysis, histogram, cycle-time analysis and variable trend analysis. To identify the correlations between the PSD and a total of 22 process variables for the slurry preparation tank, the purification reactor and the six crystallizers, PLS models are built from the collected quantitative data in the measurement step. After performing cross-validations using the data matrices, the final PLS model has five latent variables and explains 78.8% of the variance in PSD. 
Note that the PLS modeling results are very good if we consider the fact that there should exist lots of sensor faults and measurement errors in the process and quality variables of actual processes and that the PLS models attempt to explain the "common-cause" variations [9] in the process data and to exclude the random variations and measurement errors that are uncorrelated
with other process and quality variables. We can select the candidates for key sources among the total variables by using a variable importance in the projection (VIP) plot (Fig. 4). Note that the eight variables (DP2, DP, TA, TS, DP1, DP5, TF, and P1) with large VIP values greater than 1 are selected as the candidates, since they are the most relevant for explaining Y. To identify as many sources of degradation as possible, the selection of the major sources of degradation is made on the basis of prior knowledge and statistics. The resulting two factors (DP1 and TS) are obtained by comparing the results of qualitative identification (5M1E), quantitative identification (VIP), and controllability, which are shown in Table 1.
4.4. Improvement and Control
Two kinds of small single-factor experiments were performed after designing a DOE. Obviously, interaction terms are not considered in these experiments. The first experiment was carried out to determine whether there was any possible relationship between the pressure-difference variable of the first crystallizer and the PSD. Decompression of the first crystallizer was carried out at five different pressures of the first crystallizer and at the average condition of the other factors over the past year. Fig. 5 shows the results of an experiment correlating the decompression of the first crystallizer with the change of the PSD.
Fig. 3. 5M1E approach to the identification of quality degradation sources.
Fig. 4. VIP plot for PSD
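For readers who want to reproduce this kind of VIP screening, the following is a minimal sketch using scikit-learn's PLSRegression and a standard VIP formula on synthetic data; the data, the variable indices and the 5-component/22-variable setup merely mirror the description above and are not the plant data or the paper's implementation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls, X):
    """Variable importance in projection for a fitted PLSRegression model."""
    T = pls.transform(X)         # (n, A) latent scores of the training data
    W = pls.x_weights_           # (p, A) X-weights
    Q = pls.y_loadings_          # (q, A) Y-loadings
    p, A = W.shape
    ssy = np.array([(T[:, a] @ T[:, a]) * (Q[:, a] @ Q[:, a]) for a in range(A)])
    Wn = W / np.linalg.norm(W, axis=0)        # normalise weights per component
    return np.sqrt(p * (Wn ** 2 @ ssy) / ssy.sum())

# Toy usage: X stands in for the 22 process variables, y for the PSD values.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 22))
y = X[:, [1, 4]] @ np.array([2.0, -1.5]) + rng.normal(scale=0.5, size=200)

pls = PLSRegression(n_components=5).fit(X, y)
vip = vip_scores(pls, X)
print("candidate key sources (VIP > 1):", np.where(vip > 1.0)[0])
```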
Table 1. Results of qualitative identification, quantitative identification, and controllability
No.  Description                                   Abbr.  5M1E  VIP    Controllability
1    pressure difference of the 2nd crystallizer   DP2    2     2.067  secondary CV12
2    pressure difference of the preheater          DP     1     1.777  uncontrolled variable
3    density controller                            TA     2     1.776  MV2
4    percentage of total solid                     TS     2     1.476  independent CV2
5    pressure difference of the 1st crystallizer   DP1    3     1.229  primary CV1
6    pressure difference of the 5th crystallizer   DP5    1     1.151  secondary CV11
7    slurry feed total flow                        TF     1     1.029  independent CV3
8    pressure of the 1st crystallizer              P1     2     1.017  MV1
9    pressure of the 2nd crystallizer              P2     2     0.821  MV1
Fig. 5. Single factor experiment; pressure difference of the 1st crystallizer versus PSD.
Fig. 6. Single factor experiment; total solid percentage versus PSD.
The results indicated that the pressure difference of the first crystallizer was a significant variable, with a correlation coefficient, R 2 of 0.95. The second experiment was carried out to study the effect of the total solid percentage in the feed slurry to the PSD. Change of the solid percentage was carried out at five different densities of the feed by controlling density controller variable (TA). Fig. 6 represents that the total solid percentage was not a significant variable, with a correlation coefficient, R 2 of 0.29. From the experiments results, a new control system was designed to control the decompression of the first crystallizer and to maintain the improved PSD automatically against external disturbances. 5. CONCLUSIONS The proposed quality improvement strategy is successfully implemented for the problem of degrading quality in a chemical process though it is hard to improve the product quality successfully due to highly nonlinear causality in the processes. Among conventional Six Sigma tools, the most relevant ones are selected for each phase of Six Sigma and new techniques are developed to effectively discover key sources of the quality degradation. Therefore, great improvement of the target quality has been achieved from 3.5 sigma into 5.5 sigma. ACKNOWLEDGEMENT
The authors deeply appreciate the support provided by the process engineers, Eui Chul Noh, Woo Chang Lee, and Kyung Hoon Lee, at Samsung Petrochemical Co., LTD, Korea. REFERENCES [1] L. W. Flott, Quality Control, December (2000) 43. [2] G. J. Hahn and N. Doganaksoy, Quality Engineering, 12(3) (2000) 317. [3] D. C. Montgomery, Quality and Reliability Engineering International, 17(4) (2001) iii. [4] T. N. Goh, Quality and Reliability Engineering International, 18(5) (2002) 403. [5] J. D. Mast, W. A. J. Schippers, R. J. M. M. Does and E. R. V. D. Heuvel, Quality and Reliability Engineering International, 16 (2000) 301. [6] H. Fujii, S. Lakshminarayanan, and S. L. Shah, Preprint of IFAC Symposium on Advanced Control of Chemical Processes, (1997) 529. [7] D. Neogi, and C. E. Schlags, Ind. Eng. Chem. Res., 37 (1998) 3971. [8] B. M. Wise, and N. B.Gallagher, J. Proc. Cont., 6 (1996) 329. [9] J. F. MacGregor, C. Jaeckle, C. Kiparissides and M. Koutoudi, AIChE J, 40 (1994) 826.
Process Systems Engineering 2003, B. Chen and A.W. Westerberg (editors) © 2003 Published by Elsevier Science B.V.
Process integration framework for business decision-making in the process industry Ichiro Koshijima a, Akio Shindo b and Tomio Umeda a
a Dept. of Project Management, Chiba Institute of Technology, 2-17-1 Tsudanuma, Narashino, Chiba 275-0016, Japan
b Dept. of Management Information Systems, Nagoya University of Commerce and Business Administration, 4-4 Sagamine, Komenoki, Nissin, Aichi 470-0193, Japan
Abstract An integration framework for the plant operation process and the business decision-making process is proposed, together with a methodology for synchronous operation along the plant life-cycle. In this framework, concurrent tasks belonging to both processes can be synchronously grouped into time-slices of a certain length by matching the precedence among tasks. The grouped tasks are then work-shared by plant operation managers and business operation managers to increase the accuracy of decisions and to decrease idle times. The present research makes clear which parts are most important for process integration in the chemical industry. Keywords MES, synchronous operation, work-sharing
1. INTRODUCTION Since PSE82 held in Kyoto, the process systems engineering society has been continuing its vigorous activity in the field of process design optimizations, energy saving methodologies, advanced control systems, and proposing new technologies to plant owners. Very few applications, however, were successfully adopted in actual plants, because of their complicated integration for engineering decision-making and their uncertain economics for business decision-making. In recent years, ERP (Enterprise Resource Planning), SCM (Supply Chain Management) and CRM (Customer Relationship Management) systems have been installed for increasing business decision capabilities in a highly competitive market not only on the Business-to-Customer relationship but also on the Business-to-Business relationship. The advancement of information technologies enables this business trend to be considered practically. The chemical process industry is not an exception to this business trend. The process systems engineering society should perform a key-role to extend the system boundary from chemical process systems to business process systems.
Fig. 1 REPAC Model mapped with typical applications for the process industry
In PSE'91, one of the authors presented a methodology for value-system design, in which critical success factors combined with product portfolio management disclose design alternatives and AHP (Analytic Hierarchy Process) is applied for screening the alternatives [1]. In this methodology, the author intended a fusion of process engineering with business decision-making to handle higher degrees of uncertainty and large investments. The authors also pointed out the critical risk of cyber-terror against networked chemical plants, and proposed an approach to potential risk analysis to assure plant operation with business decision-making through a network system [2]. Through this work, the authors extended the application region of process systems engineering to some extent into the business field. The concept of MES (Manufacturing Execution System) has been proposed as an interaction buffer between bottom-up plant operation and top-down business decision-making. This concept, however, allows an unnecessary time allowance caused by the asynchronous interchange between plant operation and business decision-making; today's time-based and global competition requires a new method to improve synchronicity and to eliminate the unnecessary time-lag behind conservative operations. In this paper, the authors propose an integration framework for the plant operation process and business decision-making, and a methodology for synchronous operation along the plant life-cycle. The present research will make clear which parts are most important for process integration in the chemical industry.
2. BACKGROUND AND PROBLEM STATEMENT
Since AMR Research introduced the three-layer model in 1992, various formal frameworks have been reported to define the integration architecture between plant operation and business operation. When Heaton [3] mapped typical applications onto AMR's REPAC model as shown in Fig. 1, he identified two holes in the architecture. One hole is in the "Analyze" process and the other is in the "Coordinate" process between the plant-centric ERP and the enterprise ERP/APS. Both processes relate strongly to decision-making processes performed by managers and engineers.
Fig. 2a Discipline oriented Source, Make, Deliver and Plan processes
Fig. 2b Project oriented Source, Make, Deliver and Plan processes
Even in future plants that computerize all plant operations, we cannot exclude human decision-making for responsible production and business management. It is, however, necessary to reduce overlapped decision-making between plant operation and business operation, because the enterprise has to be coordinated efficiently to meet time-based competition. In order to reconstruct the decision-making processes, it is necessary to introduce a new framework that promotes structural improvement by eliminating bureaucracy. The most important objective for an enterprise is to deliver products (services) for its clients' orders through the Source, Make, Deliver and Plan processes. In the traditional organization, each process is supported by a business-oriented discipline as shown in Fig. 2a. This, however, requires communication channels between the various disciplines, which cause unnecessary delays in exchanging information and obtaining results. In order to survive today's severe competition, the enterprise project management (EPM) concept was proposed to apply the results-oriented principle of project management [4]. Fig. 2b shows the EPM-based approach, where each order starts its own project and the delivery of the product closes the project. Under the EPM concept, every activity inside the enterprise should be formalized by a work breakdown structure (WBS). A work package, the lowest level in the WBS, organizes the actual activities to be performed and specifies the required decision type, level and procedure for controlling schedule and responsibility. The objective of the problem stated in this research is to provide a new integration method under EPM operation in the area of the decision-making process between the business operation and the plant operation, where the following constraints should be shared by both sides:
- Precedence in decision-making between plant operation managers and business operation managers
- Work-load in decision-making between plant operation managers and business operation managers
Fig. 3 Process integration through virtual managers
3. PROCESS INTEGRATION FRAMEWORK
3.1 Precedence sharing in decision making process
According to the PMBOK, project processes fall into two categories: project management processes, which are common to most projects, and product-oriented processes, which are usually defined by the type of product and its life-cycle. Project management processes and product-oriented processes overlap and interact throughout the project. The traditional decision-making processes, composed of business operation processes and plant operation processes, should therefore be assigned to one of the above two categories under EPM operation. In previous work, the authors proposed a new approach to allocate human resources for multi-project execution [5]. In this approach, all project activities are defined as a precedence of work packages, and the grouping of work packages is formulated as a line balancing problem. After a super engineer is virtually defined to perform a grouped work package, real engineers are assigned to support the scope of work through the virtual engineer. This approach enables concurrent process tasks to be sequenced and grouped within a cycle-time by matching the constraints in the precedence orders and the associated technical levels in the work packages. By applying this method, business operation processes and plant operation processes can be integrated as shown in Fig. 3, where two virtual managers are assigned for the business operation and three virtual managers for the plant operation. Interface activities are also extracted as an overlapped region covered by the virtual managers; for example, in Fig. 3, Real-Manager-A has to share Decision-α with Real-Manager-1 through Virtual-Manager-1.
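To make the grouping step concrete, here is a minimal sketch of packing precedence-ordered work packages into cycle-time slots. It is only a greedy illustration of the idea, not the line-balancing formulation of the cited work, and it does not reproduce the matching of technical levels; all package names and durations are hypothetical.

```python
# Hedged sketch of the Section 3.1 grouping idea: work packages, already
# ordered so that predecessors come first, are packed greedily into
# cycle-time slots; each slot becomes the scope of one "virtual manager".

def group_work_packages(packages, cycle_time):
    """packages: list of (name, duration) in a precedence-feasible order."""
    virtual_managers, current, load = [], [], 0.0
    for name, duration in packages:
        if load + duration > cycle_time and current:
            virtual_managers.append(current)
            current, load = [], 0.0
        current.append(name)
        load += duration
    if current:
        virtual_managers.append(current)
    return virtual_managers

packages = [("define order", 2.0), ("check inventory", 1.5), ("plan batch", 3.0),
            ("release recipe", 1.0), ("schedule utilities", 2.5), ("confirm delivery", 1.0)]
for i, scope in enumerate(group_work_packages(packages, cycle_time=4.0), start=1):
    print(f"virtual manager {i}: {scope}")
```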
Fig. 4 Work Performance Curve
Fig. 5 Work load integration
3.2 Work-load sharing in decision making process
In the previous section, the decision-making processes were broken down into multiple sub-tasks in accordance with the associated technical levels. The cycle-time selected in the above method may prove the enterprise's competency against time-based competition: a shorter cycle-time indicates better operation management, because it allows feed-back control for increasing the accuracy of decisions and synchronous control for decreasing idle times. The cycle-time incurred in processing a sub-task depends on the class of the managers assigned. In a practical situation, managers on the business-operation side may show better performance for business decision-making and poorer performance for plant operation than managers on the plant-operation side. Therefore, the following assumptions may be added so that a task can be divided and its schedule shortened by work-shared managers.
1) Progress of a task can be described by an S-shaped curve as shown in Fig. 4. In Fig. 4, a class (i) manager has a higher performance than a class (j) manager, and the class (i) manager is more expensive than the class (j) manager.
2) The class (i) manager takes over the class (j) manager's task after the class (j) manager has performed the task up to a certain progress, Q_n.
The concept behind assumptions 1 and 2 is shown in Fig. 5. When the class (j) manager supports the class (i) manager, the total cost of the task may be reduced under the following condition:
C_i N_i T_i > C_j N_j T_j                                    (1)
where
T_i = f_i^{-1}(Q_n)                                          (2)
T_j = f_j^{-1}(Q_n)                                          (3)
Δ_t = f_j^{-1}(Q_n) − f_i^{-1}(Q_n)                          (4)
C_i, C_j : activity-based unit cost of the class (i) manager and the class (j) manager, where C_i > C_j
N_i, N_j : number of class (i) managers and class (j) managers
f_i, f_j : work efficiency function of the class (i) manager and the class (j) manager
The condition Eq. (1) defines a constraint that specifies the take-over point Q_n. In the actual situation, the ratio of the numbers of managers is (N_j/N_i) > 1 and their cost ratio is (C_j/C_i) < 1.
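As a small numerical companion, the following sketch tests candidate take-over points on logistic S-shaped work-performance curves; it uses the reading of Eq. (1) adopted above (cost saved by the class (i) manager versus cost added by the class (j) manager), and all curve parameters, costs and manager counts are illustrative assumptions.

```python
import math

def time_to_reach(progress, rate, midpoint):
    """Inverse of the logistic curve f(t) = 1 / (1 + exp(-rate*(t - midpoint)))."""
    progress = min(max(progress, 1e-6), 1.0 - 1e-6)
    return midpoint + math.log(progress / (1.0 - progress)) / rate

def sharing_reduces_cost(q_n, cost_i, n_i, curve_i, cost_j, n_j, curve_j):
    t_i = time_to_reach(q_n, *curve_i)   # Eq. (2): T_i = f_i^{-1}(Q_n)
    t_j = time_to_reach(q_n, *curve_j)   # Eq. (3): T_j = f_j^{-1}(Q_n)
    return cost_i * n_i * t_i > cost_j * n_j * t_j   # reading of Eq. (1)

# One expensive class (i) manager vs. two cheaper, slower class (j) managers.
for q_n in (0.2, 0.4, 0.6, 0.8):
    ok = sharing_reduces_cost(q_n, cost_i=100, n_i=1, curve_i=(1.5, 3.0),
                              cost_j=30, n_j=2, curve_j=(0.8, 5.0))
    print(f"Q_n = {q_n:.1f}: sharing {'reduces' if ok else 'does not reduce'} total cost")
```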
3.3 Applying procedure
In order to integrate the decision-making processes between the business operation and the plant operation, the methods described above are summarized in the following procedure.
Step 1: Decision-making processes are defined as a work breakdown structure (WBS).
Step 2: Precedence of the decision-making tasks in the WBS is defined together with task times.
Step 3: A cycle-time is specified for synchronization of the decision-making tasks. The initial value is set to (the longest task-time)/(the worst work efficiency).
Step 4: The precedence-sharing method is applied to integrate the processes.
Step 5: The work-sharing method is applied to share tasks and reduce the activity duration of each task.
Step 6: Reduce the cycle-time based on the durations reassigned in Step 5, and return to Step 3 until there is no room to apply Step 5.
4. CONCLUDING REMARKS
In this paper, the authors proposed an integration methodology for synchronous operation along the plant product-cycle. This methodology can be extended to support the overall plant life-cycle when the decision-making processes along the plant life-cycle can be defined. Though the proposed method mainly focuses on the human business rules that cannot be automated by present technologies, it may suggest a continual improvement of the enterprise organization under the enterprise project management concept.
REFERENCES
[1] M. Terashi and T. Umeda, PSE'91, Montebello, Canada, (1991) III.22.1.
[2] A. Shindo, H. Yamazaki, A. Toki, R. Maeshima, I. Koshijima and T. Umeda, Computers Chem. Engng, 24 (2000) 721.
[3] J. Heaton, Next Generation Plant Systems: The Key to Competitive Plant Operation, AMR Consulting, Chelsea, MI, 1998.
[4] P. Dinsmore, Winning in Business with Enterprise Project Management, AMACOM, New York, NY, 1999.
[5] A. Shindo, I. Koshijima and T. Umeda, J. of Project Management Society, 2, No. 4 (2000) 19 (in Japanese).
Process Systems Engineering 2003, B. Chen and A.W. Westerberg (editors) © 2003 Published by Elsevier Science B.V.
Application of multivariate statistical process control to supervising NOx emissions from large-scale combustion systems Young-Hak Lee, Kyong-U Yun, Minjin Kim, and Chonghun Han Department of Chemical Engineering, Pohang University of Science and Technology, ISYSTECH Inc., Pohang, Kyungbuk, Korea, 790-784
Abstract Computer modeling of NOx formation in combustion systems provides a tool that can be used to investigate and improve understanding of the systems. In this work a systematic method based on a data-driven model is proposed to supervise the nitrogen oxides emitted from a large number of fired heaters that share one common stack. The relationships between the process variables in the fired heaters and the NOx were investigated by employing partial least squares (PLS) regression in a hierarchical manner. Multivariate statistical contributions of the fired heaters, and of the process variables within a fired heater, to the NOx emissions were introduced as a key means to cope with NOx over-emissions. The proposed approach was evaluated using data collected from an industrial combustion system.
Keywords MSPC, NOx emission, contribution plot, large-scale
1. INTRODUCTION
The control of NOx emissions is a worldwide concern as the utilization of fossil fuels continues to increase. The conventional approach is to monitor these emissions using on-line analyzers and to troubleshoot based on the operators' knowledge. Computer modeling of NOx formation can be used to reduce the maintenance cost incurred in continuously using the on-line analyzers. Extensive research is being carried out on developing simple and accurate kinetic models for NOx formation that can be coupled to computational fluid dynamics (CFD) results obtained with global molecular combustion reactions. However, these approaches require complicated mathematical models based on kinetic mechanisms and CFD equations for turbulent flow. Data-driven modeling works using fuzzy logic, neural networks and regression models have been reported that avoid the complex mathematical models [1]. Multivariate projection methods such as PCA and PLS are ideally suited for the analysis
of complicated process data. Supervision of large-scale combustion systems is undertaken by monitoring incoming process data and referencing these against a model built using historical data. In this study, the relationships between many process variables in the heaters and the NOx emissions were investigated using PLS regression, and a systematic method based on the data-driven model is proposed to supervise the nitrogen oxides emitted from a large number of fired heaters.
2. NOx EMISSION FROM COMBUSTION SYSTEMS
Combustion systems in industry are generally classified into two types: a stack with a single heater and a common stack with plural heaters. In the latter case, it is not easy to construct the model because of the very large scale of the process and the different time delays between the plural heaters and the NOx emissions. In this work, large-scale combustion systems are treated as the supervision target. NOx emissions from combustion systems generally depend on the fuel, of three types: gas-fired, oil-fired and coal-fired combustion. NOx from combustion systems results from three main processes: thermal NOx, prompt NOx, and fuel NOx. Thermal NOx is formed from oxidation of atmospheric nitrogen, prompt NOx is formed by reactions of atmospheric nitrogen with hydrocarbon radicals in fuel-rich regions of flames, and fuel NOx is formed from oxidation of nitrogen bound in the fuel. Thermal NOx is the main source of NOx in gaseous combustion systems, and fuel NOx is the main source of NOx in coal combustion systems.
3. MULTIVARIATE STATISTICAL PROCESS CONTROL OF NOx EMISSIONS
3.1. Multivariate projection methods
Multivariate projection methods (e.g. PCA, PCR, PLS) are often used to evaluate large process data sets. These methods can be used to supervise and improve existing processes as well as to develop new processes. Principal component analysis is designed to highlight the dominant patterns and trends in a data matrix X in terms of a limited number of latent variables. The regression extension of PCA is called partial least squares (PLS). PLS works with two matrices, X (e.g. process measurements) and Y (e.g. product quality, environmental responses), and has two objectives, namely to approximate X and Y well and to model the relationship between them. An estimate of a single response variable y is ŷ = Xb. We assume that the data matrix X ∈ R^{N×K}, where N is the number of observations and K is the number of variables, is divided into G blocks based on prior knowledge. The PLS regression coefficients for a single response variable can be partitioned based on this blocking:
b^T = [ b_1^T  b_2^T  ...  b_g^T  ...  b_G^T ]                (1)
3.2. Monitoring and diagnosis using hierarchical contributions to a NOx emission
For large-scale systems, multiblock and hierarchical PCA and PLS methods have been proposed in the recent literature to improve the interpretability of multivariate models. Multiblock analysis for process monitoring and diagnosis can be obtained directly by regrouping the contributions of regular PCA and PLS models [4]. It is important to block or group the variables based on process knowledge, such as different operating units or groups. A critical point is to detect which heater and which process conditions are responsible for the over-emitted NOx. In a hierarchical manner, the NOx emissions from each heater or heater group can be supervised by grouping the contributions to the NOx estimate of the PLS model, and the contribution plot indicates which variables are most associated with a NOx peak. The contributions of the heaters act as virtual analyzers for each heater, which is valuable for directly diagnosing the over-emitted NOx concentration at the common stack. To systematize this monitoring scheme for large-scale combustion systems, the following constraints apply: (i) the combustion systems should have a similar structure and sensor configuration (in a given industrial site the combustion systems are generally very similar even though their capacities differ, and this constraint is essential for the block contributions to act as virtual NOx analyzers for each block); (ii) since the flue gas flows from the heaters to the stack very quickly, the time delays of the NOx emissions among heaters, or between the heaters and the stack, can be ignored. Abnormal NOx emissions from large-scale combustion systems are then detected and diagnosed using the virtually analyzed NOx from the hierarchical block contributions and the variable contributions within a block.
Each block X_g ∈ R^{N×K_g} has K_g variables, and the response data matrix is Y ∈ R^N. For a new sample x^T = [ x_1^T  x_2^T  ...  x_G^T ], the contribution of block g to the NOx concentration is
C_NOx,g = x_g b_g                                              (2)
If regular PLS is used for supervision, the NOx estimate over all variables is
NOx_pred = x b                                                 (3)
Therefore, from Eq. (2) and Eq. (3),
NOx_pred = Σ_{g=1}^{G} C_NOx,g                                 (4)
The C_NOx,g can be divided by the square root of the number of variables in the block to achieve block scaling, although additional scaling factors can be introduced to scale the importance of the various blocks up or down [5]. The C_NOx,g can also be adjusted to estimate an acceptable NOx emission level for each heater. The virtual NOx emission level is predicted by calibrating for the different number of variables in each heater, which results from the differences in heater capacity. The estimate of the NOx emission level for block g is obtained as
NOx_g = (x_g b_g / K_g^α) · A,   where A = (Σ_{g=1}^{G} x_g b_g) / (Σ_{g=1}^{G} x_g b_g / K_g^α)      (5)
Here α expresses the degree of redundancy of the variables: it is zero when there is no redundancy and one when the variables are maximally redundant. A is a calibration term that compensates for the reduction caused by the division by K_g^α. The contribution plots used to identify root causes often indicate multiple suspects, which makes it difficult to identify the real fault. Also, a contribution plot does not have confidence limits, making it difficult to determine what is abnormal; only recently have control limits on contribution plots been discussed. Confidence limits for the contribution plots are helpful for monitoring the NOx over-emissions of each heater, and they compensate for the weakness that the virtual NOx estimate for a heater is inaccurate when the variable redundancy is ambiguous. Based on the assumption that the data are independent and identically distributed and follow a normal distribution, a 100(1−α)% confidence interval for a variable x is given by x̄ ± z_{α/2} σ_x, where α is the significance level and typically takes a value of 0.01 or 0.05 (resulting in 99% or 95% confidence intervals), z_{α/2} is the corresponding standard normal deviate and σ_x is the standard deviation calculated from the data. A similar measure can be defined for the contribution of a heater, C_NOx,g, to a NOx emission [2]:
C_NOx,g ± z_{α/2} σ_C                                          (6)
where σ_C is the standard deviation of the contribution.
4. APPLICATION TO REFINERY FIRED HEATERS
The process to be studied is refinery fired heaters which consists of four heater groups: two preheating groups for main distillation of crude oil (four heaters), a heating group for
hydrocracking of naphtha and kerosene (three heaters), and a feed heating group in front of the reforming reactors (four heaters). Gas and oil are supplied to each heater depending on the inventory of the fuels produced in the refinery. The NOx is emitted to a common stack from each heater in parallel. The process variables in the fired heaters are divided into eleven blocks, the same as the number of heaters. The heaters are described by the principal variables related to (i) feed load, (ii) heater outlet conditions, (iii) inlet conditions of fuel and air, (iv) flue gas conditions, and (v) fired-box conditions (temperatures, excess O2 composition, etc.). A number of symptoms of NOx over-emission can occur in the heaters. Typical over-emissions arise from the sudden injection of oil fuel containing a large amount of nitrogenous compounds, from the increase of air supply resulting from heavy load, and so on. The data collected from the fired heaters consist of 187 process variables and 1 NOx concentration in the stack, with 8430 samples in the model data; 3144 further samples were then used to detect and identify the over-emissions. Fig. 1 shows the soft-sensing result for the NOx concentrations emitted from the eleven fired heaters. The NOx concentration increases near sample number 2500 and goes outside the 95% confidence limit of about 170 ppm. Block contributions for each heater, fitted to perform the function of virtual analyzers, are also illustrated in Fig. 1, with the confidence limits for the contributions calculated from Eq. (6). Block number 1 makes a large contribution to the NOx emission of the common stack; we can see that its contribution has the largest variation from the control limit in the first bar of Fig. 2. The contribution of block number 7 also deviates from the confidence region. To find the root cause of the NOx peak for heater 7, the detailed contributions of the variables were investigated. Fig. 2 shows the relative contributions of the variables to the block contribution and the meaning of the variables in the process flow diagram. In this case, the oil flow rate was increased by the sudden injection of oil fuel containing a large amount of a nitrogenous compound; the fuel-NOx concentration in heater 7 therefore increased, which led to the NOx over-emission at the common stack.
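A minimal sketch of how the block contributions, the block-scaled per-heater NOx estimates and the contribution confidence limits of Eqs. (2)-(6) might be computed is given below; the coefficient vector, the data, the block sizes and the redundancy exponent are synthetic stand-ins, not the refinery data or the fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)
block_sizes = [20, 15, 17, 18, 16, 17, 18, 16, 17, 16, 17]   # K_g per heater (sums to 187)
b = rng.normal(scale=0.05, size=sum(block_sizes))            # stand-in PLS coefficients
X = rng.normal(size=(500, sum(block_sizes)))                 # scaled process data

edges = np.cumsum([0] + block_sizes)
def block_contributions(x):
    """C_NOx,g = x_g b_g for each heater block g (Eq. (2))."""
    return np.array([x[s:e] @ b[s:e] for s, e in zip(edges[:-1], edges[1:])])

C = np.apply_along_axis(block_contributions, 1, X)           # (samples, blocks)
nox_pred = C.sum(axis=1)                                     # Eq. (4)

alpha = 0.5                                                  # assumed redundancy exponent
scaled = C / np.array(block_sizes) ** alpha
A = nox_pred / scaled.sum(axis=1)                            # calibration term of Eq. (5)
nox_per_heater = scaled * A[:, None]                         # Eq. (5): blocks re-sum to NOx_pred
print("per-heater NOx estimates (last sample):", np.round(nox_per_heater[-1], 2))

limits = C.mean(axis=0) + 1.96 * C.std(axis=0, ddof=1)       # Eq. (6), 95% upper limit
flagged = np.where(C[-1] > limits)[0]
print("heaters above their contribution limit:", flagged + 1)
```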
Fig. 1. Real-time monitoring of NOx emission and block contribution for each heater
Fig. 2. Variable contributions to the number 7 block contribution (principal variables: 9, 10, 12)
5. CONCLUSIONS
PLS is a very useful modeling approach for supervising the NOx emission from large-scale combustion systems. A virtual analyzer for a heater can be designed based on the block contribution to the NOx estimate of the PLS model. Virtual analyzers are a cost-saving alternative, in that on-line analyzers are very expensive and difficult to maintain. Also, the root causes can be efficiently diagnosed by analyzing the abnormal NOx peaks in a hierarchical manner. This approach is expected to be extensively applicable to other complex processes such as wastewater treatment.
ACKNOWLEDGEMENT
The authors deeply appreciate the financial support of the Brain Korea 21 project.
REFERENCES
[1] Chakravarthy, S. S. S., Vohra, A. K., Gill, B. S., Predictive emission monitors (PEMS) for NOx generation in process heaters, Comp. Chem. Eng., 23, 1649-1659 (2001).
[2] Conlin, A. K., Martin, E. B., Morris, A. J., Confidence limits for contribution plots, Journal of Chemometrics, 14, 725-736 (2000).
[3] Hill, S. C., Smoot, L. D., Modeling of nitrogen oxides formation and destruction in combustion systems, Prog. Energy Combust. Sci., 26, 417-458 (2000).
[4] Qin, S. J., Valle, S., Piovoso, M. J., On unifying multiblock analysis with application to decentralized process monitoring, Journal of Chemometrics, 15, 715-742 (2001).
[5] Westerhuis, J. A., Kourti, T., MacGregor, J. F., Analysis of multiblock and hierarchical PCA and PLS models, Journal of Chemometrics, 12, 301-321 (1998).
Study of Heat Storage Setting and Its Release Time for Batch Processes*
Li Zhihong, Ben Hua
Institute of Chemical Engineering, South China University of Technology, Guangzhou 510641
Abstract
This paper addresses the problem of heat storage setting and its release time during energy synthesis for batch processes, and points out that an economic analysis based on the production scale must be considered. The paper proposes an economic criterion to decide whether heat storage should be set, and presents an optimization strategy for determining the heat storage setting and its release time based on an artificial intelligence (AI) method. Compared with the literature, this strategy can deal not only with the degradation of energy quality, but also with practical engineering factors of batch processes, such as distant matches with their pipeline investment and the flow exergy dissipation. The paper also proposes the steps of heat exchanger network synthesis for batch processes and introduces the concept of the period of investment reclamation, which provides an economic criterion for the investment decision-maker. A case study illustrates that the method presented in this paper is practical.
Keywords: batch process, heat storage, artificial intelligence
1 Introduction
Heat storage is of special importance for realizing maximum energy recovery and minimum energy consumption during the energy synthesis of batch processes. This is because of the large imbalance between the supply of utilities and the energy requirements of the process streams, and because direct energy recovery is difficult owing to time constraints. Indirect energy recovery through heat storage is the best way to solve this problem. Heat storage and its release can decrease or eliminate the mismatch between energy supply and demand, improve the performance and dependability of the energy system, and increase and maintain the operating efficiency of the energy supply equipment; it is therefore important for saving energy and improving product competitiveness. Because of the importance of heat storage in batch process energy synthesis, many researchers, such as Vecchietti [1], Sorensen [2], Musier [3], Sadr-Kazemi [4], Wang [5], Kemp [6] and Vaselenak [7], have paid attention to this field. The research methods proposed by these researchers are based on the sequential "pinch point technology" theory, in which the energy degradation in heat storage is not taken into
* The project is supported by the State Major Basic Research Development Program (G20000263)
account when residual heat is carried from one time interval into the next. Moreover, the investment cost of the heat storage equipment and its auxiliary equipment is not taken into account when a heuristic approach is used to decide the heat storage setting and its release time. In this paper, an AI method integrated with economic analysis is proposed to determine the heat storage setting and its release time.
2 The heat storage setting
Not every residual process stream heat needs to be stored at time interval τ. In general, it is unnecessary to set heat storage when the available time is short, the recoverable heat quantity is small, or the temperature level is low. In practical batch operation, however, an economic analysis depending on the production scale and conditions is necessary to decide whether heat storage should be set, because heat storage tends to increase the investment. The overall investment cost involves not only the manufacturing cost of the heat storage equipment and its auxiliary equipment such as piping, but also the cost of heat exergy dissipation and power dissipation. The cost formula is as follows:

Overall investment cost (O_I) = heat storage equipment cost (O_Z) + pipeline investment cost (O_P) + heat exergy dissipation cost (O_S) + power dissipation cost (O_D)
According to the practical situation, a cost formula containing only some of the above items can be chosen for a particular batch process; for example, the pipeline investment cost can be omitted when the heat storage equipment is close to the heat exchange unit. The batch process studied in this paper is assumed to be an isolated system, i.e. it is unnecessary to transport energy to any exterior system, and there is no exterior system that can accept energy from it. The procedure for heat storage setting is as follows:
1) According to the priority principle of direct heat exchange, hot and cold streams are first matched to exchange heat directly at time interval τ. The residual heat at time interval τ is then screened by the AI rules: if the temperature level of a residual heat is low, heat storage is not set for it; otherwise, if the time interval τ is short, heat storage is not set; otherwise, if the stream at this temperature level can be matched with a cold stream after time interval τ, heat storage is possibly set for it.
2) For each possible heat storage identified in step 1), a criterion for setting the heat storage is required. This paper presents the procedure for this criterion as follows:
(1) Calculating the overall investment cost:

O_I = O_Z + O_P + O_S + O_D

According to literature [8], for a single heat storage unit the equipment cost is

O_Z = C_pow · N_a + C_cap · ΔQ_st

The pipeline cost (O_P) is calculated from the pipe length based on the plant layout:

O_P = C_p · L

The dispersal heat exergy dissipation cost (O_S) is

O_S = C_h · Q_R · (1 − T_0 / T_b)

The power dissipation cost (O_D) is

O_D = C_f · F · [ (v_2² − v_1²)/2 + (P_2 − P_1)/ρ + (z_2 − z_1)·g + Σh_f ] / w
(2) Calculating the annual economic benefit O_H.
(3) Calculating the period of investment reclamation to decide whether to set the heat storage or not.
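The screening in steps (1) to (3) can be written down in a few lines of code. The sketch below is illustrative only: all coefficient values and the annual benefit are hypothetical placeholders, and the cost items simply follow the formulas given above.

```python
# Rough economic screening for setting heat storage; coefficient values are
# hypothetical placeholders, the cost items follow the formulas above.
def overall_investment_cost(c_pow, n_a, c_cap, dq_st,   # heat storage equipment
                            c_p, pipe_length,           # pipeline
                            c_h, q_r, t0, tb,           # heat exergy dissipation
                            o_d):                       # power dissipation cost
    o_z = c_pow * n_a + c_cap * dq_st        # storage equipment cost O_Z
    o_p = c_p * pipe_length                  # pipeline cost O_P
    o_s = c_h * q_r * (1.0 - t0 / tb)        # dispersal heat exergy dissipation O_S
    return o_z + o_p + o_s + o_d             # O_I = O_Z + O_P + O_S + O_D

def payback_period(o_i, o_h):
    """Period of investment reclamation: overall investment / annual benefit."""
    return o_i / o_h

o_i = overall_investment_cost(c_pow=50.0, n_a=200.0, c_cap=30.0, dq_st=360.0,
                              c_p=80.0, pipe_length=25.0,
                              c_h=0.4, q_r=15.0, t0=288.0, tb=420.0, o_d=500.0)
print(o_i, payback_period(o_i, o_h=12000.0))  # set the storage only if the payback is acceptable
```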
3 Study of the heat storage release time by the AI method
It is very important to choose a reasonable time for releasing the stored heat. The energy degradation in heat storage caused by the temperature difference of heat transfer must be taken into account so that the energy recovery target can be realized in engineering applications. (Only the temperature levels are considered during the matching in this paper; other practical factors are omitted here.) Suppose the number of time intervals of the batch process is N and heat storage is set at time interval τ. The reasonable release time of the heat storage is determined by the AI method as follows:
1) Arrange all cold streams from time interval τ+1 to N in decreasing order of temperature. Suppose the sequence is Tc1 > Tc2 > Tc3 > ... > TcR > ... > TcM.
2) According to the priority principle of direct heat exchange, and since a greater temperature difference of heat transfer means more exergy dissipation, judge whether there is a hot stream H1 in the time interval of cold stream C1 that can be matched with C1. If yes, release of the heat storage is forbidden there; go to 3). Otherwise, judge whether the heat storage can be matched with stream C1; if yes, the heat storage is released at the time interval of stream C1; otherwise, examine streams C2, C3, ..., CM in turn by the same rule.
3) Match cold stream C1 with the hot stream H1 obtained in step 2). Suppose the residual cold stream is CR. Arrange cold stream CR and all cold streams from time interval τ+1 to N in decreasing order of temperature, giving Tc1 > Tc2 > ... > TcR > ... > TcM, and go to 2).
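The release-time rule can be sketched in code. This is a simplified illustration, not the authors' program: the data structures are hypothetical, the minimum approach temperature of 15 °C follows the value suggested by the flowchart of Figure 1, and residual cold streams left after a direct match are not re-inserted into the queue.

```python
def release_interval(cold_streams, hot_streams_at, storage_temp, dT_min=15.0):
    """cold_streams: list of (name, time_interval, temperature) for intervals after tau.
    hot_streams_at: dict mapping a time interval to its available hot streams (name, temperature)."""
    queue = sorted(cold_streams, key=lambda c: c[2], reverse=True)  # Tc1 > Tc2 > ... > TcM
    for name, interval, tc in queue:
        hot = [h for h in hot_streams_at.get(interval, []) if h[1] - tc >= dT_min]
        if hot:
            continue              # direct heat exchange has priority: storage is not released here
        if storage_temp - tc >= dT_min:
            return interval, name  # release the stored heat to this cold stream
    return None                    # the stored heat cannot be released

# loosely based on the illustration in Section 4: storage at 260 C released to C1
print(release_interval([("C1", 2, 200)], {2: []}, storage_temp=260.0))
```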
The steps of the heat storage release procedure are shown in Figure 1.
Figure 1 Sketch of the heat storage release procedure
4 Illustrations
The problem data are shown in Table 1 and the time series sketch of the process streams is shown in Fig. 2. At time interval [0, 0.5] there are hot stream H1 and cold stream C2; they are matched directly as shown in Fig. 3. At time interval [0.5, 1] there is cold stream C3, requiring a 250 kW·h heat load, and hot streams H2 and H3 with a total heat load of 710 kW·h. Hot stream H3 can therefore be matched directly with cold stream C3, and from the economic point of view heat storage can be set for hot stream H2 based on the proposed method. The heat exchanger network at this time interval is shown in Fig. 4. At time interval [1, 2] only cold stream C1 is available, and the stored heat is released and matched with cold stream C1 as shown in Fig. 5.

Table 1 The problem data
Stream No.  Stream type  Original temperature (°C)  Target temperature (°C)  Heat capacity flowrate (kW/°C)  Start time (h)  End time (h)  Heat load (kW·h)
1           H1           180                        60                       4                               0               0.5           240
2           H2           300                        210                      8                               0.5             1             360
3           H3           250                        150                      7                               0.5             1             350
4           C1           200                        280                      4                               1               2             320
5           C2           40                         160                      6                               0               0.5           360
6           C3           120                        220                      5                               0.5             1             250

Fig. 2 Time series sketch of process streams
Fig. 3 Heat exchanger network at time interval [0, 0.5]
Fig. 4 Heat exchanger network at time interval [0.5, 1]
Fig. 5 Heat exchanger network at time interval [1, 2]
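The heat loads in Table 1 can be checked directly from the stream data, since Q = CP × |T_target − T_original| × (t_end − t_start). The short script below reproduces the last column of the table.

```python
# Quick check of the heat loads in Table 1.
streams = {  # name: (CP in kW/degC, T_original, T_target, t_start, t_end)
    "H1": (4, 180, 60, 0.0, 0.5), "H2": (8, 300, 210, 0.5, 1.0), "H3": (7, 250, 150, 0.5, 1.0),
    "C1": (4, 200, 280, 1.0, 2.0), "C2": (6, 40, 160, 0.0, 0.5), "C3": (5, 120, 220, 0.5, 1.0),
}
for name, (cp, t_in, t_out, t0, t1) in streams.items():
    print(name, cp * abs(t_out - t_in) * (t1 - t0), "kWh")   # 240, 360, 350, 320, 360, 250
```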
5 Discussion
Compared with the proposed method, the method presented in literature [8] determines the heat storage setting and its release time by a heuristic method before heat exchanger network synthesis, and the effect of practical factors of the batch process is not taken into account. For instance, the pipeline investment cost, the dispersal heat exergy dissipation cost and the power dissipation cost may increase when the heat storage is far from the heat exchange match, so the overall investment cost will increase and the period of investment reclamation will be prolonged. In the proposed method, heat exchange matching is done in advance at every time interval; the heat storage setting and its release time are then determined based on the AI method and the economic criterion for the residual heat, and the heat exchanger network is matched taking the heat storage stream as an accessory stream. The proposed approach considers not only the energy degradation but also the effect of practical factors of the batch process. Furthermore, the economic analysis criterion and the period of investment reclamation are taken into account to provide an economic criterion for the investment decision-maker.
NOTATION
Ccap   Expense coefficient related to the capacity of the heat storage equipment
Cf     Electricity price, ¥/kW·h
Ch     Unit price of heat exergy dissipation, ¥/kW
Cp     Unit price of pipeline, ¥/m
Cpow   Expense coefficient related to the release and sorption power of the heat storage equipment
F      Flowrate, kg/s
g      Acceleration of gravity, m/s²
L      Pipeline length, m
N      Total number of time intervals
Na     Release and sorption power of the heat storage equipment, kW
OD     Power dissipation cost, ¥
OH     Annual economic benefit, ¥
OI     Overall investment cost, ¥
OP     Pipeline cost, ¥
OS     Heat exergy dissipation cost, ¥
OZ     Cost of heat storage equipment, ¥
P1     Inlet pressure, MPa
P2     Outlet pressure, MPa
QR     Heat dissipation quantity, kW
t      Ordinal number of time interval
T      Temperature, K
T0     Benchmark temperature, K (288 K)
Tb     Liquid temperature inside pipe, K
v1     Inlet velocity of flow, m/s
v2     Outlet velocity of flow, m/s
w      Pump efficiency
z1     Vertical distance of inlet flow from datum plane, m
z2     Vertical distance of outlet flow from datum plane, m
Σhf    Energy dissipation of unit mass flow, kW
ΔQst   Capacity of the heat storage equipment, kW·h
ρ      Density, kg/m³
τ      Time interval
Reference
1. Vecchietti A.R., Montagna J., Comp. Chem. Eng., 1998, Vol. 22 (Suppl.): S801-S804
2. Sorensen E., Skogestad S., Chemical Engineering Science, 1996, 51(22): 4949-4926
3. Musier R.F.H., Evans L.B., Batch process management, Chemical Engineering Progress, 1990, 86(6): 66-77
4. Sadr-Kazemi N., Polley G.T., Chemical Engineering Research and Design, July 1996, Vol. 74, Part A: 584-596
5. Wang Y.P., Smith R., Chemical Engineering Research and Design, Nov. 1995, Vol. 73, Part A: 905-914
6. Kemp I.C., Deakin A.W., The cascade analysis for energy and process integration of batch processes, Part 1: Calculation of energy targets, Chemical Engineering Research and Design, 1989, Vol. 67: 459-509
7. Vaselenak J.A., Grossmann I.E., Westerberg A.W., Heat integration in batch processing, Ind. Eng. Chem. Process Des. Dev., 1986, Vol. 25(2): 357-366
8. Zhang Zhaoxiao, The theoretical and application researches of energy diagnosis and optimization in batch processes [Ph.D. thesis], Xi'an Jiaotong University, 1998
Debottlenecking and Retrofitting for a Refinery Using Marginal Value Analysis, Sensitivity Analysis and Parametric Programming
Li Wenkai, Chi-Wai Hui
Chemical Engineering Department, Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong.
Abstract
A new analytical method called "marginal value analysis" is used in this paper to provide economic information for all stream flows inside a refinery. Three types of marginal values are defined, representing respectively the debottlenecking effect, the production cost and the product value of an intermediate material flow. Important insights are generated using the analysis to find production bottlenecks, assist in decision making, price intermediate materials, etc. Sensitivity analysis and parametric programming are also studied in this paper to provide more comprehensive information for pricing, retrofitting and investment evaluation. Several case studies are used to illustrate the research on marginal value analysis, sensitivity analysis and parametric programming.
Keywords Refinery, Marginal value, Sensitivity Analysis, Parametric Programming
1. INTRODUCTION
Marginal values have been used in various economic evaluations and accounting activities for a long time. Marginal Value Analysis (MVA) is the detailed analysis of this additional economic information to obtain guidance for debottlenecking, retrofitting, pricing and investment evaluation. Ranade, Shreck & Jones (1989) calculated the marginal values of a particular stream by taking into account the paths through which the stream was generated and used. This localized method may yield incorrect results, since production and utility plants interact closely. Hui, C.W. (2000) proposed two novel definitions of marginal values, MCF and MCp, which significantly improve the understanding of the economic structure of the system.
2. DEFINITION OF MARGINAL VALUES
There are several definitions of marginal value in the literature, which differ in their ability to reveal the economic structure of the system.
2.1 Traditional Definition of Marginal Value
The traditional marginal value (MV) is defined as the change in overall profit caused by a small variation of a stream flow (Ranade, Shreck & Jones, 1989); that is, MV = Δprofit/Δstream flow. The MVs are usually reported in the standard output of an LP solver.
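As an illustration of how such marginal values are obtained from an LP solver, the sketch below builds a deliberately simplified single-period LP, not the refinery model of this paper: the CDU transfer ratios, prices and 200 ton/day demands are taken from Section 3, but the blending units are omitted and the gasoline and diesel cuts are assumed to be sold directly at the 93# gasoline and -10# diesel oil prices, with naphtha unsold. With PuLP and the CBC solver the shadow price of a constraint is read from its .pi attribute.

```python
import pulp

m = pulp.LpProblem("toy_refinery", pulp.LpMaximize)
crude = pulp.LpVariable("crude", lowBound=0)       # ton/day of crude oil

gasoline = 0.2 * crude                             # fixed CDU transfer ratios
diesel = 0.3 * crude
naphtha = 0.5 * crude                              # not sold in this toy model

# hypothetical objective: sell cuts directly at quoted product prices (Yuan/ton)
m += 3387 * gasoline + 3000 * diesel - 1400 * crude
m += (gasoline <= 200, "gasoline_demand")
m += (diesel <= 200, "diesel_demand")

m.solve(pulp.PULP_CBC_CMD(msg=False))
for name, con in m.constraints.items():
    # con.pi is the shadow price: the traditional MV = d(profit)/d(demand)
    print(name, "MV =", con.pi, "Yuan per extra ton of demand")
```

In this toy model only the diesel demand binds, so it carries a positive shadow price while the gasoline demand constraint has a marginal value of zero.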
2.2 New Definitions of Marginal Values
The new marginal value is defined as the variation in overall profit caused by adding or taking away a small amount of a stream (Hui, C.W., 2000). According to this definition, there are two types of marginal values:
I. The marginal value of adding a small amount of a stream flow (MCF).
II. The marginal value of taking away a small amount of a stream flow (MCp).
3. PROBLEM DESCRIPTION
The configuration of the problem studied in this paper is shown in Fig. 1. Fig.1 contains three production units: CDU (Crude Distillation Unit), GB (Gasoline Blending) and DB (Diesel oil Blending). Crude oil is separated into three fractions by CDU. Gasoline and MTBE enter GB to produce two products: 90# gasoline and 93# gasoline. Diesel oil and naphtha enter DB to produce another two products: -10# diesel oil and 0# diesel oil. The market prices of crude oil and MTBE are 1400 and 3500 Yuan/ton respectively; the market prices of 90# Gasoline, 93# Gasoline, -10# Diesel Oil and 0# Diesel Oil are 3215, 3387, 3000 and 2500 Yuan/ton respectively. The CDU transfer ratios of crude oil to gasoline, diesel oil and naphtha are fixed at 0.2, 0.3 and 0.5 respectively. The market demand for each product is 200 ton/day. A small amount of material (DF1) is added to the intermediate stream "gasoline". The material of DF1 is the same as the intermediate stream and its upper bound is set at a very small number: 1E-6 or zero. So are DF2, DF3. A small amount of material (DP1) is taken away from the intermediate stream "gasoline". The material of DP 1 is the same as the intermediate stream and its lower bound is set at a very small value: 1E-6 or zero. So are DP2, DP3.
Fig. 1. Basic Configuration
Table 1. The solution results of case 1 and case 2
                     Overall Profit (Yuan)   MCF of Diesel Oil   MCF of Naphtha
NO TANK (Case 1)     308,772.7               3710.2              -611.0
WITH TANK (Case 2)   337,008.3               3590.1              0.0
4. CASE STUDIES FOR MVA
4.1 Single Period
Two cases are used for the single-period scenario. Case 1 is the same as the problem described in Section 3. One intermediate tank (naphtha tank) and four final product tanks are added in case 2; the capacity of each tank is 200 ton. After solving the model, we found that the MV of 0# diesel oil in case 1 is 1544 Yuan/ton because its production rate has reached its upper bound; this is where the bottleneck exists. The results of cases 1 and 2 are shown in Table 1. Some interesting results were obtained. MCF of diesel oil is 3710.2 Yuan/ton, which is higher than the prices of all raw materials and all products; this intermediate stream is thus even more valuable than the most expensive final product. On the contrary, MCF of naphtha is -611.0 Yuan/ton, lower than the prices of all raw materials and products; it is negative, which means that the overall profit can be increased by taking naphtha away from the system. If the market price of diesel oil is lower than 3710.2 Yuan/ton, then more profit can be made by buying diesel oil from external suppliers. For naphtha, on the contrary, no matter how low the market price is, the company should not buy it. MCF and MCp can therefore give very important information for the business decisions of a refinery. From Table 1 it can be seen that the overall profit in case 2 increased by 9% compared with case 1. The reason is that some of the naphtha is stored in the naphtha tank in case 2; in case 1, MCp of naphtha is negative, so putting some naphtha inside the tank increases the overall profit. Notice that the profit made by adding an intermediate naphtha tank is only valid for a short period, because the capacity of the naphtha tank is limited.
4.2 Multi-Period
Considering the changing factors comprehensively may yield a better profit, so it is important to carry out marginal value analysis for the multi-period case. In this section, case 3 and case 4 are introduced. The system configurations of case 3 and case 4 are the same as in case 2 (with tanks) except that 7 days are considered together. Multi-period models are used to obtain the optimal solutions for these two cases.

Table 2. The solution results of case 3 and case 4
                       case 3       case 4
Overall Profit (Yuan)  2,251,130    3,077,820
MCF of Diesel Oil      3677.6       3590.2
MCF of Naphtha         -445.1       0.0
Fig. 2 Inventory of Naphtha Tank (case 3 and case 4)
The market demands of the products are fixed at 200 ton/day on all days in case 3. In case 4, it is assumed that the market demands for the other three products are fixed at 200 ton/day, while the market demand for 0# diesel oil increases to 400 ton/day on days 3, 6 and 7. The results of case 3 and case 4 are shown in Table 2. From Table 2 we can see that the overall profit of the multi-period model with changing demand (case 4) is higher because the company can sell a larger quantity of 0# diesel oil. The inventory of the naphtha tank in cases 3 and 4 is shown in Fig. 2. In case 3 the inventory keeps increasing over all 7 days; the inventory level is limited by the capacity of the tank (200 ton), and if the capacity of the tank were increased the overall profit would increase further. The inventory curve of the naphtha tank in case 4 has peaks and valleys instead of increasing monotonically: on days when the demand is large (days 3, 6 and 7) the previously stored inventory is sold, so the inventory decreases. Arranging the inventory levels in these tanks maximizes the overall profit. In case 4, different from the single-period case, the inventory level of the product tanks is not zero; product tanks are useful for increasing the overall profit when the market demand changes.
5. SENSITIVITY ANALYSIS AND PARAMETRIC PROGRAMMING
5.1 Introduction
Sensitivity analysis and parametric programming are methods for studying the influence of parameters, such as unit capacity, product specification, and raw material and product prices, on the overall profit. They provide more comprehensive information for pricing, retrofitting and investment evaluation. In this paper, these two methods are studied using the new definitions of marginal values proposed above. As shown in Section 4, adding a small amount of material to a process stream, or taking it away, may increase the overall profit. However, it is not possible to add material to a process stream, or take it away, without limit. A refinery may want to know the range within which the influence remains unchanged and what the influence is beyond this range; this information is of great importance to business decisions. The problem in this section is the same as case 4 except that two further intermediate tanks are added as buffers for gasoline and diesel oil respectively.
Table 3 Right-Hand Side Ranges
        DF1t                    DF2t                    DF3t
     lo    up     MCF        lo    up     MCF        lo    up     MCF
t1   0.0   17.42  3062.13    0.0   14.04  3173.82    0.0   24.50  2116.27
t2   0.0   17.42  3062.13    0.0   15.71  3173.92    0.0   24.50  2116.37
t3   0.0   17.42  3062.13    0.0   15.71  3174.02    0.0   8.17   2116.47
t4   0.0   17.42  3062.13    0.0   14.04  3173.81    0.0   5.41   2116.33
t5   0.0   17.42  3062.13    0.0   15.71  3173.83    0.0   5.41   2116.43
t6   0.0   17.42  3062.13    0.0   15.71  3173.93    0.0   5.41   2116.53
t7   0.0   17.42  3062.13    0.0   21.95  3173.75    0.0   5.41   2116.63
5.2 Sensitivity Analysis
To find the right-hand side range of the dummy added to diesel oil (DF2) and of the dummy taken away from it (DP2), the corresponding constraints should be added to the model. Table 3 lists the right-hand side ranges and MCFs of gasoline, diesel oil and naphtha. In Table 3, the "lo" column lists the lower limit of the right-hand side range while the "up" column lists the upper limit. Variables "DF1t" and "DF3t" represent the dummies added to gasoline and naphtha respectively. The right-hand side range of gasoline is (0.00 to 17.42 ton/day) on every day and the corresponding MCF is 3062.13 Yuan/ton; that is, within this range, adding one ton of gasoline increases the overall profit by 3062.13 Yuan. Unlike gasoline, the upper limits for diesel oil and naphtha take different values on different days, which reflects the influence of the changing market demand of 0# diesel oil.
5.3 Parametric Programming
Sensitivity analysis tells us the limits within which the influences of the dummies remain unchanged. Finding the influence of the dummies beyond these limits is part of the task of parametric programming. The procedure used in this paper is to set the RHS value to the previous limit in order to obtain a new limit; then, by fixing the value of the dummy at the limits found, the MCF or MCp of the corresponding stream can be obtained. Fig. 3 shows the MCF of diesel oil vs. the amount of diesel oil added on day 1. From the sensitivity analysis, the upper limit of the right-hand side range is 14.04 ton/day on day 1. From Fig. 3 we can see that the MCF changes only slightly beyond this limit; however, when the flowrate of the dummy increases to 218.0 ton/day, the MCF starts to decrease. The MCFs on days 2 to 7 are almost the same as those of day 1. Note that, to maintain the feasibility of the model, the amount added cannot exceed 664 ton/day. Some additional work is needed to determine the MCF or MCp value between two points in Fig. 3. For example, we know that the MCFs at 218.0 ton/day and 260.0 ton/day are 3173.75 and 2499.8 Yuan/ton respectively; adding a small value (1.0E-3 was used here) to 218.0 and then fixing the dummy at this larger value, the MCF is found to be 2499.8 Yuan/ton. Thus the MCF between the two points (218.0 ton/day and 260.0 ton/day) is 2499.8 Yuan/ton. Fig. 3 shows how the profit made decreases as the amount added increases.
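The parametric-programming procedure just described, fix the dummy at a value, re-solve and read the new marginal value, can be sketched on the same kind of toy LP. The model below is again a hypothetical simplification, not the multi-period refinery model of the paper; it only demonstrates the mechanics of tracing MCF as the amount added grows.

```python
import pulp

def profit_with_df(df_amount):
    """Re-solve the toy LP with the diesel dummy DF fixed at df_amount ton/day."""
    m = pulp.LpProblem("toy_refinery_df", pulp.LpMaximize)
    crude = pulp.LpVariable("crude", lowBound=0)
    df = pulp.LpVariable("DF", lowBound=df_amount, upBound=df_amount)  # fixed dummy
    m += 3387 * 0.2 * crude + 3000 * (0.3 * crude + df) - 1400 * crude
    m += (0.2 * crude <= 200, "gasoline_demand")
    m += (0.3 * crude + df <= 200, "diesel_demand")
    m.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(m.objective)

amounts = [0.0, 5.0, 10.0, 20.0, 40.0]
profits = [profit_with_df(a) for a in amounts]
# finite-difference MCF between successive points: d(profit) / d(amount added)
mcf = [(profits[i + 1] - profits[i]) / (amounts[i + 1] - amounts[i])
       for i in range(len(amounts) - 1)]
print(list(zip(amounts[1:], mcf)))
```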
Fig. 3 The MCF of diesel oil vs. the amount being added
If a refinery buys more than 621.1 ton/day of diesel oil, the additional profit will be zero. Fig. 3 can therefore help a refinery decide the appropriate amount of diesel oil to buy. From the corresponding figure of MCp of diesel oil vs. the amount taken away, the refinery can compare the profit earned (the market price) with the profit lost (the value of MCp) to decide whether or not to sell diesel oil.
6. CONCLUSIONS
This paper presented research on MVA, sensitivity analysis and parametric programming for a refinery. Important insights are generated using MVA to find production bottlenecks, assist in decision making and price intermediate materials. The influences of market demand and of tanks on the total profit were analyzed using MVA, and sensitivity analysis and parametric programming were also studied.
ACKNOWLEDGMENTS
The authors would like to acknowledge financial support from the Research Grant Council of Hong Kong (Grant No. HKUST6014/99P & DAG00/01.EG05), the National Science Foundation of China (Grant No. 79931000) and the Major State Basic Research Development Program (G2000026308).
REFERENCES
[1] Ranade, Saidas M., Shreck, Scott C., Jones, David H., Know marginal utility costs, Hydrocarbon Processing, 68(9), Sep., pp. 81-84 (1989)
[2] Hui, Chi-Wai, Determining marginal values of intermediate materials and utilities using a site model, Computers and Chem. Eng., 24(2-7), 1023-102 (2000)
A General Continuous State Task Network Formulation for Short Term Scheduling of Multipurpose Batch Plants with Due Dates
Christos T. Maravelias and Ignacio E. Grossmann*
Department of Chemical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
Abstract: A new continuous-time MILP model for the short term scheduling of multipurpose batch plants with due dates is presented. The proposed model is a general State Task Network (STN) formulation that accounts for variable batch sizes and processing times, various storage policies (UIS/FIS/NIS/ZW), utility constraints (other than units), and allows for batch mixing and splitting. Its key features are: (a) a continuous time partitioning common to all units, (b) assignment constraints expressed using only the binary variables defined for tasks, (c) elimination of the start times of tasks, (d) a new class of tightening valid inequalities added to the MILP formulation, and (e) a new disjunctive programming formulation for the matching of due dates with time points. The proposed model is more general than previously reported models and is computationally efficient.
Keywords: Scheduling, Multipurpose batch plants, Scheduling with due dates
1. INTRODUCTION The problem of short-term scheduling of multipurpose batch plants with release dates for the raw materials and due dates for the final products is a very challenging and practically important problem. Due to its complexity, the models that have been proposed [1,2] do not account for all the features of the general problem. Three commonly made assumptions, for example, are that there are no utility requirements, that each order comprises a single batch and that orders can be pre-assigned to time points/events. In this work we propose a model that addresses the general problem, i.e. it accounts for complex plant configurations (batch splitting and mixing, recycle streams), various storage policies (UIS/FIS/NIS/ZW), variable batch sizes, processing times and utility requirements, and multiple release and due dates. A State Task Network [3] MILP model is proposed with continuous-time representation [4].
2. PROBLEM STATEMENT We assume that we are given: (i) a fixed or variable time horizon (ii) the available equipment units and storage tanks, and their capacities (iii) the available utilities and their upper limits (iv) the production recipe for all tasks (mass balance coefficients, utility requirements) (v) the initial amounts and prices of all states (vi) the deliveries of raw materials and orders of final products (amounts and time) *To whom all correspondenceshould be addressed. E-mail:
[email protected]
275 The goal is to determine: (i) the assignment, sequence and timing of tasks taking place in each unit (ii) the batch size of tasks (i.e. the processing time and the required utilities) (iii) the amount of states purchased and sold Various objective functions such as the maximization of additional production, or the final inventory of intermediate states can be accommodated within the proposed model.
3. MATHEMATICAL FORMULATION
The general continuous-time STN MILP model of Maravelias and Grossmann [5] is used as the basis. In the proposed model the time horizon is divided into N time intervals of unequal and unknown duration, and tasks that can be assigned to different units are treated as individual tasks for each unit assignment. Assignment constraints are expressed through the task binaries Ws_in and Wf_in: binary Ws_in (Wf_in) is 1 if task i starts at (finishes at or before) time point n, T_n. The batch size of task i that starts at, is being processed at, and finishes at or before time point n is denoted by Bs_in, Bp_in and Bf_in, respectively. The amount of state s consumed (produced) by task i at time point n is denoted by B^I_isn (B^O_isn), the amount of state s at time point n is denoted by S_sn, and the amount of utility r consumed by the various tasks at time point n is denoted by R_rn. The start, processing and finish time of task i that starts at time point n is denoted by Ts_in, Tp_in and Tf_in, respectively. The parameters of the model include the time horizon H, the minimum/maximum batch sizes B_i^MIN/B_i^MAX, the storage capacities C_s, the mass fractions ρ^I_is/ρ^O_is, and the coefficients of the fixed and variable terms of the processing times and utility requirements (α_i, β_i, γ_ir, δ_ir). The basic constraints of the model of Maravelias and Grossmann are the following:
3.1. Assignment constraints

Σ_{i∈I(j)} Σ_{n'≤n} (Ws_in' − Wf_in') ≤ 1    ∀j, ∀n        (1)

Σ_n Ws_in = Σ_n Wf_in    ∀i                               (2)

Σ_{i∈I(j)} Ws_in ≤ 1    ∀j, ∀n                            (3)

Σ_{i∈I(j)} Wf_in ≤ 1    ∀j, ∀n                            (4)
3.2. Calculation of start, processing and finish time

Tp_in = α_i Ws_in + β_i Bs_in    ∀i, ∀n                   (5)

Tf_in ≥ Ts_in + Tp_in − H(1 − Ws_in)    ∀i, ∀n            (6)

Tf_in ≤ Ts_in + Tp_in + H(1 − Ws_in)    ∀i, ∀n            (7)

Ts_in = T_n    ∀i, ∀n                                     (8)
3.3. Time matching constraints

Tf_{i,n−1} ≤ T_n + H(1 − Wf_in)    ∀i, ∀n                 (9)

Tf_{i,n−1} ≥ T_n − H(1 − Wf_in)    ∀i ∈ ZW, ∀n            (10)
3.4. Batch size constraints and material balances

B_i^MIN Ws_in ≤ Bs_in ≤ B_i^MAX Ws_in    ∀i, ∀n           (11)

B_i^MIN Wf_in ≤ Bf_in ≤ B_i^MAX Wf_in    ∀i, ∀n           (12)

Bs_{i,n−1} + Bp_{i,n−1} = Bp_in + Bf_in    ∀i, ∀n         (13)

B^I_isn = ρ^I_is Bs_in    ∀i, ∀n, ∀s ∈ SI(i)              (14)

B^O_isn = ρ^O_is Bf_in    ∀i, ∀n, ∀s ∈ SO(i)              (15)

S_sn = S_{s,n−1} + Σ_{i∈O(s)} B^O_isn − Σ_{i∈I(s)} B^I_isn    ∀s, ∀n > 1    (16)

S_sn ≤ C_s    ∀s, ∀n                                      (17)

3.5. Utility constraints

R^I_irn = γ_ir Ws_in + δ_ir Bs_in    ∀i, ∀r, ∀n           (18)

R^O_irn = γ_ir Wf_in + δ_ir Bf_in    ∀i, ∀r, ∀n           (19)

R_rn = R_{r,n−1} − Σ_i R^O_{i,r,n−1} + Σ_i R^I_irn    ∀r, ∀n    (20)

R_rn ≤ R_r^MAX    ∀r, ∀n                                  (21)
3.6. Time ordering constraints

T_{n=1} = 0                                               (22)

T_{n=|N|} = H                                             (23)

T_{n+1} ≥ T_n    ∀n                                       (24)
3.7. Tightening constraints

Σ_{i∈I(j)} Σ_n Tp_in ≤ H    ∀j                            (25)

Σ_{i∈I(j)} Σ_{n'≥n} Tp_in' ≤ H − T_n    ∀j, ∀n            (26)

Σ_{i∈I(j)} Σ_{n'≤n} (α_i Wf_in' + β_i Bf_in') ≤ T_n    ∀j, ∀n    (27)
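For readers who prefer code to algebra, the fragment below sketches how two of the constraint families above, the unit-occupancy constraint (1) and the batch-size bounds (11), can be generated over a common time grid. It uses PuLP and small hypothetical sets; the model in the paper itself was implemented in GAMS.

```python
import pulp

tasks = ["T1", "T2"]
units = {"U1": ["T1", "T2"]}          # I(j): tasks that can run on unit j
N = range(1, 6)                       # hypothetical common grid of time points
Bmin = {"T1": 0.0, "T2": 0.0}
Bmax = {"T1": 5.0, "T2": 8.0}

Ws = pulp.LpVariable.dicts("Ws", (tasks, N), cat="Binary")
Wf = pulp.LpVariable.dicts("Wf", (tasks, N), cat="Binary")
Bs = pulp.LpVariable.dicts("Bs", (tasks, N), lowBound=0)

m = pulp.LpProblem("stn_sketch", pulp.LpMaximize)
for j, Ij in units.items():
    for n in N:
        # (1): at any time point at most one task has started but not finished on unit j
        m += pulp.lpSum(Ws[i][nn] - Wf[i][nn] for i in Ij for nn in N if nn <= n) <= 1
for i in tasks:
    for n in N:
        m += Bs[i][n] >= Bmin[i] * Ws[i][n]   # (11) lower bound
        m += Bs[i][n] <= Bmax[i] * Ws[i][n]   # (11) upper bound
# the timing, material-balance, utility and due-date constraints (5)-(34)
# would be added in the same indexed fashion before solving the MILP
```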
3.8. Release and due date constraints
Let K be the set of orders that must be met, or of deliveries of materials. The due date (release date) of order (delivery) k ∈ K is TD_k and the amount due (delivered) is AD_k. The set of orders that correspond to state s is denoted by K(s) and the set of deliveries that correspond to s is denoted by L(s). Since, in the general case, we do not know at which time point n of the model each order (delivery) takes place, we use binaries Y_kn to relate order (delivery) k with time point n; i.e. Y_kn is 1 if order (delivery) k takes place at time point n. The following disjunction is used to relate each order (delivery) k with a time point n,

[Y_kn ∧ (T_n = TD_k)] ∨ [¬Y_kn ∧ (T_n ≥ 0)]    ∀k ∈ K, ∀n        (28)

The convex hull reformulation of disjunction (28), after the elimination and aggregation of variables, consists of equations (29) - (31):

T'_nk = TD_k Y_kn    ∀k ∈ K, ∀n                            (29)

T_n = T'_nk + T''_nk    ∀k ∈ K, ∀n                         (30)

T''_nk ≤ H(1 − Y_kn)    ∀k ∈ K, ∀n                         (31)

In addition, each order (delivery) must coincide with one time point n,

Σ_n Y_kn = 1    ∀k ∈ K                                     (32)
The material balance constraint (16) of the model of Maravelias and Grossmann (2002) is modified as follows:

S_sn = S_{s,n−1} + Σ_{k∈L(s)} AD_kn − Σ_{k∈K(s)} AD_kn + Σ_{i∈O(s)} B^O_isn − Σ_{i∈I(s)} B^I_isn    ∀s, ∀n > 1    (33)
(35)
sE F P
If there are orders for which there is no due date, the objective is to minimize the makespan and the objective is given by (36), where MS is the makespan. min MS
(36)
In this case, H is an upper bound on the makespan and it is replaced by MS in constraints (23), (25) and (26). Finally, the model can be used to minimize the inventory level of final products over time (Eq. (37)) or the final inventory of intermediate states (Eq. (38)), where INT is the set of intermediate states: min ~
~ S,,
n
(37)
s~FP
min ~ S,l~vl
(38)
s~ I N T
The proposed MILP model (M) consists of equations (1) - (15), (17) - (27), (29) - (34), and one of (35), (36), (37) and (38), where Wsin, Wfn, Yk. ~ {0,1}, and all the continuous variables are non-negative.
3.10. Remarks In the case where the release and due dates are distinct, constraints (31) and (32) can be simplified into (39) and (40); i.e. fewer constraints and variables. T. = ~ 7".* + T. Vn (39) k
T- < H ( 1 - ~ Y,.)
Vn
(40)
k
The timing and relative order of deliveries and due dates can be used to fix some of the Y,,, binaries, and derive valid inequalities that reduce the feasible space. For the latest order, for example, we can fix Yklul = 1, or if TDk < TDk., inequality (41) is valid, ~--]Y,., > ~--'~Yk,., Vn (41) n'
n'
278 The binary variables Yk~ can be modelled as Special Ordered Sets of type 1 (SOS 1) variables as well. The number of time points, finally, is determined through an iterative procedure in which we increase the time points until there is no improvement in the objective function.
4. EXAMPLE The proposed model (M) was implemented in the example shown in Figure 1, whose data are given in Table 1. There are six units (U1, U2, ...U6) available for the ten tasks. Unlimited storage is available for states F1, F2, INT1, INT2, P1, P2, P3 and WS; finite intermediate storage is available for states $3 (15 kg) and $4 (40kg); no intermediate storage is available for states $2 and $6, while zero-wait policy applies for states S 1 and $5. States F1, F2, and $4 are initially available in sufficient amounts. Furthermore, each task requires one of the three following utilities: cooling water (CW), low pressure steam (LPS), and high pressure steam (HPS). The maximum availability for CW, LPS and HPS is 25, 40 and 20 kg/min, respectively. Constant processing times are assumed. The orders to be met are as follows: two orders of 2 and 4 tons for product P 1 at t=-8 and t=l 2, one order of 2 tons for product P2 at t=10, and one order of 3 tons of product P3 at t= 11. The minimization of the inventory level of final products over the entire time horizon is used as objective function.
Figure 1: Sate Task Network of Example The optimal solution yields the equipment Gantt chart in Figure 2, where the batch size of tasks is shown in parentheses, and optimal value of zero; i.e. the batches that produce the final products finish exactly at the due dates. The optimal solution is found when the 12-hour time horizon is divided into 10 intervals. The MILP problem consists of 3,382 constraints, 240 binary and 1,711 continuous variables. The optimal solution was found in 690 nodes and 22.7 CPU sec on a Pill at 1GHz, using GAMS 20.7/CPLEX 7.5. As can be seen in Figure 2 the orders for all products are delivered exactly on time. Note that tasks T5 and T6 are delayed by two hours in order to minimize the earliness (and thus the inventory) of the orders of products P 1 and P2. Tab! e !: Example data (BMax in tons, a in hr, 7'!n kg/min, Task T1 T2 T3 T4 T5 T6 Unit U1 U2 U3 U1 U4 U4 B uxx 5 8 6 5 8 8 Dur (a) 2 1 1 2 2 2 Utility LPS CW LPS HPS LPS HPS 7" 3 4 4 3 8 4 8 2 2 3 2 4 3
8in ks/min per ton). . . . . T7 T8 T9 T10 U5 U6 U5 U6 3 4 3 4 4 2 2 3 CW LPS CW CW 5 5 5 3 4 3 3 3
279
Figure 2: Equipment Gantt chart of Example
5. CONCLUSIONS The proposed model is, to our knowledge, the first continuous-time STN model that addresses the general scheduling problem of multipurpose batch plants with multiple due and release dates. Previously proposed approaches address simplifications of the general problem by either considering less general plant configurations (e.g. no utility constraints or no batch splitting/mixing is allowed for each order) or by pre-assigning orders to time points/events.
ACKNOWLEDGEMENTS The authors would like to gratefully acknowledge financial support from the National Science Foundation under Grant ACI-0121497.
REFERENCES [ 1] M.G. Ierapetritou, T. S. Hene and C.A, Floudas. Effective Continuous-Time Formulation for Short-Term Scheduling. 3. Multiple Intermediate Due Dates, Ind. Eng. Chem. Res., 38 (1999), 3446. [2] C.A. Mendez, G.P. Henning and J. Cerda. Optimal Scheduling of Batch Plants Satisfying Multiple Product Orders with Different Due-Dates, Comput. Chem. Eng., 24, 2000, 22232245 [3] E. Kondili, C.C. Pantelides and R.W.H. Sargent. A General Algorithm for Short-Term Scheduling of Batch Operations- I. MILP Formulation, Comput. Chem. Eng., 17 (1993), 211. [4] X. Zhang and R.W.H. Sargent. The Optimal Operation of Mixed Production Facilities General Formulation and Some Approaches for the Solution, Comput. Chem. Eng., 20 (1996), 897. [5] C.T. Maravelias and I.E. Grossmann. A New General Continuous-Time State Task Network Formulation for the Short-Term Scheduling of Multiproduct Batch Plants, Submitted for Publication (2003).
A tool to support the configuration of work teams
J. Martinez-Miranda a, A. Aldea a and R. Bañares-Alcántara b
a Department of Computer Engineering and Mathematics, b Department of Chemical Engineering, Universitat Rovira i Virgili, Av. Països Catalans 26, 43007, Tarragona, SPAIN.
Abstract. One of the initial steps of an industrial project is the configuration of the team(s)
that will execute it. The correct selection of people to integrate a team within a complex engineering project is not a trivial task. Team configuration is a type of Business Decision-Making typically done by one person (a manager) based on his/her past experience and the available information about the behaviour and interaction between the potential team members. In this work we propose a tool that provides information about the possible overall behaviour of a work team. This tool uses Artificial Intelligence techniques, specifically, Multi-Agent Systems (MAS) technology. We present the first results obtained with this prototype tool and discuss some future developments to improve them. Keywords. Process Design, Social Simulation, Multi-Agent Systems, Distributed Artificial
Intelligence. 1. INTRODUCTION When a new complex project is started, the project manager is put in charge of partitioning this project into tasks and selecting the people who will perform them. The correct selection of people to integrate a team within a complex engineering project is not trivial because it should include not only technical competence and availability aspects, but also personal and social characteristics of each potential team member. The success of a project is greatly due to the personal responsibility of each member, but also to an adequate communication, collaboration and co-operation between the individual team members [1]. Often, a good working environment depends on the personal characteristics of each worker, this is even more important in a project, where the interaction and communication between team members are fundamental for the achievement of the final objective. In addition to social and external factors, emotions play a critical role in rational decision-making, perception, human interaction, and human intelligence [2]. The particular emotional state of a person is an additional factor that affects the performance of his/her work and the work of the whole team. The emotional state of a person varies with time; furthermore, given the same circumstances, the reactions of different people can be quite different. Since one of the goals of Artificial Intelligence is to represent and simulate human intelligent behaviour in a machine, we propose that some of its techniques can be very useful to support the configuration of work teams. More specifically, we think that the Multi-Agent Systems (MAS) technology could be very helpful to simulate human social behaviour given
its capability to account for characteristics such as autonomy, coordination, and communication [3]. We propose to represent a team member with a software agent that includes not only technical competence and availability aspects, but also some personal and social characteristics [4]. The first results of our prototype implementation are presented and discussed in this document.
2. RELATED WORK
Researchers in areas such as Psychology, Sociology, and more recently Artificial Intelligence have long been working on how to model human behaviour. This objective is a great challenge, and modelling people's behaviour in a social environment, where every person has different features and reacts in a different way, is an even greater challenge [5]. In the Artificial Intelligence area, frameworks to model teams have been developed for military applications [6] and to simulate the interaction between software agents and people [7]. However, these works do not take into account the social and emotional characteristics of the team members. Other researchers have studied the relationship between social and cognitive aspects [8]. On the other side of the spectrum, researchers also apply agent technology to model emotional and personality features. In [9], a model to build agents that represent human behaviour based on physical, emotional, cognitive, and social characteristics is presented, i.e. the PECS model. This agent model is not based on physiological or social theories, but considers that human behaviour is influenced by different aspects that can be categorised into the four characteristics mentioned above. The model of emotions for our agents is based on this approach.
3. A MODEL OF WORK TEAMS
We use as a case study a team in charge of a design problem. In the first step the user (typically a project manager) selects, according to his/her own experience, the members of the initial work team. Once the team is formed, several simulations of its behaviour are performed. If the overall results indicate that the team could possibly complete the project with success, the user has the possibility to save the team configuration in a file for future reference. However, if the simulations do not predict an acceptable performance, the user has the possibility of adding, removing or modifying the team members, until a suitable team is identified. Our prototype tool uses agent technology to represent each person from real life by means of a software agent. Each agent has a specific role within the team; this role is given by the type of tasks that the agent can achieve. There are four types of roles in our model:
• Project Manager. The agent with this role simulates having the knowledge of the person in charge of the team and also specialised task knowledge.
• Engineer or Scientist. These agents simulate having knowledge about specialised tasks of the project, for example chemical engineers, environmental engineers, chemists, etc.
• Technician. The technician simulates having knowledge about specific technical tasks, e.g. simulation packages, statistics, etc.
• Assistant. Agents that simulate the work of people who are involved in routine, repetitive tasks, e.g. data acquisition, graphics, etc.
The characteristics of a software agent are then matched with those of a real person using three basic aspects (notice that these characteristics are independent of the role):
• Cognition, representing the technical knowledge of a person, i.e. creativity and experience.
• Emotion and personality, to represent some emotional states. In this prototype we consider the following basic emotions: desire, interest, disgust, and anxiety [10]; and the following personality trends: amiable, expressive, analytical and driver [11].
• Social characteristics to represent the interaction between team members.
An agent's behaviour results from the values of all of these internal properties, randomly modified around the initial values of each property using a normal distribution. We have introduced these random variations around each value to account for the non-deterministic nature of human behaviour. As a result, these variations (which are generally small) will generate different results in each simulation in spite of executing with the same team members performing the same tasks. Our hypothesis is that we can approximate some of the most plausible behaviours of a work team (e.g. best case, worst case, average case) by averaging over many (maybe several hundred) simulations where each of the parameters is randomly perturbed (a minimal sketch of this sampling is given after the assumptions list in Section 4). Finally, another modification of the agent's internal parameters is effected by its interaction with its tasks and with other agents. For example, if an agent x has (i) an expressive personality, (ii) a low experience value, and (iii) likes to work in a team, but its assigned task requires that it works by itself, then its interest value would be decreased, and its anxiety or stress values would be increased. These changes are reflected in the quality of the results and in the time required to perform the task that the agent has been assigned. In our prototype, the agents created with these characteristics do not solve a real design problem but only simulate their interaction with other agents and with their assigned task(s). The representation of these tasks includes the following parameters: number of participants for each task; duration of the task (measured in working days); sequence of the tasks (sequential or parallel tasks); difficulty of the task (a task can be complex or not); type of task (generic or specialised task); deadline; priority within the project; and finally, the quality of the task.
4. A PROTOTYPE TOOL TO SUPPORT THE CONFIGURATION OF TEAMS
The prototype tool is being implemented using the JADE framework for MAS development (see http://jade.cselt.it/). We chose this framework because it is FIPA compliant (i.e. it implements the FIPA standards for developing Multi-Agent Systems, see www.fipa.org). Our first prototype has the following assumptions and limitations:
a) The agents will not solve a real design problem but only simulate their interaction with other agents and with their assigned task(s);
b) Given the uncertainty associated with the characterisation of the cognition, emotion, personality and social properties of a person, random probabilities around the fixed values of such properties (representing the internal state of the agent) will be used.
c) The set of global behaviours of a team is obtained by averaging its behaviour over a statistically significant number of simulations.
d) The most suitable team configuration can be obtained by comparing the sets of global behaviours for several possible team configurations.
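A minimal sketch of the sampling in assumption (b) is given below. The parameter names, thresholds and weights are hypothetical; the point is only that each simulated task outcome is derived from internal parameters perturbed with a normal distribution, and that repeating the simulation many times yields an estimate of the agent's, and hence the team's, likely behaviour.

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    experience: float   # 0..1, hypothetical internal state
    creativity: float   # 0..1
    stress: float       # 0..1

def simulate_task(agent, specialised, sigma=0.05):
    # perturb the internal state around its nominal value (non-determinism)
    exp = random.gauss(agent.experience, sigma)
    cre = random.gauss(agent.creativity, sigma)
    stress = random.gauss(agent.stress, sigma)
    # creativity matters more for specialised tasks; stress always hurts
    score = exp + (cre if specialised else 0.2 * cre) - stress
    return score > 0.5          # True = success, False = failure

def success_rate(agent, specialised, runs=500):
    """Average over many perturbed simulations to estimate plausible behaviour."""
    return sum(simulate_task(agent, specialised) for _ in range(runs)) / runs

print(success_rate(Agent(experience=0.8, creativity=0.6, stress=0.3), specialised=True))
```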
283 The prototype has three main components:
1. Configuration component. In this component the user must configure the initial team. The team is formed by two or more agents, and the user must also configure the internal state of each agent. Once the team is configured, the user must specify the project that the team will perform, i.e. the characteristics of the project tasks. 2. Simulation component. The user decides the number of simulations to perform. 3. Results component. This component shows the results of the team's behaviour. In the future, this component will include some graphics to ease the interpretation of the data generated by the simulations. In the main window, see Figure 1, the user creates all the necessary agents to form a team. First, the user must select the type of agent that he/she wants to create (Project Manager, Engineer, Technician or Assistant). Then all the internal parameters of the agent must be set. Once the agent is created, the interface allows the user to see and modify the internal parameters of the agent, or if necessary, delete an agent. The tool allows saving a configuration for future use. The window shown in Figure 2 allows the user to set the tasks that constitute the project to be executed by the team. When a task is created, all its parameters must be set. Some of these task parameters influence the agent's output, for example the quality of the results and the time required to complete the task. When a task is a successor of another, a link between them records this relationship. The last step before the simulations is the assignment of tasks to the agents. Each task can be assigned to one or more agents. At this point the user can start the simulation step by setting the number of simulations. The first action is made by the Project Manager agent. This agent searches the first task(s) to be developed and sends a "Request" message to the agent(s) responsible.
Fig. 1. Graphical User Interface of the tool. Team Configuration window.
Fig. 2. Tasks Configuration The agent's behaviour is reproduced according to the model described in Section 3. The internal state of the agents is used to generate a success or failure result in the achievement of its assigned tasks. When the task is finished the Project Manager searches for the subsequent tasks and their assigned agents and sends the "Request" messages again. This process is continued until all the tasks are finished and is repeated as many times as the number of simulations requested by the user. After all the simulations are finished, the user can see the results for each simulation. The information about each of the agents is presented to the user and he/she can analyse which team has generated the most suitable set of behaviours for a particular project. 5. S O M E I N I T I A L R E S U L T S This section shows the initial results of the prototype. Take for example a team with the following members: 1 project manager, 3 engineers, 3 technicians and 3 assistants, each one with different internal characteristics. The project assigned to these agents has 12 diverse tasks. After 100 simulations with this team configuration we observe that the agents with a high value in their stress parameter and which were in charge of a specialised task were the agents with least success, independently of their personality. Decreasing the stress parameter in one agent and executing the same number of simulations, we observe that the number of failures in its tasks was decreased too, but in the specialised tasks the number of failures was the same. Making another simulation with the same value in its stress parameter, but being assigned only unspecialised tasks, there was a small decrease in the number of failures. The other agents that present more failures than successes were the agents with an amiable personality and working in a task by themselves; when this situation happened, their anxiety and disgust values were increased and so were their number of failures. In contrast, the agents with more success were the agents with a high value for experience and with an analytical personality. We observe that the creativity parameter only increases the number of successes when the agent is in charge of specialised tasks, and this parameter has less influence when the agent works with a generic task or with a non complex task. Although these results are expected a priori given the way in which the agents' behaviour was generated, they are, nevertheless, the basis to add more complexity to the model of a team. In addition we are modifying the agent behaviour to result not only in a success or
failure over its assigned tasks, but also in a delay to the task (which in turn may increase the stress value of the agent in charge of the successive task). In any case, each agent must be able to achieve its tasks; otherwise the Project Manager agent reassigns them to other agents.
6. CONCLUSIONS AND FUTURE WORK
We have presented a Multi-Agent System model that generates a set of plausible global behaviours of a team, together with the internal structure of the agents and the tasks. A brief description of the prototype has been given and some preliminary results have been presented. It is worth emphasising that the work does not pretend to predict the real (and unique) behaviour of a team, but rather the set of most likely global behaviours, to be used as guiding information for the decision-making process of team configuration. Future work includes the implementation of the other two types of team coordination: the tree hierarchical organisation and the one without hierarchies. With these three organisation types, the user will have more information about the most plausible global behaviours of a team and will be able to make better informed decisions about its configuration.
REFERENCES
[1] L.T. Biegler, I.E. Grossmann and A. Westerberg (1997). Systematic Methods of Chemical Process Design. Prentice Hall International Series in the Physical and Chemical Engineering Sciences, Chapt. 1, 1-21.
[2] R.W. Picard (1995). Affective Computing. M.I.T. Media Laboratory Perceptual Computing Section Technical Report No. 321.
[3] M. Wooldridge and N.R. Jennings (1995). Intelligent Agents: Theory and Practice. Knowledge Engineering Review, 10(2).
[4] J. Martinez-Miranda, A. Aldea and R. Bañares-Alcántara (2002). A Social Agent Model to Simulate Human Behaviour. In Proceedings of the 3rd Workshop on Agent-Based Simulation, Christoph Urban (Editor), pp. 18-23.
[5] R. Conte and C. Castelfranchi (1995). Cognitive and social action. Institute of Psychology, Italian National Research Council. UCL Press Ltd.
[6] M. Tambe (1997). Agent Architectures for Flexible, Practical Teamwork. In Proceedings of the 14th National Conference on Artificial Intelligence, Providence, Rhode Island, pp. 22-28.
[7] J. Yen, J. Yin, T.R. Ioerger, M.S. Miller, D. Xu and R.A. Volz (2001). CAST: Collaborative Agents for Simulating Teamwork. In Proceedings of the International Joint Conference on Artificial Intelligence 2001, pp. 1135-1144.
[8] R. Conte, N. Gilbert and J.S. Sichman (1998). MAS and Social Simulation: A Suitable Commitment. Lecture Notes in Computer Science, Vol. 1534, pp. 1-9.
[9] C. Urban and B. Schmidt (2001). Agent-Based Modelling of Human Behaviour. In Emotional and Intelligent II - The Tangled Knot of Social Cognition, AAAI Fall Symposium Series, North Falmouth, MA.
[10] P. Johnson-Laird and K. Oatley (1992). Basic Emotions, Rationality, and Folk Theory. In Stein, N.L. and Oatley, K. (Eds.), Basic Emotions, Lawrence Erlbaum, Hove, U.K., pp. 169-200.
[11] From S. Schubert, Leadership Connections Inc. (1997), in L.T. Biegler, I.E. Grossmann and A.W. Westerberg (1997). Systematic Methods of Chemical Process Design. Prentice Hall, USA, Chapt. 1, page 9.
A Quick Efficient Neural Network Based Business Decision Making Tool in Batch Reactive Distillation

I. M. Mujtaba a*, M. A. Greaves a and M. A. Hussain b
aEngineering Modelling Group, School of Engineering, Design & Technology, University of Bradford, West Yorkshire BD7 1DP, UK
bDepartment of Chemical Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia

Abstract

Over the last decade the western world has seen marked changes in the way manufacturing companies operate. Many world-class bulk chemical manufacturers have significantly shrunk their businesses and/or set up businesses in the developing world. This is due to increased global competition, higher operating costs (mainly labour costs) and strict environmental legislation. The Internet allows customers to choose the same products from many different companies at competitive prices in minutes rather than in days. Many UK companies no longer have the luxury of keeping the same customers year after year [1]. Business planners have to react quickly in such a frequently changing market environment to keep the business running at a profit. In this work, a quick and efficient neural network based Business Decision Making (BDM) tool is developed that can be used at all levels of operation in manufacturing companies. This tool is especially useful at the planning level where the ultimate business decision making takes place. It is demonstrated in an environment of manufacturing products using batch reactive distillation. The tool can forecast profitability, productivity, batch time and energy cost, and can give the optimal operating policy in a few CPU seconds for changing product specifications, raw material and energy costs and product prices.

Keywords: BDM tool, neural network, dynamic optimisation, batch reactive distillation
1. INTRODUCTION

There are mainly three levels of activity in any manufacturing company: the Planning Level, the Supervisory Level and the Operation Level, as shown in Fig. 1 [2]. In bulk manufacturing companies the interactions between the planners and the supervisors (production managers) might be on a monthly basis, and those between the supervisors (managers) and the operators on a weekly basis. This is due to the characteristics of these companies (Fig. 1). On the other hand, small scale batch production is usually suitable for low volume, high value products such as pharmaceuticals or other fine chemicals, for which the annual requirement can be manufactured in a few days or a few batches. The need to produce customer specific products
*All correspondence should be addressed to Dr. I. M. Mujtaba. Email: [email protected]
and the fluctuations or rapid changes in demand (Fig. 1), which are often characteristic of products of this type, require quick (weekly and daily) interactions between planners, supervisors and operators to come up with a quick and feasible production plan, resource capacity plan, production schedule, raw materials procurement plan, adaptation and/or reconfiguration of equipment, control, utilities and cleaning, and the associated safety and emergency plan. Therefore quick and efficient BDM tools in all aspects of the business are important and absolutely necessary. This paper discusses such a tool in the area of batch reactive distillation, which is a frequently used unit operation for the production of fine chemicals.

2. PRE DECISION MAKING INFORMATION FLOW
The essential information flow at all levels of the business that is required before the final decision on making and marketing a particular product (as demanded and specified by a customer) can be seen in Fig. 2. Based on most of this information the planners can make the final go-ahead decision on the production and marketing of this product. While most of the information shown in Fig. 2 is required very quickly, let us assume (for the sake of simplicity) that all this information is currently available except the information on production time, optimal amount of product per batch, optimal operating condition, utility requirement per batch, etc. (some are highlighted in italic in Fig. 2). To simplify further, let us assume that similar products are being produced in the plant but that the product specifications and the raw material and energy costs change frequently (because of increased global competition). The following sections describe how this information can be obtained quickly and efficiently with reference to manufacturing products using batch reactive distillation.
Characteristics of small-scale (seasonal) versus bulk manufacturing:

  Characteristic                           Small scale (seasonal, batch)    Bulk
  Demand                                   Seasonal                         Long term
  Production mode                          Batch                            Continuous
  Amount of products                       Few batches a year               Large, throughout the year
  Product specification                    Changes frequently               Does not change
  Equipment                                Shared and multipurpose          Not shared
  Equipment adaptation/re-configuration    Possible                         Not possible
  Manpower                                 Small                            Large

Levels of activities: PLANNING (market planning, finance and accounting, production planning, resource capacity planning, inventory), SUPERVISION (production scheduling, raw materials procurement, operators training and management) and OPERATION (production, control, safety and emergencies, product specifications). Planner-supervisor and supervisor-operator interactions are monthly/weekly for bulk companies and weekly/daily for small-scale batch companies.

Fig. 1. Levels of Activities and Frequency of Interactions in a Manufacturing Company
[Fig. 2 shows the pre-decision-making information flow, starting from the customer demand, product specifications and time of delivery: Planners (profitability, environmentally friendly manufacturing process, technology, equipment, new investment, manpower, production time, health and safety), Supervisors (re-scheduling, equipment adaptation and/or reconfiguration, manpower, utilities, technology, raw materials, production time, operator training) and Operators (optimal operation, monitoring and control, product specifications, production time, cleaning in place, utilities, emergency, health and safety).]

Fig. 2. Pre Decision Making Information Flow

3. EXISTING TOOL: FORECAST OF PROFITABILITY, OPTIMAL PRODUCTION, OPERATION AND ENERGY CONSUMPTION
3.1 Manufacturing Products Using Batch Reactive Distillation

Reactive distillation processes combine the benefits of traditional unit operations with substantial progress in reducing capital and operating costs and environmental impact [3]. Batch distillation with chemical reaction (reaction and separation taking place in the same vessel, hence referred to as batch reactive distillation) is particularly suitable when one of the reaction products has a lower boiling point than the other products and reactants. In a competitive environment where the prices of raw materials, products, energy and utilities and the product specifications (as dictated by the customers) change frequently, the business decision in the area of batch reactive distillation has to be based on a number of key interdependent factors. These are the conversion of reactants to main products, the amount of unwanted or environmentally damaging side products and the associated recovery and disposal cost, raw material and energy cost, product quality and prices, batch size, batch time, reflux ratio, productivity, etc. An optimum combination of all these factors constitutes the most important objective of the business, which is often the maximisation of profit. In the past, many rigorous mathematical models and methods have been developed to simulate, optimise and control the operation of batch reactive distillation systems [4, 5, 6]. All these approaches require excessive computation time. Let us assume that ethyl acetate (the lowest boiling component) is to be produced at a specified purity using the reversible esterification reaction: Acetic Acid + Ethanol <=> Ethyl Acetate + Water. Mujtaba and Macchietto [5] developed an optimisation framework where, for a given ethyl acetate product purity, a series of maximum conversion optimisation problems was solved by varying the batch time. Computationally, the solution of the optimisation problem was time consuming and expensive. A typical solution required approximately 600 CPU s on a SPARC-1 workstation [5] and about 120 CPU s on a SPARC-10 workstation.
These results were represented by mathematical functions leading to algebraic equations as follows:

(a) Maximum conversion: $C = g_1(t)$  (1)
(b) Optimum amount of distillate (kmol): $D_1 = g_2(t)$  (2)
(c) Optimum reflux ratio: $r = g_3(t)$  (3)
(d) Total reboiler heat load (kJ): $Q_R = g_4(t)$  (4)

where $g_1(t)$, $g_2(t)$, etc. are polynomial functions. The profit $P$ is given by

$$P = \frac{\text{product revenue} - \text{raw material cost} - \text{energy cost}}{\text{batch time}} = \frac{C_{D1}\,D_1 - C_{B0}\,B_0 - C_h\,Q_R}{t} \qquad (5)$$

where $B_0$ is the amount of raw material (kmol), $t$ is the batch time (hr), $C_{D1}$ ($/kmol) is the product price, $C_{B0}$ ($/kmol) is the raw materials cost and $C_h$ ($/kJ) is the energy cost for heating (operating cost). The profit function now simplifies to

$$P = \frac{C_{D1}\,g_2(t) - C_{B0}\,B_0 - C_h\,g_4(t)}{t} \qquad (6)$$

which is a function of only one variable ($t$) for a given set of values of ($C_{D1}$, $C_{B0}$, $C_h$). The dynamic optimisation problem thus becomes a single-variable algebraic optimisation, and its solution no longer requires full integration of the dynamic model equations.
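To make the reduction concrete, the sketch below (a minimal illustration, not the authors' code; the polynomial coefficients, prices and batch-time bounds are invented placeholders) maximises Eq. (6) as a one-variable problem once g2(t) and g4(t) are available.

    # Hypothetical sketch: single-variable profit maximisation using fitted
    # polynomials g2(t) (distillate) and g4(t) (reboiler heat load).
    import numpy as np
    from scipy.optimize import minimize_scalar

    g2 = np.poly1d([-0.005, 0.15, 0.9])   # D1 = g2(t), assumed coefficients
    g4 = np.poly1d([-0.03, 1.2, 5.0])     # QR = g4(t), assumed coefficients

    def profit(t, C_D1, C_B0, B0, C_h):
        # Eq. (6): P = (C_D1*g2(t) - C_B0*B0 - C_h*g4(t)) / t
        return (C_D1 * g2(t) - C_B0 * B0 - C_h * g4(t)) / t

    def optimal_batch_time(C_D1, C_B0, B0, C_h, t_bounds=(5.0, 30.0)):
        # Maximise profit by minimising its negative over the batch-time interval.
        res = minimize_scalar(lambda t: -profit(t, C_D1, C_B0, B0, C_h),
                              bounds=t_bounds, method="bounded")
        return res.x, -res.fun

    t_opt, p_opt = optimal_batch_time(C_D1=166.0, C_B0=2.0, B0=5.0, C_h=0.02)
    print(f"optimal batch time = {t_opt:.2f} h, profit = {p_opt:.2f} $/h")

Because only the fitted functions are evaluated inside the search, the whole re-optimisation takes a fraction of a second, which is the point made in the text.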
3.2. Frequently Changing Market Prices

For a fixed product specification but under frequently changing market prices ($C_{D1}$, $C_{B0}$, $C_h$), the method solves the maximum profit problem very cheaply and thus determines the new optimum batch time for the plant. The functions represented by equations (1-4) can then be used to determine the optimal values of $C$, $D_1$, $r$, $Q_R$, etc. Note that profit maximisation using the results of the maximum conversion problem via the polynomial functions does not require solution of the dynamic maximum profit optimisation problem every time the market prices change [5].

3.3 Frequently Changing Product Specifications

For every product specification, a series of dynamic maximum conversion optimisation problems has to be solved to obtain the functions (1-4).

4. NEURAL NETWORK BASED TOOL
4.1. NN Based Dynamic Model and Optimisation

The use of Neural Networks (NNs) in process engineering has increased considerably in the last decade [7]. For a given set of inputs, NNs are able to produce a corresponding set of outputs according to some mapping relationship. This relationship is encoded into the network structure during a period of training (also called learning), and is dependent upon the parameters of the network, i.e. weights and biases. Once the network has been trained (on the basis of known sets of input/output data), the input/output mapping is produced in a time that is orders of magnitude lower than the time needed for rigorous deterministic modelling. Therefore, the resulting NN model is particularly suited for optimisation studies.
Recently, Greaves et al. [8] developed an NN based dynamic process model and optimisation framework for a batch distillation column with a middle vessel. The model was validated using experimental pilot plant data. The NN based maximum profit optimisation was at least 10 times faster than the rigorous model based optimisation. In this work we use the NN based optimisation framework of Greaves et al. [8] in batch reactive distillation producing ethyl acetate. Here too, the profit maximisation is done using the results of the NN based maximum conversion problem and NN based functions (1-4) instead of polynomial based functions. This greatly reduces the computation time compared to the existing tool [5].
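A minimal sketch of the idea is given below, assuming a generic feed-forward regressor in place of the authors' MATLAB toolbox network; the file names, array layout and numerical values are placeholders, not data from the paper.

    # Sketch: NN surrogate mapping (r, Z0, t) -> Z_t = {C, D1, xD, QR}, reused
    # inside a cheap profit optimisation. Training data are assumed to come
    # from the rigorous DAE model.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from scipy.optimize import minimize_scalar

    X = np.load("dae_inputs.npy")    # hypothetical inputs  [r, C0, D1_0, xD_0, QR_0, t]
    Y = np.load("dae_outputs.npy")   # hypothetical targets [C, D1, xD, QR] at time t

    surrogate = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000).fit(X, Y)

    def nn_profit(t, r, z0, C_D1, C_B0, B0, C_h):
        C, D1, xD, QR = surrogate.predict([[r, *z0, t]])[0]
        return (C_D1 * D1 - C_B0 * B0 - C_h * QR) / t

    res = minimize_scalar(lambda t: -nn_profit(t, 0.95, [0.0, 0.0, 0.0, 0.0],
                                               166.0, 2.0, 5.0, 0.02),
                          bounds=(5.0, 30.0), method="bounded")
    print("NN-based optimum batch time:", res.x)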
4.2 Example

The problem is defined by the data in Table 1 [5]. The kinetic and thermodynamic models are given in [5]. The input/output specification for the NN based dynamic model is shown in Fig. 3a. A multi-layered feed forward network is used, trained with the back propagation method using a momentum term as well as an adaptive learning rate to speed up the rate of convergence, as found in the Mathworks MATLAB Neural Network toolbox. The error between the actual variable value (obtained from the DAE model) and that predicted by the network is used as the error signal to train the network [7]. As can be seen (Fig. 3a), the prediction at any time t depends only on Z0 and not on any other value of Z. A comparison between the predictions obtained using the rigorous dynamic model and those obtained using the NN based model within a maximum conversion optimisation problem is shown in Fig. 3b for two product specifications. The solution took 10-12 CPU s on a SPARC-10 workstation. The optimal operation information for changing product specifications is presented in Table 2. It shows that for all the purity specifications the operation makes a profit except when the purity x*D = 0.6. This makes sense, as it is more profitable to make a higher priced product than a lower priced product, and in practical terms it is cheaper to dilute a product than to concentrate it. Table 2 shows the optimum values of batch time (tf), amount of product (D1), total heat load (QR), maximum conversion (C) and reflux ratio (r) for each purity specification to achieve the maximum attainable profit for the operation.

Table 1 Input Data for Ethanol Esterification using Batch Distillation Column
  No. of ideal separation stages = 10           Total fresh feed, B0 (kmol) = 5
  Feed composition xB0 (mole fraction)
    (acetic acid, ethanol, ethyl acetate, water) = 0.45, 0.45, 0.0, 0.1
  Column holdup (kmol): Condenser = 0.1         Internal plates = 0.0125
  Condenser vapour load (kmol) = 2.5            Column pressure (bar) = 1.013

Table 2 Optimum Operation Information
  x*D    Price ($/kmol)   tf (hr)   Profit ($/hr)   D1 (kmol)   QR (kJ/hr)   C        r
  0.60        32           30.00       -0.79          3.043       27.65      0.7941   0.9597
  0.70        96           12.28        7.20          2.127       11.12      0.7003   0.9310
  0.80       166           13.81       26.87          1.686       12.21      0.6991   0.9511
  0.85       500           18.03       35.81          1.526       15.72      0.7056   0.9660
5. FINAL DECISION MAKING STAGE AND CONCLUSIONS

Based on the product demand and specifications (supplied by the clients), the optimal operation, energy consumption, production time, amount of product, etc. are the crucial information required both at the Planning (decision making) level and at the other levels of the business. This work has developed a quick and efficient NN based BDM tool for this purpose. The tool is demonstrated with an application in batch reactive distillation where product specifications and market prices may change frequently. The proposed tool can be run from a laptop PC and can be used during board meetings to create alternative production scenarios in a few CPU seconds. Once the decision is made, this information is also essential for the Supervisory level and Operator level activities. The NN based BDM tool will be further explored in the context of a total plant in the near future.
[Fig. 3a specifies the NN inputs and outputs: inputs (r, Z0, t_k) map to outputs Z_k for k = 1, ..., F, where Z_0 = {C, D1, xD, QR} at t = 0 and Z_k = {C, D1, xD, QR} at t = t_k. Fig. 3b plots the rigorous-model and NN based results against batch time (0-30 hrs) for x*D = 0.60 and x*D = 0.75.]
Fig. 3. (a) Input and output specifications for each ANN; (b) Comparison between the dynamic rigorous model based and the NN based optimisation results

ACKNOWLEDGEMENTS

The University of Bradford Studentship to M.A. Greaves and the UK Royal Society support to M.A. Hussain and I.M. Mujtaba are gratefully acknowledged.

REFERENCES
[1] McRobbie, I. (2000), University of Bradford and A.H. Marks Ltd. Joint Research Collaboration Initiative Meeting, 18 December, Bradford.
[2] Fogarty, D.W., Hoffmann, T.R. and Stonebraker, P.W., Production and Operations Management, South-Western Publishing Co., USA, 1989.
[3] BP Review, British Petroleum, UK, Issue: October-December, pp. 15-16, 1997.
[4] Cuille, P.E. and G.V. Reklaitis (1986), Comp. Chem. Eng., 10(4), p. 389.
[5] Mujtaba, I.M. and S. Macchietto (1997), IEC Res., 36(6), 2287.
[6] Sorensen, E. and S. Skogestad (1994), J. Process Control, 4(4), 205.
[7] Mujtaba, I.M. and M.A. Hussain (2001), Application of Neural Network and Other Learning Technologies in Process Engineering, Imperial College Press, London.
[8] Greaves, M.A., I.M. Mujtaba, M. Barolo, A. Trotta and M.A. Hussain (2002), in Computer Aided Chemical Engineering, Vol. 10, Elsevier, pp. 505-510.
Design of sensor networks to optimize PCA monitoring performance

E. Musulin a, M. Bagajewicz b, J. M. Nougués a and L. Puigjaner a*

(a) Universitat Politècnica de Catalunya, Chemical Engineering Department, Av. Diagonal 647, E-08028 Barcelona (Spain); (b) University of Oklahoma, 100 E. Boyd T-335, Norman, OK 73019 (USA). On sabbatical leave at ETSEIB. (*) Corresponding author.
Keywords: Instrumentation Networks, Principal Component Analysis (PCA).

ABSTRACT

In this work, a design method for sensor placement for fault detection based on PCA is presented. The cost, including both the investment and the cost of not detecting a fault, is minimized. A case study is used to illustrate the performance of the proposed technique. Genetic algorithms are used.

1 INTRODUCTION
Multivariate Statistical Process Control (MSPC) is an important tool for monitoring plants with a large number of variables. The most widely used MSPC technique is Principal Component Analysis (PCA) (Pearson, 1901; Hotelling, 1933). PCA applications in fault diagnosis have also been developed (Nomikos and MacGregor, 1994; MacGregor et al., 1994; Dunia et al., 1996). Fault detection and isolation depend on the number and position of the different sensors in a system. Raghuraj et al. (1999) used signed directed graphs and a greedy algorithm to obtain solutions. An SDG model was also used by Bhushan and Rengaswamy (2000) to improve fault isolation. However, no model was presented for PCA applications. In this work, a design method for sensor placement for fault detection based on PCA is presented. Genetic algorithms are used to obtain the solution.

2 FAULT DETECTABILITY WITH PCA
Wang et al. (2002) derived sufficient conditions for a fault to be detectable (i.e. to exceed the control limit of the Hotelling statistic, $T^2$, and/or of the squared prediction error statistic, SPE, also called the $Q$ statistic). When a fault occurs, the vector of measured process variables $x'$ can be expressed as

$$x' = x_0 + \Theta_j f_j \qquad (1)$$

where $x_0$ represents the measurements under normal conditions and the second term represents the deviation introduced by the fault. The matrix $\Theta_j$ depends on the position of the different sensors in the system. The value of $T^2$ is given by:

$$T^2 = \left\| D^{-1/2} P^T x' \right\|^2 = \left\| D^{-1/2} P^T \left( x_0 + \Theta_j f_j \right) \right\|^2 \qquad (2)$$

Based on the triangular inequality, and since $\left\| D^{-1/2} P^T x_0 \right\| \le \delta_T$ (with $\delta_T$ the control limit of the $T^2$ test), it follows that a sufficient condition for fault detectability using the $T^2$ statistic is

$$\left\| D^{-1/2} P^T \Theta_j f_j \right\| > 2\,\delta_T \qquad (3)$$

but a more restrictive sufficient condition can be obtained if the following norm is used:

$$\left\| D^{-1/2} P^T \Theta_j \right\| = \sigma_{\max}\!\left( D^{-1/2} P^T \Theta_j \right) \qquad (4)$$

where $\sigma_{\max}(\cdot)$ is the maximum singular value of a matrix. Thus, a sufficient condition for fault detectability is:

$$\| f_j \| \ge \| f_{T_j} \| = \frac{2\,\delta_T}{\sigma_{\max}\!\left( D^{-1/2} P^T \Theta_j \right)} \qquad (5)$$

In turn, the SPE statistic is given by:

$$SPE = \left\| \tilde{C} x' \right\|^2 = \left\| \tilde{C} x_0 + \tilde{C} \Theta_j f_j \right\|^2 \qquad (6)$$

where $\tilde{C}$ projects the measurements onto the residual subspace. Therefore, a sufficient condition for SPE detectability is:

$$\left\| \tilde{C} \Theta_j f_j \right\| \ge 2\,\delta_{SPE} \qquad (7)$$

but a more restrictive condition can be written as follows:

$$\| f_j \| \ge \| f_{SPE_j} \| = \frac{2\,\delta_{SPE}}{\sigma_{\max}\!\left( \tilde{C} \Theta_j \right)} \qquad (8)$$

where $\| f_{T_j} \|$ and $\| f_{SPE_j} \|$ are said to be the critical fault magnitudes (CFM). They are approximations to the minimum fault magnitude $\| f_j \|$ detectable by these tests.
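The two critical fault magnitudes can be computed directly from the PCA model. The sketch below is an illustration of Eqs. (5) and (8) under the stated assumptions (the loadings P, the retained eigenvalues, the fault direction matrix Theta_j and the control limits are given); it is not code from the paper, and the residual projection I - P P^T is an assumption consistent with standard PCA monitoring.

    # Illustrative computation of the critical fault magnitudes (Eqs. 5 and 8).
    import numpy as np

    def critical_fault_magnitudes(P, eigvals, Theta_j, delta_T, delta_SPE):
        D_inv_sqrt = np.diag(1.0 / np.sqrt(eigvals))
        # CFM for T^2: 2*delta_T / sigma_max(D^-1/2 P^T Theta_j)
        cfm_T = 2.0 * delta_T / np.linalg.norm(D_inv_sqrt @ P.T @ Theta_j, 2)
        # Residual projection C~ = I - P P^T; CFM for SPE: 2*delta_SPE / sigma_max(C~ Theta_j)
        C_res = np.eye(P.shape[0]) - P @ P.T
        cfm_SPE = 2.0 * delta_SPE / np.linalg.norm(C_res @ Theta_j, 2)
        return cfm_T, cfm_SPE

A fault j is then treated as (approximately) detectable when the smaller of the two magnitudes does not exceed the fault size of interest, which is how these quantities enter the optimization model of the next section.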
3 OPTIMIZATION MODEL
One possible objective is to locate sensors so as to minimize cost, subject to the detectability of certain faults above a certain threshold value (Bagajewicz, 2000). Consider a set of $J$ faults of interest $\{F_j\}_{j=1}^{J}$, and let $f_{inf_j}$ be the minimum magnitude of these faults that one is interested in detecting. Below those threshold values, one is ready to accept the system behavior as normal, even though the fault may exist. For each design (number of sensors and location), one would have to evaluate the behavior of PCA for this set of faults, each occurring one at a time, and a feasible solution is one that can detect faults of the threshold size or larger. One can also add two faults occurring in pairs at the same time and request detectability of those too. This can be done by defining a new fault consisting of the two other faults occurring simultaneously and reverting to the single fault model. The model would be

$$\text{Minimize } \; SensorNetworkCost = \sum_{k=1}^{N} c_k\, q_k \qquad (9)$$
$$\text{s.t.} \quad f_{min_j} \le f_{inf_j}, \qquad j = 1, 2, \ldots, J$$

where $q_k$ is a binary variable which is one if the $k$th sensor is located, $c_k$ is the cost of the $k$th sensor and $f_{min_j}$ is the minimum critical fault magnitude when fault $j$ is introduced alone (eq. 10):

$$f_{min_j} = \min\left\{ \| f_{T_j} \|,\; \| f_{SPE_j} \| \right\} \qquad (10)$$
For small problems, the optimum can be obtained by simple enumeration, whereas for large systems some numerical procedure to explore the feasible region is needed. The model described above has one important drawback: because very small thresholds may make the problem either infeasible or the sensor network too costly, a designer has to determine some reasonable value of the fault thresholds. To do so, one would have to decide what the economic loss or safety violation corresponding to each fault is and determine the threshold accordingly. It therefore seems more straightforward to incorporate the cost of the fault magnitude at which it is detected directly into the objective function. The cost of a fault is therefore modeled as follows:

$$FaultCost_j = \begin{cases} 0 & \text{if } f_{min_j} < f_{inf_j} \\[4pt] \sigma_j \left( f_{min_j} - f_{inf_j} \right) & \text{if } f_{inf_j} \le f_{min_j} \le f_{sup_j} \\[4pt] \infty & \text{if } f_{min_j} > f_{sup_j} \end{cases} \qquad (11)$$
Above $f_{sup_j}$ the cost is considered infinite, that is, the fault is not tolerated. The fault cost assigned to a sensor network that begins to detect the fault at this magnitude is called the superior fault cost and is denoted by $FC_{sup_j}$. Extensions of this model are easily made. The sensors are then selected to minimize a weighted sum of the sensor and fault costs. To solve the optimization problem a Genetic Algorithm (GA) is used (Goldberg; Holland, 1975). Each chromosome (individual) is a string of bits representing the presence or absence of each potentially located sensor. The length of the chromosome is equal to the number of process variables that can potentially be measured. The fitness of the individual represented by a chromosome is evaluated using the inverse of the total cost:

$$Fitness_i = \frac{1}{TotalCost_i} = \frac{1}{\displaystyle\sum_{k=1}^{m} c_k\, q_k + \sum_{j=1}^{J} FaultCost_j} \qquad (12)$$
For each chromosome, the corresponding matrix $\Theta_j$ needs to be calculated. Therefore, faults are introduced in a simulator and the deviations in the potentially measured process variables ($f_j$) produced by a fault ($F_j$) are studied. Because of noise, we consider that not all deviations in the variables are due to the fault. If a variable takes less than 10% of the total deviation, the corresponding element of $\Theta_j$ is set to 0 (i.e. this process variable is considered not to be affected by the fault); otherwise the element is set to 1. Figure 1 illustrates one instance of the procedure.
[Figure 1 shows a bar plot of the deviations in the process variables caused by fault Fj and, next to it, the resulting binary column of Θj, in which only the variables whose deviation exceeds the 10% threshold carry a 1.]

Figure 1: (a) Deviation in the process variables (fj) due to Fj; (b) Θj obtained from this propagation
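One plausible reading of the 10% rule is sketched below (illustrative only; interpreting the threshold as a fraction of the total absolute deviation is an assumption, and the deviation vector is just an example).

    # Build one binary column of the incidence matrix Theta from a simulated
    # deviation vector f_j, zeroing variables below the 10% contribution rule.
    import numpy as np

    def incidence_column(f_j, threshold=0.10):
        contribution = np.abs(f_j) / np.sum(np.abs(f_j))
        return (contribution >= threshold).astype(int)

    f_j = np.array([0.0011, -0.0071, -0.0017, -0.0019, 0.0167, 0.0064])
    print(incidence_column(f_j))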
The values of $f_{inf_j}$ and $f_{sup_j}$ are obtained as the norms of the deviations in the process variables ($f_j$) caused by faults of the minimum size below which the cost is considered zero ($f_{inf_j}$), and of the maximum size above which the fault is not tolerated at all ($f_{sup_j}$). In turn, the proportionality constant $\sigma_j$ is obtained using the cost of the maximum size fault, $FC_{sup_j}$, that is:

$$\sigma_j = \frac{FC_{sup_j}}{f_{sup_j} - f_{inf_j}} \qquad (13)$$
Selection of mating pairs is done by the roulette wheel method, the crossover is multi-point, and mutations are made by applying the NOT operator to the genes (switching a one to a zero or vice versa). The terminating criterion was chosen as the number of generations or convergence (no improvement after a few generations), whichever is achieved first.
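The following sketch outlines the GA loop as described (roulette-wheel selection, multi-point crossover, NOT-operator mutation). It is illustrative only; the total_cost callback, which must implement the sensor and fault costs of Eqs. (11)-(12), is left to the user, and the population size and generation count are placeholders.

    import random

    def roulette(population, fitness):
        pick, acc = random.uniform(0, sum(fitness)), 0.0
        for chrom, fit in zip(population, fitness):
            acc += fit
            if acc >= pick:
                return chrom
        return population[-1]

    def crossover(a, b, n_points=2):
        cuts = sorted(random.sample(range(1, len(a)), n_points))
        child, take_a, prev = [], True, 0
        for cut in cuts + [len(a)]:
            child += (a if take_a else b)[prev:cut]
            take_a, prev = not take_a, cut
        return child

    def mutate(chrom, p=0.02):
        # NOT-operator mutation: flip a bit with probability p
        return [1 - g if random.random() < p else g for g in chrom]

    def ga_sensor_placement(total_cost, n_vars, pop_size=130, generations=50):
        pop = [[random.randint(0, 1) for _ in range(n_vars)] for _ in range(pop_size)]
        for _ in range(generations):
            fit = [1.0 / total_cost(c) for c in pop]      # Eq. (12)
            pop = [mutate(crossover(roulette(pop, fit), roulette(pop, fit)))
                   for _ in range(pop_size)]
        return min(pop, key=total_cost)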
4 CASE STUDY
We consider a plant consisting of a reactor and a stripper (Figure 2). Fresh feed, consisting of reactant A and some of product P, is fed to the reactor. The reactor is a continuous stirred tank. The irreversible reaction that occurs is of first order: A → P. The reactor output is fed to the stripper, where most of the unreacted A is separated from product P. The plant's product, with a small mole fraction of A (xAB), is obtained at the stripper's bottom. The stripper's output at the top is recycled to the reactor. The plant is designed around a fixed feed composition of xA00 = 0.9. Table 1 lists the 13 potentially measured process variables. The following fault conditions are considered: F1 = drift in level sensor Vr, F2 = bias in level sensor Vr, F3 = steam leak in the reboiler and F4 = pump failure.
Table 1. Possible sensor locations and costs.
  Index   Sensor description                   Cost
  1       Column input composition             100
  2       Distillate flow                       10
  3       Opening of condenser valve             1
  4       Condenser level                        1
  5       Product flow                          10
  6       Opening of the column base valve       1
  7       Column base level                      1
  8       Steam flow                            10
  9       Opening of the steam valve             1
  10      Output composition                     1
  11      Reactor level                          1
  12      Reactor output flow                    1
  13      Reactor input flow                    10

[Figure 2 shows the flowsheet of the reactor-stripper plant with recycle: fresh feed to the reactor, reactor output to the stripper, distillate recycled to the reactor, product B with composition xAB at the stripper bottom, together with the level and composition controllers and their set points.]

Figure 2. Chemical plant with recycle
Only some of the measured variables (index: 2, 3, 6, 8, 9, 11, 12, 13) will be affected by the faults. The search space can then be reduced to take into account only the perturbed process variables, because sensors placed at other points do not add information about the faults, only noise. Table 2 depicts the values obtained for $f_{inf_j}$, $f_{sup_j}$ and $FC_{sup_j}$ for each fault.

Table 2. $f_{inf_j}$, $f_{sup_j}$ and $FC_{sup_j}$ for each fault
  Fault   f_inf_j                                           f_sup_j                                         FC_sup_j ($)
  F1      [0.0011 -0.0071 -0.0017 -0.0019 0.0167 0.0064]    [0.0467 0.2593 0.0588 0.0626 0.0336 0.0238]     10000
  F2      [0.0032 0.0172 0.0032 0.0034 0.0033 0.0018]       [0.0195 0.0643 0.0147 0.0157 0.0283 0.0288]     10000
  F3      [0.0111 0.0025 0.0048 0.0051]                     [0.0541 0.0178 0.0314 0.0334]                   10000
  F4      [-0.0077]                                         [1.3448]                                        10000
The algorithm was applied with an initial population of 130 individuals. The best individual of the last generation was selected as the solution. This solution is to place sensors 2, 9 and 11 (i.e. distillate flow, opening of the steam valve and reactor level).
Another case was solved by supposing that the sensor costs are close to the maximum fault cost $FC_{sup_j}$. The solution is to locate sensors 4, 7, 9 and 10 (i.e. condenser level, column base level, opening of the steam valve and output composition). After each optimization was finished, the solution was tested using simulations. It was observed, for example, that the solutions obtained were able to detect faults efficiently and that, when more sensors are added, the faults that have influence on a small number of variables are less efficiently detected. This is because more variance is introduced by signals that are not needed to detect these faults.
5 CONCLUSIONS

The problem of locating sensors at minimum cost for the use of PCA was presented in this paper. Because unneeded sensors introduce extra variance, a smaller number of sensors, when feasible, behaves more efficiently. Therefore, not only is the cost improved, but so is the PCA performance.

ACKNOWLEDGMENTS: Support in part by the European Community (project no. G1RDCT-2001-00466) is acknowledged. Financial support for Dr. Bagajewicz's sabbatical stay at ETSEIB, provided by Spain's Ministry of Education, is also acknowledged.

REFERENCES
[1] Pearson, K., "On lines and planes of closest fit to systems of points in space," Phil. Mag. (6), 2, 559-572, 1901.
[2] Hotelling, H., "Analysis of a complex of statistical variables into principal components," J. Educ. Psychol., 24, 417-441, 498-520, 1933.
[3] Nomikos, P. and J.F. MacGregor, "Monitoring Batch Processes using Multiway PCA," AIChE J., 40(8), 1365, 1994.
[4] MacGregor, J.F., C. Jaeckle, C. Kiparissides and M. Koutoudi, "Process Monitoring and Diagnosis by Multiblock PLS Methods," AIChE J., 40, 827, 1994.
[5] Dunia, R., S.J. Qin, T.F. Edgar and T.J. McAvoy, "Identification of Faulty Sensors Using Principal Component Analysis," AIChE J., 42, 2797, 1996.
[6] Raghuraj, R., M. Bhushan and R. Rengaswamy, "Locating sensors in complex chemical plants based on fault diagnosis criteria," AIChE J., 45(2), 310, 1999.
[7] Bhushan, M. and R. Rengaswamy, "Design of sensor network based on the signed directed graph of the process for efficient fault diagnosis," Ind. Eng. Chem. Res., 39(4), 999, 2000.
[8] Wang, H., Z. Song and L. Ping, "Fault detection behavior and performance analysis of principal component analysis based process monitoring methods," Ind. Eng. Chem. Res., 41(10), 2455-2464, 2002.
[9] Bagajewicz, M., Process Plant Instrumentation: Design and Upgrade, Technomic Publishing Company (now CRC Press), 2000.
[10] Holland, J., Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor, 1975.
An approximate novel method for the stochastic optimization and MINLP synthesis of chemical processes under uncertainty

Zorka Novak Pintarič and Zdravko Kravanja
University of Maribor, Faculty of Chemistry and Chemical Engineering, Smetanova 17, SI-2000 Maribor, Slovenia

Abstract

In this work we present a novel central basic point (CBP) method for the approximate stochastic optimization and MINLP synthesis of chemical processes under uncertainty. The main feature of this method is that the expected value of the objective function is evaluated by solving a nonlinear program at one central basic point with lower bounds on the design variables, while feasibility is ensured by the simultaneous solution of the NLP at critical vertices. The central basic point and the lower bounds are obtained through a set-up procedure which relies on a one-dimensional calculation of the objective function's conditional expectations for each uncertain parameter. On the basis of this method, a two-level MINLP strategy for the synthesis of flexible chemical processes is proposed.
Keywords: flexibility, uncertainty, conditional expected value, approximate method
1. INTRODUCTION

The main problem in stochastic optimization is to achieve an accurate estimation of the expected objective function, which requires integration over a multi-dimensional space of uncertainty. Different integration schemes have been presented in the literature, e.g. the Gaussian quadrature formula and Monte Carlo simulation [1]. Recently, a specialized cubature technique [2] and a hybrid parametric/stochastic approach [3] were developed. The purpose of this work is to reduce the problem's dimensionality in order to evaluate the expected objective function efficiently and accurately for large-scale models with a significant number of uncertain parameters, for instance 10 to 30.

2. BACKGROUND

In our previous work [4] we developed a method called the Reduced Dimensional Stochastic (RDS) method, for stochastic optimization with reduced dimensionality. Fig. 1 briefly presents the main steps of the RDS strategy. In the RDS method the expectancy of the objective function is approximated by a linear combination of objective functions evaluated at a set of basic vertices, while considering the critical vertices as feasibility constraints. Critical vertices are defined as those which require the largest values of the design variables in the sequential scanning of extreme combinations of uncertain parameters. This method was incorporated into a two-level strategy for MINLP synthesis under uncertainty [5] which can handle nontrivial medium-scale optimization problems with a considerable number of uncertain parameters within a reasonable period of time. The selection of basic vertices, however, is not unique and many suboptimal solutions can be generated.
[Fig. 1 residue: the RDS two-level strategy starts from a superstructure; a set-up procedure identifies the basic and critical vertices and determines the weights; the resulting NLP problem is solved by iterating between a design stage (selection of design variables by a direct search method) and an operating stage, with a feasibility test and an MILP master problem proposing new structures.]
Selectionof operatingvariables. Optimizationat basic and critical vertices. Estimationof the expected value EC.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .m lai.O tp]. . . . . . . . . d. .e. .s.i.g. .n. . ~. . ,. _. .~. .~. . . . . . . . . . . . . . . . . . . . . . . IFeasibility test. | ~ ..~Yes [ MILPMaster Problem] Fig. 1. Two-level strategy for RDS synthesis, tSJ The aim of this work is to improve and to simplify the set-up procedure so that more accurate approximations of the expected value can be obtained and more uncertain parameters can be handled than by the RDS method. The main idea is to perform the optimization at one central basic point in which the value of the objective function would be close to the expected value. 3. OUTLINE OF THE CENTRAL BASIC POINT (CBP) M E T H O D In the novel CBP method, the expected value of the objective function is evaluated at a point close to the nominal point. This feature eliminates the drawback of the RDS method of estimating the expected objective function at the extreme basic points. The uncertain parameters can be described by any distribution function and the nominal point can have a mean or mode values. The method is performed using the following steps: 3.1. Selection of critical vertices
Optimal oversizing of design variables is crucial for the flexibility and optimality of process schemes. We assume that in the convex problems the largest values of design variables appear at the "worst" combinations of uncertain parameters, i.e. at some extreme points (vertices). In order to determine a vector of critical vertices, the optimization of the original model (NLP 1) is performed sequentially at 2 Up extreme points, where Np is a number of uncertain parameters: min C(xi,di,O i ) s.t.
f(xi,di,Oi)=O g(xi,di,Oi) <_0 d i >_A ( X i , O i )
x i , d i ~R, 0 i
E{0 LO, 0 UP}
(NLP1)i i=I,'",2NP
300 C is the economic objective function, x and d represent the vectors of the operating and design variables, respectively, 0 is the vector of uncertain parameters. The vertex with maximal value of particular design variable is selected after optimization and included in the vector of critical vertices, 0 c. In this way, the maximal number of critical vertices, Nc, is equal to the number of design variables, which often represents significant reduction in problem dimensionality. The scanning procedure, however, could become inefficient when the number of uncertain parameters is really high, e.g. higher than 30. 3.2. Calculation of the conditional expectations for each uncertain parameter A multi-dimensional evaluation of the expected value can be extremely exhaustive. On the other hand, performing the evaluations over a one-dimensional space in order to obtain information about the influence of each uncertain parameter on the objective function is much easier. Conditional expectation of the objective function for a particular uncertain parameter, EC(Oi), is obtained by the simultaneous optimization of the original problem at No Gaussian integration points (usually 5) and Arc critical vertices (NLP2):
min EC(Oi) = E C(xi,di,Oi,ojn]j#:i
-" ~_, C(Xi,k,di,Oi,k,Oj[j.Tt:i). Vi,k k=l
s.t.
fk(Xi,k,di, Oi,k, ojn]j~i ) = 0
fk(Xi,k,di,OCk) -" 0
gk(xi,k,di,Oi,k,ojnff~i) < 0 k = 1,...
gk(Xi,k,4,0Ck) <- O k = 1,...,Nr
di >-fdk(Xi' 'k'Oi' k'Onjlj ~')
ag _ A,k (x~,k,0~,)
(NLP2)i i=I,..,Np j=l,..,Np
Xi,k,di ~ R The left group of constraints (for k= 1,...,NG) in model (NLP2) represent the optimization at Gaussian integration points of particular parameter (0/,k), while other uncertain parameters are held at the nominal values (0jn [j~i). The values obtained are then used for estimation of the conditional expected value by using coefficients, v, obtained from the density functions. The right group of constraints (for k=l .... ,Nc) refer to the critical vertices (Okc) determined as described in section 3.1. Note, that the design variables are optimized uniformly for the whole set of points since, in reality, the dimensions of the installed process units cannot change. 3.3. Determination of the central basic point
The approximate functions, C(0/), can be developed for each uncertain parameter by a simple curve fitting based on the realizations of the objective function at the NG quadrature points obtained by solving (NLP2). The derived regression function C(0/) represents the dependency of the objective function on a particular uncertain parameter 0i. It is then possible from this function to predict a central value of uncertain parameter (0/B) by using a simple back calculation. For this central value the objective function is equal to the corresponding conditional expectation obtained in (NLP2)i:
c(oB)=EC(Oi)
i = 1,...,Np
(1)
When the fitting functions, C(Oi), are developed and Eq. (1) is applied to all uncertain parameters, their central values, 0/B, are determined and arranged as components of a vector
301 0B, called a central basic point (CBP). At this point the NLP optimization of the problem will be finally performed in order to obtain the approximate expected value. Steps 3.2. and 3.3. can be repeated iteratively until the convergence on 0B is achieved. 3.4. Determination of lower bounds on design variables Since CBP method deals with one single basic point instead of many quadrature points, the appropriate trade-off between capital and other costs over the entire uncertain space cannot be obtained. This can be overcome if the trade-off is rebalanced by enforcing design variables to the values obtained at calculation of conditional expectations, where a valid onedimensional trade-off is obtained for each uncertain parameter. The most suitable way of enforcing design variables to these values is to set their lower bounds to maximal values obtained in all previous (NLP2)i problems. A vector of lower bounds, d L~ obtained in this way represents a 'conditional overdesign'. The latter can be interpreted as an approximation of the exact overdesign which would be obtained by simultaneous optimization at quadrature points. 3.5. Approximate stochastic optimization at the central basic point The approximate stochastic optimization of the problem is finally performed at the central basic point with the lower bounds on design variables while considering feasibility constraints at critical vertices:
min C(x,d,O B) s.t. f(x,d,O B) = 0
fk(xk,d, OCk)= 0] /
g(x,d,O B) < 0
gk(xk,d, OCk)< O~ k--1,...,N C
(NLP3)
!
a>_A(x,O B) d> d L~
a>_A,k(xk,O~) J x, xk,d eR
4. ILLUSTRATING EXAMPLE The CBP method is illustrated by the optimization of a small heat exchanger network (Fig. 2.) with a fixed structure t4, 51. Three uncertain parameters are distributed by normal distribution functions with the following means and standard deviations: TIN,el N[388, 3.33] K, T~,H2 N[583, 3.33] K, Tcw N[298, 1.66] K. The total costs obtained at the mean values of uncertain parameters and with feasibility constraints at the extreme points amount to 45 100 S/yr. The problem was solved stochastically by using CBP method through the following steps: a) Problem comprises 8 vertices. Three of them were recognized as critical since at these vertices the largest values of design variables were observed. The critical vertices are as follows: 0c={(378, 573,293) K, (378, 573,303) K, (398, 593,303) K}. b) The conditional expected values of the objective function EC(TrN,cl), EC(TIN,H2) and EC(Tcw) were obtained by solving (NLP2) for three uncertain parameters and amount to (45 210, 45 256, 45 100) $/yr, respectively. c) Regression functions were developed based on the values obtained at the Gaussian quadrature points, and the components of the central basic point were determined: 0~=(T~,cl B, Try,H2B, TcwB)=(387.45, 582.73, 298.00) K. Since Tcw does not affect the objective function, its nominal point is considered as the central value. d) Maximal values of design variables were obtained by calculating the conditional expectations in b) and stated as lower bounds: AL~ 2.433, 6.385, 2.062, 2.343) m 2.
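Steps b) and c) can be illustrated with a small back-calculation sketch. The numbers below are hypothetical placeholders (not the HEN data), and a quadratic regression for C(theta_i) is an assumption.

    # Fit C(theta_i) through the Gaussian-point objective values and
    # back-calculate the central value theta_i^B from C(theta_i^B) = EC(theta_i).
    import numpy as np

    theta_pts = np.array([381.5, 385.0, 388.0, 391.0, 394.5])   # assumed quadrature points
    C_pts = np.array([45480., 45300., 45120., 45210., 45380.])  # assumed objective values
    EC_i = 45210.0                                               # conditional expectation

    coeffs = np.polyfit(theta_pts, C_pts, 2)                # regression function C(theta_i)
    roots = np.roots(coeffs - np.array([0.0, 0.0, EC_i]))   # solve C(theta) = EC_i
    theta_B = min(roots, key=lambda r: abs(r - theta_pts.mean())).real
    print("central value theta_i^B ~", round(theta_B, 2))

Repeating this for every uncertain parameter assembles the vector theta^B, the central basic point at which the final NLP is solved.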
[Fig. 2 residue: HEN flowsheet showing stream heat-capacity flowrates CF of 1.0-3.0 kW/K, stream temperatures of 388 K, 350 K, 323 K and 313 K, cooling water at 298 K and steam heating.]
Cost of exchangers and coolers ($/yr): 1846·A^0.65 (A in m2); cost of heaters ($/yr): 2350·A^0.65; cost of CW: 20 $/(kW·yr); cost of steam: 230 $/(kW·yr); U1 = U2 = U3 = U4 = 0.7 kW/(m2·K); U5 = 1 kW/(m2·K)
Fig. 2. Heat exchanger network of illustrating example. e) Final optimization of the problem was performed by solving (NLP3) at the CBP, 0B, (obtained in c) with the lower bounds on design variables (obtained in d). The approximate expected costs amount to 45 331 S/yr. In the second iteration CBP was determined to be 0B= (387.47, 582.74, 298.00) K and lower bounds AL~ 2.421, 6.393, 2.065, 2.343) m 2 which indicates pretty fast convergence of the method. The expected costs obtained in the second iteration are 45 326 $/yr and vector of optimal design variables Aopt=(15.372, 2.421, 6.393, 2.065, 2.343) m 2. The example was also solved at 125 Gaussian quadrature points and critical vertices simultaneously, yielding the most reliable stochastic result of 45 297 $/yr and the vector of design variables Aopt=(15.333, 2.376, 6.319, 2.043, 2.335) m 2. This result indicates that the result of CBP method is good approximation of the stochastic result. Result of CBP method is a significant improvement of result of RDS method which was 45 970 $/yr with optimal design variables Aopt=(14.393, 3.132, 6.164, 1.920, 2.383) m 2. Several HENs with different uncertain parameters and different means and standard deviations have been also solved by the proposed CBP method to test its accuracy. Examples were solved also with rigorous Gaussian integration and at the nominal point with feasibility constraints. Fair approximations were obtained by the proposed CBP method, furthermore, the CBP results of all problems are better than results obtained at the nominal point (Table 1). 5. TWO-LEVEL MINLP STRATEGY A two-level MINLP strategy for the synthesis of flexible chemical processes is proposed based on the CBP method: 1. level: MINLP synthesis of optimal flexible structure is perfomed simultaneously at the union of the nominal point and the critical vertices. It is assumed that the central basic point will be close to the nominal point, therefore, the first level will generate a good starting structure for the second level and the comprehensive set-up procedure can be avoided for those structures far from the optimum. 2. level: The second-level MINLP starts with the optimal structure of the first-level MINLP. The CBP method is applied to each NLP to obtain approximate stochastic solutions. The master problem then predicts a new structure with improved stochastic solution. Table 1 Expected cost in HEN Problem Nominal point Gaussianint. CBPmethod
  Problem          1        2        3        4        5        6        7        8
  Nominal point    45 381   45 680   44 989   44 875   45 057   47 622   79 390   79 745
  Gaussian int.    45 926   46 480   45 399   45 274   45 700   48 433   80 727   81 102
  CBP method       46 067   46 733   45 491   45 469   45 825   48 402   80 787   81 304
  (expected cost in $/yr, obtained with Gaussian integration and the CBP method for a series of HENs)
303 6. SYNTHESIS OF HEAT INTEGRATED DISTILLATION COLUMNS A synthesis of heat integrated distillation columns for the separation of a four-component mixture was considered in this example. The problem comprises 30 uncertain parameters, however, some of them were combined and described with a single parameter. Finally, 12 uncertain parameters were defined: feed flow-rate, feed temperature, temperature of cooling water, the compositions of the components A, B and C in the feed stream, cost of steam, cost of cooling water, heat and fouling coefficients, stream individual cost coefficients, the cost of the column and the cost of the products. More data are given elsewhere [a' 51. On the first-level MINLP, the synthesis of optimal flexible structure was performed at the nominal values of uncertain parameters and at critical vertices, which were determined for each structure. Optimal solution obtained at the first level has a profit of 10.222 MS/yr. The second-level MINLP starts with the optimal structure of the first level. The optimal solution was obtained in the second main iteration (Fig. 3.) where the expected profit determined at CBP amounts to 10.043 MS/yr. Feasibility of optimal structure and its design variables was tested by Monte Carlo method over 5500 randomly selected points which assure 95 % confidence limits for the result within an error + 0.05 MS/yr. The expected profit obtained amounts to 10.065 M$/yr which confirms good quality of CBP result. It is interesting to note that optimal structure obtained by RDS method was slightly different with the expected profit of 9.976 MS/yr. 7. CONCLUSIONS The novel central basic point (CBP) method is presented, for estimation of the expected value by stochastic optimization under uncertainty. This method approximates the expected value of the objective function using NLP optimization at one single point, called the central basic point. A two-level MINLP strategy is proposed based on the CBP method. The proposed method provides an efficient way for the stochastic optimization and MINLP synthesis of problems with a significant number of uncertain parameters. Although the exact solution can't be guaranteed, favourable solutions can be obtained with moderate computational effort. Studies of different variants of the proposed CBP method are under way in order to obtain more reliable results. REFERENCES [1] [2] [3] [4] [5]
[1] J. Acevedo and E.N. Pistikopoulos, Comput. Chem. Eng., 22 (1998) 647.
[2] F.P. Bernardo, E.N. Pistikopoulos and P.M. Saraiva, Comput. Chem. Eng., 23 (1999) S459.
[3] T.S. Hené, V. Dua and E.N. Pistikopoulos, Ind. Eng. Chem. Res., 41 (2002) 67.
[4] Z. Novak Pintarič and Z. Kravanja, Ind. Eng. Chem. Res., 38 (1999) 2680.
[5] Z. Novak Pintarič and Z. Kravanja, Comput. Chem. Eng., 24 (2000) 195.
D
St
/(
~.~st ' ~' ~D
Fig. 3. Optimal flexible structure obtained in the second-level MINLP.
Short-term Scheduling of Refinery Operations from Unloading Crudes to Distillation

P. Chandra Prakash Reddy, I.A. Karimi, Rajagopalan Srinivasan
Department of Chemical & Environmental Engineering, National University of Singapore, 4 Engineering Drive 4, Singapore 117576 Abstract: Scheduling of crude oil unloading and processing is a complex nonlinear problem, when tanks hold crude mixes. We present an improved discrete-time formulation and a novel solution approach for a multi-CDU refinery receiving crude from multi-parcel VLCCs through a high-volume, Single Buoy Mooring (SBM) pipeline. Mimicking a continuous-time formulation, our model allows transfers to start inside of some time slots. Furthermore, it handles most real-life operational features and improves on other reported models. Our solution algorithm corrects for the concentration discrepancy arising due to crude mixing in tanks by solving a series of successively smaller MILPs without solving any NLP or MINLP. We illustrate our approach using a refinery with 3 CDUs, 8 crudes, 8 tanks, 2 crude categories, and one 3-parcel VLCC arriving in an 80 h horizon. Keywords: refinery scheduling; crude unloading, crude inventory management
1. INTRODUCTION

Short-term scheduling of crude oil operations involves unloading crude oil to storage tanks from arriving ships and feeding crude to distillation units in an optimal manner. The problem is complex because it involves discrete decisions and nonlinear blending relationships. A refinery scheduler's task has become quite complex in recent years. He/she faces an increasing number of time-critical issues such as fluctuating demands, tighter specifications, constantly changing plant capabilities, lower inventory levels, etc. Most schedulers rely on their experience rather than an optimization tool. However, due to its complexity, tremendous opportunity for performance improvement exists in this process. Quantifiable economic benefits of better scheduling are improved options, increased utilization and throughput, intelligent use of less expensive crude stocks, capture of quality barrels, reduction of yield and quality giveaways, improved control and predictability of downstream production, reduced demurrage costs, etc. Crude scheduling has received some attention [1-4] in recent years. Discrete-time formulations have been the main approach. However, an effective, general approach for handling the nonlinear blending relationships is still missing in the literature. As rightly pointed out by Kelly et al. [4], since crude transfers from vessels to tanks and from tanks to CDUs are disjoint, it is difficult to get a simultaneous solution to all issues using a single MILP, unless the problem involves no mixing of crudes in tanks or uses single crudes at a time. In real life, such practices are rare. Pinto et al. [1] suggested that although continuous-time models reduce the combinatorial complexity substantially, discrete-time models are still attractive, as they easily handle resource constraints and provide tighter formulations.
305 However, more importantly, refinery operators prefer to execute major decisions and tasks at the starts of their 8 h shifts. Thus, a discrete-time model with major events happening at 8 h slots is quite reasonable for this problem. In this work, we develop a novel mixed-integer linear programming (MILP) model with following unique features. 1) Despite being a discrete-time formulation, our model allows multiple events in one period, thus it resembles a continuous-time model and utilizes the period durations effectively. 2) It allows a single high-volume SBM or SPM (Single Buoy Mooting or Single Point Mooring) pipeline for transferring crudes from multi-parcel, very large crude carriers (VLCCs) to tanks. 3) It allows multiple tanks to feed a CDU and a tank to feed multiple CDUs simultaneously. 4) Each tank carries a mix of crudes and requires settling time for brine removal Finally and most importantly, we present a novel iterative approach for handling the nonlinear blending constraints without solving a single NLP or MINLP. 2. PROBLEM STATEMENT Figure 1 shows the refinery configuration. Given (1) VLCCs and their arrival times (2) crude types and volumes of VLCC parcels (3) holdup of the SBM pipe line and its initial crude (4) CDUs and their allowed processing rates (5) storage tanks, their capacities, initial inventory levels and compositions in terms of types of crudes (6) modes of crude processing and crude segregation in storage and processing (7) inventory costs, sea waiting costs, and crude changeover costs (8) limits on key component concentrations during storage and processing (9) maximum and minimum flow rates from the docking station to tanks and from tanks to CDUs and (10) minimum and maximum throughput rates of crudes at various times during the scheduling horizon; determine (1) a detailed unloading schedule for each VLCC (2) inventory profiles of storage tanks (3) detailed crude charge profiles for CDUs and (4) maximum gross profit. We assume the following regarding the refinery operation: 1) One SBM pipeline dictates that only one parcel can unload at any instance. 2) Tanks under-receive crude, so a VLCC can unload to only one tank at any instance. 3) Crudes are perfectly mixed in each tank and time to changeover tanks to processing units is negligible. 4) The SBM pipeline contains one crude at any time and crude flows in plug flow. This is valid, as parcel volumes in a VLCC are much higher than the SBM pipe holdup. 5) The sequences in which the VLCCs unload their parcels are known a priori. 6) For simplicity, only one key component decides the quality of a crude feed to CDU. 3. M A T H E M A T I C A L F O R M U L A T I O N Let NT identical periods (t - 1-NT) of 8 h comprise the scheduling horizon. As the first step, we convert all arriving multi-parcel VLCCs into single-crude parcels. For each VLCC, the SBM pipeline with crude from the last parcel of the previous VLCC becomes the first parcel. Repeating the same for all VLCCs, we create an ordered list (order of unloading) of NP single-crude parcels (p = 1-NP) each with an arrival time. Each parcel connects to the SBM line for unloading and disconnects after unloading. To model this process, we define three binary variables: Sept = 1, if parcel p is connected to the SBM line during period t; SEpt - - 1, if p first connects to the SBM line at the start of t; and XZpt - - 1 , ifp disconnects from the SBM at the end of t. 
The following constraints relate these variables and define the start and end times for parcel connection:

$$XP_{pt} = XP_{p(t-1)} + XF_{pt} - XL_{p(t-1)}, \qquad (p,t) \in PT = \{(p,t)\,|\,\text{parcel } p \text{ may be connected in } t\} \qquad (1)$$
306
~_, XF,, - 1 ~
~ XLp, : 1
t
(p, t) 9 PT
(2a,b)
t
ZZp -- Z tXZp t (P, 0 e r (3a,b) t Eqs. 1 and 2a,b ensure that SEpt and .XZpt a r e binary, when XPpt a r e so. To fully utilize a ZFp "" Z ( t - 1)SEpt t
8[.
period, we allow two parcels to connect and disconnect during a period t.
~-' XPp, < 2 & rFq,+, ) > VLp - 1 (4,5) p If ETAp is the arrival time of parcelp, then TFp > ETAp. To effect crude segregation, define PI = {(p, i) [ tank i can store crude in parcel p}. To complete the parcel-tank connection, define XT/t = 1, if tank i is connected to the SBM line during period t and Xpit = XPp~it. To treat Xpit as continuous, we use: Spit >_Sept + X T i t - 1 (p, t) 9 PT, (p, i) 9 PI (6) ~.,Xp,, < 2XPp, & ~ Xp, t < 2XT~t (p, t) 9 PT, (p, i) 9 PI (7a,b) i
p
and allow at most two parcel-tank connections to the SBM line in one period:
~-'X~i,<2
& ~"Xp,,<2
i
p
(p,t) 9149
(8,9)
i
For tank-CDU connections, we define Y~t = 1, if tank i feeds CDU u during t, and allow at most two tanks to feed a CDU and vice versa, therefore, ~--'Y~, < 2 & ]~--'Yj.,_<2 (10a,b) u
i
To prevent CDU-feeding during crude receipt and subsequent settling time (1 period), we use, 2XTit-F riut q- Y/u(t+l) ~ 2 (11) Having modeled the various connections, we regulate parcel-tank transfer amounts by:
FPr~Xpit
<~FPTpit <- FPT~Xp,t
&
9 FPT~ < 1 & ~i.t FPTpit : Pap
(12a,b,c)
where, FPTp~t is the amount of crude transferred from parcel p to tank i during period t and PSp is the size of parcel p. For segregating tanks and CDUs, define 1U= {(i, u) [ tank i can feed CDU u} and 1C = {(i, c) [ tank i may hold crude c}. For controlling crude delivery amounts and composition, we use: FTUiu, = ~-'FCTUiu . & FUu, = ~ FTUi, , (i, u) 9 IU (i, u) 9 IU, (i, c) 9 IC (13a,b) c Yiut FTU,~
i
< FTUiut < YiutFTU~
&
FU~ < FUu, < FU v,
FU,,xcc~ <_~., FCTUi., < FU.txcc~
(14a,b) (15)
i L U xkkuFUut <_~_~~.,FCTU~,c, xkkc <_xk~FU., i
(16)
c
where, FCTUiuct is the amount of crude c delivered by tank i to CDU u during period t, FTUiut is the total amount of crude fed to CDU u by tank i during t, FUut is the total feed to CDU u L and xc~ are the allowable lower and upper limits on the fraction of crude c in the during t, XCc. feed to CDU u, and xk~ is the fraction of key component k in crude c. To minimize the upsets caused by changeovers of tanks (thus crudes) feeding to a CDU, we define a 0-1 continuous variable CTi,t = 1, if tank i changes its feeding status to CDU u at time t. Then, COut = 1, if at least one feed tank changes for u at t, is given by, CTiut~ Y i u t - Y/u(t+l) t~ CZiut>__ Y/u(t+l)- Yiut & C O u t ~ CTiut (17a,b,c)
To prohibit a change in composition when two tanks are feeding a CDU, we use:
M[2 - Σi Yiut + Σi CTiut] + FTUiut ≥ FTUiu(t+1)   (18a)
M[2 - Σi Yiut + Σi CTiut] + FTUiu(t+1) ≥ FTUiut   (18b)
With the above, the constraints for crude balance in tanks are:
VCTict = VCTic(t-1) + Σp FPTpit - Σu FCTUiuct,   (p, t) ∈ PT, (p, i) ∈ PI, (p, c) ∈ PC, (i, u) ∈ IU   (19)
Vit = Σc VCTict  &  V^L(i) ≤ Vit ≤ V^U(i),   (i, c) ∈ IC   (20a,b)
where PC = {(p, c) | parcel p carries crude c}, VCTict is the amount of crude c in tank i at the end of t, and Vit is the volume of crude in i at t. To control crude fractions in tanks, we use:
xt^L(i,c) Vit ≤ VCTict ≤ xt^U(i,c) Vit   (21)
We can specify production requirements in several forms. However, to integrate the refinery supply chain, we specify demands for products rather than crude itself. Thus, if PDj denotes the total demand for product j during the scheduling horizon, then
Σi Σu Σc Σt FCTUiuct y(j,c,u) ≤ PDj   (22)
where y(j,c,u) is the fractional yield of product j from crude c processed in CDU u. Finally, we account for demurrage costs, safety stock penalty, and changeover costs, and compute the gross profit for the scheduling objective using
Gross profit = Σi Σu Σc Σt FCTUiuct CPcu - Σv DCv - COC Σu Σt COut - Σt SCt   (23)
DCv ≥ (TLp - ETAp - ETDv) SWCv,  (p, v) ∈ PV  &  SCt ≥ SSP (VS - Σi Vit)   (24a,b)
where CPcu is the gross profit per kbbl of crude c processed in CDU u, SWCv per unit time is the demurrage or sea-waiting cost for VLCC v, COC is the cost per changeover, SSP is the penalty for violating the crude safety stock per kbbl per period, VS is the desired safety stock, ETDv is the estimated departure time for vessel v as agreed in the logistics contract, PV = {(p, v) | parcel p is the last parcel in vessel v}, DCv is the demurrage cost for vessel v, and SCt is the safety stock penalty for period t.
4. SOLUTION ALGORITHM
The above formulation is approximate, because the nonlinear blending constraint FCTUiuct = fict FTUiut (where fict is the fraction of crude c in tank i at the end of period t) is absent. This results in a discrepancy between the crude composition in the tank and that in the crude delivered to the CDUs. To eliminate this, we develop a novel iterative strategy. We observe that each tank has zones of periods with constant composition. If we know fict in such periods, then FCTUiuct = fict FTUiut becomes linear and the MILP exact. In fact, this is so for a majority of periods. The known tank compositions at the start of scheduling therefore provide a starting point for our algorithm. We use them to make the MILP exact for the first few periods and use the approximation for the rest. That solution provides the compositions for the next zone of constant-composition periods, so we fix the solution for the first few periods, use the new compositions to make the MILP exact for the next few periods, and solve the MILP again. In this manner, we progressively correct compositions until we reach the end of the horizon. The MILP size and complexity reduce by at least one period at each iteration. The worst-case scenario would require the solution of NT-1 MILPs. Seeing that solving even one MILP is itself difficult, our approach is extremely attractive for getting near-optimal schedules, as it solves no MINLP or NLP.
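This iterative strategy lends itself to a simple outer loop. The sketch below is illustrative only: solve_milp and the methods on its returned solution object (constant_composition_horizon, tank_fractions) are hypothetical stand-ins for a wrapper around the MILP of Section 3, not part of the original formulation.

```python
# Illustrative sketch of the iterative composition-correction strategy of Section 4.
# solve_milp() is a hypothetical routine wrapping the MILP; it is assumed to return,
# for every tank i and period t, the crude fractions implied by the solution.

def iterative_schedule(NT, solve_milp, initial_fractions):
    known = dict(initial_fractions)   # {(tank, period): {crude: fraction}} known so far
    exact_until = 0                   # periods whose blending constraints are exact
    solution = None
    while exact_until < NT:
        # Enforce FCTU_iuct = f_ict * FTU_iut (linearly) only where f_ict is known;
        # use the approximate (composition-free) constraints for the remaining periods.
        solution = solve_milp(known_fractions=known, exact_until=exact_until)
        # The fixed part of the schedule yields the compositions for the next zone
        # of constant-composition periods, which become exact in the next pass.
        new_until = solution.constant_composition_horizon()
        known.update(solution.tank_fractions(up_to=new_until))
        if new_until <= exact_until:      # guard against stalling
            break
        exact_until = new_until
    return solution
```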
5. EXAMPLE
A refinery has 8 tanks, 3 CDUs and 1 SBM line. It processes 8 crudes of two categories. Tanks 1 and 6-8 and CDU 3 store/process crudes 1-4, while the others handle crudes 5-8. The SBM line currently holds 10 kbbl of crude 2. One VLCC with 3 parcels arrives at time zero. The scheduling horizon is 80 h, which we divide into 10 periods. Demurrage costs $25K per period and a changeover costs $10K. A safety stock of 1500 kbbl is desired and the safety stock penalty is 0.2K $/period/kbbl. One key component determines the crude quality. Table 1 lists the problem data, while Table 2 lists the optimal schedule. The schedule shows tanks 2 & 4 feeding CDU 2 in periods 7-10 to keep the key component concentration below 0.125%. It also shows tank 7 receiving crude from parcels 2 & 3 in period 2, tanks 5 & 6 receiving from parcels 4 & 3 in period 3, and tank 4 feeding CDUs 1 & 2 in periods 7-10. Note that the flow rates from these tanks are constant in subsequent periods to avoid a feed-change upset. Further, no discrepancies exist in the schedule between the crude mixes delivered from tanks and those received by the CDUs. The first MILP had 169 binary and 1532 continuous variables and 2944 constraints, and took 11226 s for an optimal solution of $1791.1K, which is an upper bound on the globally maximum profit. Our algorithm solved 4 MILPs, required 11356 s, and yielded a final profit of $1747.91K, which is within 2.4% of that upper bound. Using a 1% relative gap for the first MILP reduced the total time to 473 s with a profit of $1740.1K. Due to space constraints, we cannot include and discuss all the results for this example and for other examples, which we have solved successfully with 2-week horizons and three VLCCs.
6. CONCLUSION
A discrete-time MILP model with some continuous-time features and a novel solution algorithm were developed for the short-term scheduling of crude oil operations. In addition to including several real features such as multiple tanks feeding one CDU, one tank feeding multiple CDUs, an SBM pipeline, brine settling, etc., the proposed model uses fewer binary variables and differs from those reported in previous works. The main feature of our algorithm is that it resolves the oil quality, transfer quantity, tank allocation and oil blending issues simultaneously without solving a single NLP or MINLP. The proposed approach enables quicker, near-optimal decision making in refinery operations.
REFERENCES
[1] J. Pinto, M. Joly, L. Moro, Planning and scheduling models for refinery operations. Comput. Chem. Eng., 2000, 24, 2259-2276.
[2] N. Shah, Mathematical programming techniques for crude oil scheduling. Comput. Chem. Eng., 1996, 20 (S), S1227-S1232.
[3] H. Lee, J.M. Pinto, I.E. Grossmann, S. Park, Mixed-integer programming model for refinery short-term scheduling of crude oil unloading with inventory management. Ind. Eng. Chem. Res., 1996, 35, 1630-1641.
[4] J.D. Kelly and J.L. Mann, Crude-oil blend scheduling optimization: an application with multi-million dollar benefits, http://www.acs.honeywell.com/ichome/Doc/0/GRUFIK9GRI6KR3VCOGKU694A28/SchedOptim.pdf
Table 1: Data for the example

Parcel    Crude   Size (kbbl)
1 (SBM)     2        10
2           3       450
3           4       400
4           5       440
* (SBM)     5        10

Crude   Key component (%v)   Profit ($/bbl)
1           0.0020              1.50
2           0.0025              1.70
3           0.0015              1.50
4           0.0060              1.60
5           0.0120              1.45
6           0.0130              1.60
7           0.0090              1.55
8           0.0150              1.60

Tank   Capacity (kbbl)   Heel (kbbl)   Initial volume of crudes* (kbbl): 1 or 5 / 2 or 6 / 3 or 7 / 4 or 8   Total (kbbl)
1          570               60           50 / 100 / 100 / 100                                                  350
2          570               60          100 / 100 / 100 / 100                                                  400
3          570               60          100 / 100 /  50 / 100                                                  350
4          980              110          200 / 250 / 200 / 300                                                  950
5          980              110          100 / 100 /  50 /  50                                                  300
6          570               60           20 /  20 /  20 /  20                                                   80
7          570               60           20 /  20 /  20 /  20                                                   80
8          570               60          100 / 100 / 100 / 150                                                  450
* Tanks 1 and 6-8 store crudes 1-4; tanks 2-5 store crudes 5-8.

CDU   Crudes   Key component limits (%v)   Capacity (kbbl/period)   Demand (kbbl)
1      5-8          0.001-0.0130                  20-45                  375
2      5-8          0.001-0.125                   20-45                  375
3      1-4          0.001-0.0035                  20-45                  375

Table 2: kbbl of crudes received from parcels (+) or delivered to CDUs (-) in various periods (source parcel or destination CDU in parentheses)

Tank |    1       2        3        4       5       6       7       8       9      10
  2  | -20(2)  -20(2)  -45(2)  -20(2)  -45(2)  -45(2)  -20(2)  -20(2)  -20(2)  -20(2)
  4  | -20(1)  -20(1)  -20(1)  -45(1)  -45(1)  -45(1)  -45(1)  -45(1)  -45(1)  -45(1)
  4  |                                                 -25(2)  -25(2)  -25(2)  -25(2)
  5  |                 +40(4) +400(4)
  6  | +10(1)          +60(3)
  6  | +390(2)
  7  |         +60(2)
  7  |        +340(3)
  8  | -20(3)  -20(3)  -20(3)  -45(3)  -45(3)  -45(3)  -45(3)  -45(3)  -45(3)  -45(3)
Fig. 1 Schematic of the refinery configuration: multi-parcel VLCCs unload through the SBM pipeline to the storage tanks, which feed the CDUs.
Real Options Based Approaches to Decision Making Under Uncertainty
Michael J. Rogers, Anshuman Gupta and Costas D. Maranas
Department of Chemical Engineering, The Pennsylvania State University, University Park, PA 16802, USA
Abstract
Most of the research in the process systems engineering literature has focused within an enterprise's boundaries without recognizing the existence of financial markets to mitigate the risks associated with planning decisions made in the face of uncertainty. This paper discusses the incorporation of a market-based technique known as real options valuation (ROV) to models of pharmaceutical portfolio management and pollution abatement planning. The key managerial insights of this approach, that external financial market information can be used for valuing the flexibility of internal planning decisions and that both financial as well as real assets can be used to achieve desired objectives, are presented through planning case studies that demonstrate the capabilities of the models.
Keywords: real options, risk management, pharmaceutical R&D planning, pollution abatement
1 INTRODUCTION
Recent research efforts in options pricing have indicated that a company may adopt a more aggressive risk management strategy by looking to financial markets in an attempt to restructure uncertainty levels. This methodology, known as real options valuation (ROV), is an extension of the options-pricing principles developed by Black and Scholes for financial options as applied to real assets [1]. As applications of this approach, this paper discusses (a) pharmaceutical pipeline management and (b) pollution abatement planning. In the first part of the paper, we expand on our prior work to include additional managerial choices into the OptFolio model of pharmaceutical R&D portfolio management, which modeled new product development as a series of continuation/abandonment options. The additional options included are (i) delaying Phase I clinical testing and (ii) conducting indications testing during the FDA approval phase to create further growth opportunities. In the second part of this work, we address the problem of managing the pollution abatement activities of a company using market-based mechanisms. The proposed EnviroFolio [4] model quantifies the benefits of taking an integrated, portfolio management approach to minimizing total pollution abatement costs. A global compliance portfolio view is taken by recognizing that desired environmental objectives can be met either through the adoption of cleaner technology alternatives or by investing in market-traded financial instruments such as emissions permits and emissions options.
2 PHARMACEUTICAL PIPELINE MANAGEMENT
Papageorgiou et al. [5] applied stochastic optimization to the problem of pharmaceutical planning and capacity management. Blau et al. [6] developed a probabilistic simulation model of a pharmaceutical product development pipeline to prioritize candidate drugs based on their reward/risk ratios. Maravelias and Grossmann [7] integrated the scheduling of tests and the
subsequent production planning decisions through a MILP model. In all these works, the traditional net present value (NPV) metric was used as the financial basis for decision-making, which ignores the flexibility to gather more information about a project's potential and change the course of action in response to external market conditions. The application of a real options approach to R&D management offers a major improvement over the NPV technique, because it recognizes that external market information and managerial flexibility can add significant insight to project selection and resource allocation decisions. A pharmaceutical company needs to choose which candidate drugs to push through clinical testing and how their development should proceed in the face of both technological and market/demand uncertainties. If an initial investment in pharmaceutical research is successful, the company has the option of proceeding with each of the three stages of clinical testing followed by an application to gain FDA approval. The flexibility offered in this staged investment decision framework can be interpreted as options by recognizing that management has the flexibility of (i) continuing with the development, (ii) abandoning the project, (iii) delaying the start of Phase I clinical testing to wait for the resolution of uncertainty, and (iv) conducting additional tests during the FDA approval phase to identify other therapeutic uses of the drug as information becomes available. The modified OptFolio model formulation for portfolio management is:
max ROV = Σ(i∈P) M(i,1,k1=1)
M(i,s,ks) = Σa y^a(i,s,ks) [ -I^a(s) + (θ(i,s) / (1 + rf)) Σ(k(s+1)=1..N(i,s+1)) p(ks,k(s+1)) M(i,s+1,k(s+1)) ]   (1)
Σa y^a(i,s,ks) = x(i,s,ks) ≤ 1,   i ∈ P, s ∈ S, ks = 1, ..., N(i,s)   (2)
x(i,s,ks) ≤ x(i,s-1,k(s-1)),   x(i,s+1,k(s+1)) ≤ Σks x(i,s,ks),   x(i,s,ks-1) ≤ x(i,s,ks)  ∀ ks,   i ∈ P, s ∈ S, ks = 1, ..., N(i,s)   (3, 4, 5)
Σ(i,s,a,ks) p(ks) I^a(s) y^a(i,s,ks) w(i,s,t) ≤ Bt,   ∀ t   (6)
M(i,s,ks) ≥ 0,   x(i,s,ks), y^a(i,s,ks) ∈ {0, 1}   (7, 8)
Equation (1) characterizes a stochastic dynamic program beginning from the expected payoff received during commercial launch, where M(i,s,ks) are continuous variables that represent the value of candidate product i in stage s of development following value scenario ks. The binary variables y^a(i,s,ks) control the R&D decisions that are available for these value scenarios, where each a corresponds to a managerial choice of continue, delay, or conduct indications testing. Equation (2) guarantees that only one managerial choice can be selected for each value scenario, with abandonment chosen if all of the binary variables for a given market scenario are set equal to zero. The stochastic probabilities of market uncertainty and technical uncertainty are given by p(ks,k(s+1)) and θ(i,s), respectively, and I^a(s) is the investment cost of developmental stage s for R&D choice a. Equations (3)-(5) describe drug precedence and value monotonicity constraints, while
Eq. (6) represents budgetary constraints limiting R&D investment. In the OptFolio decision model, the future value of the drug is discounted to the time when the current stage s begins using the risk-free interest rate rf, and the dynamic program described by Eq. (1) makes the value-maximizing decision subject to the appropriate resource constraints. Figure 1 illustrates the selection decisions available to a candidate drug under consideration to begin Phase I clinical testing and shows how the value of the R&D investment opportunity is impacted by the incorporation of managerial flexibility.
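The staged-option logic can be illustrated with a small backward-induction calculation. This sketch is not the OptFolio MILP: it values a single candidate with only a continue/abandon choice at each stage, and every number in it is made up for illustration.

```python
# Illustrative backward induction for a staged R&D project with a continue/abandon
# choice at each stage (a simplification of the framework above; all data are invented).

def staged_option_value(launch_payoff, stages, risk_free=0.05):
    """launch_payoff: expected commercial value if all stages succeed.
    stages: list of (investment_cost, success_probability, duration_years)."""
    value = launch_payoff
    for cost, p_success, years in reversed(stages):
        discount = (1.0 + risk_free) ** years
        expected = p_success * value / discount       # failure pays nothing
        value = max(0.0, expected - cost)             # abandon if continuation is negative
    return value

# Example: three clinical phases plus FDA review feeding a $500 M launch value.
stages = [(15, 0.7, 1), (40, 0.5, 2), (90, 0.6, 3), (20, 0.9, 1)]
print(staged_option_value(500.0, stages))
```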
Figure 1 - (a) Schematic of product selection decisions under uncertainty; (b) comparison of the NPV to the ROV of a project when managerial options are included. In this example [2], management has the option to continue (C) development at the completion of each clinical milestone, delay (D) the start of Phase I clinical testing for one year, conduct indications testing (I) following Phase III in the hopes of expanding the market potential of the compound, and abandon (A) the project in the event of poor market conditions. Without this flexibility, the NPV of the project is -$3.4 M, which suggests that the project should be rejected. However, the value of the project when all of the options are considered rises to $20.3 M. Furthermore, the value of the abandonment option is $9.4 M, the value of the deferral option is $7.6 M, and the value of the growth option is $6.7 M.
313 are setup so that facilities are issued tradable permits (allowances) denominated in amounts of a specific pollutant that are valid for a certain period of time. These permits are transferable, so if a facility can generate excess permits by reducing emissions below its allocated levels, then it can sell the extra permits to other facilities. Emission reports are submitted to the regulatory authority at regular intervals, at which time, sufficient permits must be owned to cover emissions during the compliance period. Failure to submit as many permits as emissions results in substantial non-compliance penalties. In addition to emission permits, a firm may also invest in derivative instruments such as emission options. These are contracts that give the holder the right, but not the obligation, to purchase a certain number of emission permits at a pre-specified price for an upfront premium payment. The key feature of an emission option is the flexibility that it offers. Since it does not imply any obligation to buy the permit, the holder of the option can acquire a permit on an "if-needed" basis. This flexibility, however, comes at the price of the premium charged for the option. So far, the process systems community has focused primarily on the design and operation of appropriate pollution abatement technologies N and has overlooked the existence of external emission markets. In the light of the existence of such markets, an integrated environmental compliance portfolio management model, termed EnviroFolio, is described next. rnin P ~176+ Z qj C~J + Z c/xri + Z co'ho ci,,arYiO,h j
i
i,k t
- Z co,,o co;: p,= c~,, ,~ + Y co;, co;: ,., < o ; - Z co~ co; p ,~c ,,,~ , + Z co,~o,;. ,c <;,. J ,kt ,k2
kt ,~2
Subject to Y,. = 1 ; Yi e {0,1}
J,kl ,k2
kt ,~2
(9)
i
^o 0_
(10)
Ek, -- Z
(11)
fliE.Okt i
0 < Ci,,k: < C~, ; C,,,, = C5,,, =+ C~,,h
(12)
0 < N,e,=
(13)
J
k,k~ = max 0, Ek, - N O- N k,,~ p + Nk, k~ - Z C~e,k~
(14)
J
In the above formulation the various indices are as follows: i (technologies), j (options), k1 (demand scenarios) and k2 (market scenarios). Three stages are implied by the EnviroFolio model. In the first stage, the initial compliance portfolio is set up by determining which technology to select (Yi) and how many permits/options to purchase (N^0 / C^0(j)). Only a single technology can be selected (Eq. 9) and the numbers of permits/options are subject to availability bounds (Eq. 10). The first three terms in the objective function account for the resulting total permit and option investment and the fixed capital investment in technology. In the second stage, a particular demand scenario (θ(k1)) is realized, based on which the emission level (E(k1)) is observed (Eq. 11). Once emission uncertainty is resolved and prior to the filing of the compliance report to the regulatory authority, the firm rebalances its compliance portfolio. Based on the price realized for the permit (p(k2)) and the quantities of permits available for buying (N̂^p(k1,k2)) and selling (N̂^s(k1,k2)),
decisions such as how many permits to purchase (N^p(k1,k2)) or sell (N^s(k1,k2)), which and how many options to exercise (C(j,k1,k2)), and how many of the exercised options are used for compliance (C^c(j,k1,k2)) and speculation (C^sp(j,k1,k2)) are made (Eqs. 12-13). The total option exercise cost, which depends on the strike prices (Kj) of the exercised options, is given by the fifth term in the objective function, while the net permit trading charges are reflected in the sixth term. The seventh term accounts for the financial gains realized by speculative trading. The excess emissions (E^xs(k1,k2)), if any, are determined through Eq. 14 and the resulting non-compliance penalty is taken into account through the last term in the objective function. Solution of the
EnviroFolio model thus establishes the optimal technology-permits-options mix in the firm's compliance portfolio. One of the key features of the EnviroFolio model is that it can be used to determine the potential environmental liability of a company by estimating the probability that excess emissions will be greater than a specified amount. For instance, suppose we need to determine the probability that the excess emissions will be greater than some specified level Ê^xs. This can be evaluated as
Pr(E^xs ≥ Ê^xs) = Σ{(k1,k2): E^xs*(k1,k2) ≥ Ê^xs} ω(k1) ω(k2)   (15)
where E^xs*(k1,k2) is the optimal excess emission level as determined by the solution of the EnviroFolio model.
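Eq. (15) is a direct scenario enumeration and is straightforward to evaluate once the second-stage solution is known. The short sketch below illustrates the calculation; the scenario probabilities and excess-emission values are placeholders, not data from the case study.

```python
import numpy as np

# Sketch of Eq. (15): probability that optimal excess emissions exceed a given level,
# accumulated over independent demand (k1) and market (k2) scenarios.
omega_k1 = np.array([0.3, 0.5, 0.2])          # demand-scenario probabilities (illustrative)
omega_k2 = np.array([0.4, 0.6])               # market-scenario probabilities (illustrative)
E_xs = np.array([[0.0, 2.0],                  # E_xs[k1, k2] from an EnviroFolio solution
                 [5.0, 0.0],
                 [12.0, 8.0]])

def prob_excess_above(level):
    joint = np.outer(omega_k1, omega_k2)      # omega_k1 * omega_k2 for each scenario pair
    return float(joint[E_xs >= level].sum())

print(prob_excess_above(10.0))
```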
In addition to providing a means for assessing environmental risk as described above, the EnviroFolio model can also be used for actively managing this risk. To that end, the model can be augmented with a probabilistic, chance constraint of the form Pr(E^xs ≥ Ê^xs) ≤ α, which is enforced through the binary variables Z(k1,k2):
Ê^xs Z(k1,k2) - M (1 - Z(k1,k2)) ≤ E^xs(k1,k2) ≤ Ê^xs (1 - Z(k1,k2)) + M Z(k1,k2)   (16)
Σ(k1,k2) ω(k1) ω(k2) Z(k1,k2) ≤ α;   Z(k1,k2) ∈ {0, 1}   (17)
A risk-reward frontier can be generated by including Eqs. 16-17 within the EnviroFolio model and parametrically varying the maximum probability level α. Thus, the company can design its compliance portfolio to account for the trade-off between total compliance cost and probability of regulatory violation. Next, the EnviroFolio model is applied to a representative case study where four alternative technologies are under consideration. These technologies range from the low-cost, high-emission to the high-cost, low-emission alternative. A detailed description of the data used for the case study can be found on our website [8]. Solution of the EnviroFolio model using the CPLEX solver accessed via GAMS results in a total expected cost of 13,423 with the optimal initial compliance portfolio consisting of technology 2, 50 emission permits, 10 contracts each of options 1 and 2, and 0.87 contracts of option 5. The resulting excess emission distribution for this base case setting, which is shown in Figure 2(a), can be used to assess the environmental liability faced by the firm. For instance, the probability of fully meeting compliance requirements is forecast to be as low as 67.5%. Figure 2(b) shows the trade-off between total expected compliance cost and probability of regulatory default, where the latter is defined as the probability of having excess emissions greater than 10 units. As expected, a lower probability of regulatory default is achieved only at the expense of additional cost. As shown in Figure 2(b), the probability can be reduced from the base case setting of 24% to as low as 6% for a 2% increase in total cost. This significant reduction in
risk for a relatively modest increase in cost can be attributed primarily to the inclusion of additional emission options in the compliance portfolio, highlighting the importance of the flexibility offered by these contracts for risk management purposes. Subsequent lowering of the probability below 6% cannot be achieved through financial instruments alone, as it requires adoption of a cleaner technology alternative. In particular, technology 2 is replaced with technology 3, resulting in a sharp increase in total expected cost (Figure 2(b)) due to the higher fixed cost of technology 3. The probability of environmental default can be reduced to approximately 4% with technology 3, at which point it has to be replaced with technology 4.
Figure 2 - (a) Excess emission distribution; (b) cost-compliance probability trade-off curve
4 SUMMARY
In this work, a new real options based approach to valuation under uncertainty was described in the context of two planning settings, pharmaceutical pipeline management and pollution abatement planning. These two planning instances are characterized by a different "distance from market". The distance from market refers to the degree to which an underlying source of uncertainty is tracked in financial markets. Pollution abatement planning is relatively closer to the market since there exist reasonably efficient and liquid emission markets. Pharmaceutical pipeline management, on the other hand, is farther from the market given the limited number of market-traded securities through which the market value of a novel drug can be tracked. Consequently, in the future, the potential advantages of adopting a ROV methodology can be expected to increase substantially as more and more risks are unbundled and securitized in free market settings.
REFERENCES
[1] F. Black and M. Scholes, J. Political Economy, 81 (1973), 637.
[2] M. J. Rogers, A. Gupta and C. D. Maranas, I&ECR, 41 (2002), 6607.
[3] M. J. Rogers, A. Gupta and C. D. Maranas, FOCAPO 2003, 241.
[4] A. Gupta and C. D. Maranas, I&ECR (2003), accepted for publication.
[5] L. G. Papageorgiou, G. Rotstein and N. Shah, I&ECR, 40 (2001), 275.
[6] G. Blau, B. Mehta, S. Bose, J. Pekny, G. Sinclair, K. Keunker and P. Bunch, Comp. & Chem. Eng., 24 (2000), 659.
[7] C. Maravelias and I. E. Grossmann, I&ECR, 40 (2001), 6147.
[8] http://fenske.che.psu.edu/Faculty/CMaranas/
[9] A. Chakraborty and A. A. Linninger, I&ECR, 41 (2002), 4591.
Risk management in integrated budgeting-scheduling models for the batch industry
J. Romero (+), M. Badell (+), M. Bagajewicz (++), and L. Puigjaner (+)(#)
(+) Universidad Politécnica de Catalunya, Chemical Engineering Department, ETSEIB, Diagonal 647, 08028 Barcelona, Spain
(++) University of Oklahoma, School of Chemical Engineering and Materials Science, 100 E. Boyd St., T-335, Norman, OK 73019, USA
(#) Corresponding author.
Keywords: Batch plants, Scheduling, Budgeting, Planning
ABSTRACT This paper addresses integrated scheduling, planning and budgeting with financial risk management in the batch chemical process industries. A cash flow and budgeting model is coupled with an advanced planning and scheduling procedure using a two-stage stochastic formulation. The results of the integrated model are compared with the results of the sequential use of scheduling followed by budgeting. Finally the model is extended to manage financial risk. 1. INTRODUCTION In the last 20 years, a number of models have been developed to perform short term scheduling and longer term planning of batch plant production to maximize economic objectives (Shah, 1998). On the other hand, budget models for financial control emerged earlier than operation schedules to time payments, arrange loans and eventually to invest in marketable securities the excess of cash at a given instant which is needed later (Orgler, 1970; Srinivasan, 1986). The importance of cash management has been recognized more than half a century ago. For example, Howard and Upton (1953) stated that "The effective control of cash is one of the most important requirements of financial management. Cash is the lifeblood of business enterprise, and its steady and healthy circulation throughout the entire business operation has been shown repeatedly to be the basis of business solvency". However, this knowledge remained confined to the financial management divisions of the companies and has not percolated into the design/operations activities. Scheduling/planning on one hand and budgeting on the other have been treated as separate problems and normally implemented in a sequential way. The output from scheduling/planning (timing and amount produced) obtained maximizing some profit function, is used to determine accounts payable (raw material, labor, etc.) and the accounts receivable (proceeds from sales, etc), which in turn is an input data for the budgeting problem. Thus, with the lack of adequate enterprise computer-aided systems capable of managing optimally the working capital, the financial officers make decisions using out of date, estimated or anecdotal information (Badell and Puigjaner, 1998). The integration of both activities to use a single one-step procedure seems therefore reasonable even at first glance. For example, it is evident that a good schedule may sometimes
create financial stress because at the level of budgeting, financing sources are needed, while some other schedule may be more profitable because it does not generate a severe impact on the budgeting side. In this paper such integration of a cash flow management model with an advanced planning and scheduling (APS) procedure is performed and tested. Another issue is financial risk. Risk, in financial circles, is associated with the variability of profit. In an alternative definition, risk is the probability of not reaching a certain aspiration level of profit. Thus variability, which penalizes equally profitable and unprofitable scenarios, is substituted by the analysis of the cumulative probabilities at different aspiration levels. Some systematic procedures based mostly on mathematical programming have been developed to manage risk at the design stage (Barbaro and Bagajewicz, 2002a,b). In financial circles, the use of decision trees and ad-hoc trial and error is more common. In this article financial risk is discussed and managed.
2. INTEGRATED MODEL FOR SCHEDULING & PLANNING AND BUDGETING
The proposed framework is shown through a specific case study. This case study consists of a batch specialty chemical plant with two different batch reactors. Here, each production recipe basically consists of the reaction phase. Hence, raw materials are assumed to be transferred from stock to the reactor, where several substances react, and, at the end of the reaction phase, products are directly transferred to lorries to be transported to different customers. The plant product portfolio is assumed to be around 30 different products using up to 10 different raw substances. Production times are assumed to range from 3 to 30 hours. Product switch-over basically depends on the nature of the substances involved in the preceding and following batches. Cleaning times range from 0 up to 6 hours, and some sequences are not permitted.
2.1 Scheduling & Planning Model
The problem to solve spans a 13-week horizon. Equations 1 to 23 show schematically the scheduling & planning model. The first week is planned with known product demands and the others with known (regular) and estimated (seasonal) demands. Here, orders to be produced are scheduled considering set-up or cleaning times. Thus, the sequence of orders that satisfies customer requirements and the schedule assignment that minimises the overall required cleaning time is calculated for the first week. The weeks after the first are not exact; indeed, they probably will not be executed as calculated, but their planning is useful to determine whether coming orders will be satisfied, and no exact sequence is calculated for them. The amounts of raw materials and final products stored in every week-period are also monitored as a function of the amount stored in the preceding period, the amount bought or produced and the amount consumed or sold in that period. With this, the model is able to schedule raw material orders.
TP(fw,e) = Σo Σp TOPp nx(p,o,e) + Σo CT(o,e)   (1)
TP(fw,e) ≤ 168 (week production time)   (2)
CT(o,e) = Σp Σp' CT(p,p') x(p,o,e) x(p',o-1,e),   o > 1   (3)
CT(o=1,e) = 0 (initial required cleaning time)   (4)
Σp x(p,o,e) ≤ 1   (5)
Σo x(p,o,e) ≤ 1   (6)
Σp x(p,o,e) ≤ Σp x(p,o-1,e)   (7)
x(p,o,e) ≤ nx(p,o,e)   (8)
M x(p,o,e) ≥ nx(p,o,e)   (9)
x(p,o,e1) = 0 if product p is of type 2   (10)
x(p,o,e2) = 0 and x(p,o,e3) = 0 if product p is of type 1   (11)
TP(k,e) = Σp TOPp nw(p,k,e),   k > 1   (12)
TP(k,e) ≤ (168 - θk) (week production time),   k > 1   (13)
w(p,k,e) ≤ nw(p,k,e),   k > 1   (14)
M w(p,k,e) ≥ nw(p,k,e),   k > 1   (15)
w(p,k,e1) = 0 if product p is of type R2   (16)
w(p,k,e2) = 0 and w(p,k,e3) = 0 if product p is of type R1   (17)
nw(p,k=1,e) = Σo nx(p,o,e)   (18)
w(p,k=1,e) = Σo x(p,o,e)   (19)
satisfaction(i) + cut_off(i) = 1   (20)
P_Stock(p,k) = P_Stock(p,k-1) + Σe Be nw(p,k,e) - Σ{i: prod(i)=p, D(i)=k} q(p,i) satisfaction(i)   (21)
P_Stock(p,k) ≥ 0   (22)
R_Stock(r,k) = R_Stock(r,k-1) - Σe Σ{p | r∈Rp} qr(p) nw(p,k,e) + qb(r) rb(r,k-1)   (23)
R_Stock(r,k) ≥ 0
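The time bookkeeping behind Eqs. (1)-(4) is easy to illustrate outside the MILP. The sketch below computes the weekly busy time (batch processing plus sequence-dependent cleaning) for one reactor and one candidate sequence; all data are invented for illustration and are not the case-study values.

```python
# Bookkeeping behind Eqs. (1)-(4) for a single reactor and week: total busy time is
# batch processing time plus sequence-dependent cleaning time. Data are illustrative.

processing_time = {"A": 8.0, "B": 12.0, "C": 5.0}           # h per batch, TOP_p
cleaning_time = {("A", "B"): 2.0, ("B", "A"): 4.0,           # CT_{p',p}; missing pairs = 0
                 ("B", "C"): 6.0, ("C", "A"): 1.0}

def week_busy_time(sequence, batches):
    """sequence: ordered products; batches: number of batches per order slot."""
    total = sum(processing_time[p] * n for p, n in zip(sequence, batches))
    total += sum(cleaning_time.get((prev, nxt), 0.0)
                 for prev, nxt in zip(sequence, sequence[1:]))
    return total

seq, n = ["A", "B", "C"], [2, 1, 3]
assert week_busy_time(seq, n) <= 168.0       # Eq. (2): weekly hour limit
print(week_busy_time(seq, n))
```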
2.2 Budgeting Model
Short-term budgeting decisions can be taken every week-period. Production expenses during the week consider an initial stock of raw materials and products. An initial working capital is considered, beneath which a short-term loan must be requested if needed. The minimum net cash flow allowed in every week-period is set taking into account the variability of cash outflow. Among others, the production liabilities incurred in every week-period are due to the purchasing of raw materials and other fixed costs, while the income comes from the sale of products. A short-term financing source is represented by a constrained open line of credit. Under an agreement with the bank, loans can be obtained at the beginning of any period and are due after one year at a monthly interest rate depending on the bank agreement (e.g. 5%). This interest rate might be a function of the minimum cash. The portfolio of marketable securities held by the firm at the beginning of the first period includes several sets of securities with known face values in monetary units (m.u.) and maturity week-period k' incurred at week-period k. All marketable securities can be sold prior to maturity at a discount or loss for the firm. Introducing the following equations into the scheduling & planning model gives an integrated model for production scheduling and planning and enterprise budgeting.
WCash(k) ≥ Min_Cash   (24)
R_Liability(k-1) = Σr qb(r) rb(r,k) CostRaw(r)   (25)
Exogenous_cash(k) = Σ{i: D(i)=k} satis(i) q(p,i) SaleP(p)   (26)
Debt(k) ≤ Max_debt;   Debt(k) = Debt(k-1) + Borrow(k) - Out_Debt(k) + F Debt(k-1) / 13   (27)
MS_net_cashflow(k) = - Σ{k'=k+1..} (MSinv(k',k) - MSsale(k',k)) + Σ{k'=1..k-1} (d(k',k) MSinv(k',k) - e(k',k) MSsale(k',k))   (28)
Exogenous_cash(k) - R_Liability(k) + Borrow(k) - Out_Debt(k) + MS_net_CashFlow(k) + WCash(k-1) + others(k) = WCash(k)   (29)
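The weekly cash balance of Eq. (29), together with the minimum-cash requirement of Eq. (24), amounts to a simple rolling recursion. The sketch below illustrates that bookkeeping; the figures are invented and it ignores marketable-securities decisions other than a given net cash flow.

```python
# Week-by-week cash balance in the spirit of Eqs. (24) and (29): exogenous receipts minus
# raw-material liabilities, plus marketable-securities cash flow, with credit-line draws
# whenever cash would fall below the minimum. All figures are illustrative.

def roll_cash(initial_cash, weeks, min_cash=5000.0):
    cash, debt, plan = initial_cash, 0.0, []
    for w in weeks:
        cash += w["receipts"] - w["raw_liability"] + w["ms_net_cashflow"] - w["repayment"]
        debt -= w["repayment"]
        if cash < min_cash:                       # draw on the open line of credit
            borrow = min_cash - cash
            cash, debt = cash + borrow, debt + borrow
        plan.append({"cash": cash, "debt": debt})
    return plan

weeks = [{"receipts": 9000, "raw_liability": 7000, "ms_net_cashflow": 0, "repayment": 0},
         {"receipts": 4000, "raw_liability": 8000, "ms_net_cashflow": 500, "repayment": 0}]
print(roll_cash(6000.0, weeks))
```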
2.3. Objective function
For m = 3, 6, 9 and 12, cash is withdrawn from the system, for example in the form of a shareholder dividend emission. The objective function consists of maximising these dividends as follows:
others(m=3,6,9,12) = - share_div(l),  l = 1, 2, 3, 4;   O.F. = max Σl a(l) share_div(l)   (36)
3. RESULTS OF THE INTEGRATION OF MODELS The model was run for a plant product portfolio of 20 different orders using up to 10 different raw substances. The model is implemented in GAMS/CPLEX in a 1 GHz machine using about 190 CPU seconds. The results of solving the integrated model with the sequential application of both models (Scheduling&planning and budgeting) separately were compared. The overall cash withdrawn using the integrated model is of 203196 m.u. while the resolution of the sequenced problems gives earnings of just 185588 m.u. The schedules and products produced are different (not shown for space reasons). Figure 1 shows the profile of marketable securities and debts of the enterprise during the three first months of the plan, period prior to the first dividend emission of the year. The integrated model manages to change production planning to be able to invest more cash on marketable securities and reduce the debt. Figure 2 shows how the planning results are effectively different when considering the integrated model. 70000 > 60000
"1o
Marketable ,~=curities] (Integrated Model) /
.m
.~ 50000
i i i9 -i Debt(Integrated |/
._ L
= 40000
o
Marketable securities| (No integration) / Debt (No Integration) /
30000 20000 E 10000 L
O>
0
J
.
0
.
.
.
.
.
~__
5
10
.=
==
15
Weeks
Figure 1. Comparison of the integrated model for scheduling&planning and budgeting with the use of independent models.
320
Figure 2. Comparison of the planning results for the integrated model (right-graph) with the sequential resolution results (left-graph). kl to k13 are the 13th week-period planning and each gray-scale tone is a different product. 4. STOCHASTIC M O D E L Two essential features characterize the stochastic model: the uncertainty in the problem data and the sequence of decisions. Here, as for the planning model product demand is considered a random variable with a normal probability distribution. As for the short-term budgeting model, the delivered product payments and the 'others' costs of production aside from raw liabilities are also considered random. In the long-term budgeting model, the expected production profit and production cost are random. First stage decision variable are the ones concerning the planning & scheduling meanwhile the variables concerning the budgeting are considered second stage. 5. FINANCIAL RISK The Financial risk associated with a specific planning solution under uncertainty is defined as the probability of not meeting a certain target profit level, referred to as s (Barbaro and Bagajewicz, 2002a, b). The use of the concept of downside risk, D-Risk(x,~), in the way introduced by Eppen et al. (1989), is applied in this work. D-Risk(x,W) is used to control financial risk at different targets W. The details of the implementation are in Barbaro and Bagajewicz (2002). The financial risk curve obtained for the stochastic model is shown in Figure 2.
321
Figure 3. Risk curve when no risk minimization is used. 5. CONCLUSIONS This paper has addressed the importance of integrating scheduling and budgeting models. By means of a comparison using a case study, it has been shown that significant improvements are possible as compared to the use of scheduling models followed by budgeting models. It has also been illustrated how a stochastic model can be used to manage financial risk.
Acknowledgements Financial support from Generalitat de Catalunya (CIRIT) and European Community by VIPNET (G1RD-CT-2000-003181) project is gratefully acknowledged. The support of the ministry of Education of Spain for the sabbatical stay of Dr. Bagajewicz is also acknowledged.
References Badell, M., L. Puigjaner, "Discover a Powerful Tool for Scheduling in ERM Systems", Hydrocarbon Processing, 80, 3, 160, 2001. Badell, M., Puigjaner, L., "A New Conceptual Approach for Enterprise Resource Management Systems". FOCAPO AIChE Symposium Series No.320. (Eds. Joseph F. Pekny and Gary E. Blau), American Institute of Chemical Engineers (AIChE), New York, ISBN 08169-0776-5, V 94, pp 217- 223 (1998). Barbaro A.F., M. Bagajewicz. "Managing Financial Risk in Planning under Uncertainty". AIChE Journal. Submitted (2002). Baumol, W. J., "The Transactions Demand for Cash: An Inventory Theoretic Approach," Quarterly Journal of Economics, Vol. 66, No.4 (1952), pp. 545-556. Funk, G., "Enterprise integration: join the successful 20%", Hydrocarbon Processing, 80, 4, 2001. Miller, M. H., And Orr, R., "A Model of the Demand for Money by Firms," The Quarterly Journal of Economics, Vol. 80, No.3 (1966), pp. 413-435. Orgler, Y. E., "An Unequal-Period Model for Cash Management Decisions", Management Science, Vol. 20, No.10 (October 1970), pp. 1350-1363. Shah, N., "Single and Multisite Planning and Scheduling: Current Status and future Challenges". FOCAPO AIChE Symposium Series 94(320), 91-110 (1998). Srinivasan, V.,1986, "Deterministic Cash Flow Management", Omega, Vol. 14, No.2, pp.145-166.
322
Process Systems Engineering 2003 B. Chen and A.W. Westerberg (editors) 9 2003 Published by Elsevier Science B.V.
Modeling Cleaner Production Promotion with Systems Dynamics Methodology: a Case Study of Process Industries in China Lei SHI a, Jing HUANG b, Hanchang SHI a, Yi QIAN a a(State Key Joint Laboratory of Environment Simulation and Pollution Control, Tsinghua University, Beijing 100084,PR China) b(School of Environment and Natural Resources, Renmin University of China, Beijing, 100872, PR China) ABSTRACT By applying systems dynamics methodology, this paper primarily models two key aspects of cleaner production (CP): adoption and diffusion. In the modelling of the adoption of CP, focuses are stressed onto the differences between CP and end-of-pipes (EOP). In the modelling of the diffusion of CP methodology, Bass model is adopted to be the basic model. Based on this model, policy analysis can be carried out by adding peripheral loops to the basic loops. Though this work is just at the beginning, promising roles can be observed to find out what are main factors influencing CP promotion and in what way they influence.
Key words: Cleaner production, systems dynamics, business decision making 1. INTRODUCTION The last decade has seen remarkable progresses in the implementation and promotion of CP in process industries in China. A great deal of stories came forth, no matter what was a success or failure, in state-owned enterprises or SMEs (small and medium enterprises), promoted by governments or carried out voluntarily. Though these implementations and practices have been reviewed and disseminated extensivelyt~'21, further studies especially in a quantitative way are still needed. By introducing Systems Dynamics (SD) which is a methodology for better understanding the interrelationships and complexities, this paper aims to find out what are main factors influencing CP promotion and in what way they influence. Followed by introduction of the state-of-the-art of CP implementation and promotion in China, two key aspects of CP are modelled by using SD methodology. The first one is to model the adoption of CP and end-of-pipes (EOP); the second is to model the diffusion of CP methodology. Both aim to understand and analyse influencing factors and their interrelationships to promote and implement CP. 2. THE-STATE-OF-THE-ART OF CP PROMOTION IN CHINA As early as 1980s, some Chinese enterprises were engaged in technical innovations called 'zero-waste techniques'. These successful cases had provided experiences for China to
323 promote cleaner production later. In 1993, the Second National Working Conference on Industrial Pollution Prevention and Control highlighted cleaner production as a critical measure for harmonizing the environment with economic development, thereby attaining sustainable development. It means that the strategy for industrial pollution control should be shifted from the end of pipe control approach to whole production process management and adoption of cleaner production. Promotion of cleaner production was also documented into some important documents, such as China's Ten Strategic Policies on Environment and Development and China's Agenda 21. At the same year, China initiated its first CP demonstration project, referred to by National Environmental Protection Agency (NEPA) as the "B-4" project under the supports of the World Bank and United Nations Environmental Programs (UNEP). Since then, many efforts have been made in training and awareness raising, policy review, demonstration projects, and international cooperation on cleaner production with various provincial and governmental agencies. Some remarkable progress has been made. For example, many multilateral/bilateral cooperation projects on cleaner production have been carried out with the World Bank, Asia Development Bank, UNEP, and the governments of Canada, the United States, Norway and the Netherlands; National and more than 10 sectoral or local cleaner production centers have been established; Jiangsu, Liaoning province and Taiyuan city have systematically promoted and implemented cleaner production at provincial or municipal level. These early-stage efforts prepared the basic capacity and awareness for wide-adoption of cleaner production in China and led to many significant steps in government policies. In May 1997, SEPA issued the policy documents on promoting cleaner production in environmental protection activities. In May 1999, SETC issued the 'Notice on Promoting Cleaner Production via Demonstration Projects'. Ten cities and five industrial sectors have been identified as national demonstration sites on cleaner production. These cities are: Beijing, Shanghai, Tianjin, Chongqing, Shenyang, Taiyuan, Jinan, Kunming, Lanzhou and Fuyang. The sectors include petrochemical industry, metallurgical industry, chemical industry (nitrogen fertilizer, phosphate fertilizer, chlor-alkali and sulphuric acid), light industry (pulp and paper, fermentation and beer-brewery) and ship building. These important decisions show that China has scaled-up cleaner production strategies from the enterprise level to the regional and sector levels. In April 2000, the Taiyuan cleaner production Law, the first local cleaner production law in China, took into effect. More recently, a law "Cleaner Production Promotion Law" was issued by the National People's Congress on June 29, 2002 and came into effect as of January 1, 2003. Total 6 chapters are included: 1) general Provisions; 2)realization of cleaner production; 3) implementation of cleaner production; 4) inducement measures; 5) legal liability; and 6) supplementary articles. However, the promotion of CP in China is still far from satisfied. The implementation of cleaner production is unbalanced across the country and in different sectors. The majority of provinces and cities in China have not established a mechanism to promote cleaner production. The principles of cleaner production and sustainable development has not been effectively integrated into the policy systems of all government agencies. 
With "Cleaner
324 ProductionPromotion Law" taking into effects, China's cleaner production is stepping into
a new era where opportunities and challenges coexist. 3. MODELING AND DISCUSSION SD is a perspective and set of conceptual tools that enable us to understand the structure and dynamics of complex systems. Meanwhile, SD is also a rigorous modeling method to help us build computer simulation of complex systems and then use them to design more effective policies and organizations TM. To promote and implement CP is really a complex problem. The 10 years' efforts to implement CP in China have revealed several obstacles to further CP promotion. These obstacles exist both within and out of the enterprise. Generally, these can be categorized into the following types: 1) awareness; 2) management; 3) policy; 4) regulation and 5) technological aspects. To help to understand and analyse influencing factors and their interrelationships to promote and implement CP, two aspects of CP are modelled in this paper by using SD methodology. 3.1 Modelling the adoption of CP and EOP EOP technologies include the use of a variety of technologies and products (chemicals) to treat wastes and liquid and gaseous effluents. The major characteristic distinctions between CP and EOP have been extensively discussed and are not listed here. There exist many influencing factors for enterprises to adopt CP or EOP, such as the technological maturity, expected economic and environmental benefits, fund availability, and environmental regulations, etc. To model the adoption of CP or EOP at enterprise level is a challenging job. One simple SD model is illustrated in Fig. 1 which only considers the effect of cost, benefit and network compatibility. The resulted adoption ratio and market share are shown in Fig. 2. The model is just to illustrate the usefulness of SD methodology and too simple to reflect the real world. A more complex model is expected in our later work. 3.2 Modelling the diffusion of CP Methodology To promote the diffusion of CP methodology (CP audit here), a set of policies has been developed to promote CP in China based on the B-4 project, China-Canada CP cooperation project and other projects. An integrated policy framework has been shaped through surveys, case studies and reviews from China's current environmental and industrial policies and regulations, as well as its technological renovation policies and strategies. This CP policy framework composes of compulsory, incentives, pressure and supportive mechanisms (Fig. 3). To model the diffusion of CP, the Bass model is adopted as a base model. There are two basic feedback loops existing in Bass model. One is market saturation which has balancing
325 effect; the other is word of mouth which has reinforcing effect. Bass model is a widely-used diffusion model and has been used to model the diffusion of new product and technology. Its main advantage lies in that it can solve the startup problem of diffusion models. Fig. 4 illustrates the basic construction for modeling the diffusion of CP methodology. This model can help us to learn how the adoption rate and market share are affected by influencing factors. Say, different demonstration effectiveness can lead to different schemes of adoption rate and market share, which brings a useful tool to evaluate CP projects. Based on this model, then, subsequently, one by one, additional factors can be added and their influence on the dynamics of the process is studied. For example, the supportive policy will change the contact rate between CP adopters and potential adopters; we can learn what is the role of this policy by changing the contact rate. Applying this model to analyze city-level CP actions in Nantong city and Changzhou City in Jiangsu Province [4], we can primary understand how the elements of Jiangsu's CP program takes into effect. The results will be shown later.
.-<>
/
(~) f
Ar/ Initial_CP_Adopt~{~ ~ [ /A'hreshold_for_CP_Comp
f'~'~ C~-Ad~176176 7 ~ _ CP_Compa~_ ~_J' ~ Sensitiyity_of_CP_Attractiveness
CP_Pr~pora~ttracti~~_~
Average_CP_Cost
Total,~eness ...,,- ~ . verage_CP_Benefit Total_D~and (,~~31:=rCtei~S0 ~ i ~ ~ _ _ /~
\
~" ~
"-"~'OP!Adopte~._it~E ~ O P COmp EOP_Adoption ,.~. . . ~ Sensitivity_of_EOP_Comp Initial_EOP_Adopters N
/
I'#
CP_Adopted_Ent Total_Enterprises EOP_Adopted_Ent Figure 1: Modelling the adoption of CP and EOP
326
0.5,
0.4-
.-2
j,,2" ......... 2 ........
"2
2 ........
I
2
I
I
."
4
6
8
Time
1~--1--
2o~ "z\\
-1- EOP_Shal -2- CP_Share
1"--~-~.1~1.,~
ol
,
10
1
-1- CP_Adoptio= 2 EOP_Adopti
2........2~'"\\
400[
0.30
/
1,oo
..... 2
0.70.6-
;
0
2
,
4
? .......... ~
Tim,
6
2
8
10
Figure 2: The market share and adoption ratio of CP and EOP
Consumers ]Supportive[ Policies S ~
CPby
~__~
I Forceful Policies Government
Supply Chains
Figure 3. Integrated policy framework for CP promotion (modified from Ref [5])
. .. "Total_Enterprises J
Potenti
I I I \ / \
rs_P +~ + B Adoptio~ I:~.te AR Market / - ~Saturation/ ~
Adopter~A R Word Of ] Mouth /
ion
@ Demostration_Effectivessness
prises
<2
o,,on_ract,on ,
ContacLRate_C
Figure 4: The basic model of the diffusion of CP
327
300 200
J
100 O, 0
/
/
1
\
--1-- Adoption_From_Word_Of_Mouth
3
- 2 - Adoption_From_Demostration -3-- Adoption_Rate_AR
2
4
6
8
10
Time
Figure 5" The adoption rate of the basic model
73--"--~.-~,3
1,000.
3 ~ ~ - 3 ~ 1 " 3
"'
1
-1-- Adopters_A
500
~ 0I
o
PotentialAdopters P ~2~ --3-- TotalEnterprises
'
'
2
i
4
6
-
8
Time
Figure 6: The changes of the adopters, potential adopters in basic model 4. CONCLUDING R E M A R K S
Modeling the adoption and diffusion of CP is a challenging job. The two illustrative examples priliminarily prove the power of SD methodology. Our future work will be focused on two aspects: 1) study further on the two jobs above, the adoption of CP techniques and the diffusion of CP audit methodology, by taking some real-life examples; 2) policy study on waste management hierarchy including source reduction, waste recycling, treatment and disposal. REFERENCES
[1] J. Wang, Environmental Impact Assessment Review, 19(1999) 437 [2] Y. Qian, H.C. Shi, G.X. Sun, L. Shi, Promoting Cleaner Production, in J. Liu ed., Study on China's Sustainable Development in 21st Century, Beijing: China Agriculture Publisher, 2001 [3] J.D. Sterman, Business dynamics: Systems thinking and modeling for a complex world, Boston: Irwin/McGraw-Hill, 2000. [4] H.H. Yan, O. Leonard, H.C. Shi, Proceedings of ENRICH Conference 2002, Beijing, 2002 423 [5] T.Z. Zhang, Proceedings of Beijing International Conference on Cleaner Production, Beijing, 2001 214
Web-based Application for Multi-Objective Optimization in Process Systems
Yoshiaki SHIMIZU a, Jae-Kyu Yoo a and Yasutsugu TANAKA a
aDepartment of Production Systems Engineering, Toyohashi University of Technology, Toyohashi 441-8580, Japan Abstract Recently, multi-objective optimization (MOP) has been highly required to deal with complex and global decision environment toward agile and flexible manufacturing. To facilitate its wide application, we developed a novel method named MOON 2 (Multi-Objective optimization with value function mode led by Neural Network) as a Web-based application. By that, everyone can engage in MOP readily and easily regardless of knowledge about MOP and computer configuration of users. In this paper, we introduce MOON 2R (MOON 2 using Radial Basis Function (RBF) networks) that has more flexible modeling ability of value function. After outlining the solution procedure of MOON TM, the proposed system configuration will be explained with an illustration. Keywords multi-objective optimization, Internet, RBF network, pair comparison
1. INTRODUCTION
Multi-objective optimization (MOP) is of increasing interest for supporting agile and flexible manufacturing in a complex and global decision environment, and is expected to solve various problems in chemical engineering [1,2]. To avoid the stiffness and shortcomings of the conventional methods, we proposed a novel prior articulation method named MOON2 [3], and implemented its algorithm as a Web-based application. It was realized as a client-server architecture through the common gateway interface (CGI) so that everyone can use the system regardless of his/her own computation environment. To facilitate its wide application, in this paper we have improved the modeling procedure of the value function using RBF networks (RBFN) [4]. After outlining MOON2R, the configuration and usage of the Web-based application system will be shown illustratively.
2. SOLUTION PROCEDURE THROUGH MOON2R
The problem concerned here will be described generally as follows. (p.1)
Min f(x) = {f1(x), f2(x), ..., fN(x)}
subject to x ∈ X,
where x denotes a decision variable vector, X a feasible region, and f an objective function vector some elements of which conflict and are incommensurable with each other.
329 Generally speaking, MOP can be classified into the prior articulation method and the interactive one. However, conventional methods of MOP have both advantages and disadvantages over the other. For example, since the former derives a value function separately from the searching process, decision maker (DM) will not be worried about the tedious interactions during the searching process as will be in the later. On the other hand, though the later can articulate elaborately the attainability among the conflicting objectives, the former will pay little attention on that. Consequently, the derived solution may be far from the best compromise of DM. In contrast to it, MOON 2 and MOON TM can not only resolve these problems but also handle any kinds of problem, i.e., linear programs, non-linear programs, integer programs, and mixed-integer programs under multi-objectives by incorporating with proper optimization methods. 2.1. Identification of value function using neural networks First we need identify a value function that integrates each objective function into an overall one. For this purpose, we adopted a neural network (NN) due to its superior ability of the nonlinear modeling. Particularly, RBFN whose structure is shown in Fig.l has some advantages compared with the back propagation network, i.e., small computational load, easy dynamic adaptation against incremental operations N. Learning of the network is carried out to minimize the squared sum of the difference between training data y; (i= 1,.., p) and output from
RBFN with respect to weights w / ( j = 1,...,m)
,
i.e.,
Min
(Yi j=l
wjhj(x,)) 2 + )=l
2jwj j=!
where 2~ (i = 1,.-., m) denotes regularization pareameter. Such training data is gathered through pair comparisons regarding the relative preference of DM among the trial solutions. That is, DM is asked to reply which he/she likes, and how much it is between every pair of the trial solutions. Just like AHP (Analytic Hierarchy Process, Saaty) tSl, such responses will be taken place by using the linguistic statements, and then transformed into the score as shown in Table 1. After doing such pair comparisons over k trial solutions ~, we can obtain a pair comparison matrix whose i-j element aq represents the degree of preference o f f compared w i t h f (Refer Fig.4 appeared in the later). After all, the pair comparison matrix provides totally/~ training data for RBFN. The Table 1 Conversion table Linguistic statements
a0
Equally
1
Moderately
3
Strongly
5
Demonstrably
7
Extremely
9
Intermediate judgment 2,4,6,8 between the two adjacent Fig.1.
Basic
Structure
of RBF
t Undermildconditions,total numberof comparisonis limitedto k(k-1)/2.
330 objective values of every pair, say, f i and f / b e c o m e the 2N inputs, and an i-j element aij one output. Using some test problems, we ascertain that a few typical value functions can be modeled correctly by a reasonable number of pair comparisons as long as the number of objective function is less equal to three t~l. By viewing thus trained RBFN as a function VRB• such that: {f(x),je(x)} ~ R2N--~%~ R ~, it should be noticed that the following relation holds.
VRBF(f^i, f^k) = aik  >  VRBF(f^j, f^k) = ajk   ⇔   f^i ≻ f^j        (1)
Hence we can rank the preference of any trial solutions easily by the output from the RBFN, which is calculated by fixing one of the input vectors at an appropriate reference, say f^R.
VRBF(f(x); f^R) = a·R        (2)
Since the responses required for DM are simple and relative, his/her load in the tradeoff analysis is very small.
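As a concrete illustration of this identification step, the following sketch (hypothetical names and data, not the actual MOON2R code, and assuming Gaussian basis functions hj) fits the RBF weights by regularized least squares to pair-comparison scores and then ranks candidate objective vectors against a fixed reference f^R, as in Eq. (2).

```python
import numpy as np

def rbf_features(F_pairs, centers, width=1.0):
    """Gaussian RBF activations h_j for each 2N-dimensional input (f_i, f_k)."""
    d2 = ((F_pairs[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_value_function(F_pairs, scores, centers, lam=1e-2, width=1.0):
    """Regularized least squares for the weights w (cf. the training problem above)."""
    H = rbf_features(F_pairs, centers, width)
    return np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ scores)

def v_rbf(f, f_ref, centers, w, width=1.0):
    """Evaluate V_RBF(f; f_ref): a higher output means f is preferred over f_ref."""
    x = np.concatenate([f, f_ref])[None, :]
    return float(rbf_features(x, centers, width) @ w)

# Usage sketch: 3 trial solutions of 2 objectives, AHP-style scores a_ij (invented).
trials = np.array([[1.0, 4.0], [2.0, 2.5], [3.5, 1.0]])
a = {(0, 1): 3.0, (0, 2): 1 / 5.0, (1, 2): 1.0}
pairs, scores = [], []
for (i, j), s in a.items():
    pairs.append(np.concatenate([trials[i], trials[j]])); scores.append(s)
    pairs.append(np.concatenate([trials[j], trials[i]])); scores.append(1.0 / s)
pairs, scores = np.array(pairs), np.array(scores)
w = train_value_function(pairs, scores, centers=pairs)
ranking = sorted(range(3), key=lambda i: -v_rbf(trials[i], trials[2], pairs, w))
```

Here the reference f^R is simply taken as one of the trial solutions; in practice any fixed reference in objective space serves the ranking purpose of Eq. (2).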
2.2. Incorporation with optimization methods
Now the problem to be solved can be described as follows (p.2):

Max  VRBF(f(x), f^R)
subject to x ∈ X
Since we can evaluate any solution by VRBF under multiple objectives once x is prescribed, we can apply the optimization method most appropriate for the problem under concern, e.g., nonlinear programming, direct search methods, and even meta-heuristic methods such as genetic algorithms, simulated annealing, tabu search, etc. We can also verify that the optimal solution of (p.2) lies on the Pareto optimal solution set as long as Eq. (1) holds [6]. If we use an algorithm that requires gradients of the objective function, as nonlinear programs do, we can conveniently calculate them by the following relation.
∂VRBF(f(x), f^R)/∂x = ( ∂VRBF(f(x), f^R)/∂f(x) ) ( ∂f(x)/∂x )        (3)

We can complete the above calculation by applying numeric differentiation to the first term on the R.H.S. of Eq. (3) while deriving the analytic form for the second.
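A minimal sketch of Eq. (3), under the assumption that VRBF is available as a Python callable and that the analytic Jacobian ∂f/∂x is known; all names and the toy functions are illustrative only.

```python
import numpy as np

def grad_v_wrt_x(v_rbf, f_of_x, jac_f, x, f_ref, eps=1e-6):
    """dV/dx = (dV/df)(df/dx): numeric differentiation for the first factor,
    analytic Jacobian jac_f(x) for the second, as in Eq. (3)."""
    f = f_of_x(x)
    dV_df = np.zeros_like(f)
    for i in range(f.size):                      # forward differences in f-space
        f_pert = f.copy(); f_pert[i] += eps
        dV_df[i] = (v_rbf(f_pert, f_ref) - v_rbf(f, f_ref)) / eps
    return dV_df @ jac_f(x)                      # chain rule: (1xN) @ (Nxn) -> (1xn)

# Usage sketch with two quadratic objectives (not the beam example of Section 4)
f_of_x = lambda x: np.array([x[0]**2 + x[1]**2, (x[0] - 1)**2 + x[1]**2])
jac_f  = lambda x: np.array([[2*x[0], 2*x[1]], [2*(x[0] - 1), 2*x[1]]])
v_rbf  = lambda f, f_ref: -np.sum((f - f_ref)**2)      # stand-in value function
g = grad_v_wrt_x(v_rbf, f_of_x, jac_f, np.array([0.5, 0.5]), f_ref=np.zeros(2))
```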
3. IMPLEMENTATION AS WEB-BASED APPLICATION
Due to various reasons such as little knowledge about MOP, the computer environment, etc., it is not necessarily easy for everyone to engage in MOP. To deal with such circumstances, we implemented MOON2R on the Internet as a client-server architecture that enables us to carry out MOP readily and effectively. The core of the system is divided into a few independent modules, each of which is realized using an appropriate implementation tool. The optimizer module solves a single objective
optimization problem by incorporating the identified value function, specifying the problem in a Fortran programming format, and compiling it with a Fortran compiler. Though only sequential quadratic programming (SQP) is implemented presently, various methods are possibly available, as mentioned already (a GA was applied elsewhere). The identifier module provides the modeling process of the value function based on the neural network, where a pair comparison is easily performed just by a mouse-click operation on the Web page. Moreover, the graphic module generates various graphical illustrations for easy understanding of the results. The user interface of the MOON2R system is a set of Web pages created dynamically during the solution process. The pages, described in HTML (hypertext markup language), are viewed in the user's browser, which is a client of the server computer. The server computer is responsible for data management and computation whereas the client takes care of input and output. That is, users are required to request a certain service and to input some parameters; in turn, they receive the service through visual and/or sensible browser operation. In practice, the user interface is a program creating HTML pages and transferring information between the client and the server. The programs creating HTML pages are written in a CGI programming language, Ruby. Owing to the role of CGI, every treatment is carried out on the server side, and no particular tasks are assigned to the browser side (see Fig. 2). Consequently, users are not only free from maintenance of the system, such as updates, new issues, and reinstallation, but are also independent of their computation environment, such as operating system, configuration, performance, etc. Though there are several sites serving (single-objective) optimization libraries (e.g., http://www-neos.mcs.anl.gov/), none is known regarding MOP except for NIMBUS [7] (http://nimbus.math.jyu.fi) so far. However, since NIMBUS belongs to the interactive methods, it has the disadvantages mentioned already. On the other hand, since the articulation process of MOON2R is separated from the searching process, the DM can engage in the interaction at his/her own pace, and is not bothered by the hurried or idle responses of the interactive methods. Also it should be noted that the required responses are simple and relative, and the DM does not need any particular knowledge about the theory of MOP. Such easy usage, the small load in the tradeoff analysis, and the maintenance-free features are expected to facilitate decision making from the comprehensive point of view that is required for agile and flexible problem-solving in chemical engineering. The URL of the system is http://scmoon2.tutpse.tut.ac.jp/cgi-bin/moon2v2/. (Presently Japanese version only.)
Fig.2. Scheme of task flow through CGI
Fig. 3. Beam form design problem
4. ILLUSTRATIVE DESCRIPTION OF USAGE
As a demonstration of the Web-based MOON2R, we provide a bi-objective design problem regarding a decision on the strength of a material [8]. This demonstration is most valuable for grasping the whole idea and the solution procedure of MOON2R. We also provide another entry on the Web page for the user's original problem. Below we explain the demonstration of the example problem. Moving to the target Web page, we find the following formulation of the problem.
Min  { f1(x), f2(x) }
subject to
g1(x) = 180 − 9.78×10^6 x1 / (4.096×10^7 − x2^4) ≥ 0        (4)
g2(x) = 75.2 − x2 ≥ 0        (5)
g3(x) = x2 − 40 ≥ 0        (6)
g4(x) = x1 ≥ 0        (7)
h1(x) = x1 − 5 x2 = 0        (8)
where x1 and x2 denote the tip length of the beam and the interior diameter, respectively, as shown in Fig. 3. Eqs. (4)-(8) represent appropriate design conditions. Moreover, the objective functions f1 and f2 represent the volume of the beam and the static compliance of the beam, respectively. An input page for the problem description is then provided to input the objective functions and the constraints in a format similar to the Fortran language. After repeated processes of input and confirmation, a set of trial solutions for the pair comparisons is generated arbitrarily within the convex hull spanned by the utopia and the nadir solutions². Now, for every pair of trial solutions, the DM is required to make a pair comparison through a mouse click on a radio-button indicator. After showing the pair-comparison matrix thus obtained (see Fig. 4), and checking its inconsistency according to AHP theory, the training process of the RBFN starts. Its training results are presented both numerically and graphically. The subsequent stages proceed as follows: select an appropriate optimization method (presently only SQP is available); input the initial guess of SQP for the optimization search; click the start button. The result of the multi-objective optimization is shown graphically in comparison with the utopia and the nadir solutions (see Fig. 5). If the DM desires further articulation, an additional search may be carried out until a satisfying solution has been found. In this case, the same procedures are repeated within a narrower search space around the earlier solution to improve it.
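The inconsistency check mentioned above follows standard AHP practice; a possible sketch (with a hypothetical comparison matrix) computes Saaty's consistency index and consistency ratio from the principal eigenvalue of the pair-comparison matrix.

```python
import numpy as np

def consistency_ratio(A):
    """Saaty's CI = (lambda_max - n)/(n - 1) and CR = CI/RI for a reciprocal
    pair-comparison matrix A; CR below about 0.1 is usually deemed acceptable."""
    n = A.shape[0]
    lam_max = np.max(np.real(np.linalg.eigvals(A)))
    ci = (lam_max - n) / (n - 1)
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]  # random index
    return ci / ri if ri > 0 else 0.0

# Hypothetical 3x3 comparison matrix of trial solutions
A = np.array([[1.0, 3.0, 5.0],
              [1/3., 1.0, 3.0],
              [1/5., 1/3., 1.0]])
print(consistency_ratio(A))   # a small value means the responses are nearly consistent
```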
² For example, a utopia is composed of fi(xi*), (i = 1,...,N), where xi* is the optimal solution of min_x fi(x), whereas a nadir is composed of that of the problem "max f(x) subject to x ∈ X."
5. CONCLUSION
Having introduced a novel and general approach for multi-objective optimization named MOON2R, in this paper we have implemented its algorithm as a Web-based application. Users need not have any particular knowledge about MOP, nor prepare a particular computer environment. They need only a Web browser to submit their problem and to indicate their subjective preference between the pairs of trial solutions generated automatically by the system. Eventually, this can facilitate decision making from the comprehensive point of view that is required to pursue sustainable development in process systems. An illustrative description outlines the proposed system and its usage. Further studies should be devoted to adding various optimization methods, as applied elsewhere [1,6], besides SQP, and to improving certain user services that enable users to save and manage their own problems. A security routine for usage is also an important aspect left for future studies.
REFERENCES
[1] Y. Shimizu, J. Chem. Engng. Japan, 32 (1999) 51.
[2] V. Bhaskar, K.S. Gupta and K.A. Ray, Reviews in Chemical Engng., 16 (2000) 1.
[3] Y. Shimizu and A. Kawada, Trans. Soc. Instrument Control Engrs., 38, 11 (2002) 974.
[4] M.J.L. Orr, Introduction to Radial Basis Function Networks, http://www.cns.uk/people/mark.html, 1996.
[5] T.L. Saaty, The Analytic Hierarchy Process, McGraw-Hill, New York, 1980.
[6] Y. Shimizu and Y. Tanaka, "A Practical Method for Multi-Objective Scheduling through Soft Computing Approach," Japanese Soc. Mech. Engrs Int. J., to appear.
[7] K. Miettinen and M.M. Makela, Computers & Operations Research, 27 (2000) 709.
[8] A. Osyczka, Multicriterion Optimization in Engineering with Fortran Programs, John Wiley & Sons, New York, 1984.
Fig. 4. Pair comparison matrix
Fig. 5. Page representing a final result
Computing Pareto Fronts Using Distributed Agents
John D. Siirola a, Steinar Hauan a*, and Arthur W. Westerberg a
aDepartment of Chemical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania, 15213, USA
1. INTRODUCTION
Business decision making in the Chemical Process Industry involves selecting from a large list of potential projects. The selection is virtually always based on multiple independent and often conflicting criteria, including capital cost, operating cost, risk, operability, expected return, and environmental impact. To make an informed decision, the decision maker needs to identify the inherent trade-offs among the various criteria. These trade-offs can be represented through a Pareto set of options wherein no member of the set is strictly better than any other member of the set. We suggest that the focus of process synthesis, design, and optimization is thus shifted from producing the single globally optimal solution to the rapid generation of the entire Pareto set of non-dominated options from which the business decision maker can select the final project. The generation of Pareto sets through formal, rigorous multi-objective optimization involves the solution of a combinatorially explosive number of subproblems, each one of which may be non-convex and hard, if not impossible, to solve [1]. Further, most of the currently available optimization algorithms are primarily serial algorithms that do not scale well across parallel computational resources. In contrast, the promising emerging platform for large-scale computation is the parallel distributed computing cluster. Agent-based systems can reconcile this discrepancy by integrating numerous autonomous computational algorithms, or agents, into a cohesive system in which the agents can interact through the direct and indirect sharing of information. This work builds on a framework proposed for single-objective optimization [2] in which numerous rigorous, stochastic, and heuristic optimization methods are each encapsulated as agents and allowed to interact through a common shared memory system. We extend the system to solving multi-objective problems with non-convex objective functions. Through system simulation on a parallel computer cluster, we demonstrate the ability of the agent system to leverage effectively the distributed resources and generate complete Pareto solution sets. Further, we propose several metrics for quantifying the collaboration among the individual agents and show how these metrics can be applied to both off-line analysis and on-line dynamic tuning of the agent scheduling policies.
2. MULTI-AGENT SYSTEMS
The fundamental properties of multi-agent systems stem from an attempt to mimic self-organizing biological systems such as ant colonies [3] or flocking birds [4]. In each case, the
*Corresponding author, email: [email protected]
behavior of the biological system is not determined by the actions of a central governing entity; rather, the overall system characteristics emerge from the complex interactions among the individuals that make up the system. In the case of multi-agent systems, the system comprises one or more sets of independent programs (agents), each with its own skill set or algorithm. The agents then interact through the sharing of intermediate or complete solutions to the subproblem of interest. Similar to cellular automata, the individual agents that compose the populations do not need to have inherently complex skills or governing rules for the overall system to display complex behavior [5,6]. This work employs a multi-agent system built on the concept of Asynchronous Teams [7]. In this system, individual semi-autonomous agents interact asynchronously through a shared memory system. Each agent set must perform three fundamental tasks: initialization, operation, and reporting. An agent begins by establishing a connection to the shared memory and determining the solution set from the shared memory that it will use to initialize itself. The agent then performs some operation on the initial solution set and finally reports the results of the operation back to the shared memory system so that any subsequent agent can use that data as part of its initialization. This process creates a cyclic flow of data from the shared memory, through an agent's operator, and back to shared memory. By channeling all inter-agent communication through the shared memory, the system can avoid the need for direct inter-agent interaction. Since each agent is independent and has its own connection to the shared memory system, there is no need for an explicit system synchronization step (the classic barrier to parallel speedup). Although the agents are independent, they are not completely autonomous in that they cannot arbitrarily decide when to run. This framework uses semi-autonomous agents combined with a central executive to resolve all resource conflicts on the agents' behalf: each agent may prioritize its request for computing resources, but it must wait for a centralized executive routine to allocate the resources and start the agent.
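The initialize–operate–report cycle described above can be sketched as follows; this is a simplified, thread-based stand-in for the actual Asynchronous Teams implementation, and all names are hypothetical.

```python
import random, threading

class SharedMemory:
    """Minimal shared solution store; all inter-agent communication goes through it."""
    def __init__(self):
        self._lock = threading.Lock()
        self.solutions = []                     # list of (x, objectives) tuples

    def read(self, k):
        with self._lock:
            return random.sample(self.solutions, min(k, len(self.solutions)))

    def write(self, sols):
        with self._lock:
            self.solutions.extend(sols)

def agent_cycle(memory, operator, k_init=5):
    """One agent iteration: initialize from shared memory, operate, report back."""
    seed = memory.read(k_init)                  # initialization
    new = operator(seed)                        # operation (any optimization step)
    memory.write(new)                           # reporting

# Usage sketch: a trivial "mutation" operator standing in for a real optimizer
def mutate(seed):
    if not seed:
        return [([random.uniform(-5, 5) for _ in range(2)], None)]
    return [([xi + random.gauss(0, 0.1) for xi in x], None) for x, _ in seed]

mem = SharedMemory()
for _ in range(100):
    agent_cycle(mem, mutate)
```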
3. MULTI-OBJECTIVE OPTIMIZATION
In single-objective optimization, the goal of the optimization routine is to identify a specific solution which represents the best value of some objective function. However, many of the problems that are of key interest to the business decision maker are evaluated using multiple conflicting objectives. The solution of such a multiple objective problem can no longer be represented as a single point; rather, the "solution" is the Pareto-optimal set of feasible solutions that represents the trade-offs among the objectives. The Pareto-optimal set comprises the non-dominated members of the feasible space; that is, it is the set of solutions such that, for each member of the set, there exists no feasible solution that is strictly better in all objectives. There are several standard approaches for optimizing multiple objective problems. The simplest is to optimize each objective function separately, and then identify the Pareto set from the union of the individual optimization problems. This method does well in identifying the extrema of the Pareto surface (the global optima for each objective), but it does not identify the inherent trade-offs among the objectives. We may rigorously identify the entire Pareto surface by converting all but one of the objectives into inequality constraints and solving for all possible values of the inequality constraints:
min_{x,y}  f1(x, y)
s.t.  fi(x, y) ≤ εi,   i = 2,...,F,   ∀ εi ∈ R        (1)
However, this formulation potentially represents an infinite number of sub-problems, each of which may be non-convex and time-consuming to solve. Apart from the rigorous approach, there exist several popular methods for creating a composite objective or fitness function and then optimizing over that fitness function. The most common of these is the weighted sum:
f(x, y) = Σi αi fi(x, y),   ∀ αi ∈ [0,1],   s.t.  Σi αi = 1        (2)
To identify the entire Pareto surface requires optimizing for all possible values of αi. This approach has the drawback that it does not identify concave regions of the Pareto surface [1]. A second composite fitness function is a form of goal programming in which we select a target goal in objective space, f0,i ∈ R^n, and the fitness is calculated as the Euclidean distance to that point:
f(x, y) = √( Σi ( fi(x, y) − f0,i )² )        (3)
Although we can use judicious selection of the target point, f0,i, to explore into concavities in the Pareto surface, this approach only provides solutions in the local region around the target point; the process of identifying the entire Pareto surface requires optimizing for a large number of target points. A third approach is to rank each point based on the number of other points that have been identified to dominate the point in question:
fj = Σk pj,k ,   j ∈ Q, k ∈ Q,
where  pj,k = 1  if  fk,i(x, y) < fj,i(x, y)  ∀ i ∈ 1,...,F,  and  pj,k = 0  otherwise        (4)
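A short sketch of the domination-count fitness (4) and of the resulting non-dominated filter, assuming minimization in all objectives; this is illustrative code, not the authors' implementation.

```python
import numpy as np

def domination_counts(F):
    """F is a (|Q| x n_obj) array of objective vectors; returns, for each point j,
    the number of points k that dominate it (strictly better in every objective)."""
    n = F.shape[0]
    counts = np.zeros(n, dtype=int)
    for j in range(n):
        for k in range(n):
            if k != j and np.all(F[k] < F[j]):
                counts[j] += 1
    return counts

def pareto_set(F):
    """Non-dominated subset: points whose domination count is zero."""
    return F[domination_counts(F) == 0]

# Usage sketch
F = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 1.0], [3.0, 3.0]])
print(pareto_set(F))    # [3., 3.] is dominated by [2., 2.] and is removed
```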
Note that unlike the weighted sum and goal programming approaches, this fitness function is discrete and relies on the existence of a set of points, Q, in order to calculate the fitness of any single point. For problems with continuous components in state space, the Pareto set may contain an infinite number of distinct feasible solutions. The goal of completely identifying the Pareto surface must then be changed to a more attainable goal of identifying a representation of the Pareto surface that satisfies the orthogonal goals of minimizing the error (in objective space) between the true Pareto surface and the identified Pareto surface and of maximizing the number of distinct regions (in state space) that are identified as being in the true Pareto surface. The first goal is a measure of the overall accuracy of the best possible trade-off surface, whereas the second goal identifies the completeness with which we have identified the physically meaningful solutions.
4. AGENT POPULATIONS
We have implemented each of the fitness functions mentioned above as an agent (Table 1). To convert a traditional stand-alone algorithm into an effective agent, each algorithm must be relaxed by removing the restriction that the algorithm be able to identify the complete Pareto surface. The presence of more than a single algorithm working on the problem enables this simple relaxation. Additionally, by reducing the average run time of each algorithm, this approach significantly increases the frequency of inter-agent collaboration
Table 1
Multi-objective agent implementation details.
Agent Class          Obj. Fcn.   Initialization              Algorithm Details
ε-constraint         Eq. (1)     HVol*                       GAMS-Conopt2
Weighted sum         Eq. (2)     random α; SS dyn niching    GA: pop 50; 20 gen
Goal programming     Eq. (3)     HVol**; SS dyn niching      GA: pop 50; 20 gen
Domination count     Eq. (4)     OS dyn niching              GA: pop 50; 20 gen
* - find max(HVol) penalizing any previous HVols; ε = 5 evenly-spaced values in the HVol.
** - find max(HVol) penalizing any previous HVols; f0,i = min(f_HVol,i).
through the sharing of intermediate and final solutions. Further, an appropriate selection algorithm for choosing the point set with which to initialize the optimization algorithm must accompany each optimization algorithm. We use several key approaches for initializing algorithms and selecting point sets in a data-rich environment. The domination count algorithm requires no special initialization, and we initialize the weighted sum algorithm with random settings. In contrast, we must explicitly initialize the ε-constraint and the goal programming algorithms using data from the shared memory. In this case, we search the shared memory for the pair of points that are adjacent along the Pareto surface and that form the largest possible objective space hyper-volume (HVol). We then use this hyper-volume to initialize the algorithms. Initial point set selection also plays a significant role in the performance of the genetic algorithm (GA) based agents. To ensure a diverse point set, we use a niching [8] scheme operating in either state space (SS) or objective space (OS). In this scheme, after we select a point for inclusion in the set, we penalize nearby points to decrease the likelihood of their selection. In addition to the four multi-objective agents, we use a set of five single-objective agents: a gradient-based optimizer, a simulated annealing algorithm, a genetic algorithm, a heuristic algorithm that attempts to place points in unexplored regions of state space, and a data removal algorithm that removes poor points from the extremes of the investigated state space.
5. CASE STUDY: NON-CONVEX MULTI-OBJECTIVE OPTIMIZATION
We combine the agents described in the previous section to solve the following two-objective problem:

min_x  f1(x) = (1/4) Σ_{i=1}^{N} Σ_{j=1}^{4} cos( j + j xi )  +  (1/100) Σ_{i=1}^{N} ( xi − x0,i )²,   where x0,i = 3 ∀ i        (5)

min_x  f2(x) = Σ_{i=1}^{N} Π_{j=1}^{4} ( xi − xj,i ),   where xj,i = { −2, −5, 5, 7 } ∀ i        (6)
We selected this problem for its difficulty and controllability. Both functions are non-convex and have a large number of local optima. Eq. (5) is the combination of a cyclic waveform with a quadratic overlay and has 8N·2^(N−1) local optima in a hypercube with side length 10, including N identical global optima.
Figure 1. Plot of area error versus unique state space regions identified for all 31 combinations of agent classes. The number indicates the variable population composition: S - single objective, E - ε-constraint, W - weighted sum, M - goal programming, D - domination count.
Eq. (6) is a fourth-order polynomial with a single global optimum and 2N local optima. For this study, we used a value of N = 5. We combined the agents to form a composite population of 44 agents by combining 4 fixed agents (one each of the single-objective gradient-based optimizer and genetic algorithm in each objective) with a variable composition population of 40 agents. The variable population is composed of eight of each agent class: single-objective (one gradient, simulated annealing, and genetic agent in each objective, plus the heuristic and data removal agents), ε-constraint, weighted sum, goal programming, and domination count. We also created subsets of the variable population by removing an agent class and redistributing the remaining agents evenly to maintain the total variable population of 40 agents. To demonstrate the effectiveness of a heterogeneous algorithmic environment, we ran simulations using all 31 possible combinations of agent classes on a 32-node distributed cluster. We evaluated the performance by calculating the error in the scaled area under the identified Pareto surface compared to the known Pareto surface and by computing the fraction of unique Pareto-optimal regions in state space that a particular agent combination identified (Fig. 1). By allowing the five classes of optimization agents to interact, the resulting system far outperforms any of the agent classes operating alone. Further, we can measure the effect of adding a single agent class to a collection of other agents by looking at the average change in solution quality for the combination of agent classes and for the combination plus the agent in question (Table 2). This shows that the addition of any agent class generally improves the performance of the system. Further, in the cases where the addition of an agent hurts the overall system performance, the average degree of worsening is generally less than half the average degree to which the addition of that
Table 2
Effect of adding an agent class to a combination of other agent classes: the number of instances, the average improvement (worsening) in area error, and the average improvement (worsening) in the regions identified.
                        Beneficial                     Detrimental                    Mixed
Agent Class Added       #     Δarea    Δregion         #     Δarea     Δregion        #
Single-objective        15    0.37     0.082           0     n/a       n/a            0
ε-constraint            15    0.20     0.067           0     n/a       n/a            0
Weighted sum            13    0.37     0.059           1     (0.05)    (0.046)        1
Goal programming        10    0.09     0.060           2     (0.01)    (0.034)        3
Domination count        13    0.26     0.081           1     (0.05)    (0.019)        1
agent benefits other agent combinations. In addition, the presence of situations where the addition of a particular agent class was detrimental to (or had a mixed effect on) the system performance demonstrates a form of negative "collaboration." This negative collaboration may indicate that there are phases in the solution process and that different agent classes contribute in different phases of the problem.
6. CONCLUSIONS AND ONGOING WORK
We have shown the use of a multi-agent system for identifying the Pareto surface for a complex, non-convex multi-objective problem. By modifying the make-up of the agent population in the system, we were able to demonstrate the benefits of allowing multiple, diverse algorithms to collaborate during the solution of the problem. Business-oriented decisions in the Chemical Process Industries have many features similar to the case study presented in this work. Ongoing work is aimed at adding representation and solution capabilities for generalized constraints and models with uncertainty.
REFERENCES
1. P.A. Clark and A.W. Westerberg. Optimization for design problems having more than one objective. Comp. Chem. Engng., 7:646-661, 1983.
2. J.D. Siirola, S. Hauan, and A.W. Westerberg. Towards agent-based process systems engineering: Proposed framework and application to non-convex optimization. Provisionally accepted for publication in Comp. Chem. Engng., September 2002.
3. M. Dorigo, V. Maniezzo, and A. Colorni. Ant System: Optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics. Part B: Cybernetics, 26(1):29-41, February 1996.
4. C.W. Reynolds. Flocks, Herds, and Schools: A Distributed Behavioral Model. In SIGGRAPH '87, volume 21(4) of Computer Graphics, pages 25-34, July 1987.
5. S. Talukdar. Asynchronous teams. In Fourth International Symposium on Expert Systems Applications to Power Systems, La Trobe University, Australia, January 4-8, 1993.
6. S. Wolfram. A New Kind of Science. Wolfram Media Inc., 2002.
7. S. Talukdar, L. Baerentzen, A. Gove, and P. de Souza. Cooperation schemes for autonomous agents. Technical Report 18-56-96, EDRC, Carnegie Mellon University, Pittsburgh, PA, 1996.
8. K. Deb. Multi-objective optimization using evolutionary algorithms. Wiley, 2001.
Self-optimizing control: From key performance indicators to control of biological systems
Sigurd SKOGESTAD
Dept. of Chemical Engineering, Norwegian University of Science and Technology, N-7491 Trondheim, Norway
Abstract. The topic of this paper is how to implement optimal decisions in an uncertain world. A study of how this is done in practical systems - from the nationwide optimization of the economy by the Central Bank to the optimal use of resources in a single cell - shows that a common approach is to use feedback strategies where selected controlled variables are kept at constant values. For example, the Central Bank may adjust the interest rate (independent input variable) in order to keep the inflation constant (selected controlled variable). The goal of this paper is to present a unified framework for selecting controlled variables based on the idea of self-optimizing control, and to provide a number of examples.
Keywords. Optimal operation, active constraint, controlled variable, control structure design
1 Introduction
The national economy, the government, companies and businesses, consumers, chemical process plants, biological systems, and so on, are all decision makers that make up a complex hierarchical decision system (Findeisen et al. 1980). At each level, there are available degrees of freedom (decision variables or "inputs") that generally are adjusted locally in order to optimize the local behavior. We are here not concerned with the optimal coordination of all these decision makers (which is certainly very interesting), but rather with studying how these individual "players" make and, more importantly, implement their decisions. A major problem in making the right decision is that the world is changing. These changes, which we cannot affect, are here denoted disturbances d. They include changes in exogenous variables (such as the outdoor temperature), as well as parameter variations in the system (e.g., aging of system components). A common strategy in practice is to use a simple feedback strategy where the degrees of freedom u are adjusted to keep selected controlled variables c at constant values cs ("setpoints"). The idea is to get "self-optimizing control" where "near-optimal operation" is indirectly achieved, without the need for continuously solving the above optimization problem. In this paper we study this in more detail, and provide a number of examples. We assume that optimal operation of the system can be quantified in terms of a scalar cost function (performance index) J0 which is to be minimized with respect to the available degrees of freedom u0,

min J0(x, u0, d)        (1)

subject to the constraints

g1(x, u0, d) = 0;    g2(x, u0, d) ≤ 0        (2)
Here d represents the exogenous disturbances that affect the system, including changes in the model (typically represented by changes in the function g1), changes in the specifications (constraints), and changes in the parameters (prices) that enter in the cost function (and possibly in the constraints). x represents the internal states. We have available measurements y = fy(x, u0, d) that give information about the actual system behavior during operation (y also includes the cost function parameters (prices), measured values of other disturbances d, and measured values of the independent variables u0). For simplicity, we do not in this paper include time as a variable. The equality constraints (g1 = 0) include the model equations, which give the relationship between the independent variables (u0 and d) and the states (x). The system must generally satisfy several inequality constraints (g2 ≤ 0); for example, we usually require that selected variables are positive. The cost function J0 is in many cases a simple linear function of the independent variables with prices as parameters. In many cases it is more natural to formulate the optimization problem as a maximization of the profit P, which may be formulated as a minimization problem by selecting J0 = -P. In most cases some of the inequality constraints are active (i.e., the corresponding components of g2 are zero) at the optimal solution. Implementation to achieve this is usually simple: we adjust the corresponding number of degrees of freedom u0 such that these active constraints are satisfied (the possible errors in enforcing the constraints should be included as disturbances). In some cases this consumes all the available degrees of freedom. For example, if the original problem is linear (linear cost function with linear constraints g1 and g2), then it is well known from linear programming theory that there will be no remaining unconstrained variables. For nonlinear problems (e.g., g1 is a nonlinear function), the optimal solution may be unconstrained, and such problems are the focus of this paper. The reason is that it is for the remaining unconstrained degrees of freedom (which we henceforth call u) that the selection of controlled variables is an issue. For simplicity, let us write the remaining unconstrained problem in reduced space in the form
min_u J(u, d)        (3)
where u represents the remaining unconstrained degrees of freedom, and where we have eliminated the states x = x(u, d) by making use of the model equations. J is then not a simple function in the variables u and d, but rather a functional. For any value of the disturbances d we can then solve the (remaining) unconstrained optimization problem (3) and obtain Uopt(d) for which
min_u J(u, d) = J(uopt(d), d) ≝ Jopt(d)
The solution of such problems has been studied extensively, and is not the issue of this paper. In this paper the concern is implementation, and how to handle variations (known or unknown) in d in a simple manner. In the following we let d* denote the nominal value of the disturbances. Let us first assume that the disturbance variables are constant, i.e., d = d*. In this case implementation is simple: we keep u constant at us = uopt(d*) (here us is the "setpoint" or desired value for u), and we will have optimal operation. (Actually, this assumes that we are able to implement u = us, which may not be possible in practice due to an implementation error n = u - us (Skogestad 2000)). But what happens if d changes? In this case uopt(d) changes and operation is no longer optimal. What value should we select in this case? Two "obvious" approaches are: 1. If we do not have any information on how the system behaves during actual operation, or if it is not possible to adjust u once it has been selected, then the optimal policy is to find the best "average" value us for the expected disturbances, which would involve "backing off" from the nominally optimal setpoints by selecting us different from uopt(d*). The solution to this problem is quite complex, and depends on the expected disturbance scenario. For example, we may use stochastic optimization (Birge and Louveaux 1997). In any case, operation may generally be far from optimal for a given disturbance d. 2. In this paper we assume that the unconstrained degrees of freedom u may be adjusted freely. Then, if we have information (measurements y) about the actual operation, and we have a model of the system, we may use these measurements to update the disturbances d, and based on this perform a reoptimization to compute a new optimal value uopt(d), which is subsequently implemented, u = uopt(d). Both of these approaches are complex and require a detailed model of the system, and are not likely to be used in practice, except in special cases. Is there any simpler approach that may work?
2 Implementation of optimal operation: Self-optimizing control
Figure 1: Implementation with separate optimization and control layers. Self-optimizing control is when near-optimal operation is achieved with cs constant.
Let us summarize how the optimal operation may be implemented in practice: 1. A subset of the degrees of freedom Uo are adjusted in order to satisfy the active constraints (as given by the optimization). 2. The remaining unconstrained degrees of freedom (u) are adjusted in order to keep selected constrolled variables c at constant desired values (setpoints) cs. These variables should be selected to minimize the loss.
343 Ideally, this results in "self-optimizing control" where no further optimization is required, but in practice some infrequent update of the setpoints cs may be required. If the set of active constraints changes, then one may have to change the set of controlled variables c, or at least change their setpoints, since the optimal values are expected to change in a discontinuous manner when the set of active constraints change. We next present some simple examples to illustrate the above ideas.
Example 2. Cake baking. Let us consider the final process in cake baldng, which is to bake it in an oven. Here the are two independent variables, the heat input (ul de..~.fQ) and the baking time (~2 de__ft). It is a bit more difficult to define exactly what J is, but it could be quantifed as the average rating o f a test panel (where 1 is the best and 10 the worst). One disturbance will be the room temperature. A more important disturbance is probably uncertainty with respect to the actual heat input, for example, due to varying gas pressure for a gas stove, or difficulty in maintaining a constant firing rate f o r a wooden stove. In practice, this seemingly complex optimization problem, is solved by using a thermostat to keep a constant oven temperature (e.g., keep Cl = To~,en at 200~ and keeping the cake in the oven f o r a given time (e.g., choose c2 = u2 = 20 min). The feedback strategy, based on measuring the oven temperature cx, gives a selfoptimizing solution where the heat input (UL) is adjusted to correct f o r disturbances and uncertainty. The optimal value for the controlled variables (cl and c~ are obtained from a cook book, or from experience. An improved strategy may be to measure also the temperature inside the cake, and take out the cake when a given temperature is reached (i.e., u2 is adjusted to get a given value o f c2 = Tca~,. Example 3. Long distance running. Consider a runner who is participating in a long-distance race, for example a marathon. The cost function to be minimized is the total running time, J = T. The independent variable u is the energy input (or something similar). O f course, the runner may perform some "on-line "optimization o f his~her body, but this is not easy (especially if the runner is alone), and a constant setpoint policy may probably be more efficient. The most common and simplest strategy is to run at the same speed as the other runners (e.g. c = Yl = distance to best runner, with cs = lm) , until one is no longer to maintain this speed. However, this does not work if the runner is alone. Another possible strategy is to keep constant speed (c = ~1~speed). However, this policy is not good if the terrain is hilly (d = slope o f terrain), where it is clearly optimal to reduce the speed. This policy, as well as the previous one, may also give infeasability, since the the runner may not able to maintain the desired speed, for example, towards the end o f the race. A better self-optimizing strategy for a lone runner may be to keep a constant heart rate (c = Ya = heart rate). In this case, a constant setpoint strategy seems more reasonable, as the speed will be reduced while running uphill. Example 4. Biology. Biological systems, for example a single cell, have in place very complex chemical and biochemical reaction newtworks, o f which significant parts have the function o f a feedback control systems (Savageau 1976) (Doyle and Csete 2002). Indeed, Doyle (lecture, Santa Barbara, Feb. 2002) speculates that many o f the supposedly unimportant genes in biological systems are related to control and compares this with an airplane (or a chemical planO where the majority o f the parts o f the system are related to the control system. Biological systems at the cell level are obviously not capable o f performing any "'online" optimization o f its overall behavior. 
Thus, it seems reasonable to assume that biological systems have instead developed self-optimizing control strategies of the kind discussed in this paper. A challenge is to find out how these complex systems work and what the controlled variables are. Biological systems have developed and been optimized over millions of years. If we could identify the controlled variables, then we could also do further "reverse engineering" in an attempt to identify the cost function J0 that nature has been attempting to minimize.
Example 5. Business systems. Business systems are very complex, with a large number of degrees of freedom (u's), measurements, disturbances and constraints, and may also be regarded as decision systems. The overall objective of the system is usually to maximize the profit (or more specifically, the net present value of the future profit, J = -NPV) (although businesses are often criticized for using other, shorter-term objectives, such as maximizing this year's share price, but we will leave that discussion). In any case, it is clear that few managers base their decisions on performing a careful optimization of their overall operation. Instead, managers often make decisions about "company policy" which in many cases involve keeping selected controlled variables (c's) at constant values. For example, the common approach of identifying key performance indicators (KPIs) for the business may be viewed as the selection of appropriate controlled variables c. Some examples of KPIs or "value metrics" (Koppel 2001) may be:
- Time for the business to respond to an order from a customer
- Energy consumption per unit produced
- Number of accidents per unit produced
- Number of employees per unit produced
- Degree of automation in the plant
The optimal values for these variables are typically obtained by analyzing how other successful businesses perform (benchmarking to find "best business practice").
3 Optimal choice of controlled variables
In most cases the controlled variables c are selected simply as a subset of the measurements y, but more generally we may allow for variable combinations and write c = h(y), where the function h(y) is free to choose. If we only allow for linear variable combinations then we have

Δc = H Δy        (5)
where the constant matrix H is free to choose. Does there exist a variable combination with zero loss for all disturbances, that is, for which copt(d) is independent of d? As proved by Alstad and Skogestad (2002), the answer is "yes" for small disturbance changes, provided we have at least as many independent measurements (y's) as there are independent variables (u's and d's). The derivation of Alstad and Skogestad (2002) is surprisingly simple: In general, the optimal value of the y's depends on the disturbances, and we may write this dependency as yopt(d). For "small" disturbances the resulting change in the optimal value yopt(d) depends linearly on d, i.e.

Δyopt(d) = F Δd        (6)

where the sensitivity F = dyopt(d)/dd is a constant matrix. We would like to find a variable combination Δc = H Δy such that Δcopt = 0. We get Δcopt = H Δyopt = H F Δd = 0. This should be satisfied for any value of Δd, so we must require that H is selected such that

H F = 0        (7)
This is always possible provided we have at least as many (independent) measurements y as we have independent variables (u's and d's) (Alstad and Skogestad 2002): First, we need one c (and thus one extra y) for every u, and, second, we need one extra y for every d in order to be able to get H F = 0.
Example 1. Central Bank (continued). For this problem we have u = interest rate and J = - National Product. An important constraint in this problem is that u ≥ 0 (because a negative interest rate will result in an unstable situation), but in most cases this constraint will not be active, so we have an unconstrained optimization problem with one degree of freedom. The measurements y may include the inflation rate (y1), the unemployment rate (y2), the consumer spending (y3) and the investment rate (y4). There are many disturbances, for example, d1 = "the mood" of the consumers, d2 = global politics, including possible
wars, d3 = oil prices, etc. As mentioned earlier, a common policy is to attempt to keep the inflation rate constant, i.e. c = y1. However, with the large number of disturbances, it is unlikely that this choice is always self-optimizing. Even if we assume that there is only one major disturbance (e.g. d1 = consumer mood), then from the results presented above we need to combine at least two measurements. This could, for example, be a corrected inflation goal based on using the interest rate, c = h1 y1 + h2 u, but more generally we could use additional measurements, c = h1 y1 + h2 y2 + h3 y3 + h4 y4 + h5 u. The parameters for such a corrected inflation goal could be obtained by reoptimizing the model for the national economy with alternative disturbances, using the approach just outlined. In the above example, the prices were assumed constant. From physical considerations, it is clear that the introduction of price changes may be taken care of by introducing a "price correction" on each controlled variable, but that price changes otherwise will not affect the problem of selecting controlled variables. The reason is that prices appear only in the optimization part of the block diagram in Figure 1, so that it is not possible to detect price changes in the process itself.
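The requirement H F = 0 can be met by taking the rows of H from the left null space of the optimal sensitivity matrix F; below is a small numpy sketch under the assumption that F has already been obtained by reoptimizing the model for perturbed disturbances (the numbers are invented).

```python
import numpy as np

def nullspace_H(F, n_c):
    """Return H (n_c x n_y) whose rows span the left null space of F (n_y x n_d),
    so that H F = 0 and c = H y is insensitive (to first order) to disturbances."""
    U, s, Vt = np.linalg.svd(F.T)             # rows Vt[rank:] span the null space of F.T
    rank = int(np.sum(s > 1e-10))
    return Vt[rank:rank + n_c, :]

# Usage sketch: 4 measurements, 1 unconstrained input, 2 disturbances -> n_c = 1
F = np.array([[0.5, 1.0],
              [1.0, 0.2],
              [0.1, 0.4],
              [0.3, 0.9]])                    # dy_opt/dd, obtained from reoptimization
H = nullspace_H(F, n_c=1)
print(H @ F)                                  # ~0: the optimal value of c = H y is constant
```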
4 Conclusion
The selection of controlled variables for different systems may be unified by making use of the idea of self-optimizing control. The idea is to first define quantitatively the operational objectives through a scalar cost function J to be minimized. The system then needs to be optimized with respect to its degrees of freedom u0. From this we identify the "active constraints", which are implemented as such. The remaining unconstrained degrees of freedom u are used to keep selected controlled variables c at constant setpoints. In the paper it is discussed how these variables should be selected. We have in this paper not discussed the implementation error n = c - cs, which may be critical in some applications (Skogestad 2000). The full version of this paper, with an additional detailed example (blending of gasoline), is available at the home page of S. Skogestad, see:
http://www.nt.ntnu.no/~skoge/publications/2003/selfopt_pse2003/
References
Alstad, V. and S. Skogestad (2002). Robust operation by controlling the right variable combination. 2002 AIChE Annual Meeting, Indianapolis, USA. (Available from the home page of S. Skogestad).
Birge, J.R. and F. Louveaux (1997). Introduction to stochastic programming. Springer.
Doyle, J.C. and M.E. Csete (2002). Reverse engineering of biological complexity. Science 295, 1664-1669.
Findeisen, W., F.N. Bailey, M. Brdys, K. Malinowski, P. Tatjewski and A. Wozniak (1980). Control and coordination in Hierarchical Systems. John Wiley & Sons.
Koppel, L.B. (2001). Business process control: The outer loop. Proc. Symp. Chemical Process Control (CPC-6), Tucson, Arizona, Jan. 2001.
Morari, M., G. Stephanopoulos and Y. Arkun (1980). Studies in the synthesis of control structures for chemical processes. Part I. AIChE Journal 26(2), 220-232.
Savageau, M.A. (1976). Biochemical systems analysis. Addison-Wesley.
Skogestad, S. (2000). Plantwide control: the search for the self-optimizing control structure. J. Proc. Control 10, 487-507.
Reconfigurable batch processes: innovative design of the engineering side of chemical supply chains
L. Z. Stec a, P. K. Bell b, A. Borissova a, M. Fairweather a, G. E. Goltz a, A. McKay b and X. Z. Wang a*
aDepartment of Chemical Engineering and Keyworth Institute of Manufacturing and Information Systems, The University of Leeds, Leeds LS2 9JT, United Kingdom bDepartment of Mechanical Engineering and Keyworth Institute of Manufacturing and Information Systems, The University of Leeds, Leeds LS2 9JT, United Kingdom
Abstract The chemical, pharmaceutical and other speciality chemicals industries are facing customer demands for faster response in the introduction of new products, changes in market requirements, and strong competition in the global market. In order to increase competitiveness and secure business, it is essential to fully address chemical supply chain integration and agile manufacturing. We regard the manufacturing plant, i.e. the engineering side of the chemical supply chain, as one of the key issues that needs to be addressed in improving supply chain agility since traditional chemical plants are often designed for predefined products and operational modes, with connections and control systems that are difficult to replace. We propose the concept of reconfigurable batch processes - an innovative design methodology that allows the plant to be reconfigured within hours rather than weeks in order to fulfil the demands of chemical supply chain dynamics. Reconfigurable batch processes have higher agility and flexibility than multi-purpose, multi-product, dedicated batch and continuous processes. Keywords reconfigurable batch processes, agile manufacturing, chemical supply chains, Britest plants
1. INTRODUCTION
Chemical plants are the part of the chemical supply chain that probably has the least agility, because traditionally they are often designed for fixed products and operational modes with only limited consideration of flexibility. Attempts to address this issue include optimisation of the production scheduling of equipment within a single site, or of production over multiple sites, as well as designing more agile processes. Fig. 1 compares different types of process plants in terms of agility as well as product life cycle. At one end is the continuous process plant, which is most suitable for the mass production of commodity products which often have a long product life cycle, while at the
* Correspondence author. Email: [email protected]
other end are multi-product, multi-purpose and pipeless batch process plants that are often designed for the production of low volume and high value-added products such as pharmaceutical products and speciality chemicals. In this work we propose an innovative design method for batch processes - reconfigurable batch processes - that have the highest agility.
Fig. 1 Comparison of different types of processes in terms of agility
Other terms have also been used in the literature to describe processes that are relevant to process operational agility, including flexibility and versatility. Based on the dictionary definition of these words, we define a plant as being flexible if it is adaptable to changes in such things as capacity, routes and product types, and a plant as versatile if it is designed for many uses during its life-time with minimum modifications. In contrast, reconfigurability refers to the ability to reconfigure the hardware as well as the software parts of a plant at a low cost and in a short period of time. In this work, we define the relationship between such plants as shown in Fig. 2.
Fig. 2 The relationships of agility, flexibility, versatility and reconfigurability
2. RECONFIGURABLE BATCH PROCESSES
It is not uncommon at the present time for a speciality chemical plant or a plant manufacturing pharmaceutical intermediates to change its production, i.e. to change to manufacturing completely new products using completely new raw materials, up to several tens of times per year. The ability to rapidly reconfigure the plant equipment, including hardware and control systems, as well as software items, including operational and scheduling procedures, means that a company can offer a shorter delivery time and thereby respond more effectively to market demands. At present, the average time needed to reconfigure a multi-product and multi-purpose plant is several weeks, with the shortest taking days and the longest taking months. Reconfigurable plants aim to reduce this time to hours.
2.1 Standardisation and Modularisation
Standardisation
Standardisation of equipment helps reconfigurability as it makes plant elements interchangeable. While there has been significant progress in control system standardisation through international efforts, there are no equivalent international standards on the size of equipment, making the replacement of equipment difficult. Further difficulties arise due to the incompatibility of other items such as pipes, heating and cooling systems and control systems attached to the equipment, as well as limitations of the bays in which equipment is housed, the size of which is often specified in the first design. As an example, standardisation can mean using a number of small reactors of standardised sizes instead of a large reactor. Restrictions on equipment location should be minimised. Standardisation using off-the-shelf equipment with fixed dimensions means that equipment bays, connection pipes, sensor locations and heating and cooling arrangements can all be standardised, and changes in operating procedures, monitoring displays and control strategies can be minimised to ensure maximum reconfigurability. In other words, standardisation means standardisation of equipment dimensions, process/utility module dimensions, plant infrastructure, design approach and operational procedures, as well as monitoring and control systems.
Modularisation
Arrangement of equipment into modules is a complex task; a whole plant design cannot merely be sliced up into conveniently sized pieces. Each unit should be self-contained. For instance, each unit should maximise the independence of the control and monitoring arrangement in order to minimise disruption during changeover. Basically there are three types of modules that can be used (Fig. 3). Type 1 contains one main piece of equipment and peripheral items; type 2 uses the concept of grouping equipment of similar functionality into one structural module; and type 3 has all the equipment required to make a product, from reaction through to further processing such as drying. In arranging modules, it is important to consider both the structural as well as the functional aspects. However, other factors should also be considered, such as control and transportation. It is clear that a module that is larger than can be put on the back of a lorry is too big (Fig. 4). Therefore, ISO standard containers of 8 ft × 8.5 ft cross-sectional area, with a length of 40 ft, can be used to define the maximum size of a module. In terms of assembling the modules, a typical plant floor level is 3-3.5 metres; so, a cube of 3.5 m length can be suggested as the basic building block. Standard connecting points can be defined in relation to the standard cube, so the service modules have the same type of connection spaced 3.5 m apart, in both the horizontal and vertical planes. Fig. 5 shows how modules based on this cube are aligned. Modularisation can improve reconfigurability in many ways. For example, since related objects are grouped together, it reduces the number of interfaces for consideration when change is required. There are some other important issues that need to be considered in arranging modules. For instance, the modules that have the greatest inter-modular flows should be located near the edge of an overall framework for maximum accessibility.
Fig. 3 Three types of module
Fig. 4 A real module in the pharmaceutical industry that can be put on the back of a lorry (Courtesy of Pharmadule[2]).
Fig. 5 Alignment of utility modules with process modules of different sizes and orientations, where shapes are based on a cube.
should be located near the edge of the overall framework for maximum accessibility. Another important factor that must be considered is scheduling.
2.2 Connectivity
Connectivity refers to how equipment units within a module and between modules are connected by pipes, as well as to how information flows between the process and control systems. Issues such as maintenance and safety should be considered in addition to reconfigurability; therefore, guidelines that are used to assist the design of traditional plants are still important. For instance, steam should often be introduced near the top of a vessel while condensate leaves near the bottom, and cooling water should enter from the bottom and leave from the top. The following are some important issues that specifically address reconfigurability considerations. Pipe sizes and connections are an important factor affecting reconfigurability. The use of adaptors fitted to the piping as appropriate, as illustrated in Fig. 6, can help rapid pipeline changeover.
Fig. 6 An adapter arrangement for pipeline connections
Associated with connectivity and modular design are the designs of valves and pumps, which should be smaller and adopt the idea of standardisation with the aim of easy reconfiguration.
2.3 Mobile Equipment
Using mobile equipment and mobile peripheral equipment for the purpose of connection offers great dynamic reconfigurability. A typical example is pipeless plants, in which the material remains in the same vessel throughout the process and is guided from station to station for processing. Transport can be implemented using tracks. Pipeless plants have been used successfully mainly in mixing processes, such as paint production. However, there are technical reasons that prevent pipeless plants from becoming a popular choice, such as the lack of proof that seals on available connectors can withstand high temperatures and pressures. Although some progress has been made in this respect, they are still perceived as suitable mainly for mixing requirements.
2.4 Cleaning
Inter-batch and inter-campaign cleaning is an important cost factor for any type of batch operation. Pharmaceutical plants are facing increasing pressure to guarantee repeatable levels of high purity. The main concern with regard to cleaning is to remove the heels of unreacted material to prevent cross-contamination. Cleaning is vital for a reconfigurable plant, and the issue should be considered from the beginning of the design of the plant, as well as at the start
of every new campaign. Clean-in-place (CIP), a method that has been used for nearly 40 years in the dairy, food and fermentation industries, has come to the attention of other process industries in the last few years. It may be used for cleaning both equipment and process piping; the fittings may be permanently fixed in place, or the same fitting may serve several pieces of equipment. CIP regimes are usually automated, so cleaning procedures can be standardised. For equipment that handles solids, it is more appropriate to use cleaning stations. Standardised CIP systems can then be designed for standardised equipment and modules, or a flexible CIP system can be used for different pieces of equipment or modules.
2.5 Case Studies
The methodology described, albeit briefly, for the design of reconfigurable batch plants has been applied to four case studies provided by industry, including two dedicated batch plants, a multi-product plant and a multi-purpose plant. These studies were undertaken to investigate how the new design methodology could help these industries address future challenges. The choices of cleaning, connectivity, building size, utilities layout and control were considered in each case, although these issues were not the same for each case study plant, as their cost priorities, scale of production and needs differed. Nevertheless, within each of these plants the general concepts of reconfigurability, standardisation and modularisation still apply, and in each case they were demonstrated to be usable by plant designers to decrease design time and cost and to enhance plant agility.
3. FINAL REMARKS
The competitive advantage that can be gained from being able to reconfigure a plant from one set of products to a new set within hours, instead of the current time scale of weeks or even months, can be enormous. Plant design for reconfigurability is a complex, multi-disciplinary activity requiring input not only from engineers, but also from the business to ensure that the final design meets the business objectives. It also requires manufacturers of equipment to work together with process and control engineers in order to produce standard designs. Further research is needed in all aspects of importance to reconfigurable plant design, in addition to the standardisation, modularisation, connectivity, cleaning and controllability issues discussed above. For example, existing safety and operability study procedures clearly cannot be applied directly to such plants. With modular designs already in use within industry, and the market requirement for rapid reconfiguration, the reconfigurable plant is an innovative concept for batch process design that is expected to attract increasing attention in the future.
REFERENCES
[1] R. Garcia-Flores, Ph.D. Thesis, University of Leeds, UK, December 2002.
[2] http://www.pharmadule.com
ACKNOWLEDGEMENT
The authors would like to acknowledge the financial support of the UK Engineering and Physical Sciences Research Council (EPSRC Ref: GR/L/65949) for the BRITEST project (Batch Route Innovation Technology: Evaluation and Selection Techniques).
Process Systems Engineering 2003
B. Chen and A.W. Westerberg (editors)
© 2003 Published by Elsevier Science B.V.
Refinery Scheduling of Crude Oil Considering Two Different Uses of Naphtha Wang YanJun, Zhang Hui, He YinRen Research Institute of Petroleum Processing, Beijing 100083
Abstract
This paper addresses the scheduling of crude oil considering two different uses of naphtha and builds a mathematical model for it. As different aromatic potential contents of naphtha are required by the reforming process and the ethylene process, the individual transportation and storage of crude oil from storage tanks to charging tanks is determined. The purpose is to prepare the reforming naphtha program and the ethylene naphtha program for the crude distillation unit effectively, which is called the naphtha-scheduling rule. We use MILP (mixed-integer linear programming) and multiperiod techniques to build this model. The scheduling model of mixed transportation of crude oil in this paper was applied to 6 different types of international crude oil and 6 time intervals, which involve the first transfer from storage tanks to charging tanks, in order to largely ensure the quality of the mixed oil in the pipeline from storage tanks to charging tanks and the average quality of the total amount of oil in the storage tanks at the end of the last time interval.
Key words: mixed-integer linear programming, mixed transference of crude oil, the scheduling of naphtha
Introduction
The 1980s were characterized by the emergence of international markets and the development of global competition. The chemical processing industry had to go through severe restructuring in order to compete successfully in this new scenario. Better economic performance has been achieved with more efficient plant operation. As part of these efforts, refineries are also optimizing their manufacturing operations by utilizing planning and scheduling technology. The methods of MILP (mixed-integer linear programming) and MINLP (mixed-integer nonlinear programming) have therefore been used extensively and have seen much development. Unlike the scheduling of batch processes, which has received considerable attention in the literature, much less work has been reported on the scheduling of continuous multiproduct plants. The process industry differs dramatically from other industries, such as the wholesale and discrete manufacturing industries. Process manufacturers employ very intricate processes involving many variables, each of which must be accounted for and controlled in order to maximize business results. In 1996, Lee, Pinto and Grossmann proposed a scheduling model using the MILP method to solve the optimal operation of crude oil unloading, its transfer from storage tanks to charging tanks, and the charging schedule for each crude oil distillation unit. The mathematical model was based on the MILP method and multiperiod technology, which not only met the mass balance equations, but also determined the component compositions at the planning level, such as sulfur concentration. At present, as the scheduling problem of naphtha used for the reforming process or the ethylene process attracts many national refineries, how to solve this problem in the course of crude oil mixed transportation is the main objective of this work.
Problem Definition
The scheduling of crude oil mixed transportation is an important part of the planning and scheduling program in a refinery, and it can be described by an MILP model in which some of the variables in the constraints are integer. For example, in the crude oil schedule, a variable can be set to 1 when some kind of crude oil is transported through the pipeline in some time interval, and otherwise it will be 0. In this way we can control the crude oil transportation. Another application concerns the aromatic potential content of naphtha. As different aromatic potential contents of naphtha are required by the reforming process and the ethylene process, the former requiring a higher aromatic potential content and the latter a lower one, the individual transportation and storage of crude oil from storage tanks to charging tanks is determined accordingly, which is called the naphtha-scheduling rule. The purpose is to prepare the reforming naphtha program and the ethylene naphtha program for the crude distillation unit effectively. Therefore, integer variables must be introduced to control the direction of the naphtha in the model constraints.
Mathematical Formulation
(A) Objective
In the objective function, to largely ensure the quality of the mixed oil, penalty variables are used in the qualitative constraint equations. As every quality index of the mixed oil has a different significance, their priority can be stipulated by the coefficients of the penalty variables, with the sulfur concentration considered first, followed by the yield of the CDU (≤350 °C). Employing an encouragement coefficient and a penalty coefficient helps to realize the optimal division of the aromatic potential content of naphtha:

min OBJ = \sum_{k=1}^{K} \alpha_k (QHVS_{T,k} + QLVS_{T,k}) + \sum_{k=1}^{K} \delta_k \sum_{t=1}^{T} (QHTB_{t,k} + QLTB_{t,k}) + \sum_{t=1}^{T} (\sigma ANAE_t - \theta ANAR_t)

(B) Mass Balance Equations for the Storage Tank
Mass balance equations for every crude oil: crude oil i in the storage tank at time t = initial crude oil i in the storage tank − crude oil i transported from the storage tank to the charging tank up to time t.

VS_{i,t} = VS_{i,0} - \sum_{m=1}^{t} FST_{i,m}    ∀ i, t
Operating constraints on the crude oil i transportation rate in the pipeline from storage tank to charging tank at time t, where ST_{i,t} is a 0-1 variable denoting whether crude oil i is transported through the pipeline at time t:

f_{i,min} × ST_{i,t} ≤ FST_{i,t} ≤ f_{i,max} × ST_{i,t}    ∀ i, t
Total amount of crude oil transported through the pipeline from storage tank to charging tank at time t:

TB_t = \sum_{i=1}^{I} FST_{i,t}    ∀ t
Operation constraints on the kinds of crude oil in mixed transportation:

\sum_{i=1}^{I} ST_{i,t} ≤ n    ∀ t
Total amount of oil in the storage tanks at the end of the last time interval:

VSZ_T = \sum_{i=1}^{I} VS_{i,T}
(C) Mass Balance Equations for Quality k
To largely ensure the quality of the mixed oil in the pipeline from storage tanks to charging tanks, qualitative constraints with penalty variables are adopted:

\sum_{i=1}^{I} \beta_{i,k} × FST_{i,t} + QHTB_{t,k} ≥ \beta_{k,min} × TB_t    ∀ k, t

\sum_{i=1}^{I} \beta_{i,k} × FST_{i,t} - QLTB_{t,k} ≤ \beta_{k,max} × TB_t    ∀ k, t
The qualitative constraints on the average quality of the total amount of oil in the storage tanks at the end of the last time interval:

\sum_{i=1}^{I} \beta_{i,k} × VS_{i,T} + QLVS_{T,k} ≥ \beta_{k,min} × VSZ_T    ∀ k

\sum_{i=1}^{I} \beta_{i,k} × VS_{i,T} - QHVS_{T,k} ≤ \beta_{k,max} × VSZ_T    ∀ k
(D) The Schedule of Naphtha
The mass balance equations of naphtha:

WNA_t = \sum_{i=1}^{I} a_i FST_{i,t}    ∀ t

WNAR = \sum_{t=1}^{T} WNAR_t

WNAE = \sum_{t=1}^{T} WNAE_t

WNA_t = WNAR_t + WNAE_t    ∀ t

The amount ratio constraint on naphtha for the reforming process and the ethylene process:

g_{min} WNAE ≤ WNAR ≤ g_{max} WNAE
The total amount of aromatic potential content of naphtha in the mixed oil:

ANA_t = \sum_{i=1}^{I} a_i \mu_i FST_{i,t}    ∀ t

ANA_t = ANAE_t + ANAR_t    ∀ t
The directional determination of naphtha used for reforming process or ethylene process
WNAE_t ≤ M (1 - INAR_t)    ∀ t

WNAR_t ≤ M × INAR_t    ∀ t

ANAE_t ≤ M (1 - INAR_t)    ∀ t

ANAR_t ≤ M × INAR_t    ∀ t
Here the constant M is chosen large enough that the upper bounds are never restrictive for the selected direction (for example, M = 1000). INAR_t is a 0-1 variable which denotes the direction of the naphtha: when it equals 0, the naphtha is used for the ethylene process, and when it equals 1, the naphtha is used for the reforming process.
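To make the role of the binary variable INAR_t concrete, the following is a minimal sketch (not the authors' code) of the directional big-M constraints and the horizon ratio constraint, written with the PuLP modelling library; the flow balances, quality constraints and objective of the full model are omitted, and the horizon length, big-M value and ratio bounds are taken from the example described next.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

T, M = 6, 1000                      # number of time intervals and big-M constant
g_min, g_max = 0.5, 1.0             # naphtha ratio bounds over the horizon

model = LpProblem("naphtha_direction", LpMinimize)
WNAR = LpVariable.dicts("WNAR", range(1, T + 1), lowBound=0)   # reforming naphtha
WNAE = LpVariable.dicts("WNAE", range(1, T + 1), lowBound=0)   # ethylene naphtha
INAR = LpVariable.dicts("INAR", range(1, T + 1), cat="Binary") # naphtha direction

for t in range(1, T + 1):
    model += WNAE[t] <= M * (1 - INAR[t])   # ethylene use only allowed when INAR_t = 0
    model += WNAR[t] <= M * INAR[t]         # reforming use only allowed when INAR_t = 1

# amount ratio constraint between reforming and ethylene naphtha over the horizon
model += lpSum(WNAR.values()) >= g_min * lpSum(WNAE.values())
model += lpSum(WNAR.values()) <= g_max * lpSum(WNAE.values())
```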
Example
In order to provide some insight into the nature of this optimization problem, consider the following simple example. In the early stage of scheduling, there are three kinds of crude oil in the storage tanks, and three crude vessels loaded with different crude oils arrive in turn and can be used for transportation at periods 2, 4 and 6, respectively. The number of kinds of crude oil used in mixed transportation is no more than two, and the amount of each crude in every time interval is between 50 kt and 60 kt. The mass fraction of sulfur, which determines the quality of the mixed oil, must be below 0.8%, the once-through light oil yield should be in the range of 40% to 59%, and the naphtha-scheduling rule must be realized. There are 6 time intervals in this schedule, and the ratio of naphtha used for the reforming process to that used for the ethylene process is controlled in the range of 0.5 to 1 over the time horizon. The following tables show the input data and the scenario of mixed transportation of crude oil.
Table 1. Data of Crude Oil in Storage Tanks and Vessels
No.  Name of crude oil  Amount available (kt)  Period available  Sulfur (%)  Yield of CDU (≤350 °C) (%)  Naphtha yield (%)  Aromatic potential content of naphtha
1    Masila             35    1   0.55   45.70   16.71   36.21
2    Djene              91    1   0.30   37.84    8.85   50.53
3    Mandji             129   1   1.31   39.64   13.83   43.82
4    Iranian Light      135   2   1.38   54.00   40.43   38.31
5    Forcados           130   4   0.35   59.19   13.87   54.15
6    Odudu              101   6   0.25   62.22   21.38   54.82

Table 2. Amount Change of Crude Oil (kt)
Period  Masila   Djene    Mandji    Iranian Light  Forcados  Odudu
0       35.000   91.000   129.000   0              0         0
1       0        91.000   112.216   135.000        0         0
2       0        41.000   112.216   125.000        0         0
3       0        0        112.216   106.000        130.000   0
4       0        0        84.179    106.000        98.037    0
5       0        0        56.142    106.000        66.075    101.000
6       0        0        36.252    106.000        66.075    60.890
Table 3. Source of Crude Oil in Pipeline (kt)
Period  Masila   Djene    Mandji   Iranian Light  Forcados  Odudu
1       35.000   0        16.784   0              0         0
2       0        50.000   0        10.000         0         0
3       0        41.000   0        19.000         0         0
4       0        0        28.037   0              31.963    0
5       0        0        28.037   0              31.963    0
6       0        0        19.890   0              0         40.110
Table 4. Amount of Blend Oil in Pipeline
Period  Amount of blend oil (kt)  Sulfur (%)  Yield of CDU (≤350 °C) (%)  Aromatic potential content of naphtha  Amount of naphtha for reforming process  Amount of naphtha for ethylene process  Total aromatic potential content for reforming process  Total aromatic potential content for ethylene process
1       51.784    0.8000   43.74   38.37   0.00   8.2    0.00     314.6
2       60.000    0.4792   40.53   44.70   0.00   8.5    0.00     379.9
3       60.000    0.6413   42.96   42.23   0.00   11.3   0.00     477.2
4       60.000    0.8000   50.05   49.33   8.3    0.00   409.4    0.00
5       60.000    0.8000   50.05   49.33   8.3    0.00   409.4    0.00
6       60.000    0.6037   54.73   52.15   11.3   0.00   589.3    0.00
1-6     -         -        -       -       27.9   28.0   1408.1   1171.7
7       269.216   0.8631   55.20   -       -      -      -        -
In Table 4, the data of time interval 7 are the total inventory at the end of the 6th time interval and the average qualities of sulfur and once-through light oil yield. As can be seen from Table 4, the quality requirements for the mixed transportation of crude oil are basically met, and the naphtha with the lower aromatic potential content (41.85) is used for the ethylene process while the naphtha with the higher content (50.47) is used for the reforming process.
Conclusions
In the scheduling of mixed transportation of crude oil, the problem of how to realize the scheduling control of the aromatic potential content of naphtha has been discussed in this paper. It is economic to tackle the scheduling problem of mixed transportation of crude oil using multiperiod technology and the MILP method in refineries which process many kinds of crude oil. With the increase in computer speed and the improvement of algorithms, feasible results will be obtained much faster. When we use the MILP method to solve scheduling problems, we can limit the growth of the number of integer variables, so the problem of combinatorial explosion can be avoided.
Nomenclature
(a) Indices and Sets
t = 1, ..., T : time interval
i = 1, ..., I : serial number of crude oil
m = 1, ..., t : time interval
k = 1, ..., K : quality of crude oil
(b) Variables
VS_{i,t} = amount of crude oil i in the storage tank at time interval t
VSZ_T = total inventory of crude oil in the storage tanks at the last time interval
FST_{i,t} = amount of crude oil i transferred to the pipeline at time interval t
TB_t = total amount of crude oil through the pipeline at time interval t
ST_{i,t} = 0-1 variable denoting whether crude oil i is transferred to the pipeline at time interval t
INAR_t = 0-1 variable denoting the direction of the naphtha (reforming or ethylene process)
WNAE_t = amount of naphtha used for the ethylene process at time interval t
WNAR_t = amount of naphtha used for the reforming process at time interval t
WNA_t = amount of naphtha at time interval t
WNAE = amount of naphtha used for the ethylene process over the whole time horizon
WNAR = amount of naphtha used for the reforming process over the whole time horizon
ANAE_t = amount of aromatic potential content of naphtha used for the ethylene process at time interval t
ANAR_t = amount of aromatic potential content of naphtha used for the reforming process at time interval t
ANA_t = amount of aromatic potential content of naphtha at time interval t
(c) Parameters
QHTB_{t,k} = penalty variable used when quality k of the mixed oil cannot meet the lower limit at time interval t
QLTB_{t,k} = penalty variable used when quality k of the mixed oil cannot meet the upper limit at time interval t
QHVS_{T,k} = penalty variable used when quality k of the mixed oil cannot meet the lower limit at the end of the last time interval
QLVS_{T,k} = penalty variable used when quality k of the mixed oil cannot meet the upper limit at the end of the last time interval
g_{min} = minimum proportion of reforming naphtha to ethylene naphtha over the time horizon
g_{max} = maximum proportion of reforming naphtha to ethylene naphtha over the time horizon
f_{i,min} = minimum amount of crude oil i transferred in a time interval
f_{i,max} = maximum amount of crude oil i transferred in a time interval
n = maximum number of kinds of crude oil transferred in a time interval
VS_{i,0} = initial amount of crude oil i in the storage tank
\alpha_k = penalty coefficient when the mixed oil cannot meet quality limit k at the end of the last time interval
\delta_k = penalty coefficient when the mixed oil cannot meet quality limit k
\sigma = encouragement coefficient of aromatic potential content
\theta = penalty coefficient of aromatic potential content
a_i = naphtha yield of crude oil i
\mu_i = aromatic potential content of naphtha for crude oil i
\beta_{i,k} = the kth quality index of crude oil i
\beta_{k,min} = minimum quality k of the mixed oil
\beta_{k,max} = maximum quality k of the mixed oil
Literature Cited
J.M. Pinto, M. Joly, L.F.L. Moro, Computers and Chemical Engineering, 24 (2000), 2259-2276.
Heeman Lee, Jose M. Pinto, I.E. Grossmann and Sunwon Park, Ind. Eng. Chem. Res., 1996, 35, 1630-1641.
An Yi, Zhang Hui, He Yinren, Liao Zhong, Petroleum Refinery Engineering, 2002, 32(2), 52-54.
He Yinren, Yuan Changen, Chen Xianya, Yang Huaying and Li Huizhu, Application of penalty function technique in planning models, 1997, 26(6), 380-383.
Process Systems Engineering 2003
B. Chen and A.W. Westerberg (editors)
© 2003 Published by Elsevier Science B.V.
Dimensionality reduction in computer-aided decision making Yoshiyuki Yamashita Department of Chemical Engineering, Tohoku University, Sendai 980-8579, Japan
Abstract
Two methods for reducing the dimensionality of data in order to address the classification problem are presented here. Both methods are filters that use the information gain ratio to select a feature subset from the original data. These methods are applied to fault detection for the Tennessee Eastman problem. In the case study, both a decision tree inducer and a support vector machine are used as the base learner. Classification results for test data are analyzed in detail.
Keywords: decision tree, support vector machine, feature subset selection, fault detection
1. INTRODUCTION
One of the most important tasks in computer-aided decision-making is the problem of dimensionality reduction. This task is often called feature subset selection or attribute selection, and it is used in the pre-processing stage to reduce the time or space required by data mining. Reducing the dimensionality of the data reduces the size of the hypothesis space and allows the classifier to operate faster and more effectively. In many cases, the accuracy of the classification can be improved by this reduction. Furthermore, the result is a more compact and easily interpreted representation of the target system. Feature subset selection is the process of removing as much irrelevant and redundant information as possible to reduce the dimensionality of the data. Several kinds of methods have been proposed for feature selection. One approach is the filter paradigm, in which undesirable features are filtered out of the data before the learning process; it operates independently of any machine learning algorithm. Another approach to feature subset selection is called the wrapper, which estimates the accuracy of the feature subset using the actual machine learning algorithm. Although the wrapper approach is powerful, it consumes a significant amount of computation time. One of the typical classification problems in process systems is the problem of fault detection and isolation. Because it is a problem of practical interest in many chemical process plants, many algorithms for fault detection have been developed. Applications of soft computing methods are summarized in Calado et al. [1], including neural networks, fuzzy systems, knowledge-based systems, and evolutionary algorithms. Chiang et al. also summarize many
approaches for fault detection and isolation, including statistical methods and soft computing methods [2]. In this paper, two methods for feature selection are investigated. Both methods are filter approaches, which are independent of the successive primary learner. Experiments are performed by applying each of the methods to fault detection of the Tennessee Eastman problem. In this experiment, both a decision tree inducer and a support vector machine are used as the primary learner. Classification results before and after the feature selection are analyzed in detail.
2. METHOD
2.1 Decision tree induction
A decision tree is used as a decision procedure for determining the class label associated with a given example. Decision tree induction is an algorithm that constructs decision trees automatically from data. Various methods of induction have been investigated. The algorithm for decision tree induction constructs decision trees in a top-down, recursive, divide-and-conquer manner. The tree starts as a single node representing the training data. The algorithm uses an entropy-based measure for selecting the attribute that will best separate the samples into individual classes. A branch is created for each known value of the test attribute. Then the algorithm uses the same process recursively to form a decision tree. One of the most popular induction algorithms is the C4.5 algorithm [3, 4], which is used in this study. It uses the measure known as the information gain ratio as a heuristic for selecting the attribute.
2.2 Feature subset selection
Two filter approaches to feature subset selection are investigated in this paper. The first method uses the C4.5 algorithm as a prefilter. All the attributes in the final pruned decision tree are selected for the successive learning process. The idea of using a particular learning algorithm as a pre-processor in order to discover useful feature subsets for a primary learning algorithm has been studied by several researchers. Cardie [5] used a decision tree algorithm for selecting feature subsets, while using instance-based learners as the primary learning algorithm. In her study, the method was applied to text mining of natural language, obtaining better performance than C4.5 itself. The second method for feature subset selection is the attribute ranking method. Dimensionality reduction is accomplished by cross-validating the attribute rankings produced by each attribute selector with respect to the current learning algorithm. The highest n ranked attributes with the best cross-validated accuracy are then selected as the best subset. In this study, the information gain based attribute selector is used.
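As an illustration of the ranking criterion, the following sketch computes an information gain ratio for a single binary split of a continuous attribute and ranks attributes by it; the median split and the variable names are simplifications made for illustration, not the C4.5 implementation itself.

```python
import numpy as np

def entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gain_ratio(x, y, threshold):
    # information gain of the split divided by the split information
    left, right = y[x <= threshold], y[x > threshold]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    w = len(left) / len(y)
    gain = entropy(y) - (w * entropy(left) + (1 - w) * entropy(right))
    split_info = -(w * np.log2(w) + (1 - w) * np.log2(1 - w))
    return gain / split_info

def rank_attributes(X, y):
    # rank attributes by the gain ratio of a crude median split
    scores = [gain_ratio(X[:, j], y, np.median(X[:, j])) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1]
```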
Table 1: Disturbances
No.       Fault description            Type
IDV(1)    A/C feed ratio high          Step
IDV(2)    B composition high           Step
IDV(3)    D feed temp.                 Step
IDV(4)    Reactor CW inlet temp.       Step
IDV(5)    Condenser CW temp.           Step
IDV(6)    A feed loss                  Step
IDV(7)    C header pressure loss       Step
IDV(8)    A, B, C feed composition     Random
IDV(9)    D feed temp.                 Random
IDV(10)   C feed temp.                 Random
IDV(11)   Reactor CW inlet temp.       Random
IDV(12)   Condenser CW inlet temp.     Random
IDV(13)   Reaction kinetics            Slow drift
Fig. 1: Schematic diagram of the Tennessee Eastman process with base control
2.3 Support vector machine
The other primary classifier used in this paper is the Support Vector Machine (SVM). An SVM maps a pattern x into a high-dimensional space and constructs an optimal hyper-plane in this space. The mapping Φ(·) is performed through a kernel function K. The decision function f is

f(x) = w · Φ(x) + b = \sum_i \alpha_i y_i K(x_i, x) + b.    (1)

The optimal hyper-plane is identified as the one with the maximal distance to the closest pattern Φ(x_i) from the training data. To train the SVMs, Platt's Sequential Minimal Optimization algorithm (SMO) [6] is utilized in this paper.
3. CASE STUDY
3.1 The Tennessee Eastman process
The above-mentioned algorithms are applied to the simulated Tennessee Eastman process [7] with a base control system configured by McAvoy and Ye [8]. The Tennessee Eastman simulator is a computer simulation of a real chemical plant provided by the Eastman Company. The problem is described in detail by Downs and Vogel [7]. This simulation was given to the process control community as a challenging control system design problem. The plant is nonlinear, open-loop unstable, and has many unknown disturbances. Figure 1 shows a schematic diagram of the Tennessee Eastman process. The process has 41 measurements and 12 manipulated variables. In this study, 22 continuously measured variables and 12 manipulated variables are used for the analysis, because the other variables are not usually measured continuously.
Table 2: Summary of the C4.5 classification
IDV       Tree size   Selected features in the pruned tree              Classification error (%): training type 1 / type 2;  test type 1 / type 2
IDV(1)    3     XMEAS(7)                                                1.0 / 0.0;   1.0 / 3.0
IDV(2)    7     XMEAS(1,13,16)                                          1.5 / 0.0;   3.0 / 2.0
IDV(3)    45    XMEAS(1,3,9,10,12,13,15,16,17,18), XMV(3,6)             2.0 / 3.0;   11.0 / 18.0
IDV(4)    3     XMEAS(21)                                               1.0 / 0.0;   1.0 / 0.0
IDV(5)    7     XMEAS(13,22)                                            1.5 / 0.0;   1.0 / 0.0
IDV(6)    3     XMEAS(1)                                                1.0 / 0.0;   1.0 / 0.0
IDV(7)    3     XMV(5)                                                  1.0 / 0.0;   1.0 / 1.0
IDV(8)    5     XMEAS(13)                                               15.0 / 0.0;  14.0 / 1.0
IDV(9)    53    XMEAS(2,3,6,7,8,9,13,16,17,18,20,21), XMV(3)            11.0 / 2.0;  32.0 / 28.0
IDV(10)   13    XMEAS(2,7,8,17,18)                                      7.5 / 0.0;   16.0 / 11.0
IDV(11)   13    XMEAS(1,7,13), XMV(6,10)                                5.5 / 0.5;   3.0 / 10.0
IDV(12)   29    XMEAS(2,7,9,11,12,13,16,22), XMV(7,10)                  4.0 / 1.0;   13.0 / 8.0
IDV(13)   5     XMEAS(16,18)                                            29.0 / 0.0;  21.0 / 1.0
The evaluation of the fault detection was performed on data generated by the disturbances (faults) listed in Table 1. Although the original problem has 20 kinds of disturbances, only the first 13 disturbances are considered here, because the other disturbances are suggested to be used in conjunction with another disturbance [7]. The training set contains 200 samples from normal and 200 samples from abnormal operation with a sampling interval of 3 minutes, generated from two independent cycles of the simulation. Test data for the evaluation of the fault detection are prepared separately with different random numbers.
3.2 Classification without feature selection
Two machine learning algorithms were used as the primary learner: a decision tree inducer (C4.5) and a support vector machine (SVM). Before applying feature subset selection, classification using each of these learners is investigated for the TE plant data. With all 34 features, the C4.5 algorithm is applied to the training data. The classification result for each disturbance is summarized in Table 2. In this table, the variables appearing in the final pruned decision tree are shown along with the classification errors. There are two types of errors: a type 1 error occurs when a normal sample is misclassified into the fault class; a type 2 error occurs when the process is classified as normal while it is actually at fault. For the step type disturbances, except for IDV(3), the accuracy of classification is very high. For the other types of disturbances, the accuracy of classification is relatively low. In IDV(3), the changes of the monitored variables are not significant because the influence of the disturbance (D feed temperature) can easily be reduced by the control system [8].
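The two error types can be computed directly from the predicted and true labels; a minimal sketch, assuming the label convention 0 = normal and 1 = fault (the convention is an assumption, not stated explicitly in the paper):

```python
import numpy as np

def type1_type2_errors(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    type1 = np.mean(y_pred[y_true == 0] == 1)   # normal sample classified as fault
    type2 = np.mean(y_pred[y_true == 1] == 0)   # fault sample classified as normal
    return 100.0 * type1, 100.0 * type2

# example: 200 normal and 200 abnormal samples, as in the training sets used here
y_true = np.array([0] * 200 + [1] * 200)
y_pred = y_true.copy()
y_pred[:2] = 1                                   # two false alarms -> 1.0% type 1 error
print(type1_type2_errors(y_true, y_pred))
```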
Table 3: SVM classification
No.   Overall error (%): training   test
1     1.8     1.5
2     2.3     3.0
3     11.0    9.5
4     0.5     0.5
5     0.5     1.0
6     0.5     0.5
7     0.5     0.5
8     24.3    13.0
9     36.8    58.0
10    43.0    54.0
11    25.8    14.0
12    41.3    53.0
13    22.8    17.5
Figure 2: Comparison of SVM overall classification error for test data before and after applying the Feature Subset Selection (FSS): (a) C4.5 selection; (b) information gain based ranking (x-axis: error without feature selection [%]).
Classification w i t h feature selection
Now the method of feature subset selection is applied as a prefilter of primary learners. By applying the C4.5 algorithm, 1 to 13 attributes are selected for each disturbance data. Polynomial kernel SVMs are applied to the data with selected attributes. Overall classification errors are summarized in Fig. 2(a) as the comparison to the result without feature selection. Except for IDV(11), the result is almost comparable. In Fig. 2(b), the overall error of the SVM classification on the selected attributes by information gain based ranking method is summarized. The result is very similar to the result of Fig. 2(a), except for IDV(11) the accuracy is comparable to the result for the original set. The classification results of C4.5 on selected attribute by information gain based ranking is summarized in Table 4. Comparing to the result in Table 2, the resulting size of the tree becomes smaller for IDV(3) and IDV(9)~IDV(12). This means that, the result becomes more easily interpreted and will help to discover knowledge of the target system. Overall classification error is plotted in Fig. 3 as the comparison with the result of the original C4.5 classification. For several cases, feature subset selection gives better classification than the original C4.5 learner. The others are comparable. This result indicates that feature subset selection improves the generalization ability and reduces the classification error on test data.
363 Table 4: C4.5 classification after FSS No.
1 2 3 4 5 6 7 8 9 10 11 12 13 4.
tree size 3 7 27 3 7 3 3 5 39 5 9 13 5
estimation training set typel type2 1.0 0.0 1.5 0.0 8.0 8.0 1.0 0.0 1.5 1.0 1.0 0.0 1.0 0.0 15.0 0.0 18.5 3.0 12.5 0.0 7.5 0.0 7.5 0.0 29.0 0.0
error(%) test set typel type2 1.0 3.0 3.0 1.0 9.0 19.0 1.0 0.0 1.0 0.0 1.0 0.0 1.0 1.0 14.0 1.0 14.0 7.0 16.0 1.0 3.0 4.0 13.0 4.0 21.0 1.0
30
,
,
,
,
,
~'2s
~o w
o
0
5 10 15 20 25 30 Error without feature selection [%]
Figure 3: Comparison of C4.5 overall classification error before and after applying the information gain based ranking selection.
CONCLUSION
Two feature subset selection methods were investigated with fault detection problem for the Tennessee Eastman plant. The experiments have shown that, in most cases, feature subset selection gives results that are comparable or better than the original learner. Many applications of business decision-making involve a task for predicting classifications. An investigation of feature subset selection process would support such efforts. REFERENCES [ 1] J. M. F. Calado, J. Korbicz, K. Patan, R. J. Patton and J. M. G. SA da Costa, European Journal of Control, 7 (2001) 248. [2] L. H. Chiang, E. L. Russell and R. D. Braats, Fault Detection and Diagnosis in Industrial Systems, Springer, London, 2001. [3] J. R. Quinlan, C4.5: Programs for Machine Learning. Morgan Kaufmann, Los Altos, 1993. [4] J. R. Quinlan, Journal of Artificial Intelligence Research, 4 (1996) 77. [5] C. Cardie, 10th Int. Conf. on Machine Learning, Morgan Kaufmann, (1993) 25. [6] J. Principe, L. Giles, N. Morgan and E. Wilson (eds.), Advances in Kernel Methods: Support Vector Machines, MIT Press, Cambridge, 1998. [7] J. J. Downs and E. F. Vogel, Comput. Chem. Engng., 17 (1992) 245. [8] T. J. McAvoy and N. Ye, Comput. Chem. Engng., 18 (1994) 383.
ProcessSystemsEngineering2003 B. Chen and A.W. Westerberg(editors) 9 2003 Publishedby ElsevierScienceB.V.
364
Information Directed Sampling and Ordinal Optimization for Combinatorial Material Synthesis and Library Design Chia Huang Yen, David Shan Hill Wong, S.S. Jang Department of Chemical Engineering, National Tsing Hua University, Hsinchu, Taiwan 30043
Abstract: Combinatorial synthesis techniques have become more and more important in many areas of process and material designs. Simulated annealing is often suggested as a possible sampling policy in combinatorial methods. However, without model estimates of fitness function, true importance sampling cannot be performed. We suggested that a simple prediction model can be constructed as a generalized regression neural network using currently available. An information free energy index is defined using this model, which directs search to points that have potentially high fitness function (low information energy) and areas that are sparsely sampled (high information based entropy). Two benchmark problems were used to model the optimization problem involved in combinatorial synthesis and library design. We showed that when importance sampling is performed, the combinatorial technique becomes much more effective. The improvement in efficiency is explained using the concept of ordinal optimization. Keywords:
combinatorial synthesis, simulated annealing, ordinal optimization.
1. INTRODUCTION In recent years, combinatorial synthesis become an important optimization technique in product and process development [1 ], e.g. material synthesis[2], design of catalysts[3], drug design [4], improvement of enzymes activity[5], solvent selection[6]. One of stochastic search methods often employed as search strategy for large scale optimization problems is simulated annealing (SA)[7]. In a typical SA scheme, a reference point is selected; a set of candidates are generated. The merit functions at these candidates are compared to that of the reference. The reference is updated by the candidate according to the transition probability that mimics the Metropolis importance-sampling strategy used in Monte Carlo simulation of molecular ensembles. However, the "actual" merit function of a data point is known only if the actual experiment is performed. Therefore, unless a "model" of the merit function is available, importance sampling cannot be performed. However, for problems that need to be solved by combinatorial synthesis, accurate models are usually unavailable. Sampling policy in combinatorial synthesis is just another form of experimental design. In the past, we have proposed experimental design methods based on information theory for process optimization and recipe selection [8]. The philosophy is as follows. Given a set of data, a "rough" model can be constructed using neural-network. This model can be used to direct the search and suggest experiments at points that have potentially high fitness function (low information energy). However, such a model is untrustworthy when the number of data is sparse compared to the entire search space (high information based entropy). We need to explore areas that have not been investigated. A temperature-annealing schedule can be used to balance the two needs. The reason of why a rough model of limited accuracy can be useful to direct search was explained by Ho and coworkers [9] in terms of ordinal
365 optimization theory. Using probability theory and numerical simulation, they showed that the number of top ranked object selected can be significantly increased if a very poor model is used to aid a stochastic picking procedure. In this paper, we shall applied this concept of using a rough model to direct search and applied it to two benchmark problems in combinatorial synthesis. The objective is to demonstrate the effectiveness of this approach in solving large scale combinatorial optimizations. 2. B E N C H M A R K P R O B L E M S 2.1 N-K model
A benchmark problem that describes the basic physics of high-dimensional "structural" search is known as the N-K problem [10]. The simplest N-K model can be described as follows. Consider a N-dimensional array of integer variables. The merit function depends on the interaction of K-nearest neighbor.
1 N=K+I Y= [ ( N - K ) J 1/2 j=l~ ~tt(aj'aj+l ..... aj+K-1)
(1)
Here \mu is a predefined discrete variable function and a_j is the state of the jth element. The N-K model captures the basic physics of many phenomena such as genomics, protein evolution, etc. Two characteristics of the N-K model are especially important. The fitness function landscape becomes more and more rugged, with many local minima, as N and K increase. While the fitness function is very rugged, the function values of neighboring points are correlated and are not random variations. This ensures that the accumulation of knowledge is helpful and that the identification of a predictive model is possible.
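A minimal sketch of an N-K merit function in the sense of Eq. (1), with a randomly generated lookup table playing the role of the predefined function μ (the random table and binary states are assumptions made for illustration):

```python
import numpy as np

def make_nk_fitness(N, K, seed=0):
    rng = np.random.default_rng(seed)
    mu = {}                                    # lazily built lookup table for mu
    def fitness(a):
        total = 0.0
        for j in range(N - K + 1):             # N-K+1 overlapping windows of length K
            key = (j,) + tuple(a[j:j + K])
            if key not in mu:
                mu[key] = rng.random()
            total += mu[key]
        return total / np.sqrt(N - K)          # prefactor [1/(N-K)]^(1/2) from Eq. (1)
    return fitness

f = make_nk_fitness(N=20, K=4)
print(f(np.random.default_rng(1).integers(0, 2, size=20)))
```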
2.2 RPV Model
In many combinatorial material synthesis problems, the experimental variables include the compositions of the material and the processing conditions. The input space consists of both discrete and continuous variables and is typically of very high dimension. The merit function is usually one or a set of specific properties of the material, such as superconductivity, luminescence, catalytic activity or tensile strength. Such properties depend on the particular phase of the product material, and since the change in a physical property of the material is discontinuous across a phase boundary, the objective function encountered in combinatorial synthesis optimization is only piecewise continuous. Falcioni and Deem [10] proposed a "random phase volume" (RPV) model to simulate the merit function encountered in combinatorial synthesis. Essentially, RPV is a relation between the merit function Y and a set of compositional variables x and non-composition variables z. Composition variables are expressed as mole fractions; therefore, for a C-component system there are C-1 independent composition variables. The entire composition space is divided into α = 1, ..., M different phases by defining phase center points x_α. Each point in the composition space belongs to the phase of the nearest phase center point. Non-composition variables may be processing conditions such as temperature, pressure, pH, time of reaction or thickness of film. They may be discrete or continuous and are normalized in the range [-1, 1] in the RPV model. Similarly, the non-composition space is divided into γ = 1, ..., N different phases by defining phase center points z_γ.
Y(x, z) = Y_{\alpha\gamma}   if   ||x - x_\alpha|| = \min_{\alpha'=1,\ldots,M} ||x - x_{\alpha'}||   and   ||z - z_\gamma|| = \min_{\gamma'=1,\ldots,N} ||z - z_{\gamma'}||    (2)
The merit function Y_{\alpha\gamma} in this phase is represented by a Q_x-th order polynomial in the composition variables and a Q_z-th order polynomial in the non-composition variables:

Y_{\alpha\gamma} = V_\alpha + \sigma_x \sum_{k=1}^{Q_x} \sum_{i_1 \ge \cdots \ge i_k = 1}^{d} f_{i_1 \ldots i_k} \xi_x^k A^{\alpha}_{i_1 \ldots i_k} y_{i_1} y_{i_2} \cdots y_{i_k} + \frac{1}{N}\left( W_\gamma + \sigma_z \sum_{k=1}^{Q_z} \sum_{i_1 \ge \cdots \ge i_k = 1}^{b} f_{i_1 \ldots i_k} \xi_z^k B^{\gamma}_{i_1 \ldots i_k} w_{i_1} w_{i_2} \cdots w_{i_k} \right)    (3)

V_\alpha, W_\gamma, \sigma_x, \sigma_z, \xi_x, \xi_z, A^{\alpha}_{i_1...i_k} and B^{\gamma}_{i_1...i_k} are coefficients of the model, and f_{i_1...i_k} is a symmetric factor. Figure 2 is a typical merit function landscape of an RPV model with C=3, N=0, M=15 and Q_x=2. The piecewise continuous and nonlinear nature is evident. The complexity of the problem can be increased by increasing the dimension of the variables and the number of phases. Given the N-K model and the RPV model, sampling algorithms for optimization in combinatorial synthesis and library design can be benchmarked.
3. METHODOLOGY
3.1 IFEDSA
The Information Free Energy Directed Annealing (IFEDA) procedure proposed is as follows:
1. A set of 100 initial configurations is sampled randomly and the merit functions are measured.
2. The data are used to train a meta-model in the form of a generalized regression neural network (GRNN, [12]).
3. A simulated annealing search is performed using this GRNN meta-model.
   i. The existing minimum is selected as the reference and its information free energy is calculated.
   ii. A potential test candidate is generated and its information free energy is calculated.
   iii. If the information free energy of the candidate is less than that of the reference:
       a. the candidate is collected as a new test point;
       b. the reference point is replaced by the candidate;
       c. if NT candidates have been collected, go to step 4, otherwise go to step ii.
       Otherwise, go to step ii.
4. The actual merit functions at this new batch of candidate solutions are evaluated (tests are performed).
5. If enough tests have been performed and the optimum found is satisfactory, the process is stopped. Otherwise, the temperature is reduced and the procedure returns to step 2, so that the modeling-selection-testing cycle is repeated.
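The meta-model in step 2 can be as simple as a kernel-weighted average of the merit values of all points tested so far; the following is a minimal sketch of such a generalized regression neural network prediction, with a single isotropic smoothing parameter assumed for simplicity:

```python
import numpy as np

def grnn_predict(x, X, Y, sigma=0.1):
    # GRNN / Nadaraya-Watson estimate: Gaussian-kernel weighted average of
    # the observed merit values Y at the sampled points X
    d2 = np.sum((np.asarray(X) - np.asarray(x)) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return float(np.sum(w * np.asarray(Y)) / np.sum(w))
```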
3.2 Information Energy
Given a model of all the presently available experimental data, we define the information energy index of any candidate point in the search space as

U(x) = \hat{Y}(x) for minimization problems;  U(x) = -\hat{Y}(x) for maximization problems.    (4)

At any candidate point, the better the predicted fitness function, the lower is its information energy, and the more valuable is the information at that point.
3.3 Information Entropy
Our knowledge of a candidate point can be measured by the information entropy

S(x) = \frac{\sum_{i=1}^{N} (x - x_i)^T A^{-1} (x - x_i) \exp[-(x - x_i)^T A^{-1} (x - x_i)]}{\sum_{i=1}^{N} \exp[-(x - x_i)^T A^{-1} (x - x_i)]}    (5)

where A^{-1} is a scaling factor matrix. The information entropy is a Gaussian-weighted average of the squared distance between a candidate point and all existing data points. The higher the information entropy, the less we know about the candidate point.
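A minimal sketch of the entropy of Eq. (5) and, anticipating the next subsection, its combination with the model-predicted energy into an information free energy; the diagonal scaling matrix and the use of the grnn_predict sketch above as the surrogate are illustrative assumptions:

```python
import numpy as np

def info_entropy(x, X, A_inv):
    # Eq. (5): Gaussian-weighted average of squared scaled distances to sampled points
    diff = np.asarray(X) - np.asarray(x)
    d2 = np.einsum('ij,jk,ik->i', diff, A_inv, diff)
    w = np.exp(-d2)
    return float(np.sum(d2 * w) / np.sum(w))

def info_free_energy(x, X, Y, A_inv, T, sigma=0.1):
    U = grnn_predict(x, X, Y, sigma)          # Eq. (4), minimization case
    S = info_entropy(x, X, A_inv)
    return U - T * S                          # Eq. (6) of the next subsection
```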
3.4 Information free energy and temperature annealing
A candidate is worthy of an experiment if it has the potential of having a good fitness value (low information energy), or if its neighborhood has not been sufficiently explored (high information entropy). During the initial stages, when the number of experiments is small, the model predictions are of little value and experiments should be devoted to sampling uncharted search space. When sufficient information has been gathered, only samples that are potentially important should be tested. Chen et al. (1998) proposed an information free energy index:

F(x) = U(x) - T S(x)    (6)

T is an annealing temperature proportional to the number of experiments. The proper convergence of the sampling procedure to the global minimum depends on the annealing schedule. In this work, the very fast annealing scheme proposed by Ingber [13] was used:
T = T_0 \exp(-a N^b)    (7)
with T_0, a, b being adjustable constants and N the total number of experiments.
3. RESULTS
Table 1 shows the results of solving several N-K and RPV problems after 1000 function evaluations. It was found that our IFEDA approach is much more effective than simple SA for both the N-K and RPV problems.
Table 1: Search results for the N-K and RPV problems using SA and IFEDSA
         RPV (M=15, N=3, Z=0, Q=2)   RPV (M=37, N=6, Z=0, Q=6)   NK (N=20, K=4)       NK (N=30, K=5)
         Avg.(a)   Std(b)            Avg.(a)   Std(b)            Avg.(a)   Std(b)     Avg.(a)   Std(b)
SA       -1.71     0.08              -1.02     0.12              1.54      0.08       1.88      0.11
IFEDA    -1.83     0.01              -1.62     0.02              1.13      0.00       1.58      0.07
(a) average of 10 runs; (b) standard deviation of 10 runs
Fig. 1 illustrates the decrease in the objective function with the number of batches of experiments for an RPV model. An undirected simulated annealing search causes the system to deviate from the current minimum so that the search process will avoid being trapped in a local minimum. However, aided by the GRNN, the IFEDSA is able to locate the correct minimum almost one order of magnitude faster.
Fig. 1: Changes in the optimum located in the solution of a simple RPV problem (M=15, N=3, Z=0, Q=2); left panel: SA, right panel: IFEDSA (x-axis: No. of batch).
Ordinal optimization theory [9] found that the improvement in the alignment of top ranked candidates by the use of a model depends strongly on (i) how the fitness function is distributed (the shape of the ordered performance curve), but only weakly on (ii) the accuracy of the model and (iii) the selection rule, i.e. how the model is used. Both the N-K and RPV models exhibit what is called a bell-shaped distribution curve, which is the most amenable to being helped by a model. A parameter W defined as
W = \frac{\max|Y - \hat{Y}|}{Y_{max} - Y_{min}}    (8)
was used in their studies to represent model accuracy. Lau and Ho [14] found that when W ≈ 0.5, the alignment can be increased by 6-8 times if a horse-race selection rule is used. Our IFEDSA is just another form of selection rule. For both the N-K and RPV models, we found that W ≈ 0.3 to 0.5. Therefore approximately one order of magnitude increase in efficiency is expected. Fig. 2 shows the distribution of data sampled in the 1st to 10th batch, the 21st to 30th batch and the 51st to 60th batch for an N-K problem. It is obvious that the distribution of the sampled data shifts towards the low energy end. Such changes were not found for simple SA. This shows that IFEDA allows us to perform importance sampling in the early stages of the search.
Fig. 2: Distribution of data sampled at different stages of optimization for the N-K model
4. CONCLUSION
In this work, we have demonstrated the necessity of true importance sampling in solving optimization problems of high dimension. Our results show that while brute-force combinatorial techniques may be powerful, the technique becomes much more effective if we can organize the present knowledge periodically to direct the search. The organization of knowledge need not be done in a theoretical or precise manner. A simple empirical model that memorizes all existing data is sufficient. A "rough" model, together with appropriate consideration of model uncertainty, is able to significantly improve search efficiency.
REFERENCES
[1] M.E. Davis, "Combinatorial Methods: How will they integrate into Chemical Engineering?," AIChE J., 45(11), 2270, 1999.
[2] X.D. Xiang, X. Sun, G. Briceno, Y. Lou, K.A. Wang, H. Chang, W.G. Wallace-Freedman, S.W. Chen and P.G. Schultz, "A Combinatorial Approach to Materials Discovery," Science, 268, 1738, 1995.
[3] B.M. Cole, K.D. Shimizu, C.A. Kreuger, J.P.A. Harrity, M.L. Snapper and A.H. Hoveyda, "Discovery of Chiral Catalysts Through Ligand Diversity: Ti-Catalyzed Enantioselective Addition of TMSCN to Meso Epoxides," Angew. Chem. Int. Ed. Engl., 35, 1668, 1996.
[4] B.M. Gordon, M.A. Gallop and D.V. Patel, "Strategy and Tactics in Combinatorial Organic Synthesis. Applications to Drug Discovery," Acc. Chem. Res., 29, 144, 1996.
[5] L. You and F.H. Arnold, "Directed Evolution of Subtilisin E in Bacillus Subtilis to Enhance Total Activity in Aqueous Dimethylformamide," Protein Eng., 9(1), 77, 1996.
[6] E.J. Pretel, P.A. Lopez, B.B. Susana and E.A. Brignole, "Computer-aided molecular design of solvents for separation processes," AIChE J., 40(8), 1349, 1994.
[7] P.J.M. van Laarhoven and E.H.L. Aarts, "Simulated Annealing: Theory and Applications," Reidel, Dordrecht, The Netherlands, 1987.
[8] J.H. Chen, S.S. Jang, D.S.H. Wong and S.L. Yang, "Product and Process Development Using Artificial Neural-Network Model and Information Analysis," AIChE J., 44, 876, 1998.
[9] Y.C. Ho, "Tutorial on ordinal optimization," http://hrl.harvard.edu/people/faculty/ho/DEDS/00/OOTOC.html
[10] S.A. Kauffman and S. Levin, "Towards a general theory of adaptive walks on rugged landscapes," J. Theor. Biol., 128, 11, 1987.
[11] M.M. Falcioni and W. Deem, "Library Design in Combinatorial Chemistry by Monte Carlo Methods," Physical Review E, 61(5), 5948, 2000.
[12] D.F. Specht, "A General Regression Neural Network," IEEE Transactions on Neural Networks, 2(6), 568, 1991.
[13] L. Ingber, "Very Fast Simulated Annealing," Math. Comput. Modelling, 12, 967, 1989.
[14] T.W.E. Lau and Y.C. Ho, "Universal alignment probability and subset selection for ordinal optimization," Journal of Optimization Theory and Applications, 93(3), 455, 1997.
Process Systems Engineering 2003
B. Chen and A.W. Westerberg (editors)
© 2003 Published by Elsevier Science B.V.
Balanced Production Cost Estimation for Byproduct Gases in Iron and Steel Making Plants Heui-Seok Yi a, Jeong Hwan Kim b, Chonghun Han b aSchool of Environmental Engineering, Pohang University of Science and Technology, San 31, Hyoja, Pohang, Kyungbuk, 790-784, Korea bDepartment of Chemical Engineering, Pohang University of Science and Technology, San 31, Hyoja, Pohang, Kyungbuk, 790-784, Korea
Abstract A process industry consists of many individual process plants and each plant produces the various byproducts and final products. The byproduct in a plant can be used as feedstock or energy source in another plant. The cost estimation of the byproduct is critical to calculate the cost of final product when the byproduct is used as feedstock or energy source. However, the unbalanced characteristics of process measurements make the cost estimation of the final product incorrect due to the inaccurate cost estimation of byproduct. In this paper, balanced production cost estimation is implemented for an iron and steel making plant using data reconciliation and multiple gross error estimation. The byproduct cost is estimated from the error-corrected data so that the byproduct cost is consistent between the producer and consumer in the plants. Mixed integer nonlinear programming is applied to estimate the magnitudes of gross errors and remove random errors in process measurements simultaneously. Balanced cost estimation solves any discrepancy between the production cost of producers and the production cost of consumers. Keywords cost estimation, gross error estimation, data reconciliation, byproduct gas 1.
INTRODUCTION
An industrial plant complex is composed of several individual plants. Each plant makes byproducts, intermediate products and the final products from raw materials. Intermediate products and byproducts in a plant can be used for energy sources or raw material in another plant [ 1]. The operational data for energy and raw material consumption and production rate are used for production accounting. However, the random and gross errors of process measurements result in the discrepancies of the generation/consumption amounts between production plants and consumption plants, which make it difficult to estimate the exact production cost. For the production accounting, the error-corrected data must be used to make the financial data from operational data consistent [2]. Measurement errors can be removed by gross error estimation and data reconciliation. Data reconciliation adjusts the process measurement to satisfy the conservation law under the assumption of no gross error. There has been much research work to compensate the gross errors. Serial elimination is simple to implement but loses redundancies during the procedure. Serial compensation keeps redundancy but its accuracy depends on the size of gross errors. Simultaneous compensation is more correct than the serial method, however, much computation time is required [3]. The modification of the simultaneous compensation was proposed to reduce computation time [4]. In this paper, balanced cost estimation of byproduct gases for an iron and steel making plant is proposed using the error-corrected data. Random and gross errors are removed
371 simultaneously by mixed integer nonlinear programming (MINLP). Error-corrected data are used to estimate the production cost of byproduct gases. Estimation results show no discrepancies between the production plant and consumption plant of byproduct gases. 2.
BALANCED E S T I M A T I O N OF BASIC UNIT COST
Pl~ Balanced measurements are .__._~. essential to estimate the basic unit R1 cost. Figure 1 shows the process P2/,' flow network. Plants A, B, C and D R2 ~ V ~ I2 ] ~'I~ i consume raw materials, R1, R2, R3 IT ~ IMT / I M 2 ~ V ~ _ _ . ~ and R4 respectively, and produce P 1, ~P3~t P2, P3 and P4 as the final products, R3 ~[ C [ and I1, I2, I3 and I4 as intermediate p~ products. The intermediate products R4 I4 ,,..are mixed and consumed in the "~ plants E, F and G. The final Figure 1. Simple process network to show the balanced cost products, P5, P6 and P7 are made estimation. from the mixed intermediate products IM1, IM2 and IM3. Table 1 shows the true values and the measurements of the measured streams. Streams IT and IMT are not measured and all the Table 1. Measured values and true values other streams are measured. The plant E makes the final Measured product, P5, from raw Streams True Measured Streams True values values values values material, IM1 that is the R1 121 128 I4 90 95 mixed intermediate product. IM1 180 185 R2 165 171 The basic unit costs of IM2 100 95 R3 98 102 Pi (i = 5,6,7)for raw material IM3 140 136 R4 96 104 I1 have two different kinds S1 36 32 P1 21 25 of views: from the $2 20 17 P2 15 18 production plants A and $3 28 30 P3 18 15 from the consumption plant P5 144 151 P4 6 4 P6 80 76 I1 100 95 E. For example, the basic P7 112 121 I2 150 155 unit cost for P5 from points I3 80 83 of the plants A, B, C and D's views can be defined by Eq. (1). Table 2. Reconciled values P5
Cps,II, A =
IM 1
(1)
II ~-'IM, The basic unit cost of P5 for raw material I1 from the point of plant E's view can be defined by Eq. (2). P5 Cps.11.L. =
11 IM~ ~ I , t
(2)
Streams R1 R2 R3 R4 P1 P2 P3 P4 I1 I2 I3
Reconciled value 121.7 169.2 98.7 100.2 25.2 18.0 15.1 4.0 96.5 151.2 83.7
Streams I4 IM1 IM2 IM3 S1 $2 $3 P5 P6 P7
Reconciled value 96.2 186.9 94.6 146.0 32.2 17.1 29.7 154.7 77.5 116.3
The value of C_{P5,I1,A} is equal to the value of C_{P5,I1,E} when balanced mass flow rates are used; for example, with the true values both are 3.36. However, with the measured mass flow rates the value of C_{P5,I1,A} is 3.500 and the value of C_{P5,I1,E} is 3.677. The error of the basic unit cost with unbalanced data is
about 4.5 percent. Therefore, the calculation of the basic unit cost, or any other accounting, must use balanced flow rates. Assuming that the measured data in Table 1 contain no gross error, data reconciliation gives balanced data for the mass flows. Table 2 shows the results of the data reconciliation. The basic unit costs computed with balanced data give identical results (C_{P5,I1,A} = C_{P5,I1,E} = 3.667). Therefore, the basic unit cost must be computed using balanced data obtained by data reconciliation of the measured data.
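As a quick numerical illustration of this point (not part of the original paper), the following sketch plugs the reconciled values of Table 2 into the basic unit cost definitions of Eqs. (1) and (2); with balanced data the producer and consumer viewpoints give the same number, whereas repeating the calculation with the raw measurements of Table 1 gives two different numbers.

```python
# Minimal sketch: basic unit cost of P5 for raw material I1, both viewpoints,
# using the reconciled values of Table 2 and the definitions of Eqs. (1)-(2).
P5 = 154.7                                # reconciled flow of final product P5
I1 = 96.5                                 # reconciled flow of intermediate I1
IM1 = 186.9                               # reconciled flow of mixed intermediate IM1
IM_total = 186.9 + 94.6 + 146.0           # IM1 + IM2 + IM3
I_total = 96.5 + 151.2 + 83.7 + 96.2      # I1 + I2 + I3 + I4

c_producer = P5 / (I1 * IM1 / IM_total)   # plants A-D viewpoint, Eq. (1)
c_consumer = P5 / (IM1 * I1 / I_total)    # plant E viewpoint, Eq. (2)

print(f"producer view: {c_producer:.3f}, consumer view: {c_consumer:.3f}")
# With the balanced (reconciled) data both viewpoints agree (about 3.67), which is
# exactly the consistency the paper requires for production accounting.
```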
3. GROSS ERROR ESTIMATION AND DATA RECONCILIATION
To obtain balanced measurement data, data reconciliation and gross error estimation must be implemented. Simultaneous estimation of the multiple gross error magnitudes and the reconciled values is used. The procedure consists of two steps: the first step is to identify the locations of the multiple gross errors, and the second step is to estimate the gross error magnitudes and the reconciled values.
1st step
In the first step of the proposed algorithm, the candidate gross errors are identified by the measurement test. The vector of measurement adjustments is defined as:
a = y - x̂    (3)

where y is the vector of measurements and x̂ the vector of reconciled estimates. The measurement test requires data reconciliation before the identification. The test statistic z_{a,j} follows a standard normal distribution under H_0, the hypothesis that no gross error is present in the system:

z_{a,j} = |a_j| / √(W_{a,jj})    (4)
where W_{a,jj} is the diagonal element of the covariance matrix of a. Other identification methods, such as the nodal test, can also be used in step 1:
z_{r,i} = |r_i| / √(V_{ii})    (5)
where r_i is the i-th element of the residual vector r = Ay, and V_{ii} is the diagonal element of the covariance matrix of r. The identified gross errors are formulated as integer variables to confirm their existence and to estimate their magnitudes in the second step.
2nd step
The model for the flow measurements and gross errors can be given by

y = x + e + δ    (6)

where y is the vector of measurements, x the vector of true values, e the vector of random errors and δ the vector of gross errors. The simultaneous estimation of the gross error magnitudes and the reconciled values is the solution of the minimization problem

min_{x̂, δ, B}  Σ_{j ∈ streams \ identified} ( (y_j - x̂_j) / σ_j )² + Σ_{j ∈ identified streams} ( (y_j - x̂_j - δ_j) / σ_j )²    (7)

subject to

A x̂ = 0    (8)
|δ_k| ≤ U_k B_k    (9)

|δ_k| ≥ ε_k U_k B_k    (10)

x̂ ≥ 0    (11)
B_k ∈ {0, 1}    (12)

where k indexes the streams in which a gross error has been identified; V is the covariance matrix of the constraint residuals, and the covariance matrix of the measurements provides the variances σ_j² in Eq. (7). U_k is chosen as an arbitrarily large value that can be regarded as the upper limit on the bias magnitude. The value of B_k must be fixed at zero if neither the measurement test nor the nodal test in the first step identifies a gross error. The values of ε_k must be chosen such that ε_k U_k is a few times the standard deviation of the measurements [5].
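To make the first step concrete, here is a minimal sketch (not the authors' implementation) of weighted least-squares data reconciliation and the measurement test of Eqs. (3)-(4), on a tiny invented two-node network; the second step, the MINLP of Eqs. (7)-(12), requires an MINLP solver and is omitted.

```python
# Sketch on an invented network: stream 1 feeds node A, stream 2 goes from node A
# to node B, stream 3 is the product and stream 4 is a second feed at node B.
import numpy as np

A = np.array([[1.0, -1.0, 0.0, 0.0],      # node A: F1 - F2 = 0
              [0.0,  1.0, -1.0, 1.0]])    # node B: F2 - F3 + F4 = 0
y = np.array([100.0, 99.0, 151.0, 60.0])  # measurements (F4 carries a gross error)
sigma = np.array([1.0, 1.0, 1.5, 1.0])    # measurement standard deviations
S = np.diag(sigma**2)                     # measurement covariance matrix

# Weighted least-squares reconciliation assuming no gross error:
ASA = A @ S @ A.T
x_hat = y - S @ A.T @ np.linalg.solve(ASA, A @ y)   # reconciled estimates
a = y - x_hat                                       # adjustments, Eq. (3)
W_a = S @ A.T @ np.linalg.solve(ASA, A @ S)         # covariance of the adjustments
z = np.abs(a) / np.sqrt(np.diag(W_a))               # measurement test, Eq. (4)

critical = 2.5                                      # illustrative critical value
candidates = [j + 1 for j, zj in enumerate(z) if zj > critical]
print("reconciled flows:", np.round(x_hat, 2))
print("test statistics :", np.round(z, 2), "-> gross-error candidates:", candidates)
```

The flagged streams would then carry the binary variables B_k in the second-step MINLP, which confirms their existence and estimates the bias magnitudes δ_k.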
4. INDUSTRIAL APPLICATION
Balanced cost estimation by data reconciliation and gross error estimation is applied to the byproduct gases of an iron and steel making plant. The iron and steel making plant consumes much energy, and the basic unit cost of energy is very important for economical production and energy management. Energy can be purchased in the form of liquefied natural gas, coal, heavy oil and electricity, and it is also byproduced in the processing of iron ores. There are four kinds of byproduct gases: BFG (blast furnace gas), CFG (COREX furnace gas), COG (coke oven gas) and LDG (Linz-Donawitz gas). These gases are not released to the environment because they are very toxic, and they are consumed as energy sources in the downstream processes. The schematic diagrams of the BFG, CFG and LDG distribution flows are shown in Figure 1(a): the solid line shows the BFG flow, the dotted line the CFG flow and the dash-dotted line the LDG flow. BFG is by-produced in the blast furnaces and consumed in the coke plants and power plants. The remaining BFG is pressurized and then mixed with COG and LDG. CFG is by-produced in the COREX furnace and is consumed in the power plants; the remaining amount is mixed with BFG directly. LDG is by-produced in the steel making plants. LDG generated in the first steel making plant is consumed in the first and second power plants. However, LDG generated in the second steel making plant is pressurized and then consumed in a low-pressure boiler or mixed with BFG and COG. Figure 1(b) shows the schematic diagram of the COG distribution flow. COG is by-produced in the coke plants and consumed in furnaces, power plants, chemical plants, steel making plants, etc. The remaining COG is pressurized and mixed with BFG and LDG.

Figure 1(a). The schematic diagram of BFG, CFG and LDG distribution lines
The mixed gas is consumed in the plate rolling mill (PL), the wire rod rolling mill (WRM) and the hot strip mill (HSM). The magnitudes of the multiple gross errors and the reconciled values of the measured streams are estimated by the MINLP formulation, and the results are shown in Table 3. The parenthesized values are the estimated magnitudes of the gross errors. The gross errors are identified in the production plants, and no gross error is found in the consumption plants. Using the error-corrected data, a balanced estimate of the basic unit cost can be computed. Showing the basic unit cost of every plant is not required to demonstrate the significance of balanced estimation; due to limited space, only the basic unit costs of some BFG consumption plants, from the point of view of the no. 1 BFG plant, are shown in this paper. Table 4 compares the basic unit costs of some BFG consumption plants, from the point of view of the no. 1 BFG plant, computed with the error-corrected data and with the measured data. The basic unit cost with balanced data from the point of view of the production plant shows no difference from that of the consumption plant. However, the basic unit costs with unbalanced data from the point of view of the no. 1 BFG plant are not equal to those from the point of view of the consumption plants. The estimation error is about 1.14% to 3.60% compared with the basic unit cost using balanced data. Cost estimation with unbalanced data is inconsistent and should therefore be implemented with the error-corrected data.
5. CONCLUSIONS
Balanced estimation of basic unit cost or production cost is implemented using the error-corrected data. To remove the measurement error, simultaneous gross error estimation and data reconciliation by MINLP is used. The results show that the basic unit cost using the error-corrected data shows no errors. However, the basic unit cost using the measured raw
data shows estimation errors of about 1.14% to 3.36%. Therefore, the estimation of production cost must be implemented using the error-corrected process data. Balanced cost estimation could solve the discrepancy between the production cost for producers and the production cost for consumers.

Table 4. The comparison of the basic unit costs
Plant | Unbalanced data, no. 1 BFG plant's view | Unbalanced data, consumption plant's view | Balanced estimation, no. 1 BFG plant's view | Balanced estimation, consumption plant's view
1 HSM | 39.3 | 37.5 | 38.0 | 38.0
2 HSM | 39.0 | 37.3 | 37.7 | 37.7
2 PM | 139.7 | 133.3 | 134.8 | 134.8
3 PM | 845.7 | 806.9 | 816.2 | 816.2
Billet | 542.2 | 517.3 | 523.2 | 523.2
1 WRM | 509.8 | 486.4 | 492.0 | 492.0
2 WRM | 472.4 | 450.7 | 455.9 | 455.9
3 WRM | 275.4 | 262.8 | 265.9 | 265.9
Figure 1(b). The schematic diagram of COG distribution line

REFERENCES
[1] D. P. Lal, Chem. Eng. Prog., 98 (2002), 72.
[2] B. Harkins and K. Mills, Chem. Eng. Prog., 97 (2001), 58.
[3] Q. Jiang and M. J. Bagajewicz, Ind. Eng. Chem. Res., 38 (1999), 2119.
[4] H.-S. Yi, H. Shin, J. H. Kim and C. Han, Proceedings of ICCAS 2002, International Conference on Control, Automation and Systems, Oct. 16-19 (2002), Muju, Jeonbuk, Korea.
[5] T. A. Soderstrom, D. M. Himmelblau and T. F. Edgar, Control Eng. Practice, 9 (2001), 869.
Optimal Design of Batch-Storage Network with Recycling Streams

Gyeongbeom Yi* and Gintaras V. Reklaitis**
* Department of Chemical Engineering, Pukyong National University, Busan, Korea 608-739
** School of Chemical Engineering, Purdue University, West Lafayette, IN 47907, U.S.A.
Abstract
An effective methodology is reported for determining the optimal capacity (lot size) of batch processing and storage networks which include material recycle or reprocessing streams. We assume that any given storage unit stores one material type, which can be purchased from suppliers, internally produced, internally consumed and/or sold to customers. We further assume that a storage unit is connected to all processing stages that use or produce the material to which that storage unit is dedicated. Each processing stage transforms a set of feedstock materials or intermediates into a set of products with constant conversion factors. The objective of the optimization is to minimize the total cost, composed of raw material procurement, setup and inventory holding costs as well as the capital costs of processing stages and storage units. The Kuhn-Tucker conditions of the optimization problem can be reduced to two subproblems. The first yields analytical solutions for the batch sizes, while the second is a separable concave minimization network flow subproblem whose solution yields the average material flow rates through the network. For the special case in which the number of storage units equals the number of process stages plus the number of raw material storage units, a complete analytical solution for the average flow rates can be derived. The analytical solution for the multistage, strictly sequential batch-storage network case, which was previously reported by the authors, can also be obtained via this approach. The principal contribution of this study is thus the generalization and extension to non-sequential networks with recycle streams.
Keywords: Optimal, Design, Batch, Storage, Network, Recycle

1. INTRODUCTION
The purpose of this study is to suggest an effective methodology to determine the optimal capacity (lot size) of a general batch-storage network including recycling streams. We have previously developed a compact analytical solution for the optimal lot sizing of a multiproduct, sequential multistage production and inventory system with serially and parallel interlinked storage units and processes [1]. In this study, we enlarge the network connection structure of the storage units and processes to the most general form. We assume that any storage unit can be connected to any process as feedstock and/or product. A practical advantage of this study over our previous work is that we can deal with network structures involving non-sequential recycling streams, which are very common in chemical processes. In spite of this general presentation, an analytical solution is still available for a special case.

2. OPTIMIZATION MODEL
A chemical plant, which converts raw materials into final products through multiple physicochemical processing steps, is effectively represented by a batch-storage network, as shown in Fig. 1. The chemical plant is composed of a set of storage units (J) and a set of batch processes (I), as shown in Fig. 1(a). A circle (j ∈ J) in the figure represents a storage unit, a square (i ∈ I) represents a batch process, and the arrows represent the material flows. Each process requires multiple feedstock materials of fixed composition (f_i^j) and produces multiple products with fixed product yield (g_i^j), as shown in Fig. 1(b). Note that the storage index j is a superscript and the process index i is a subscript. If there is no material flow between a storage and a process, the corresponding feedstock composition or product yield value is zero. Each storage unit is dedicated to one material.
Fig. 1. General Structure of Batch-Storage Network: (a) process and storage sets; (b) feedstock composition and product yield; (c) material movement around a storage unit.

Each storage unit is involved with four types of material movement: purchasing from suppliers (k ∈ K(j)), shipping to consumers (m ∈ M(j)), feeding to processes and receiving from processes, as shown in Fig. 1(c). Note that the sets of suppliers K(j) and customers M(j) are storage dependent. The material flow from process to storage (or from storage to process) is represented by the Periodic Square Wave (PSW) model [1]. The material flow representation of the PSW model is composed of four variables: the batch size B_i, the cycle time ω_i, the transportation time fraction x_i^in (or x_i^out), and the start-up time t_i^in (or t_i^out).
Following the same discussion as in Reference [1], we assume that the feedstock feeding operations to a process (or the product discharging operations from a process) occur at the same time and that their transportation time fractions are the same among the feeding or discharging flows. That is, the superscript j is not necessary to discriminate the storage units in x_i^in (or x_i^out) and t_i^in (or t_i^out). The material flow of purchased raw material is represented by the order size B_k^j, cycle time ω_k^j, transportation time fraction x_k^j and start-up time t_k^j.
All transportation time fractions are considered as parameters, whereas the other quantities are design variables in this study. The material flow of finished product sales is represented by B_m^j, ω_m^j, x_m^j and t_m^j in the same way. An arbitrary periodic function of the finished product demand forecast can be represented by the sum of periodic square wave functions with known values of B_m^j, ω_m^j, x_m^j and t_m^j.
The feedstock flows from predecessor storages and the product flows to successor storages are of course not independent. Since one production cycle in a process is composed of feedstock feeding, processing and product discharging, the following timing relationship exists between the time delay of the feedstock stage and the time delay of the product stage:

t_i^out = t_i^in + ω_i (1 - x_i^out)    (1)
Let D_i be the average material flow rate through process i, which is B_i divided by ω_i. The average material flows through raw material storage and finished product storage are denoted by D_k^j and D_m^j respectively. The overall material balance around storage j results in the following relationship:

Σ_{i=1}^{|I|} g_i^j D_i + Σ_{k=1}^{|K(j)|} D_k^j = Σ_{i=1}^{|I|} f_i^j D_i + Σ_{m=1}^{|M(j)|} D_m^j    (2)
The size of storage j is denoted by V^j, the initial inventory of storage j by V^j(0), and the inventory hold-up of storage j at time t by V^j(t). The inventory hold-up can be calculated as the difference between the incoming material flows from supply processes and the outgoing material flows into consumption processes. Special properties of the periodic square wave function are required to integrate the detailed material balance equation [1]. The resulting inventory hold-up function for a storage unit is:
V^j(t) = V^j(0)
  + Σ_{k=1}^{|K(j)|} B_k^j { int[(t - t_k^j)/ω_k^j] + min( 1, res[(t - t_k^j)/ω_k^j] / x_k^j ) }
  + Σ_{i=1}^{|I|} (g_i^j B_i) { int[(t - t_i^out)/ω_i] + min( 1, res[(t - t_i^out)/ω_i] / x_i^out ) }
  - Σ_{m=1}^{|M(j)|} B_m^j { int[(t - t_m^j)/ω_m^j] + min( 1, res[(t - t_m^j)/ω_m^j] / x_m^j ) }
  - Σ_{i=1}^{|I|} (f_i^j B_i) { int[(t - t_i^in)/ω_i] + min( 1, res[(t - t_i^in)/ω_i] / x_i^in ) }    (3)
The upper bound of the inventory hold-up, the lower bound of the inventory hold-up and the average inventory hold-up of Eq. (3) are calculated by using the properties of the flow accumulation function [1]:

V_S^j = V^j(0) + Σ_{k=1}^{|K(j)|} (1 - x_k^j) B_k^j + Σ_{i=1}^{|I|} (1 - x_i^out) g_i^j B_i    (4)

V_L^j = V^j(0) - Σ_{k=1}^{|K(j)|} D_k^j t_k^j - Σ_{i=1}^{|I|} g_i^j D_i [ t_i^in + ω_i (1 - x_i^out) ]
      + Σ_{i=1}^{|I|} f_i^j D_i [ t_i^in - (1 - x_i^in) ω_i ] + Σ_{m=1}^{|M(j)|} D_m^j [ t_m^j - (1 - x_m^j) ω_m^j ]    (5)

V̄^j = V^j(0) + Σ_{k=1}^{|K(j)|} [ (1 - x_k^j)/2 ] D_k^j ω_k^j - Σ_{k=1}^{|K(j)|} D_k^j t_k^j
     + Σ_{i=1}^{|I|} [ (1 - x_i^out)/2 ] g_i^j D_i ω_i - Σ_{i=1}^{|I|} g_i^j D_i t_i^out
     - Σ_{i=1}^{|I|} [ (1 - x_i^in)/2 ] f_i^j D_i ω_i + Σ_{i=1}^{|I|} f_i^j D_i t_i^in
     - Σ_{m=1}^{|M(j)|} [ (1 - x_m^j)/2 ] D_m^j ω_m^j + Σ_{m=1}^{|M(j)|} D_m^j t_m^j    (6)
The purchasing setup cost of raw material j is denoted by A_k^j $/order and the setup cost of process i by A_i $/batch. The annual inventory holding cost of storage j is denoted by H^j $/year/liter. The annual capital cost of process construction and licensing cannot be ignored in the chemical process industries. In this article we assume that the capital cost is proportional to process capacity in order to permit an analytical solution. The objective in designing the batch-storage network is to minimize the total cost, consisting of the setup costs of the processes, the inventory holding costs of the storages and the capital costs of the processes and storages:
TC = Σ_j Σ_{k=1}^{|K(j)|} ( A_k^j / ω_k^j + a_k^j D_k^j ω_k^j ) + Σ_{i=1}^{|I|} ( A_i / ω_i + a_i D_i ω_i ) + Σ_j ( H^j V̄^j + b^j V_S^j )    (7)
where a_k^j is the annual capital cost of the purchasing facility of raw material j, a_i is the annual capital cost of process i and b^j is the capital cost of storage j. Without loss of generality, the storage size V^j will be determined by the upper bound of the inventory hold-up, V_S^j. Therefore, Eq. (4) is the expression for the storage capacities. The independent variables are selected as the cycle times ω_k^j and ω_i, the initial time delays t_k^j and t_i^in, and the average processing rates D_k^j and D_i. The inventory hold-up V^j(t) should be confined within the storage capacity; sufficient conditions are 0 ≤ V_L^j and V_S^j ≤ V^j. Since the storage size V_S^j is determined through this analysis, only the conditions 0 ≤ V_L^j are necessary. The problem is therefore defined as minimizing the total cost given by Eq. (7) subject to the constraints Eq. (5) ≥ 0, with respect to the non-negative search variables ω_k^j, ω_i, t_k^j, t_i^in, D_k^j and D_i. The Kuhn-Tucker conditions result in an analytical solution.
Optimal cycle times are:

ω_k^j = [ A_k^j / (ψ_k^j D_k^j) ]^{1/2},  where ψ_k^j = (H^j/2 + b^j)(1 - x_k^j) + a_k^j    (8)

ω_i = [ A_i / (ψ_i D_i) ]^{1/2},  where ψ_i = a_i + Σ_j [ (1 - x_i^in) f_i^j + (1 - x_i^out) g_i^j ] (H^j/2 + b^j)    (9)

Optimal start-up times satisfy the condition obtained by setting the lower bound of the inventory hold-up, Eq. (5), to zero for each storage unit:

V_L^j = 0,  j = 1, ..., |J|    (10)

Eq. (10) has |K(j)| + |I| variables and |J| equations. In most real cases, the variables outnumber the equations. We may need a secondary objective function to fix the additional freedom. Optimal storage sizes are:

V_S^j = Σ_{i=1}^{|I|} [ (1 - x_i^in) f_i^j + (1 - x_i^out) g_i^j ] D_i ω_i + Σ_{k=1}^{|K(j)|} (1 - x_k^j) D_k^j ω_k^j + Σ_{m=1}^{|M(j)|} (1 - x_m^j) D_m^j ω_m^j    (11)

The optimum value of the objective function is:

TC = 2 Σ_j Σ_{k=1}^{|K(j)|} √( A_k^j ψ_k^j D_k^j ) + 2 Σ_{i=1}^{|I|} √( A_i ψ_i D_i ) + Σ_j ( H^j/2 + b^j ) Σ_{m=1}^{|M(j)|} D_m^j ω_m^j (1 - x_m^j)    (12)

In order to determine the average flow rates D_k^j and D_i, a subsidiary optimization problem minimizing Eq. (12) subject to Eq. (2) should be solved. This is known as a separable concave minimization network flow problem. An analytic solution exists when Eq. (2) can be solved directly. Suppose that |R| + |I| = |J|, where R ⊂ J is the storage set with raw material purchase. Eq. (2) can then be solved with the following matrix-vector equation:

D_RL = (R_mtx)^{-1} D_out    (13)

where D_RL = { D_pur, D_1, D_2, ..., D_{|I|-1}, D_{|I|} }^T collects the raw material purchase rates and the process flow rates, D_out = { Σ_{m=1}^{|M(j)|} D_m^j }^T (j = 1, ..., |J|) collects the net finished-product shipment rate of each storage unit, and R_mtx is the |J| × |J| coefficient matrix of Eq. (2): each of its first |R| columns has a single entry of 1 in the row of the corresponding raw-material storage and zeros elsewhere, and each of its remaining |I| columns contains the net yield coefficients g_i^j - f_i^j of process i for storage j.    (14)
3. EXAMPLE PLANT DESIGN
The example plant in Fig. 2 has a structure and parametric values similar to the example in Reference [1], except that this example has recycle streams. The output products of downstream processes 3 and 4 are fed into upstream storage 1. The product in storage 6 is recycled into process 1.
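To make the flow-rate calculation of Eq. (13) concrete, here is a minimal numpy sketch for a small hypothetical network (not the Fig. 2 plant) with one raw-material storage, two processes and a recycle stream; all yield coefficients and the demand are invented for illustration.

```python
import numpy as np

# Columns of R_mtx: raw material purchase, process 1, process 2.
# Rows: storage 1 (raw material, receives a recycle from process 2),
#       storage 2 (intermediate), storage 3 (final product).
R_mtx = np.array([
    [1.0, -1.0,  0.2],   # storage 1: purchase enters, process 1 consumes, process 2 recycles 0.2
    [0.0,  0.8, -1.0],   # storage 2: process 1 yields 0.8, process 2 consumes
    [0.0,  0.0,  0.7],   # storage 3: process 2 yields 0.7
])
D_out = np.array([0.0, 0.0, 100.0])   # net customer demand per storage (units/yr)

D_RL = np.linalg.solve(R_mtx, D_out)  # [raw purchase rate, D_1, D_2], Eq. (13)
print("purchase and process rates:", np.round(D_RL, 2))
```

With these coefficients the solve returns a purchase rate of 150 and process rates of about 178.6 and 142.9, i.e. the recycle reduces the fresh purchase below what the yields alone would require.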
4. CONCLUSIONS
This study deals with the optimal design of a chemical plant composed of a set of storage units and a set of batch processes in the most general form. The material in a storage unit can be purchased, internally produced, internally consumed, and/or sold. A process can consume multiple materials from storage units and produce multiple products into storage units. All operations are assumed to be periodic. The start-up time and cycle time of each process operation are the major decision variables, together with the average material flow rates. The economic factors considered in this study cover raw material purchase cost, setup cost, inventory holding cost and the capital cost of construction. Sequence-dependent setup costs and backlogging costs are not considered. The analytical optimal solution of the design problem indicates that the design procedure can be decomposed into two phases: i) determine the average material flow rates by solving a separable concave minimization network flow problem; ii) determine the cycle times and start-up times by simple analytical equations. Solving the concave minimization problem is non-trivial work, but an analytical solution is available in a special case. The average material flow rates can also be calculated by other methods.
Fig. 2. Example Plant Design.
Acknowledgement
This work was supported by grant No. (R01-2002-000-00007-0) from the Korea Science & Engineering Foundation.

REFERENCES
[1] Yi, G. and G. V. Reklaitis, Optimal Design of Batch-Storage Network Using Periodic Square Wave Model, AIChE J., 48(8), 1737 (2002).
Investment Decision-making for Optimal Retrofit of Utility Systems

Mingang ZENG a, Ben HUA a, Jinping LIU a, Xin'an XIE a and Chi-Wai HUI b
a South China University of Technology, Guangzhou, 510640, P. R. China
b Chemical Engineering Department, Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong

Abstract
The paper discusses theory and methods for integrating design and investment decision-making. Based on an application of process integration, this paper presents an optimization strategy for investment decision-making for utility systems. A Mixed-Integer Programming model for utility systems is presented. The objective of the systematic investment evaluation is to minimize operation cost. Based on the mathematical model, software is developed on MS-Excel 2000 for scientific decision-making and optimal design. A case study is given applying this method to investment decision-making for utility systems in a petrochemical plant. The results indicate that this method is effective and rational for making a satisfying choice.

Keywords: Investment decision-making, utility systems, process integration
1. INTRODUCTION
Because of new challenges, it is very important for process systems to rationally use energy and to minimize energy consumption. Utility systems provide both heating and power to consumers. All process plants rely on consistent steam and electricity supplies, which require high capital and operating costs in utility systems. They offer a good potential for the efficient cogeneration of power through the use of steam turbines. The need for more efficient cogeneration requires better design of utility systems as well as more efficient operation of existing equipment [1].

2. DECISION SUPPORT SYSTEM (DSS) BASED ON PROCESS MODEL
Technology support systems include such aspects as everyday management, marketing, and investment decision-making, which vary by both time scale and management function. Process systems optimization is more than control optimization or operation optimization, and includes global dynamic optimization across all three integrated aspects. The objective should not be any simple goal, such as maximum output or minimum cost. The basic goal of global optimization should be the maximization of economic benefits over the whole lifecycle of an enterprise or product. At present, some process industry enterprises (especially refineries) in China have applied Computer Aided Design technology to decision-making in operations and management. At the same time, some software systems (for example, DSS) have been
developed, so that the application of DSS brings considerable economic benefit and enhanced management and decision-making ability. Here we present the model structure of a DSS for retrofit projects, shown in Figure 1 [1].
Fig. 1. Model structure of DSS for retrofit project (user interface and interactive system, DSS main control, problem generation and solution systems, model generation and solution systems, supported by the database, knowledge base, model base and information base).
3. TOTAL OPTIMAL RETROFIT STRATEGY OF UTILITY SYSTEMS
3.1 Optimization of utility by technology of process integration
In process integration, there are two major approaches that a design engineer can take to determine the optimal configuration of a flowsheet and its operating conditions. In one approach the problem is solved in sequential form, by decomposition, fixing some elements in the flowsheet, and then using heuristic rules to determine changes in the flowsheet that may lead to an improved solution. An example of such an approach is the sequential hierarchical decomposition strategy of Douglas [3]. The second strategy that can be applied to solve a process integration problem is based on simultaneous optimization. The design of process systems has been mainly tackled through four major approaches involving heuristics, targeting methods, mathematical programming and systematic exergoeconomic methodology. Heuristic methods provide general rules and guidelines that concentrate on the optimization of the process system. The targeting methods are mainly represented by the developments behind Pinch Technology, which introduced a methodology aimed at the design of integrated processes [4]. Pinch Technology was extended to the concept of "Total Sites" through the work of Dhole and Linnhoff [5]. More recently, research has shifted focus onto plant regions [6]. Mathematical programming methods are based upon the formulation of the problem as a mathematical model solved using appropriate optimization technology. Papoulias and Grossmann [7] (1983) presented a mixed integer linear programming approach for performing structural and parameter optimization in process
systems. Systematic exergoeconomic methodology, a rigorous quantitative mathematical energy structure model known as the "Three-Link Model" proposed by Hua [8] (1986), is suitable for any complicated process system. All process plants rely on consistent steam and electricity supply, which requires high capital and operating costs in utility systems. The need for more efficient cogeneration requires better design of utility systems as well as more efficient operation of the existing equipment. Based on process integration, the optimal strategy for utility systems includes the following considerations. (1) Advanced technology and equipment should be applied simultaneously with process integration. Combined heat and power (CHP) is an important application of process integration. For petrochemical plants in China, CHP technology should be developed by cogeneration with a gas turbine and boiler (heating furnace) in order to enhance energy efficiency and decrease energy consumption. (2) Integrated retrofit optimization must be done for the combined utility and other process systems. New regeneration technology applied in a process flowsheet has an important influence on investment decision-making for utility systems. (3) Old equipment should be replaced in many petrochemical corporations in China, since it obviously decreases energy efficiency. (4) By purchasing outside steam, the energy efficiency of utility systems should be improved simultaneously with minimum annual operation cost [9].

3.2 Steps for the optimal strategy for utility systems
The procedure includes the following steps: (1) Conduct an energy optimal design of all essential plants using exergoeconomic methodology, process simulation, and process integration. This step requires a validated mass and heat balance of the process. Several investment projects should be presented. (2) According to the new conceptual design, an overall model includes: a. the energy model of each plant; b. models of the energy equipment (e.g. turbines, boilers); c. the site infrastructure (e.g. steam mains); d. factors which influence the analysis (e.g. energy consumption, cost data). (3) The global optimization of the investment project can be solved by computer software based on mathematical programming.

4. MODEL OF INVESTMENT DECISION-MAKING FOR UTILITY SYSTEMS
The main objective of investment projects in utility systems is to minimize the total annual operation cost. The total annual operation cost includes: project investment cost, depreciation cost of permanent assets, fuel cost, steam cost and power cost.

min C = Σ_n E_n β_n I_n + Σ_k E_k D_k + Σ_i C_f,i F_f,i + Σ_j C_s,j F_s,j + C_e P    (1)
Among the numerous assumptions to be considered, some of the most important are: (1) The yearly average steam and electricity demands are known and will not change. (2) Constant efficiency for each boiler and turbine at different flow rates. (3) Constant pressure on steam mains. (4) All gas turbines and their waste heat boilers use clean by-product fuel.
Among the numerous constraints to be considered, some of the most important are:

(1) Constraint of investment cost for project n:
I_n ≤ I_n,limit    (2)

(2) Constraint of stream balance for equipment k:
Σ_in F_in,k - Σ_out F_k,out = 0    (3)

(3) Constraint of energy balance for equipment k:
Σ_in F_in,k H_in,k - Σ_out F_k,out H_k,out - W_k - Q_k = 0    (4)

(4) Constraints of equipment capacity for equipment k:
Flow rate: F_in,k,min ≤ F_in,k ≤ F_in,k,max    (5)
           F_k,out,min ≤ F_k,out ≤ F_k,out,max    (6)
Work capacity: W_k,min ≤ W_k ≤ W_k,max    (7)

(5) Constraints of demand and supply:
Power balance of demand and supply: Σ_n E_n W_n + Σ_k E_k W_k + P ≥ P_dem    (8)
Steam balance of demand and supply: Σ_n E_n F_n,s,m + Σ_k E_k F_k,s,m + F_s,m ≥ F_s,dem,m    (9)

(6) Constraints of consumption ratio:
F_f ≤ F_f,max    (10)
P ≤ P_max    (11)
F_s,m ≤ F_s,m,max    (12)
The optimization model was developed using MS-Excel 2000. Like other applications in process systems, the utility system should be formulated as a Mixed Integer Non-Linear Programming (MINLP) problem. However, without sacrificing too much accuracy, this problem can be solved using MS-Excel 2000. Application of this method to investment decision-making for utility systems projects in a petrochemical corporation indicates that the method is effective and rational.

5 CASE STUDY
The methodology just described has been applied to a continuous plant. The petrochemical corporation is made up of a fertilizer factory, a refinery and a chemical plant. It has two utility systems, one for the fertilizer factory and one for the refinery. Based on process integration and the integrated hierarchical structure of process systems, optimal investment projects are proposed for the conceptual design and retrofit of the utility systems. In order to minimize the overall cost, the fuel consumption and power generation in the utility system are normally the crucial cost components to consider. According to expert experience, some other feasible projects are also recommended. Using computer-aided calculation, the results of every project are shown in Table 1. From the analysis of the results, we conclude that the rational investment project should include retrofit of the fertilizer plant (coke), a new turbine for the fertilizer plant and a new gas turbine for the fertilizer plant.

Table 1. The results of every project for retrofit of utility systems
Project | Content of project | Investment cost (10^8 yuan) | Annual cost (10^8 yuan)
Project 1 | Current basic operation | - | 5.00
Project 2 | Retrofit of fertilizer (off-gas) | 0.6 | 4.66
Project 3 | Retrofit of fertilizer (off-gas), new turbine of fertilizer | 0.7 | 4.55
Project 4 | Retrofit of fertilizer (off-gas), new turbine of fertilizer, pipeline for high-pressure steam | 0.8 | 4.19
Project 5 | Retrofit of fertilizer (coke), new turbine of fertilizer | 4.0 | 4.39
Project 6 | Retrofit of fertilizer (coke), new turbine of fertilizer, new gas turbine of fertilizer | 5.5 | 3.98
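A minimal sketch of the screening logic behind Table 1 (the authors' software was built on MS-Excel; this is only an illustration, and the investment budget value is an assumption, not a figure from the paper):

```python
# Pick the retrofit project with the lowest annual cost subject to an assumed budget.
projects = {
    "Project 1": {"investment": 0.0, "annual_cost": 5.00},  # current basic operation
    "Project 2": {"investment": 0.6, "annual_cost": 4.66},
    "Project 3": {"investment": 0.7, "annual_cost": 4.55},
    "Project 4": {"investment": 0.8, "annual_cost": 4.19},
    "Project 5": {"investment": 4.0, "annual_cost": 4.39},
    "Project 6": {"investment": 5.5, "annual_cost": 3.98},
}
budget = 6.0   # assumed investment limit, 10^8 currency units (illustrative)

feasible = {name: p for name, p in projects.items() if p["investment"] <= budget}
best = min(feasible, key=lambda name: feasible[name]["annual_cost"])
print(best, feasible[best])   # with this budget, Project 6 gives the lowest annual cost
```

With a smaller budget the same screening would fall back to Project 4, which is the trade-off the full MILP formulation of Eqs. (1)-(12) captures more generally.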
6 CONCLUSIONS
Based on process integration and the hierarchical structure of process systems, retrofit investment decision-making has been studied. A general formulation and an optimal strategy for the synthesis of utility systems have been presented. The objective for systematic investment evaluation is operation cost minimization. Software for scientific decision-making and optimal design has been developed on MS-Excel 2000. A case study is presented applying this method to investment decision-making for retrofit utility systems in a petrochemical corporation. It is shown that the new options considered may significantly reduce investment and operating cost.

REFERENCES
[1] Hua B., Zhou Zhangyu and Cheng S., Hierarchical model architecture for enterprise integration in process industries, Chinese Journal of Chemical Engineering, Vol 9, No. 4, 371-376, 2001.
[2] Zeng Mingang, Yin Q., Hua Ben et al., Study on the Operation and Management Decision-making for Process Industry, Chemical Techno-Economics, Vol 18, No. 8, 42-46, 2000.
[3] Douglas J. M., Conceptual Design of Chemical Processes, p. 569, McGraw-Hill, New York (1988).
[4] Linnhoff B. et al., User Guide on Process Integration for the Efficient Use of Energy, IChemE, U.K. (1982).
[5] Dhole and Linnhoff, Total Site targets for fuel, co-generation, emissions and cooling, Computers Chem. Engng 17, S101-S109 (1992).
[6] Hui C. W. and Ahmad S., Total Site Heat Integration Using the Utility System, Computers and Chemical Engineering, Vol. 20, suppl., 729-742, 1994.
[7] Papoulias S. A. and Grossmann I. E., A structural optimization approach in process synthesis, Computers Chem. Engng 7, 695-734 (1983).
[8] Hua B., A Systematic Methodology for Analysis and Synthesis of Process Energy Systems, AES, 2-1, 57 (1986).
[9] Zhang Guoxi, Hua Ben and Liu Jinping, Optimal Synthesis and Operation of Utility Systems, Modern Chemical Industry, Vol 20, No 7, 43-45, 2000.

NOTATION
C: Total annual operation cost
E_n: binary variable, 1 if project n in operation, 0 otherwise
E_k: binary variable, 1 if equipment k in operation, 0 otherwise
β_n: Investment payback ratio for project n
I_n: Investment cost for project n
D_k: Depreciation cost for equipment k
C_f,i: Cost of fuel i
F_f,i: Consumption of fuel i
C_s,j: Price of purchased steam
F_s,j: Flow rate of purchased steam
C_e: Price of purchased power
P: Flow rate of purchased power
I_n,limit: Max investment cost of project n
F_in,k: Flow rate of steam from equipment in to equipment k
F_k,out: Flow rate of steam from equipment k to equipment out
H_in,k: Enthalpy of steam from equipment in to equipment k
H_k,out: Enthalpy of steam from equipment k to equipment out
W_k: Work of equipment k
Q_k: Quantity of heat of equipment k
W: Quantity of power from equipment
P_dem: Quantity of power demand
F_k,s,m: Flow rate of steam level m from equipment k
F_s,m: Flow rate of steam level m from purchase
F_s,dem,m: Flow rate of steam level m demand
F_f,max: Max flow rate of fuel from outside
P_max: Max flow rate of power from outside
F_s,m,max: Max flow rate of steam level m from outside

Subscripts
n: Project number
k: Equipment number
ACKNOWLEDGEMENTS
Financial support from project 79931000 of NSFC and project G2000263 of the State Major Basic Research Development Program is gratefully acknowledged.
Design water allocation network with minimum freshwater and energy consumption
X.-S. Zheng, X. Feng and D.-L. Cao Department of Chemical Engineering, Xi'an Jiaotong University, Xi'an 710049, China
Abstract
In this paper, water minimization and energy minimization are taken into account at the same time. Some new concepts are put forward, such as the substream, the favorable substream, the favorable water network and the favorable water system. Because of the complexity of coupling the water system with the energy system, the problem is divided into two steps. First, the optimal water allocation networks are obtained. Since the result may not be unique, the optimal water networks that are favorable to heat integration are selected, and these favorable water networks are used for the subsequent heat integration. Second, the heat exchanger network of the selected optimal water network is constructed. If the selected network satisfies the requirements of a favorable water network, the resulting minimum utilities are guaranteed to be globally optimal. Considering the particularity of heat integration on a favorable water network, a method to design the heat exchanger network of a favorable water network is introduced in this paper.
Keywords: water allocation, water network, heat exchanger network, heat integration

1. INTRODUCTION
Water and energy are widely used in the process industry, and they play important roles in the sustainable development of society. Generally speaking, if the contaminant levels of the wastewater from one process can satisfy the demand of the inlet stream of another process, the wastewater from the former process can be a source of water for the latter one. In this way, a large amount of freshwater may be saved. This procedure is called water allocation planning. Moreover, temperature may be one of the requirements of an inlet stream: the inlet stream must be heated or cooled to a certain temperature in order to satisfy the demand of the operation, so heat integration is involved at the same time. Till now, research on simultaneous energy and water minimization has been fairly limited. In such research, the solving process is divided into two steps: perform the water system integration first, and then integrate the energy system. The existing research either cannot guarantee the global optimum [1], or can only be applied to single contaminant water-using
systems [2]. In this paper, a new approach is proposed. The solving process is also divided into two steps, but after the water system integration, the optimal water-using network that is favorable to heat integration is selected, and heat integration is performed for the selected optimal water-using network. This method can be applied not only to single contaminant water-using systems but also to multiple contaminant systems.
2. PROBLEM STATEMENT
Given is a set of water-using or water-disposing processes which require water of a certain quality and temperature. It is desired to design a water-using network that achieves the minimum freshwater consumption while the processes receive water of adequate quality. At the same time, it is desired to construct a heat exchanger network that achieves the minimum utility.
3. INTEGRATION PROCEDURE
In this paper the process of designing an energy-efficient water-using network is decoupled into two steps. Step 1: water system integration. This step involves two sub-steps. First, perform the water system integration to obtain the optimal water-using networks consuming the minimum freshwater. Then select from the optimal ones the water-using network that is favorable to heat integration. Step 2: heat integration. Construct the heat exchanger network for the selected water-using network.
4. WATER SYSTEM INTEGRATION
By using water pinch technology or by solving a mathematical program, the minimum freshwater consumption as well as the network of interconnections of the water streams can be obtained. Detailed discussions of this process are given elsewhere [3-5]. It should be noticed that the optimal water-using networks obtained by either of the two methods may not be unique, and different water-using networks may have different energy targets. Thus it is preferred to select the favorable networks for energy integration so that the energy target can be minimized.
5. SELECTION OF WATER-USING NETWORK
Although the number of optimal water-using networks may not be unique, and may even be infinite, the number of water-using network structures is finite. It is the water-using network structure that mainly affects the energy target. If the optimal structure that is favorable to heat integration can be found, the heat exchanger network constructed on the basis of this water-using network will be optimal. To find the water-using network structure that is favorable to heat integration, firstly,
a new concept, the substream, is defined. A substream is a stream with unchangeable flow rate that starts from fresh water or a water-using process and ends at wastewater or a water-using process. Any water stream in the water-using network can be regarded as a blend of some independent substreams; the total flow rate of these substreams is equal to the flow rate of the water stream. The whole water system can therefore be considered to consist of a set of substreams instead of water streams. If there is no water generation or water loss in the processes, each substream starts from fresh water and ends at wastewater. If there are water losses in some processes, some substreams must end at those processes; conversely, if there is water generation in some processes, some substreams must start from those processes. The following examples explain this concept more clearly. For the water-using network shown in Fig. 1, if there is no water generation or water loss in the water system, all the substreams in the system are given in Table 1.
Fig. 1. Water-using network without water generation or water loss processes
Table 1. Substreams of the water-using network shown in Fig. 1
Substream no. | Flow rate (t/h) | Passing process sequence
1 | 20 | Freshwater, Process 1, Process 3, Wastewater
2 | 12.5 | Freshwater, Process 3, Wastewater
3 | 11.25 | Freshwater, Process 2, Process 3, Wastewater
4 | 38.75 | Freshwater, Process 2, Process 4, Wastewater
Fig. 2 shows processes with water generation or water loss. It can be seen that in process 1 there is a water loss, while in process 2 there is water generation. The whole water system can be considered to consist of the following substreams, as shown in Table 2.
Fig.2. Water-using network with water generation and water loss processes
Table 2. Substreams of the water-using network shown in Fig. 2
Substream no. | Flow rate (t/h) | Passing process sequence
1 | 3 | Freshwater, Process 1, Wastewater
2 | 5 | Freshwater, Process 2, Wastewater
3 | 2 | Freshwater, Process 1
4 | 3 | Process 2, Wastewater
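The substreams of Table 2 can be generated mechanically by decomposing the arc flows of the network into source-to-sink paths. The sketch below (not from the paper) does this for the Fig. 2 example; the split of freshwater between the two processes is inferred from Table 2, and the logic covers only this simple two-stage layout.

```python
# Decompose arc flows into substreams for the Fig. 2 network.
arcs = {
    ("Freshwater", "Process 1"): 5.0,
    ("Freshwater", "Process 2"): 5.0,
    ("Process 1", "Wastewater"): 3.0,
    ("Process 2", "Wastewater"): 8.0,
}
loss = {"Process 1": 2.0}        # water consumed inside the process (substream ends there)
generation = {"Process 2": 3.0}  # water generated inside the process (substream starts there)

substreams = []
# 1) paths that start at freshwater
for (src, proc), f_in in list(arcs.items()):
    if src != "Freshwater":
        continue
    lost = min(f_in, loss.get(proc, 0.0))
    if lost > 0:
        substreams.append((lost, [src, proc]))            # ends at the losing process
        loss[proc] -= lost
    through = f_in - lost
    if through > 0:
        substreams.append((through, [src, proc, "Wastewater"]))
        arcs[(proc, "Wastewater")] -= through
# 2) paths that start at a generating process
for proc, gen in generation.items():
    if gen > 0:
        substreams.append((gen, [proc, "Wastewater"]))

for flow, path in substreams:
    print(f"{flow:4.1f} t/h : " + " -> ".join(path))
```

Running this reproduces the four substreams of Table 2 (2 and 3 t/h through Process 1, 5 and 3 t/h through Process 2).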
According to the definition, the flow rate of each substream is a constant, and the total flow rate of the substreams that enter a certain process is equal to the inlet flow rate of the process. On the other hand, the total flow rate of the substreams that flow out of a certain process is equal to the outlet flow rate of the process. In this way, the analysis of the water streams is transformed into the analysis of the substreams, which makes the analysis more convenient. Since different substreams flow through different processes, each substream has its own temperature variation. If the temperature variation of a substream meets any of the conditions listed below, the substream is called a favorable substream.
(1) The temperature of the substream increases from the beginning to a certain maximum value, and then decreases until the end, and the outlet temperature is higher than the inlet one.
(2) The temperature of the substream keeps increasing until the end.
(3) The temperature of the substream decreases from the beginning to a certain minimum value, and then increases until the end, and the outlet temperature is lower than the inlet one.
(4) The temperature of the substream keeps decreasing until the end.
(5) The temperature of the substream keeps constant.
392 If all the substreams in the water-using network are favorable ones and their outlet temperatures are higher than their inlet ones, only hot utility is needed for this system. Otherwise, if the outlet temperatures of all the favorable substreams are lower than the inlet ones, only cold utility is needed. Such water-using networks are called favorable water-using networks. Those water-using systems with the minimum freshwater consumption, for which at least a favorable water-using network can be found, are called favorable water-using systems. A favorable water-using system can guarantee that the minimum freshwater and energy consumptions can be achieved simultaneously. In other words, the energy target of a favorable water-using network is no higher than that of any other optimal water-using networks. But if unfavorable substreams exist in the water-using network, the system is likely to need not only hot utility but also cold utility. Therefore, it is preferred to select favorable water-using networks to perform heat integration so that the utility can be minimized. 6.
HEAT INTEGRATION
The pinch technology as well as LP model can be used to identify the minimum utility duty, or the target. The heat-exchanger network can be designed by heuristic rules or by solving a MILP model. The detailed information can refer to Ref. 6-7. Here, a simplified method of designing heat exchanger network is proposed to construct the heat-exchanger networks for favorable water-using networks. In a favorable water-using network, the proper hot part of each substream can be used to heat the corresponding cold part. The rest part can be directly heated or cooled by utilities. An example is shown in Fig. 3. Deal with all the substreams with this method, and then a heat-exchanger network with minimum utility duty can be obtained.
Tin
,..._ Tmax-( Tin- Tout)
Ti,, < Tout and Tout-Tin >1 A Train To~,~
Tm~ Fig.3. Heat transfer for a favorable substream
7.
CONCLUSIONS
In this paper how to design an energy efficient water-using network is discussed. Because of the complexity of the problem, it is decoupled into two steps. In the first step, the water system integration is performed and optimal water-using networks are obtained. Then those networks that are favorable to heat integration are selected. In the second step, heat integration is performed for the selected networks sequentially. Selecting the water-using networks favorable to heat integration is most important in the
393 method. The favorable water-using networks composed of favorable substreams can guarantee that the minimum freshwater and energy consumptions can be achieved simultaneously. For favorable water-using system, a simple method for heat integration is also proposed in this paper. The minimum utility duty is equal to the product of the heat capacity and the temperature difference between freshwater and wastewater. The proper hot part of each substream can be used to heat the corresponding cold part. The rest part can be directly heated or cooled by utilities. Deal with all the substreams in this way, and then a heat-exchanger network with minimum utility duty can be obtained. ACKNOWLEDGEMENT The financial supports for this research provided by the National Natural Science Foundation of China under G20176045 and the Major State Basic Research Development Program under G2000026307 are gratefully acknowledged. REFERENCES [ 1] [2] [3] [4] [5]
L.E. Savulescu and R. Smith, AIChE Annual Meeting, Miami Beach, Florida, 1998. M. Bagajewicz, H. Rodera and M. Savelski, Comput. Chem. Eng., 26(2002) 59. Y.P. Wang and R. Smith, Chem. Eng. Sci., 49(1994) 981. M. Savelski and M. Bagajewicz, Waste Manage., 20(2000) 659. C.H. Huang, C.T. Chang, H.C. Ling and C.C. Chang, Ind. Eng. Chem. Res., 38(1999) 2666. [6] R. Smith, Chemical Process Design, McGraw-Hill, New York, 1995. [7] W.D. Seider, J.D. Seader, and D. R. Lewin, Process Design Principles: Synthesis, Analysis and Evaluation, Wiley, New York, 1998.
394
ProcessSystemsEngineering2003 B. Chen and A.W.Westerberg(editors) 9 2003 Publishedby ElsevierScienceB.V.
Monitoring, diagnosing and improving the performance of LPbased real-time optimization systems D. Zyngier and T.E. Marlin Dept. of Chem. Eng., McMaster Univ., 1280 Main St. W, Hamilton, ON, L8S4L7, Canada
Abstract Operations optimization seeks to track a changing true plant optimum by maximizing a model-based calculated profit in closed-loop real-time optimization (RTO). The model is updated using recent plant data before the economic optimization is performed. This paper presents a monitoring method for RTO that compares the plant performance to the best possible (maximum) profit and a diagnostic method that uses profit-based experimentation to improve RTO performance, when needed. The proposed methodology for monitoring and improving the performance of RTO systems is evaluated on a gasoline-blending case study. Guidelines for RTO applications to blending operations are provided and insights into future research are presented. Keywords real-time optimization, performance monitoring, experimental design 1. INTRODUCTION
1.1. Real-Time Optimization (RTO) Systems A model-based real-time optimizer determines operating conditions that maximize profit while obeying a set of constraints. Opportunity for RTO exists when there are degrees of freedom available for optimization after safety, product, quality, and production rate goals have been satisfied. Potentially large benefits are possible when the optimum operation point changes often, i.e., there are significant disturbances in the process or changes in economicsIll. Many successful RTO industrial applications indicate that large improvements are possible when the RTO system is functioning well E21.However, methods for monitoring RTO performance are not yet available; online performance evaluation is the topic of this paper. Blending is a very important process in, among others, petroleum processing, cement manufacturing and food processing. Because of its economic importance and relatively simple models, blending was one of the first applications of RTO. This paper will present a monitoring method tailored to the blending process. 1.2. Blending RTO using Linear Programming Blending refinery streams to produce gasoline is a non-linear process; however, a linear model can be formulated when blending indices compensate for some non-linearities, models are formulated using flow-quality units, and the component flow ratios remain within established limits. Thus, blending RTO becomes a linear program t3]'t41. Usually, the feedback measurements are used to correct the model by adjusting a "bias" in the model, similar to the feedback used in model-predictive control, as shown in Fig. 1.
395 Measured:
Reformate
"-'lows 31end )ctanc
n-Butane FCC Gas ~ _.~ i ~ - . iiinli.iiiii..i-i-..:'9 Alkylate ~ . _ _.-._..................................... 31end RVP :. I
I EFi*Qi -<(EF,) (Q~x-Bias) I
...
......' i
Predicted
Fig. 1. Bias-updating strategy for Gasoline-Blending RTO In linear programming, an optimum is located at a comer point of the feasible region. The RTO feedback affects the right-hand side but does not affect the component qualities, Qi (technological coefficients). Therefore, the "bias" feedback does not affect the comer point selected by the RTO; it only ensures that constraints are satisfied after sufficient iterations ts]. The RTO model must be accurate enough to yield the correct comer point, i.e., the set of flows that are adjusted to satisfy the constraints. Therefore, we develop a monitoring system that evaluates the selection of the comer point. At the optimum comer point of a maximization problem, the profit gradient lies within the cone defined by the gradients of the active constraints. This is equivalent to constraints having non-negative Lagrange multipliers t61. When uncertainty exists in the constraints, this optimality criterion can be satisfied for more than one comer point for the range of parameters in the uncertainty description Esl. This concept forms the basis of the monitoring and diagnosis method. 2. M O N I T O R I N G AND DIAGNOSIS M E T H O D O L O G Y The proposed method has several stages. First, a bound on the best possible performance is established; naturally, if the plant performance is very near the best possible, no further diagnosis is required. Second, we check if the system may converge to different optimal comer points. If so, we use plant data to improve the estimate of the blending components. The data are collected during RTO operation on the current blend, including designed perturbations to improve the accuracy of selected parameters. Finally, the improved parameters (component qualities) are applied in the RTO system to converge to a single comer point. 2.1. Monitoring RTO Performance Successful process control performance diagnosis has demonstrated the importance of a bound on the best possible performance. For RTO, the maximum profit bound is achieved at feasible values of the adjusted component flows and the uncertain component qualities. Eq.(1) is used to calculate the maximum possible profit, where Fi represents the flows of each component added to the gasoline, value is the value of gasoline, costi is the cost of each component, Q; represents the octane and Reid vapour pressure (RVP) qualities of each component and the subscript blend refers to the final blended gasoline.
max Σ_{i=1..5} (value − cost_i) F_i
s.t.
Σ_{i=1..5} F_i Q_i ≥ Q_blend,min Σ_{i=1..5} F_i
Σ_{i=1..5} F_i Q_i ≤ Q_blend,max Σ_{i=1..5} F_i        (1)
F_blend,min ≤ Σ_{i=1..5} F_i ≤ F_blend,max
F_i,min ≤ F_i ≤ F_i,max
Q_i,min ≤ Q_i ≤ Q_i,max
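As a concrete illustration of the bound in Eq. (1), the following minimal sketch treats both the component flows and the uncertain octane qualities as decision variables and solves the resulting nonconvex problem with an off-the-shelf NLP solver; all stream data are hypothetical, and the local solution is only indicative of how such a bound could be evaluated.

```python
# Sketch of the maximum-profit bound of Eq. (1): flows F and uncertain qualities Q
# are both decision variables. All numbers below are hypothetical.
import numpy as np
from scipy.optimize import minimize

value = 30.0                                        # value of blended gasoline
cost  = np.array([20.0, 22.0, 18.0, 25.0, 24.0])    # component costs
q_lo  = np.array([92.0, 80.0, 93.0, 94.0, 90.0])    # octane uncertainty bounds
q_hi  = np.array([94.0, 84.0, 95.0, 96.0, 92.0])
Q_blend_min = 91.0                                  # minimum blend octane
F_tot_lo, F_tot_hi = 50.0, 200.0                    # total blend flow limits
n = 5

def unpack(x):                                      # x = [F_1..F_5, Q_1..Q_5]
    return x[:n], x[n:]

def neg_profit(x):
    F, _ = unpack(x)
    return -np.dot(value - cost, F)

cons = [
    {"type": "ineq", "fun": lambda x: np.dot(*unpack(x)) - Q_blend_min * unpack(x)[0].sum()},
    {"type": "ineq", "fun": lambda x: unpack(x)[0].sum() - F_tot_lo},
    {"type": "ineq", "fun": lambda x: F_tot_hi - unpack(x)[0].sum()},
]
bounds = [(0.0, 100.0)] * n + list(zip(q_lo, q_hi))
x0 = np.concatenate([np.full(n, 20.0), 0.5 * (q_lo + q_hi)])

res = minimize(neg_profit, x0, bounds=bounds, constraints=cons, method="SLSQP")
print("profit bound (local solution of the nonconvex problem):", -res.fun)
```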
The RTO calculation in Fig. 1 uses the best estimates of the component qualities. In Eq. (1), the component qualities are variables within their uncertainty bounds; the bounds are determined from historical plant monitoring data. Thus, Eq. (1) yields an upper bound on the profit that can be achieved in the plant. If the current profit achieved is near the maximum, no further analysis is warranted; if not, model accuracy should be further investigated. A negative value of the minimum Lagrange multiplier, δ, in Eq. (2) indicates that there is a possibility of converging to different corner points [5]. The initial step of the investigation is the check for different optimum corner points for the uncertain optimum.
δ = min_i λ_i
s.t.  ∇_R Pr|_F − λ^T ∇_R g_A|_F = 0        (2)
      Q_i,min ≤ Q_i ≤ Q_i,max
If δ is positive, only one optimum corner point is possible, and the difference between the current profit and its upper bound is due to the true properties in the plant, which cannot be changed. In Eq. (2), λ are the Lagrange multipliers of the active constraints, ∇_R Pr is the reduced gradient of the profit function, and ∇_R g_A is the reduced gradient of the active constraints.
2.2. Improving Model Accuracy
2.2.1. Using recent closed-loop RTO data
To improve the accuracy of the LP solution, the uncertainty in the technological coefficients, i.e., the component qualities (Qi), must be reduced. The importance of the coefficients depends on the LP solution considered; for example, coefficients appearing in (only) inactive constraints might not need improved accuracy, while other coefficients would be crucial to finding the correct corner point. The initial step in re-estimating the coefficients involves using the data collected during closed-loop RTO. Estimating process parameters from closed-loop data is often problematic because of correlated disturbances. However, in this situation, the components come from large tanks, so that the qualities, though uncertain, should not change significantly during several RTO executions. The major drawback to this approach is the information content in the data; experiments are likely necessary.
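A minimal sketch of the closed-loop re-estimation idea in Section 2.2.1: with the component flows recorded at several RTO executions and the corresponding measured blend octane, the blend balance is linear in the unknown component qualities, so an ordinary least-squares fit can update them. All data below are hypothetical, and, as noted above, closed-loop data may lack the excitation needed for well-conditioned estimates.

```python
# Re-estimating component qualities from closed-loop RTO data (hypothetical data).
# Blend balance: sum_i F_i Q_i = Octane_blend * sum_i F_i, linear in the unknown Q.
import numpy as np

rng = np.random.default_rng(0)
Q_true = np.array([93.0, 82.0, 94.5, 95.0, 91.0])      # unknown "plant" qualities

F = rng.uniform(5.0, 40.0, size=(8, 5))                # flows over 8 RTO executions
octane_blend = (F @ Q_true) / F.sum(axis=1)
octane_blend += rng.normal(0.0, 0.1, size=8)           # measurement noise

A = F / F.sum(axis=1, keepdims=True)                   # rows: component flow fractions
Q_hat, *_ = np.linalg.lstsq(A, octane_blend, rcond=None)
print("estimated qualities:", np.round(Q_hat, 2))
```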
2.2.2. Using designed plant perturbations
Another option to re-estimate plant parameters is to generate data through designed plant perturbations. In the literature, one can find techniques for generating data through plant perturbations in order to improve the quality of an existing phenomenological model, such as the ones using E- and D-optimality criteria [7],[8]. However, these approaches (1) do not account for the cost of experimentation and (2) do not consider the relative importance of the coefficients in reducing the uncertainty of a specific LP solution. In view of these deficiencies, more recent approaches have taken the cost of making perturbations in the plant into account when obtaining data to update the model parameters [9],[10]. These approaches consider the trade-off between the cost of the plant perturbations themselves and the improvement (in terms of profit) that will be derived from more accurate models via parameter updating. The method presented here builds on prior work by incorporating inequality constraints, e.g., product qualities, into the experimental design criteria. In evaluating the future benefit of an experiment, the variance of the estimated parameters is required; this is used to determine whether only one corner point is possible given the reduced uncertainty after the experiment. The parameter variability Var(Q), based on the data reconciliation and parameter estimation (DRPE) formulation in Eq. (3) [11], can be calculated using Eq. (4) [12].
min_{a,Q} a^T V^{-1} a
s.t.  f(z_t + a_t, Q) = 0        (3)

Var(Q) = [ Σ_{t=1..Nh} (∂f/∂Q)^T ((∂f/∂z) V (∂f/∂z)^T)^{-1} (∂f/∂Q) ]^{-1}        (4)
In these equations, z is a vector of process measurements, a is the vector of measurement adjustments, V is the variance-covariance matrix of the measurements z, f are the mass balance equations and active constraints, and Nh is the historical data horizon considered (Nh = 1 for the case study presented here). The partial derivatives in the second term of Eq. (4) were evaluated at the designed experiment. One goal of the experimental design strategy is to reduce the parameter uncertainty such that only one optimum corner point is possible. If only one corner point is possible, even if there is a significant difference between the profit bound and the current profit, the closed-loop RTO system will converge to the best achievable situation given the current (plant) component qualities. The second goal is to perform the experiments at low cost. The cost of experiments is determined by evaluating the effect of changing flows from the current, best estimate of the optimum. The cost of these changes is given by the product of each flow change times its marginal cost provided by the LP solution from the RTO. Changing a basic flow has a zero marginal cost and changing a non-basic flow has a positive cost (reducing profit). The experimental design combines these two goals in the calculation given in Eq. (5). The experimental costs are minimized while ensuring that all Lagrange multipliers for the RTO after the experiment are positive for all models within the parameter uncertainty. In Eq. (5), the parameters λi correspond to the marginal costs obtained at the current RTO run. Note that this experimental design has a specific, but not unique, approach regarding constraints during the experiment.
min_{F_i} Σ_{i=1..5} λ_i F_i
s.t.
Σ_{i=1..5} F_i Q_i ≥ Q_blend,min Σ_{i=1..5} F_i      (if the qualities are fixed, the inequalities become equalities)
Σ_{i=1..5} F_i Q_i ≤ Q_blend,max Σ_{i=1..5} F_i        (5)
F_i,min ≤ F_i ≤ F_i,max
δ = min_i λ_i > 0,  where  ∇_R Pr|_F − λ^T ∇_R g_A|_F = 0  and  Q_i,min ≤ Q_i ≤ Q_i,max      (guarantees a unique optimum basis using the quality bounds after the experiment; Q_i,min and Q_i,max are obtained from Eq. (4))
In this case study, the total flow of blended material was allowed to change, as well as the blended qualities (as long as the inequality constraints were satisfied). Since the blended gasoline product is stored in a large tank before being shipped to customers, no requirement exists for strict control of instantaneous product qualities during the experiment [3],[13]. Other experimental designs, such as restricting changes in the total flow, can easily be added.
3. BLENDING CASE STUDY RESULTS
A simulation case study is reported to demonstrate the method applied to a blending process. The process model was assumed to have initial uncertainty in both the Reformate octane number (±2 octane numbers) and the n-Butane RVP (±4 psi), which are realistic estimates of the uncertainties encountered in industrial gasoline-blending processes [14]. To simplify the example, all other component qualities were assumed to be known without error. All flow and blended quality measurement data used in RTO feedback and parameter estimation were corrupted with realistic measurement noise. The transient behaviour of flows and profit are presented in Fig. 2 for closed-loop RTO with periodic monitoring and diagnosis. Initially, the process under closed-loop RTO was stable and operating with a profit of $9,118/day. Two monitoring results indicate that the initial profit might not be the best achievable: (1) the maximum profit bound was much higher ($12,687/day) and (2) a corner point different from the RTO result could be optimal within the initial parameter uncertainty. The next diagnostic action is to use the current (RTO execution 10) data without experimentation to re-estimate the component qualities using Eq. (3). When these parameters and their uncertainties were applied at RTO execution 11, multiple corner points were still possible. Therefore, further diagnosis was required. The diagnosis proceeded to experimental design, which was implemented at RTO execution 12. This was the lowest cost, single experiment that satisfied all requirements in Eq. (5). The parameters were re-estimated using the approach in Eq. (3) with the experimental data. The parameter uncertainty was reduced substantially, so that only one corner point was possible for all parameters within the uncertainty range. The new parameters were implemented at RTO execution 13 and the closed-loop RTO was continued with bias
updating, increasing the profit to $10,037/day. Since only one basis was possible, the RTO converged to the maximum possible profit that can be achieved in the plant, which is still much lower than the upper bound of $12,492/day. This is not a concern, since the bound uses component qualities better than are available in current plant operation.
Fig. 2. Results for the Blending Case Study (actual profit and its upper bound, and the component flows of Reformate, LSR, n-Butane and FCC, versus RTO runs)
4. DISCUSSION
The proposed scheme successfully monitors and enhances the performance of linear RTO systems. The proposed experimental design strategy is profit-oriented and is able to observe both equality and inequality constraints. A natural extension to this work is to simultaneously design multiple experiments in order to make less aggressive moves in the component flows. Future research will also focus on imposing trajectory constraints on the blended qualities. One could then specify that these constraints be satisfied instantaneously, halfway through the blend, or at the end of the blending batch.
REFERENCES
[1] T.E. Marlin and A.N. Hrymak, 5th Int. Conf. on Chem. Proc. Control, 316 (1997) 156.
[2] D.C. White, Hydrocarbon Processing, (1997) 43.
[3] A. Diaz and J.A. Barsamian, Hydrocarbon Processing, (1996) 71.
[4] P.J. Vermeer, C.C. Pedersen, W.M. Canney, and J.S. Ayala, NPRA Comp. Conf., CC-96130 (1996) 1.
[5] J.F. Forbes and T.E. Marlin, Ind. Eng. Chem. Res., 33 (1994) 1919.
[6] S.G. Nash and A. Sofer, Linear and Nonlinear Programming, NY, USA, 1996.
[7] J.A. Juusola, D.W. Bacon, and J. Downie, Can. J. Chem. Eng., 50 (1972) 796.
[8] T.L. Sutton and J.F. MacGregor, Can. J. Chem. Eng., 55 (1977) 609.
[9] J.C. Pinto, Can. J. Chem. Eng., 79 (2001) 412.
[10] W.S. Yip and T.E. Marlin, Control Engng. Practice, (2001).
[11] I.P. Miletic and T.E. Marlin, Comp. Chem. Engng., 22 (1998) S475.
[12] S.E. Keeler and P.M. Reilly, Can. J. Chem. Eng., 70 (1992) 774.
[13] M.M.F. Sakr, A. Bahgat, and A.F. Sakr, Control and Computers, 16 (1988) 75.
[14] R.W. Szoke and J.P. Kennedy, NPRA Comp. Conf., CC-84-117 (1984) 1.
A MultiObjective Genetic Algorithm optimization framework for batch plant design
Loucif ATMANIOU (a), Adrian DIETZ (a), Catherine AZZARO-PANTEL (a), Pascale ZARATE (a,b), Luc PIBOULEAU (a), Serge DOMENECH (a), Jean Marc Le Lann (a)
(a) Laboratoire de Génie Chimique LGC UMR CNRS 5503, ENSIACET INPT, 118 route de Narbonne, 31078 TOULOUSE Cedex 4, FRANCE
(b) Institut de Recherche en Informatique de Toulouse, IRIT UMR CNRS 5505, 118 route de Narbonne, 31062 TOULOUSE Cedex 4, France
Abstract This paper presents a MultiObjective Genetic Algorithm (MOGA) optimization framework for batch plant design. For this purpose, two MOGAs are implemented with respect to three criteria, i.e., investment cost, equipment number and a flexibility indicator based on Work In Process (the so-called WIP) computed by use of a Discrete-Event simulation model. The performances of the two procedures are studied for a large-size problem.
Keywords batch plant design, multiobjective optimization, genetic algorithm
1. INTRODUCTION Batch plant design provides a source of many problems suitable for solution by genetic algorithms. The complexities and constraints involved in the field motivate the development of genetic algorithm techniques to allow innovative and flexible design solutions. In that context, multiobjective genetic algorithms (MOGAs) extend the standard evolutionary-based optimization technique to allow individual treatment of several objectives simultaneously [1, 2]. This allows the user to attempt to optimise several conflicting objectives, and to explore the trade-offs, conflicts and constraints inherent in this process. This is consistent with the increasing complexity of decisional problems which requires the use of more flexible and open approaches providing a more realistic and effective resolution of problems than that offered by the traditional approach in Decision Making
[3]. In this work, two MOGAs are implemented to solve a batch plant design optimization problem with respect to three criteria, i.e., investment cost, equipment number and a flexibility indicator based on Work In Process (the so-called WIP) computed by use of a Discrete-Event simulation model previously developed [2].
2. BASIC PRINCIPLES
The two MOGA procedures are first based on the same two-step approach:
i. At the inner level (slave problem), the AD-HOC(1) discrete-event simulator is used to evaluate different batch plant configurations, thus solving the underlying scheduling phase [2].
ii. At the upper level (master problem), the search strategy for finding the most interesting configurations from a given criterion viewpoint is achieved by use of a classical monocriterion Genetic Algorithm (GA).
Due to space limitations, only the encoding procedure is presented in what follows (see [2] for more detail). The size of the chromosome is defined by the maximum number of operations in the recipe list and each gene encodes the number of parallel equipment items for each available size into a string of decimal digits. All variables are integer values. The procedures then differ by the multicriteria optimization procedure. On the one hand, multicriteria optimization was implemented by use of a Pareto algorithm, which is based on the search for good configurations found by the GA, that is, the optimum solutions in the sense of Pareto (let us recall that a solution is said to be "Pareto-optimum" if it is strictly better than the other solutions on at least one criterion). All the "Pareto-optimum" solutions define the Pareto zone. On the other hand, the Electre methodology was adopted [4], offering the decision maker the possibility to take into account quantitative as well as qualitative criteria. As already mentioned, three quantitative criteria were defined, but this study could lead to several other choices (for instance, environmental considerations).
2.1 Presentation of MOGA1 approach
Multicriteria optimization has received considerable attention in various domains of chemical engineering [5]. Historically, the first reference to deal with such situations of conflicting objectives is attributed to Pareto in 1896. Of course, multicriteria optimization is not restricted to the Pareto optimality approach. This very popular technique, being well suited to a GA procedure, where a set of solutions is generated, has been retained in this study. The Pareto optimal solutions can be defined as follows. A solution x* ∈ X is called Pareto optimal if, for every x ∈ X, either f_i(x) = f_i(x*) for all i ∈ I, with I = {1, 2, ..., k}, or there is at least one i ∈ I such that f_i(x) > f_i(x*) (even if there exists j ∈ I such that f_j(x) < f_j(x*)). In most cases, the Pareto optimal set (also called the Pareto zone) is not constituted of a single solution, but involves a set of solutions, called non-dominated solutions. To characterize the Pareto zone among a population of feasible solutions, Dedieu [2] has implemented the following non-dominated sorting procedure, called Pareto Sort (PS).
(i) The first individual x' of the population is chosen as reference.
(ii) All individuals x of the population are in turn compared with the reference according to the following function h:
h_i = 0 if f_i(x') > f_i(x)
h_i = 1 if f_i(x') ≤ f_i(x)
(1) AD-HOC: Ateliers Discontinus - Heuristiques et Ordonnancement à Court-terme
H = Π_{i=1..k} h_i, where k is the number of criteria.
If H = 1, the individual x is dominated by the reference, and so excluded from the current Pareto zone; if H = 0, x is not dominated by x'.
(iii) When all elements of the population are tested, return to step one, where the new reference is the first individual not yet ousted and not yet chosen as reference.
(iv) The Pareto zone is achieved when all individuals of the population have either been chosen as reference or ousted.
The MOGA1 procedure [2] consists of a two-phase approach:
i. The monocriterion GAs are implemented to optimize separately each one of the k objective functions.
ii. The PS is applied on a population obtained by merging some of the populations generated when solving the various monocriterion GAs.
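A minimal sketch of the Pareto Sort procedure as described above, assuming all k criteria are to be minimized; the small test population reuses three of the solution vectors reported later in Table 12 plus one dominated point.

```python
# Minimal sketch of the Pareto Sort (PS) procedure described above.
# Each individual is a tuple of k criterion values, all to be minimized.
def pareto_sort(population):
    remaining = list(population)          # candidates not yet ousted
    used_as_ref = []
    while remaining:
        ref = remaining.pop(0)            # step (i): first individual as reference
        survivors = []
        for x in remaining:               # step (ii): compare against the reference
            h = [0 if f_ref > f_x else 1 for f_ref, f_x in zip(ref, x)]
            H = 1
            for hi in h:
                H *= hi
            if H == 0:                    # x is not dominated by the reference
                survivors.append(x)
        remaining = survivors             # step (iii): dominated individuals are ousted
        used_as_ref.append(ref)
    return used_as_ref                    # step (iv): references form the Pareto zone

pop = [(9545884, 40, 5), (10390540, 40, 9), (5643025, 23, 7), (7281451, 27, 6)]
print(pareto_sort(pop))   # keeps the three non-dominated (cost, items, campaigns) points
```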
2.2 Presentation of MOGA2 approach
The second approach involves the following two steps: (1) the first step consists in the generation of good solutions for each criterion using the same genetic algorithm; (2) the set of solutions is then ranked using the Electre methodology (for more details about the Electre procedures see [4]). The alternatives to rank are those given by the various monocriterion Genetic Algorithms. The solutions are then scored on each criterion in order to give the performance matrix. This matrix is usable only if the decision maker gives some weight to each criterion. All these performances and weights are aggregated by the Electre procedure through Indifference and Preference thresholds.
3. RESULTS AND DISCUSSION
The performances of the two multi-objective procedures are now analyzed for a large-size example (7 final products from 10 raw materials, 10 unit operations). A quantitative comparison of the performances of the MOGAs is conducted and some results are then examined. Three criteria are considered, i.e., investment cost and two criteria related to workshop flexibility, the number of equipment items and the number of campaigns to reach steady state. In this example, inspired from a study treated in [Ber99], the number of possible configurations is about 3.5 × 10^24. The main parameter set is presented in Table 1.
Table 1: Data set for simulation and parameters for GA
Data set for simulation: time horizon, campaign duration, number of campaigns, equipment type.
Parameters for GA: population size, generation number, crossover probability, mutation probability.
3.1 Results of MOGA1
The results show that the approach allows obtaining satisfying solutions with relatively short computational time, which is very important in a multi-criteria environment. Tables 2-11 present the results obtained before the application of the Pareto sort procedure.
RUN # 6
RUN # 1 Number of campaigns
COST
Number of equipment items
1-1
9545884
4O
5
1-2
10390540
40
9
N~
Number of campaigns
COST
Number of equipment items
6-1
5643025
23
7
6-2
7281451
27
6
N~
....
1*-3
10827407
42
10
6-3
9545884
40
5
1-4
10956312
44
11
6-4
9973141
41
7
1-5
11522842
42
9
6-5
10827407
42
1-6
12351483
45
5
6-6
11843192
12732121
44
5
6-7
12351483
RUN# 7 N~
COST
1-7 RUN # 2 ..
Number of campaigns
N~
COST
Number of equipment items
2-1
72814514
27
6
7-1
2-2
9545884
40
11
7-2
2-3
9636421
41
10
2-4
10732413
42
2-5
11142309
2-6
11522842
2-7
9636421
Number of equipment items 41
Number of campaigns 10
10732413
42
9
7-3
11142309
40
9
9
7-4
11633935
43
9
40
12
7-5
12732121
44
42
7
7-6
13321235
45
11843192
44
5
N~
COST
3-1
5643025
Number of equipment items 23
Number of campaigns 7
3-2
7281451
27
6
5 5
RUN#8
RUN # 3
COST
Number of equipment items
8-1
5643025
23
7
8-2
7281451
27
6
N~
Number of campaigns
8-3
9545884
40
5
84
10390540
40
9
3-3
9545884
40
5
8-5
10827407
42
to
3-4
9973141
41
7
8-6
11522842
42
9
3-5
10956312
44
11
8-7
12351483
45
5
3-6
11432713
40
10
3-7
11740273
44
5
N~
COST
Number of equipment items
Number of campaigns
12411371
45
5
9-I
10390540
40
9
9-2
10964221
43
10
9-3 9-4
11142309 11432713
40 40
9 10
9-5
11952074
44
5
9-6
13232126
45
5
N~
COST
Number of equipment items
Number of campaigns
10-1
5643025
23
7
10-2
7281451
27
6
10-3
9545884
40
5
3-8
RUN #9
RUN # 4 N~
COST
Number of equipment items
Number of campaigns
4-1
5643025
23
7
4-2
7281451
27
6
4-3
9545884
40
5
4-4
10470337
40
9
4-5
11251226
41
9
4-5
11633935
43
9
RUN # 10
, , ,
4-6
11952074
44
5
4-7
13232126
45
5
4-8
13321235
45
5
404
BUN # 5 N~
COST
Number of equipment items
Number of campaigns
10-5
10732413
42
5-1
5643025
23
7
10-6
11142309
40
9
5-2
10480631
41
10
10-7
11432713
40
10
5-3
10964221
10
10-8
11952074
44
5
,.
9
5
5-4
12351483
5-5
12732121
44
5-6
13232126
45
5
Tables 2-11: Results of the monocriterion GA (before Pareto sort)
After the Pareto sort procedure application, the best solutions are presented in Table 12:
Solutions                            Cost      Number of equipment items   Number of campaigns
3-1; 4-1; 5-1; 6-1; 8-1; 10-1        5643025   23                          7
2-1; 3-2; 4-2; 6-2; 8-2; 10-2        7281451   27                          6
1-1; 3-3; 4-3; 6-3; 10-3             9545884   40                          5
Table 12: Best solutions obtained for MOGA1 (after Pareto sort)
3.2. Results of MOGA2
Several parameters are to be defined before using the Electre methodology. As proposed with MOGA1, the criteria used are the cost of the obtained solution, the number of equipment items and the number of campaigns. The relative weight for each criterion and its value scale must be defined. Two Preference thresholds are also introduced, giving an interval of values within which the decision maker prefers one action over another. Two Indifference thresholds give an interval of values for which the decision maker is indifferent between two actions. The Electre methodology is used with the results obtained before the Pareto sort procedure application. The parameter set is summarized in Table 13.
Criteria                        Weight Pc   Indifference thresholds α, β   Preference thresholds α, β
Cost                            0.25        0.1-1638                       0.1-1639
Number of equipment items       0.25        0.1-7                          0.1-8
Number of campaigns             0.5         0.1-8                          0.1-2
Table 13: Parameter set used for Electre method
The performance matrix required for Electre III is presented in Table 14. The best compromises obtained with Electre III with respect to the three criteria considered simultaneously are given below:
A1, A2 are preferred to A3 and A13
A3 and A13 are preferred to A18, A19, A20, A21, A22
A18, A19, A20, A21, A22 are preferred to A23, A24, A25
A23, A24, A25 are preferred to A5
A5 is preferred to A4
A4 is preferred to A6, A7, A9, A14, A16, A17
A6, A7, A9, A14, A16, A17 are preferred to A8, A10, A11, A12, A15.
It can be concluded that the three best solutions obtained are the same using MOGA1 and MOGA2, but the hierarchical order presents some differences for the other ones. Solution
Solutions '
AI
3-1 "4-1 9 5-1 " 6 - I 9 8-1"10-1 | 2-1 "3-2 9 4-2 96-2 9 8-2"10-2 , 1-1 "3-3 " 4-3 "6-3 " I 10-3 i 2-3"7-1 i
Cost ('10 3)
Number of equipment items
Number of campaigns
5643
23
7
,
A2
A3
A4
A5 A6 A7
3-4'6I 4 ;10-4 i 1-2 98-4 " 9-1 i 4-4 5-2
AI0 All
2-4 97-2 9 10-5 i 1-3 96-5 " 8-5 1-4"3-5
7281
27
6
9545
40
5
9636
10
9973
41 41
5-3 | 7-3 "9-3 9 10-6
Number of equipment items
,
AI4
4-5
10390
40
9
10470
40
9
10480
41
10
10732
42
9
10827
42
10
10956
44
11
10964
43 40
10
i
A15
i ,
11251
'
Number of campaigns
41
'
J
i
i
9
7
|
AI8 |
AI9 |
|
i
A21
i
|
A22 |
|
11633 11740
|
9 5 |
44
i
4-6"95" 10-8 1-6"54"6-7 9 8-7 3-8
|
43 44
|
11843
|
|
42
|
2-7"6-6
A20
5 |
|
11952
44
|
|
i
12351
5
45
|
|
12411
5
i
45
5
|
A23 |
|
A24 | |
|
10
|
|
4-5 " 7 - 4 3-7
, 40
11522
|
AI7
, 11432
1-5 98-6
|
11142
, ii 3 - 6 " 1 0 - 7
AI6
|
|
A12 Al3
Cost ('10 3)
J
|
A9
Solutions
,
|
A8
Solution
|
A25
|
1-7"55"7-5 4-7 956"9-6 4-87-6
'
12732
i
'
| |
44
i
13232 i
13321
i |
45
|
|
45
|
9
Table 14: Performance matrix for Electre III
4. CONCLUSIONS
A perspective of this work is now to generalize the approach to other criteria (e.g., environmental considerations) and to study the influence of the criterion weights in order to assess the robustness of the method.
REFERENCES
[1] Goldberg, D.A., <>, Addison-Wesley (1994).
[2] Dedieu, S., Algorithmes génétique multicritère : conception et remodelage d'atelier de chimie fine. Thèse de doctorat, INP ENSIGC Toulouse, France, 2001.
[3] Moreno-Jimenez, J.M., Aguaron, J., Escobar, M.T. and Turon, A. (1999): Multicriteria Procedural Rationality on SISDEMA. European Journal of Operational Research, 119 (2), 388-403.
[4] Roy, B. (1985): Méthodologie Multicritère d'Aide à la Décision. Ed. Economica.
[5] Coello, A.C., 2000, An Updated Survey of GA-based Multiobjective Optimisation Techniques, ACM Computing Surveys, Vol. 32, 109-143.
Adaptive Multigrid Solution Strategy for the Dynamic Simulation of Petroleum Mixture Processes: A Case Study. Heiko Briesen and Wolfgang Marquardt Lehrstuhl fuer Prozesstechnik, Aachen University, Germany
Abstract The paper presents some results for the extension of our work on the adaptive steady-state simulation of refinery processes to dynamic problems. The results show that the adaptive algorithm is capable of following the changes in the composition by adapting the composition representation during the course of the simulation.
Keywords refinery processes, adaptivity, dynamic simulation, multigrid
1. INTRODUCTION
The appropriate representation of the composition of multicomponent mixtures such as those occurring in refinery processes plays a crucial role for the accuracy of simulation results. To reduce the system size, the prevailing method of choice in industrial practice is to introduce a set of fictitious, so-called pseudocomponents to approximately reflect the thermo-physical characteristics of the given mixture. Although this approach is highly descriptive and straightforward, the lack of adaptivity and of approximation error control are major disadvantages. The lack of adaptivity means that the compositional representation is chosen once and for all right at the start of the simulation. The fact that another representation may be desirable at certain locations or at certain times is not reflected. Also, the error inherently introduced by the simplification of the pseudocomponent lumping procedure is neither monitored nor controlled. Alternatively to this reduced, discrete representation of the mixture composition by pseudocomponents, a continuous distribution function can be used instead. The use of a continuous distribution function was intensively investigated in the 1980s [1-2] and became popular under the term continuous thermodynamics. The approach presented in this work, however, is not restricted to the continuous thermodynamics framework. Instead, every discrete, detailed compositional analysis of a multicomponent mixture can be interpreted as a continuous distribution function with piecewise constant values. With the distribution function F(ξ), the fraction of a particular component can always be recovered by integrating the distribution function over the corresponding interval of the characterizing variable ξ. The choice of the characterizing variable is quite flexible. In this work we use the boiling temperature of the components. In the distribution functions all the components are sorted by increasing boiling temperature. The main advantage of the continuous composition representation lies in the fact that the model reduction problem can be regarded as a discretization problem [3], for which a large number of adaptive techniques have been developed in the field of numerical mathematics.
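A minimal sketch of this piecewise-constant reading of a discrete analysis: the detailed composition is stored as a distribution F(ξ) over a normalized characterizing variable, and any component fraction (or lump of neighbouring components) is recovered by integrating F over the corresponding ξ-interval; the five-component data used here are hypothetical.

```python
# Piecewise-constant distribution F(xi) built from a discrete composition,
# and recovery of fractions by integration over xi-intervals (hypothetical data).
import numpy as np

edges = np.linspace(0.0, 1.0, 6)                        # xi-intervals of 5 "components"
fractions = np.array([0.10, 0.25, 0.30, 0.20, 0.15])    # discrete analysis, sums to 1
density = fractions / np.diff(edges)                    # piecewise-constant values of F(xi)

def fraction(a, b):
    """Integrate the piecewise-constant F(xi) over [a, b]."""
    total = 0.0
    for lo, hi, f in zip(edges[:-1], edges[1:], density):
        overlap = max(0.0, min(b, hi) - max(a, lo))
        total += f * overlap
    return total

print(fraction(0.0, 0.4))   # recovers the first two components: 0.35
print(fraction(0.0, 1.0))   # the whole distribution integrates to 1.0
```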
2. MATHEMATICAL FRAMEWORK
In previous work we have set up an adaptive multigrid solution strategy for the solution of steady-state refinery process models [4-5]. In principle, each model given in a discrete (or pseudocomponent) formulation can be rewritten by use of the continuous distribution functions. This continuous formulation does not yet reduce the model size but comprises the full detailed model. In an iterative algorithm, a solution of the process model is obtained in high detail. In each iteration loop the current iterate is corrected by a correction term, which is determined on a reduced detail in compositional representation. According to a multigrid framework, the obtained intermediate solution is prolongated to the high compositional detail by use of a linearization of the model equations. This contribution deals with the extension of the proposed approach to dynamic simulation problems. In the case of dynamic problems, the time has to be discretized in addition to the characterizing variable of the distribution function defining the composition. To be able to exploit the knowledge gained for steady-state processes, a Rothe-type discretization sequence is chosen, where the time coordinate is discretized prior to the spatial variable. Using a 3-stage Runge-Kutta time discretization, the problem has the same mathematical structure as the steady-state process model, thus the developed techniques can be adopted in a straightforward manner. In each time step the compositional detail is taken into account only if it is necessary for the accurate determination of the current step. Though formally not correct, this can be seen as a pseudocomponent method, in which for each time step an optimal set of pseudocomponents is chosen to reflect the behavior. The details of the algorithm, including its formal representation, can be found in [4].
4. RESULTS AND DISCUSSION
As a simple test problem, a dynamic flash simulation with temperature and pressure control is performed. The feed was characterized by a True-Boiling-Point (TBP) curve and an overall density. This is the minimal information needed to set up a composition representation of a petroleum stream. However, the algorithm is also capable of handling feed streams with a given detailed composition. For the details of the model and the property correlations used we again refer to [4]. As shown in Fig. 1, the process runs at steady state during the first phase, corresponding to the controller set points (temperature set point = 400 K, pressure set point = 1 bar). At the time t = 2 min the temperature set point is changed to 420 K. The plots on the left show the time trajectories of the temperature and pressure. On the right the corresponding manipulated variables (heating and vapor flow) are plotted. We see that the heating is strongly increased in order to correct the temperature. The increased temperature leads to an increased evaporation and therefore to a pressure increase, which itself is corrected by increasing the vapor outlet flow. After an overshoot the system reaches its new steady state according to the new controller set points. These "macroscopic" results are not really surprising. More interesting in an adaptive simulation scheme is the question how the composition representation is handled during simulation. The adaptation grid for the above simulation is given in Fig. 2. The plots on the
right additionally show the liquid and vapor phase distribution functions for the times indicated close to the arrows. The horizontal time grid shows the time steps, while the vertical grid reflects the composition discretization. The finer the grid, the higher is the resolution of the corresponding region of the characterizing variable and the associated components.
Fig. 1. Control of a dynamic flash with a change in the operation point (Tset changed from 400 to 420 K).
(The figure shows the time-composition adaptation grid together with the liquid- and vapor-phase distribution functions F(ξ) at t = 1.6, 7.4, 13.9 and 50 min.)
Fig. 2. Adaptation grid and composition results for a dynamic flash simulation with a change of the operation point.
For the compositional grid it is apparent that we do not need a high-resolution representation within the initial steady-state phase. Hence, only the initialization grid with the four discretization basis functions is used. Note that the grid does not reflect the composition representation itself but the rate of change of the composition. So if the composition does not change, as in steady state, no detail in the compositional grid is needed. The cut point, the largest value of ξ for which the concentration in the vapor product flow is greater than in the feed, is at ξ_c = 0.2265. The cut point for the new steady state with a system temperature of T = 420 K increases to ξ_c = 0.246. As most of the feed stream is removed with the liquid product for both temperature set points (79.5%, 70.8%), we would expect that the composition of the liquid distribution would not change very much, while the vapor distribution should undergo significant changes, especially for values smaller than the cut points. By inspecting the vapor and liquid distributions at some selected times, we see exactly this expected behavior. Whereas for the liquid distribution the change in the distribution function is hardly detectable, the vapor distribution shifts toward the higher boiling components. The adaptively generated composition representation is able to capture this behavior. After the step change, we have an increased resolution of the grid for ξ < 0.25. Since changes in composition generally take longer than temperature and pressure changes, this high resolution is still needed even if the system seems to be in steady state for the temperature and pressure. Finally, however, also the composition reaches its new steady state, so that the grid can be coarsened for the time steps as well as for the composition grid.
5. CONCLUSIONS
As a simple test problem for the adaptive composition representation, a dynamic flash calculation was performed. The results reveal that the composition representation is adapted over time according to the current needs. As components become less important for the behavior of the process, their modeling detail is reduced. Other components which govern the transient behavior are modeled in higher detail. Despite the adaptive reduction of the model complexity, the multigrid framework always ensures that the solution is obtained at a high compositional detail. To judge the full benefit of the approach, more complex example problems have to be investigated.
REFERENCES
[1] M. Rätzsch and H. Kehlen. Continuous thermodynamics of complex mixtures. Fluid Phase Equil., 14, 225-234 (1983).
[2] R. Cotterman and J. Prausnitz. Flash calculations for continuous or semicontinuous mixtures using an equation of state. Ind. Eng. Chem. Process Des. Dev., 24(2), 434-443 (1985).
[3] R. von Watzdorf and W. Marquardt. Fully adaptive model size reduction for multicomponent separation problems. Comp. Chem. Eng. Suppl., 21, 811-816 (1997).
[4] H. Briesen. Adaptive Composition Representation for the Simulation and Optimization of Multicomponent Mixture Processes. VDI-Verlag, Düsseldorf, PhD Thesis (2002).
[5] H. Briesen and W. Marquardt. An adaptive multigrid method for steady-state simulation of petroleum mixture separation processes, submitted to Ind. Eng. Chem. Res. (2002).
Electricity Contract Optimization for a Large-Scale Chemical Production Site Pang Chan 1, Kwok-Yuen Cheung l, Chi-Wai Hui l, Haruo Sakamoto 2 and Kentaro Hirata 2 1 Chemical Engineering Department, Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong. 2 Safety Engineering and Environmental Integrity Lab., Process Systems Engineering and Production Technologies Field, MCC-Group Science & Technology Research Center, Mitsubishi Chemical Corporation, 1 Toho-cho, Yokkaichi Mie 510-8530, Japan.
Abstract This paper presents a study on selecting and optimizing electricity contracts for a large-scale chemical production site which requires electricity importation. Two common types of electricity contracts are considered, the Time Zone (TZ) contract and the Loading Curve (LC) contract. A multi-period linear programming model, the Site-Model, is adopted for the contract selection and optimization. This model includes all site-wide information, such as product demands and material and utility balances. Therefore, an optimal contract for maximizing the total site profit can be determined.
Keywords: Site Model, Electricity Contract, Optimization, Utility System
1. INTRODUCTION
In a large-scale chemical production site, a utility plant generates steam and electricity to support production. Steam supply usually has to be self-guaranteed because there is no external supply. Electricity, on the other hand, can be imported from the local electricity company when desired. In the past, no special electricity supply contracts were designed for large-scale industrial users. Like other domestic users, the chemical site pays an electricity fee based upon its monthly or annual import amounts [1]. This in fact makes it difficult for both the chemical site and the electricity company to utilize their electricity generation facilities effectively. In order to improve the situation, long-term electricity contracts are suggested for industrial users. There are two main types of electricity contracts, the Time Zone (TZ) contract and the Loading Curve (LC) contract.
Time Zone (TZ) Contract: Electricity is classified into several types (e.g. peak, normal and off-peak) according to the electricity demands in different time shifts and periods. Simple restrictions are employed in the contract to encourage the users to consume more electricity during the off-peak periods, such as different prices and intake maximum and minimum values at different time periods. A detailed timetable is attached to the contract to indicate the prices and the restrictions for electricity importation.
Loading Curve (LC) Contract: Besides different electricity types, the LC contract has more restrictions on the users, so that the electricity company can plan for its operation
easier. The users are requested to follow the maximum and minimum importation limits at all time shifts, the annual overall purchasing amount, the import ratios among different electricity types and some other constraints for the contracted year. If any of these restrictions is violated, penalties are charged to the users. The users, however, are rewarded with a lower electricity tariff and are guaranteed the contracted demands. So far, not many research papers have addressed electricity contract optimization from a user's point of view. This paper investigates a contracting problem for a large-scale continuous plant site. A multi-period linear programming (LP) model, namely the site model [2], is used for generating the optimum contract. The site model includes all production and utility plants' operational constraints, interconnections and other necessary site-wide information. The objective of the model is to maximize the total site profit within a year, and it is assumed that the contract is applied for one year only.
2. PROBLEM DEFINITIONS
A sample site [3], shown in Figure 1, is used for the following case studies. It contains five main production plants (ETY, VCM, PVC, PE and PP) and three small process groups (Plants A, B and C). There is also a utility plant, which consists of two boilers (B1 and B2) and three turbines (T1, T2 and T3). Given the raw material costs, fuel costs, product prices and the electricity contract information, the model can then maximize the overall site profit by selecting an optimal contract.
Figure 1: Example Chemical Production Site.
3. MATHEMATICAL FORMULATIONS
A site model used for the case study is described in this section. It is developed in GAMS [4] and solved by the solver OSL [5].
Indices:
p  plant or unit
a  alternative
s  shift in time period T
m  material
t  time period
r  material balance index
Sets:
P  set of plants and units
A  set of alternatives
S  set of time shifts
M  set of materials
T  set of time periods
R  set of material balance indices
Parameters:
E_{r,p,m,a,t,s}  Coefficient for the variable at plant p with material m and alternative a in period t, shift s, for material balance equation r.
C_{p,m,a,t,s}  Cost of material m with alternative a at plant p in period t, shift s.
SL_s  Length of shift s (8 hours/shift).
ELFix  Electricity fixed cost.
MNEL_s  Maximum of normal electricity import (MW).
MTAEL  Maximum of total additional electricity import (MW).
NFCP_s  Fixed cost parameter of the normal electricity import at shift s (Day: 18, Night: 9, Mid-night: 2.6, MYen/MW).
AFCP  Fixed cost parameter of the total additional electricity import (4.5 MYen/MW).
TEL  Total electricity import (MWH).
TELCP  Cost parameter for total electricity import (5×10^-4 MYen/MWH).
Positive continuous variables:
F_{p,m,a,t,s}  Variable of material m with alternative a at plant p in period t, shift s.
Continuous variable:
Profit_{t,s}  Profit in period t, shift s.
Material and energy balance equations:
Σ_{(p,m,a)∈r} (F_{p,m,a,t,s} × E_{r,p,m,a,t,s}) = 0,    r∈R, t∈T, s∈S.        (1)
Profit calculation:
PROFIT_{t,s} = Σ_{p,m,a} (F_{p,m,a,t,s} × C_{p,m,a,t,s} × SL_s),    p∈P, m∈M, a∈A, s∈S.        (2)
,,s X C p . . . . . t,s x S L s ) ,
p,m,a
ELFix Calculation for LC Contract: ELFix = ~" (MNEL, x NFCP, ) + (MTAEL x AFCP) + (TEL x TELCP),
seS.
(3)
g
Electricity fixed cost equals to zero when TZ contract is applied.
Objective Function: The objective is to maximize the total profit by considering electricity fixed cost. teT, seS. max ~ PROFIT,., - ELFix ,
(4)
l,S
4. CASE STUDY: CONTRACT FEE CALCULATIONS
Parameters and conditions of the study contracts are shown in Table 1, electricity types are classified by time shifts (Day, Night & Mid-night). For the TZ contract, the maximum importation rate of 40 MW is applied for the whole contracted period. The contract fee is based on the total electricity intake at each shift multiplied by the corresponding price. There is no annual importation minimum for the TZ contract. For the LC contract, the maximum importation rate is changed at different seasons and time shifts. The contract fee contains two parts, variable cost and fixed cost. The variable cost is calculated as same as TZ contract. The fixed cost calculation follows Eq. (3). The annual importation minimum of LC contract is 100,000 MWH/yr.
413 For both TZ and LC contracts, additional 20 M W is allowed to add on the normal importation m a x i m u m during the maintenance periods in April and September. 4.1. Case 1 In Case 1, the monthly production targets of final products (PE, PP and PVC) are fixed at 1400, 675 and 1325 tons/month respectively. Although the monthly targets are fixed, the production rates can vary for days and shifts. Table 2 shows that the total profit under LC contract is higher than TZ contract, but the electricity importation under the LC contract is also higher than the TZ contract. From Table 3, it shows that the importation costs of LC contract are lower than self-generation at night and mid-night shifts. The chemical site purchases more electricity at these two shifts. As the production targets are the same, purchasing cheaper electricity eventually increases the chemical's total profit. Table 1: Electricity contract summary. LC contract Price at Price at Normal Shift weekday weekend EL Max (Yen/MWH) (Yen/MWH) (MW) Winter Day 18,000 17,000 20 (Nov Night 6,000 5,000 9 40 Feb) Mid-N 2,700 2,500 60 Spring Day 19,000 18,000 18 (Mar Night 7,000 6,000 35 Apr) Mid-N 2,700 2,600 55 Summer Day 22,000 21,000 10 (May Night 8,000 7,000 20 Aug) Mid-N 3,300 3,100 40 Autumn Day 20,000 19,000 15 (Sep Night 7,000 6,000 30 Oct) Mid-N 2,700 2,500 50 Min. for all periods in both contracts:
TZ contract Price at Price at Normal weekday weekend EL Max (Yen/MWH) (Yen/MWH) (MW) 21,000 20,000 40 15,000 13,000 40 8,000 6,000 40 22,000 21,000 40 16,500 14,500 40 8,500 6,500 40 25,000 24,000 40 18,000 16,000 40 9,000 7,000 40 22,000 21,000 40 16,500 14,500 40 8,300 6,300 40 0.5 MW
Table 2: Results of case 1. Total profit (MYEN) Total electricity import (MWH) Electricity fixed cost (MYEN) # ELFix = 20• 18+40•215215215
LC contract 10,014 243,030 1,088#
TZ contract 9,905 154,164 --10 -4 = 1088 MYen.
Table 3" Electricity cost comparisons between self-generation and importation in case 1. Data for August Shift LC contract TZ contract Cost of electricity generation (Yen/MWH) Day 11,000 11,375 Night 10,875 11,375 Mid-N 10,750 10,875 Cost of electricity import (Yen/MWH) Day 22,000 25,000 Night 8,000 18,000 Mid-N 3,300 9,000 Day 0.5 (10) * 0.5 (40) Amount of electricity import (MW) Night 20 (20) 0.5 (40) * Values in ( ) = electricity import max. Mid-N 40 (40) 40 (40)
414 For the TZ contract, the day and night shifts importation costs are higher than the self-generation costs. The site only imports minimum amount of electricity at the two shifts (Figure 2). At the same time, the in-house utility plant has to generate more electricity at day and night shifts to sustain the production. This however increases the total operation cost of the chemical site and makes the TZ contract less preferable. It is also observed that the TZ contract importation rates (Figure 2) for all three shifts are relatively stable through the year (minimum at day and night shifts, maximum at mid-night shift), except in utility plant maintenance periods (April and September). The LC contract however gives a fluctuated profile (Figure 3), especially in night shifts. It is because of the seasonal effect, more import in winter but less in summer. This is a typical property of LC contract; importation rates will vary with different time periods. ..........
+-
+
:_/
aon
Feb
Mar
.
--I Apr
May
Jun
.
.
.
.
~
J
.
l.!
I-
Jul
/~g
i
Oct
Nov
Dec
Month
[ ....
_
...............
.....
- -
I N'g" I--~
.
i
. . . . . . . .
Sep
__
.
.
Jan
Feb
Mar
Apr
May
Jun
.
.
Jul
.
.
.
.
.
.
Aug
.
.
.
.
.
Sep
.
.
.
.
-1
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
--~
.
Oct
Nov
Dec
Momh
]
Figure 2: Electricity importation profiles of TZ contract in case 1.
Figure 3" Electricity importation profiles of LC contract in case 1.
4.2. Case 2 Electricity importation rates are high for both contracts in case 1. It indicates that there is a necessity for expanding the current utility plant in the chemical site. In case 2, electricity generation capacity of turbine T1 in the site utility plant is doubled from 25 MW to 50MW. It is assumed that no investment cost is required in the study. All other conditions are kept as same as in case 1, and the two contracts are then compared again. After T1 expansion, the electricity self-generation costs are lower than the importation prices at all shifts during the normal operation periods. Hence, the site tends to import less electricity during above time shifts. For TZ contract, there is no restriction on total annual electricity import. Except when the prices are very low in holidays and weekends, the site intakes only the minimum amount (Figure 4). This greatly reduces the total electricity importation cost. However, the annual importation amount in LC contract is bounded at 100,000 MWH. Even when the purchasing cost is more expensive than generation cost, the site still needs to import certain amounts in mid-night shifts to satisfy the contract (Figure 5). Therefore, the TZ contract has a higher total profit than LC contract in case 2 (Table 4). Table 4: Results of case 2. LC contract TZ contract 10,565 11,053 Total profit (MYEN) 166,440 55,044 Total electricity import (MWH) 1,049 --Electricity fixed cost (MYEN) Table 5: Comparisons between August and September results in TZ contract. TZ contract Shift August September Cost of electricity generation (Yen/MWH) D 6,000 19,750
415 N M D N M
Amount of electricity import (MW) * Values in ( ) = electricity import max.
.
,0/
. . . . . .
!I 1 - ' ~
~ 9. . . . . .
~_/llza!
MJu~ . . . . . .
6,000 6,000 0.5 (40)* 0.5 (40) 0.5 (40)
15,900 7,725 0.5 (60) 17.5 (60) 54 (60)
I
_
~
" .....
Figure 4" Electricity importation profiles of TZ contract in case 2.
!/
~0t
/
........
~__--~-~ni~li~
Ap. . . . . .
......
! .... L__'
......
MJUoI~Ill Au. . . . . . . . . . .
Figure 5" Electricity importation profiles of LC contract in case 2.
Above situation only applies for normal operation periods. When examining Table 5 and Figure 4 carefully, it discovers that electricity is imported in maintenance periods though the self-generation costs are lower than the price at the time. It suggests that the electricity importation depends on the contract types and also is affected by the site operating conditions. All these factors should be taken into account for maximizing the total site profit. 5. CONCLUSIONS In this paper, a site model is used for selecting and optimizing electricity importation contract. The selection of electricity contract type and the optimization of contracted terms should be carefully studied for every individual consumer according to its in house demands and generation capacities. New investment decisions that change the site-wide utility demands or capacities may also affect the selection of electricity contract. This should be considered simultaneously during making a decision. ACKNOWLEDGMENTS
The authors would like to acknowledge financial support from the RGC (Hong Kong) and the Major State Basic Research Development Program (G2000026308) and technical support from Mitsubishi Chemical Corporation. REFERENCES [1] W. Partowidagdo, Exploration Insurance for Geothermal Energy Development in Indonesia, (2000), Proceedings of World Geothermal Congress 2000. [2] C.W. Hui, Computers and Chemical Engineering, 24, (2000), 1023-1029. [3] K.Y. Cheung, & C.W. Hui, Total-Site Maintenance Scheduling, Proceedings of PRES'01, (2001). [4] A. Brooke, D. Kendrick, & A. Meeraus, GAMS - A User's Guide (Release 2.25); the scientific Press: San Francisco, CA, 1992. [5] IBM. OSL (Optimization Subroutine Library) Guide and Reference (Release 2), Kingston, NY, 1991.
'
416
Process Systems Engineering 2003 B. Chen and A.W. Westerberg (editors) 9 2003 Published by Elsevier Science B.V.
lterative Dynamic Programming of Optimal Control Problem Using a New Global Optimization Technique Min Ho CHANG, Young Cheol PARK, and Tai-yong LEE Department of Chemical and Biomolecular Engineering, Korea Advanced Institute of Science and Technology, 373-1 Guseong-dong, Yuseong-gu, Daejeon, 305-701, Republic of Korea
Abstract This work presents an efficient global optimization algorithm suitable for the basic numerical engine of iterative dynamic programming (IDP). Random search method which is utilized with IDP, generally does not guarantee the optimality and hence a deterministic algorithm with finite e-convergence is recommended. In this work difference of convex envelope method is used as the global optimization technique, which generate the difference of convex underestimator of objective function iteratively. Using the proposed modified IDP, two optimal control problems are solved successfully. Keywords iterative dynamic programming, optimal control problem, global optimization, deterministic optimization, difference of convex underestimator 1. INTRODUCTION Consider an optimal control problem of a system with nonlinear dynamics. When the problem is described by an IDP problem, the equations that describe the transient behavior of the system are also nonlinear, and the resultant optimization problem may be nonconvex necessitating a global optimization technique [1]. Since the global optimization technique is iteratively utilized in each stage of IDP, finite e-convergence is essential in order to guarantee the optimality of the resulting solution. It is well known that random search method generally does not guarantee the optimality and hence a deterministic algorithm with finite e-convergence is recommended [2]. In this work difference of convex envelope method is used as the global optimization technique. The difference of convex envelope method is applicable to NLP problems consist of twice differentiable objective functions. The key idea of the method is the generation of difference of convex underestimator of objective function iteratively. In the case of global minimization problem, the lower bound of objective function is given by the global minimum of underestimator. There are many researchers who utilize a convex function as underestimator and they use a local optimization technique to obtain the lower
417 bound of objective function [3,4]. In this paper the difference of convex underestimator is always represented by a continuous piecewise concave function whose global minimum can be easily found. And the lower bound can be updated by only one calculation of objective function and its gradient. The upper bound is additionally obtained during updating lower bound [5]. A new global optimization technique, difference of convex envelope method, guarantees the global optimality of the performance index at each stage in the optimal control problems. 2. Difference of convex envelope method
The difference of convex envelope method (DCEM) operates within a branch-and-bound framework and is designed to solve unconstrained global optimization problems of the generic type representation by formulation (1). min f ( x ) s.t. xi L < x i < xUi
(1) Vi
where f belongs to C 2, the set of twice-differentiable function, and x is a vector of size n. In a BB algorithm, convex underestimator, P(x), is given by equation (2). f ( x ) > P(x) = f ( x ) + ~ o q ( x ~ - x ~ ) ( x u - x~) = f ( x ) + Q(x)
(2)
i
where Q(x) is a convexifying quadratic function for f ( x ) . Note that each a, is chosen as a non-negative number large enough to make P(x) convex [3]. Let's consider a tangential hyperplane of P(x) represented by R ( x ) , then R(x) is a linear underestimator of P(x). Now a new underestimator, S(x), of original objective function, f ( x ) , can be generated by S(x)= R ( x ) - Q ( x ) = f ( x ) + {R(x)- P(x)} < f ( x )
(3)
The new underestimator, S(x), is a d.c. function since each of R(x) and Q(x) is convex function [3] and it is a continuous piecewise concave quadratic function. This underestimator has several remarks. Remarks (i) R(x) is a polyhedron whose vertices are determined by the intersection of (n + 1) tangential hyperplanes. (ii) The location of vertices of S(x) is the same as that of R(x). (iii) The global minimum of S(x) is located at one of the vertices of S(x).
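A minimal one-dimensional sketch of the construction in Eqs. (2)-(3): the α-convexified underestimator P(x), a tangent line R(x), and the resulting concave d.c. underestimator S(x) = R(x) − Q(x) ≤ f(x). The test function, interval and linearization point are illustrative only.

```python
# One-dimensional sketch of the underestimators in Eqs. (2)-(3) (illustrative data).
import numpy as np

xL, xU = -1.0, 2.0
f   = lambda x: np.sin(3.0 * x) + 0.1 * x**2        # twice-differentiable test function
fpp = lambda x: -9.0 * np.sin(3.0 * x) + 0.2        # its second derivative

grid  = np.linspace(xL, xU, 400)
alpha = max(0.0, -0.5 * fpp(grid).min()) + 0.1      # sampled bound plus a safety margin

Q = lambda x: alpha * (xL - x) * (xU - x)           # Q(x) <= 0 on [xL, xU]
P = lambda x: f(x) + Q(x)                           # convex underestimator of f, Eq. (2)

x0, h = 0.5 * (xL + xU), 1e-6                       # linearization point
dP = (P(x0 + h) - P(x0 - h)) / (2.0 * h)            # numerical slope of P at x0
R = lambda x: P(x0) + dP * (x - x0)                 # tangent line: R(x) <= P(x)
S = lambda x: R(x) - Q(x)                           # concave d.c. underestimator, Eq. (3)

assert np.all(S(grid) <= f(grid) + 1e-8)            # S underestimates f on the interval
print("lower bound from S over [xL, xU]:", S(grid).min())
```

In the full method R(x) is the lower envelope of several tangent hyperplanes, which makes S piecewise concave and places its minimum at one of the polyhedron's vertices, as noted in the remarks above.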
418 From remarks, the global minimum of S(x) is easily found since the location of the each vertex of R(x) is obtained by solving simple linear algebras. DCEM is used to obtain lower bounding value of the current subregion for minimization problems during branch-and-bound procedure. Since the global minimum of S(x) is located at one of the vertices of S(x), the upper bound can be selected as the minimum value of f ( x ) evaluated at each vertex. The selection rule of this algorithm is to select the subregion which has minimum lower bounding value among active current subregions. The branching rule is to divide the subregion selected by above selection rule into two equal subregions cross the longest relative interval [5]. 3. MODIFIED ITERATIVE DYNAMIC PROGRAMMING Optimal control problem is generally formulated by multistage decision problem where decisions have to be made sequentially at different points in time. The dynamic programming technique represents the decomposition of a multistage decision problem as a sequence of single stage decision problems. The key idea of dynamic programming is 'principle of optimality' stated by Bellman [6]. "An optimal policy (or a set o f decisions) has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision."
One of disadvantages of dynamic programming for optimal control of nonlinear systems is the interpolation problem encountered when the trajectory from a grid point does not reach exactly the grid point at the next stage [7]. That is called as 'curse of dimensionality'.
3.1. Iterative dynamic programming In order to prevent curse of dimensionality, accessible states are used as grid point in iterative dynamic programming. Instead of choosing the state grid points independently, it is suggested to generate the grid points by applying different state values for control. Then all the grid of the state does not matter, and a small number of grid points are required. Random search method is utilized to determine optimal control value at each stage [7]. In iterative dynamic programming, control trajectory is not optimal, but near optimal. However, final optimal control trajectory is obtained by iterative procedure with the contraction and reconstruction of search region.
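A much-simplified sketch of the iterative dynamic programming loop just described, using a single accessible grid point per stage, random candidate controls at each stage (standing in for the stagewise optimizer) and contraction of the control search region between passes; the scalar system, cost and tuning constants are hypothetical.

```python
# Simplified IDP skeleton: one accessible state per stage, random candidate
# controls per stage, and region contraction between iterations (hypothetical data).
import numpy as np

rng = np.random.default_rng(1)
P_stages, dt = 10, 0.1
u_lo, u_hi = 0.0, 5.0

def step(x, u):                        # hypothetical scalar stage dynamics
    return x + dt * (-x + u)

def stage_cost(x, u):
    return dt * ((x - 1.0) ** 2 + 0.01 * u ** 2)

u_traj = np.full(P_stages, 2.5)        # initial control profile
radius = 0.5 * (u_hi - u_lo)           # half-width of the control search region

for _ in range(20):                    # outer IDP iterations
    for k in range(P_stages - 1, -1, -1):          # from stage P back to stage 1
        x = 0.0
        for j in range(k):             # accessible state at stage k under current policy
            x = step(x, u_traj[j])
        candidates = np.clip(u_traj[k] + radius * rng.uniform(-1, 1, 15), u_lo, u_hi)
        best_u, best_J = u_traj[k], np.inf
        for u in candidates:
            xx, J = x, 0.0
            for j in range(k, P_stages):           # cost of the remaining trajectory
                uu = u if j == k else u_traj[j]
                J += stage_cost(xx, uu)
                xx = step(xx, uu)
            if J < best_J:
                best_u, best_J = u, J
        u_traj[k] = best_u
    radius *= 0.85                     # contract the search region
print(np.round(u_traj, 2))
```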
3.2. Modified iterative dynamic programming
As mentioned above, IDP does not obtain the optimal control value directly but approaches a near-optimal control value through an iterative procedure in each step. Therefore, DCEM, a deterministic method able to obtain the global optimal control value, is applied instead of the random search method. It is well known that a random search method generally
does not guarantee optimality, and hence a deterministic algorithm with finite ε-convergence is recommended. The procedure of the modified IDP is as follows.
Step 1. Division of the time axis into P stages
Step 2. Initialization of the control and state trajectories
Step 3. Determination of the optimal control trajectory with DCEM
  Step 3.1. Determination of the control value with DCEM from stage P to stage 1
  Step 3.2. Evaluation of the state trajectory using the control trajectory determined
Step 4. Checking the convergence of the control trajectory
  Step 4.1. If the convergence criterion is satisfied, terminate the procedure
  Step 4.2. Else go to Step 3

4. ILLUSTRATING EXAMPLES

There are two illustrating examples. The first is a simple problem used to check whether the modified IDP is able to obtain the optimal control trajectory, by comparing the exact solution with the numerical solution. The other example is a realistic temperature control problem with parallel chemical reactions. Each example employs a piecewise constant control trajectory with 50 uniform time steps.

4.1. Temperature control of batch reactor
The model in equation (4) describes a batch reactor in which two parallel reactions take place, A → B (rate constant k1) and A → C (rate constant k2). The objective is to find the optimal control profile maximizing the desired product B at the final time, t_final = 1:

\max_{u(t)} J(u(t)) = x_B(t_{final}) \quad \text{s.t.} \quad \dot{x}_B = u\,x_A, \qquad x_A(0) = 1, \; x_B(0) = 0 \qquad (4)
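A rough sketch of how the modified IDP loop of Steps 1-4 could be organized for a problem of this form is given below. The scalar dynamics, the stage objective and the use of a dense grid search in place of the DCEM subproblem are illustrative assumptions, not the authors' code; the outer loop contracts the search region exactly as described in Section 3.2.

```python
import numpy as np

# Skeleton of the modified IDP procedure (Steps 1-4 above) for a generic
# scalar-state, scalar-control problem.  The dynamics, the terminal reward
# and the grid search standing in for DCEM are illustrative assumptions.

P_STAGES = 50                       # piecewise-constant control stages
T_FINAL = 1.0
DT = T_FINAL / P_STAGES

def step(x, u, dt=DT):
    """One-stage state update x_{k+1} = x_k + dt * f(x_k, u_k) (assumed dynamics)."""
    return x + dt * (-u * x)        # placeholder dynamics

def terminal_reward(x):
    return x                        # placeholder objective: value of the final state

def simulate(x0, controls):
    x = x0
    for u in controls:
        x = step(x, u)
    return terminal_reward(x)

def modified_idp(x0, u_lo=0.0, u_hi=5.0, n_iter=20, shrink=0.8):
    u = np.full(P_STAGES, 0.5 * (u_lo + u_hi))   # Step 2: initial control trajectory
    width = u_hi - u_lo                          # current search-region width
    for _ in range(n_iter):                      # Step 4: fixed iteration count used
        for k in reversed(range(P_STAGES)):      # Step 3.1: stage P down to stage 1
            lo = max(u_lo, u[k] - 0.5 * width)
            hi = min(u_hi, u[k] + 0.5 * width)
            candidates = np.linspace(lo, hi, 21) # stand-in for the DCEM subproblem
            u[k] = max(candidates,
                       key=lambda c: simulate(x0, np.r_[u[:k], c, u[k + 1:]]))
            # Step 3.2: the updated trajectory is used for the next stage
        width *= shrink                          # contract the search region
    return u, simulate(x0, u)

controls, J = modified_idp(x0=1.0)
print(f"objective after modified IDP sketch: {J:.4f}")
```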
The exact solution of this problem was obtained by using Pontryagin's maximum principle [8], and it has a singular-maximum control sequence, as shown in Fig. 1. The proposed modified iterative dynamic programming generates a global optimal control trajectory which agrees well with the exact solution.

4.2. Plug flow reactor
The chemical reaction A → B → C, with rate constants k1 and k2, is carried out in a plug flow reactor.
Fig. 1. Comparison between the numerical solution and the exact solution for example 1

Table 1. Parameters in the plug flow reactor example

  i     k_i0 (1/sec)     E_i (cal/mol)
  1     65.6             10000
  2     1970.0           16000
Both reactions are first order and the rate constants are related to the temperature through the Arrhenius law. The optimal control problem is to determine the optimal temperature profile that maximizes the conversion to the desired product B. The design problem can be formulated as equation (5), and the required parameters are given in Table 1.
\max_{u(t)} J(u(t)) = x_B(t_{final})
\dot{x}_A = -k_1 x_A, \qquad x_A(0) = 1
\dot{x}_B = k_1 x_A - k_2 x_B, \qquad x_B(0) = 0 \qquad (5)

where k_i = k_{i0} \exp(-E_i / (R\,T)) and 700 K \le T(t) \le 1000 K.
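To make the temperature dependence in (5) concrete, the short sketch below evaluates the two Arrhenius rate constants from the parameters in Table 1 at the end points of the temperature range in Fig. 2; the numerical value of the gas constant in cal/(mol K) is the only quantity not taken from the paper.

```python
import math

# Rate constants of equation (5) evaluated from the parameters in Table 1.
R_GAS = 1.987                      # gas constant, cal/(mol K)
k0 = {1: 65.6, 2: 1970.0}          # pre-exponential factors, 1/sec (Table 1)
E = {1: 10000.0, 2: 16000.0}       # activation energies, cal/mol (Table 1)

def k(i, T):
    """Arrhenius rate constant k_i = k_i0 * exp(-E_i / (R * T))."""
    return k0[i] * math.exp(-E[i] / (R_GAS * T))

for T in (700.0, 1000.0):          # temperature bounds of the control profile
    print(f"T = {T:6.1f} K:  k1 = {k(1, T):.4e} 1/s,  k2 = {k(2, T):.4e} 1/s")
```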
Fig. 2. Optimal control trajectory
Fig. 3. Optimal state trajectories (x_A and x_B)
The value of the objective function is about 0.48, which is almost the same as that reported by other researchers [8]; see Fig. 2 and Fig. 3.

5. CONCLUSIONS

A modified IDP, which utilizes DCEM as the optimization technique, is proposed for optimal control problems. The optimal control trajectory obtained by the modified IDP can guarantee finite ε-convergence. In the illustrating examples, the modified IDP successfully obtained the optimal control trajectories, and the objective function values are almost the same as those of other algorithms.

ACKNOWLEDGEMENT
This work was partially supported by the Brain Korea 21 Project and by the Korea Science and Engineering Foundation (KOSEF) through the RRC/NMR Project.

REFERENCES
[1] W. Mekarapiruk and R. Luus, Ind. Eng. Chem. Res., 39 (2000) 84.
[2] W.R. Esposito and C.A. Floudas, J. Global Optim., 17 (2000) 97.
[3] C.S. Adjiman and C.A. Floudas, Comp. Chem. Engng., 22 (1998) 1137.
[4] Y. Kim and T. Lee, ESCAPE-11, 35 (2001).
[5] M.H. Chang, Y.C. Park and T. Lee, Theories and Applications of Chem. Eng., 8 (2002) 365.
[6] R.E. Bellman, Dynamic Programming, Princeton Univ. Press, Princeton, N.J., 1957.
[7] R. Luus, Monographs and Surveys in Pure and Applied Mathematics 110 - Iterative Dynamic Programming, Chapman & Hall/CRC, 2000.
[8] C. Park and T. Lee, Chemical Engineering Communications, accepted.
Process Systems Engineering 2003
B. Chen and A.W. Westerberg (editors)
© 2003 Published by Elsevier Science B.V.
The optimal profit distribution problem for a supply chain network

Cheng-Liang Chen, Bin-Wei Wang, Wen-Cheng Lee, Hsiao-Ping Huang
Department of Chemical Engineering, Taiwan University, Taipei 10617
Abstract
A multi-product, multi-stage, and multi-period production and distribution planning model is formulated for a typical multi-echelon supply chain network to achieve multiple objectives, such as maximizing the profit of each participating enterprise, maximizing the customer service level, and ensuring fair profit distribution. A two-phase fuzzy decision-making method is proposed to attain a compromise solution among all conflicting objectives. One numerical case study is supplied, demonstrating that the proposed two-phase fuzzy intersection method can provide a better compensatory solution for multi-objective problems in a supply chain network.

Keywords
Supply chain management, fair profit distribution, multi-objective optimization
1. INTRODUCTION
In traditional supply chain management, minimizing costs or maximizing profit as a single objective is often the focus when considering the integration of a supply chain network. Recently, Gjerdrum et al. [1] proposed a mixed-integer linear programming model for a production and distribution planning problem and solved the fair profit distribution problem by using a Nash-type model as the objective function. However, directly maximizing the Nash-type objective may result in an unfair profit distribution because of the different scales of the profits. Furthermore, today's consumers demand better customer service, whether in the manufacturing or the service industry, so customer service should also be taken into consideration when formulating a supply chain system. In the traditional single-objective setting of minimizing costs or maximizing profit, however, it is difficult to quantify customer service as a monetary amount in the objective function. To solve this problem, we establish a production and distribution planning model that can distribute profits fairly and also take several performance indices, such as customer service and safe inventory level, into consideration. This turns the planning problem into a multi-objective programming problem. We then propose a modified two-phase fuzzy intersection method [2] to solve the multi-objective programming problem, so that each member of the supply chain can pursue its own maximal profit while a minimum required profit is guaranteed.
2. PROBLEM DESCRIPTION
A general multi-echelon supply chain is considered, which consists of enterprises at three different levels. The first-level enterprise is the retailer, from which the products are sold to customers subject to a given lower bound on the customer service level. The second-level enterprise is the distribution center (DC), which uses different types of transport capacity to deliver products from the plant side to the retailer side. The third-level enterprise is the plant, which batch-manufactures one product in each period. The overall problem can be stated as follows:
Given: cost parameters, manufacturing data, transportation data, inventory data, forecast customer demand and product sales prices.
Determine: the production plan of each plant, the transportation plan of each distribution
center, the sales quantity of each retailer, the inventory level of each enterprise, and each kind of cost. The target is to integrate the multi-echelon decisions simultaneously, which results in a fair profit distribution, and to increase the customer service level and safe inventory level as far as possible.

3. MATHEMATICAL FORMULATION
3.1. Parameters
We divide the parameters into two categories: the cost parameters and other parameters such as inventory capacity, transport lead time, etc., as shown in Table 1.
3.2. Variables
Binary variables, which act as policy decisions to use economies of scale for manufacturing or shipping, and other variables can be found in Table 2.
3.3. Integration of production and distribution models
Detailed formulations of the constraints and objective functions for retailer r, distribution center d and plant p, respectively, can be found in [3]. We integrate the three different level enterprises to establish a mixed-integer non-linear programming model. The multiple objectives J_s, s ∈ S, the variable vector x and the feasible search space Ω are stated in the following:

\max_{x \in \Omega} \left( J_1(x), \ldots, J_S(x) \right) =
\left(
\sum_t Z_{rt}, \; \forall r \in \mathcal{R}; \quad
\frac{1}{T}\sum_t SIL_{rt}, \; \forall r \in \mathcal{R}; \quad
\sum_t Z_{dt}, \; \forall d \in \mathcal{D}; \quad
\frac{1}{T}\sum_t SIL_{dt}, \; \forall d \in \mathcal{D}; \quad
\sum_t Z_{pt}, \; \forall p \in \mathcal{P}
\right) \qquad (1)

x = \{ \text{all continuous and binary decision variables listed in Table 2} : i \in \mathcal{I}, r \in \mathcal{R}, d \in \mathcal{D}, p \in \mathcal{P}, t \in \mathcal{T}, k \in \mathcal{K}, k' \in \mathcal{K}' \} \qquad (2)
4. FUZZY MULTI-OBJECTIVE OPTIMIZATION
Considering the uncertain nature of human judgment, it is quite natural to assume that the DM has a fuzzy goal J̃_s describing the objective J_s on an interval [J_s^-, J_s^*].
Table 1: Indices, sets, and parameters

Index/Set     Dimension     Physical meaning
r ∈ R         |R| = R       retailers
d ∈ D         |D| = D       distribution centers
p ∈ P         |P| = P       plants
i ∈ I         |I| = I       products
t ∈ T         |T| = T       periods
k ∈ K         |K| = K       transport capacity level, DC to retailer
k' ∈ K'       |K'| = K'     transport capacity level, plant to DC

Parameter     Dimension     Physical meaning
USR_i         {pd, dr, r}   Unit Sale Revenue of i, p to d, etc.
UIC_i         {p, d, r}     Unit Inventory Cost of i for p, d, r
UHC_i         {p, d, r}     Unit Handling Cost of i for p, d, r
UTC^k'        {pd}          k'-th level Unit Transport Cost, p to d
UTC^k         {dr}          k-th level Unit Transport Cost, d to r
FTC^k'        {pd}          k'-th level Fix Transport Cost, p to d
FTC^k         {dr}          k-th level Fix Transport Cost, d to r
UMC_i         {p}           Unit Manufacture Cost of i
OMC_i         {p}           Overtime unit Manuf. Cost of i
FMC_i         {p}           Fix Manuf. Cost changed to make i
FIC           {p}           Fix Idle Cost to keep p idle
FCD_i         {r}           Forecast Customer Demand of i
TLT           {pd, dr}      Transport Lead Time, p to d (d to r)
SIQ_i         {p, d, r}     Safe Inventory Quantity in p, d, r
MIC           {p, d, r}     Max inventory capacity of p, d, r
TCL^k'        {pd}          k'-th Transport Capacity Level, p to d
TCL^k         {dr}          k-th Transport Capacity Level, d to r
MITC          {d}           Max Input Transport Capacity of d
MOTC          {d}           Max Output Transport Capacity of d
FMQ_i         {p}           Fix Manufacture Quantity of i
OMQ_i         {p}           Overall fix Manufacture Quantity
MTO           {p}           Max Total Overtime manufacturing periods
Table 2: Binary variables and other continuous variables for t ∈ T

Binary variable (dimension)   Meaning when having value of 1
{pd}                          k'-th transport capacity level, p to d
{dr}                          k-th transport capacity level, d to r
{p}                           manufacture with regular time workforce
{p}                           setup plant p to manufacture i
{p}                           p changeover to manufacture i
{p}                           manufacture with overtime workforce

Continuous variable  Dimension      Physical meaning
S_t^i                {pd, dr, r}    Sales quantity of i, p to d, etc.
Q_t^k'               {pd}           k'-th level transport quantity, p to d
Q_t^k                {dr}           k-th level transport quantity, d to r
(total)              {pd, dr}       total transport quantity, p to d or d to r
I_t^i                {p, d, r}      Inventory level of i in p, d, r
B_t^i                {r}            Backlog level of i in r at end of t
D_t^i                {p, d, r}      Short safe inventory level in p, d, r
TMC_t                {p}            Total Manufacture Cost of p
TPC_t                {d, r}         Total Purchase Cost of d, r
TIC_t                {p, d, r}      Total Inventory Cost of p, d, r
THC_t                {p, d, r}      Total Handling Cost of p, d, r
TTC_t                {d; pd, dr}    Total Transport Cost of d; p to d or d to r
PSR_t                {p, d, r}      Product Sales Revenue of p, d, r
SIL_t                {p, d, r}      Safe Inventory Level of p, d, r
CSL_t                {r}            Customer Service Level of r
Z_t                  {p, d, r}      Net profit of p, d, r
For the s-th maximal objective, the DM is fully satisfied when the objective value J_s ≥ J_s^*, and the value is unacceptable when J_s < J_s^-. The original multi-objective optimization problem is thus equivalent to looking for a suitable decision that provides the maximal overall degree of satisfaction for the multiple fuzzy objectives. Under incompatible objective circumstances, a DM must make a compromise decision that provides a maximal degree of satisfaction for all of these conflicting objectives. The new optimization problem can be interpreted as the synthetic notation of a conjunction statement (maximize all objectives jointly). The result of this aggregation, D̃, can be viewed as a fuzzy intersection of all fuzzy goals J̃_s, s ∈ S, and is still a fuzzy set. The final degree of satisfaction resulting from a certain variable set, μ_D(x), can be determined by aggregating the degrees of satisfaction for all objectives, μ_{J_s}(x), s ∈ S, via specific t-norms such as the minimum or product operators. The procedure of the fuzzy satisfying approach for the multi-objective optimization problem, Eq. (1), is summarized as follows.
Step 1. Determine the ideal solution and the anti-ideal solution by directly maximizing and minimizing each objective function, respectively:

\max_{x \in \Omega} J_s = J_s^* \;\; (\text{ideal solution of } J_s, \text{ totally acceptable value}), \qquad
\min_{x \in \Omega} J_s = J_s^- \;\; (\text{anti-ideal solution of } J_s, \text{ unacceptable value}) \qquad (3)
Step 2. Define each membership function. Without loss of generality, we will adopt a linear function for all fuzzy objectives:

\mu_{J_s} =
\begin{cases}
1; & J_s > J_s^* \\
\dfrac{J_s - J_s^-}{J_s^* - J_s^-}; & J_s^- \le J_s \le J_s^* \\
0; & J_s < J_s^-
\end{cases}
\qquad \forall s \in S \qquad (4)
Step 3. (Phase I) Maximize the degree of satisfaction for the worst objective by selecting the minimum operator for fuzzy aggregation:

\max_{x \in \Omega} \mu_D = \max_{x \in \Omega} \min\left( \mu_{J_1}, \mu_{J_2}, \ldots, \mu_{J_S} \right) = \bar{\mu} \qquad (5)
Step 4. (Phase II) Considering the satisfaction of all objectives, re-optimize the problem by selecting the product operator with a guaranteed minimum degree of satisfaction for all objectives:

\max_{x \in \Omega^+} \mu_D = \max_{x \in \Omega^+} \left( \mu_{J_1} \times \mu_{J_2} \times \cdots \times \mu_{J_S} \right), \qquad
\Omega^+ = \Omega \cap \{\, \mu_{J_s} \ge \bar{\mu}, \; \forall s \in S \,\} \qquad (6)
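The following sketch illustrates Steps 1-4 on a deliberately small stand-in problem: two analytic objectives over a one-dimensional decision grid replace the supply-chain MINLP, and enumeration replaces the solver. The objective functions and the grid are assumptions made purely for illustration of the two-phase logic.

```python
import numpy as np

# Toy illustration of the two-phase fuzzy procedure (Steps 1-4 above).
# Two conflicting objectives over a 1-D decision grid stand in for the
# supply-chain model; both are to be maximized.
x = np.linspace(0.0, 1.0, 1001)
J = np.vstack([1.0 - (x - 0.2) ** 2,          # objective J1 (assumed)
               1.0 - 2.0 * (x - 0.8) ** 2])   # objective J2 (assumed)

# Step 1: ideal and anti-ideal values of each objective.
J_ideal = J.max(axis=1)
J_anti = J.min(axis=1)

# Step 2: linear membership functions clipped to [0, 1]  (equation (4)).
mu = np.clip((J - J_anti[:, None]) / (J_ideal - J_anti)[:, None], 0.0, 1.0)

# Phase I (Step 3): maximize the minimum degree of satisfaction (equation (5)).
worst = mu.min(axis=0)
lam = worst.max()                       # guaranteed minimum satisfaction level

# Phase II (Step 4): maximize the product of memberships subject to
# mu_s >= lam for every objective (equation (6)).
feasible = np.all(mu >= lam - 1e-9, axis=0)
product = np.where(feasible, mu.prod(axis=0), -np.inf)
best = int(product.argmax())

print(f"phase I  minimum satisfaction: {lam:.3f}")
print(f"phase II decision x = {x[best]:.3f}, "
      f"memberships = {np.round(mu[:, best], 3)}")
```

The design choice mirrors the text: the minimum operator fixes a floor on the worst-satisfied goal, and the product operator then trades off the remaining slack among all goals without letting any of them fall below that floor.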
5. NUMERICAL EXAMPLE
Consider a multi-echelon supply chain consisting of 1 plant, 2 distribution centers, 2 retailers, and 2 products. Numerical values of all parameters can be found in [3]. We solve the multi-objective mixed-integer non-linear programming problem by using the fuzzy approach procedure, and the results are summarized in Table 3. Table 3 shows that by selecting the minimum operator as the fuzzy intersection operator, we can get a more balanced satisfaction among all
Table 3: Results of using the minimum operator, the product operator and the two-phase method

                     minimum operator        product operator        two-phase method
Objective            Obj value   Satisf.     Obj value   Satisf.     Obj value   Satisf.
Profit r = 1           859,582     0.66        970,556     0.73        845,754     0.66
Profit r = 2         1,066,607     0.66      1,208,310     0.75      1,053,162     0.66
Profit d = 1           566,217     0.66        824,620     0.89        593,598     0.68
Profit d = 2         1,959,172     0.66      1,515,645     0.49      1,935,237     0.66
Profit p = 1         4,507,340     0.66      4,231,931     0.54      4,486,048     0.66
CSL r = 1                 0.92     0.72           0.99     1.00           0.99     1.00
CSL r = 2                 0.91     0.69           0.99     1.00           0.99     1.00
SIL r = 1                 0.63     0.67           0.91     0.97           0.88     0.94
SIL r = 2                 0.63     0.66           0.91     0.95           0.85     0.89
SIL d = 1                 0.66     0.66           1.00     1.00           0.99     0.99
SIL d = 2                 0.65     0.67           0.57     0.59           0.64     0.66
SIL p = 1                 0.65     0.66           0.77     0.79           0.65     0.66

CSL: Customer Service Level, SIL: Safe Inventory Level
objectives, where the degrees of satisfaction are all around 0.66. By using the product operator directly to guarantee a unique solution, however, the results are unbalanced, with lower degrees of satisfaction for the profit and safe inventory level of d = 2 and the profit of p = 1, while the high-performance objectives are given very high emphasis. To overcome the drawbacks of the single-phase methods, the proposed modified two-phase method combines the advantages of these two popular fuzzy intersection operators: the minimum operator is used in phase I to find the least degree of satisfaction, and the product operator is applied in phase II with the guaranteed least membership value for all fuzzy objectives as additional constraints.

6. CONCLUSION
In this paper, we investigate the fair profit distribution problem of a typical multi-echelon supply chain network. Fuzzy set theory is used to attain the compromise solutions. We propose a modified two-phase fuzzy intersection method that combines the advantages of two popular t-norms to solve the fair profit distribution problem. One case study is supplied, demonstrating that the proposed two-phase method can provide a better compensatory solution for multi-objective problems in a supply chain network.

REFERENCES
[1] J. Gjerdrum, N. Shah and L.G. Papageorgiou, Ind. Eng. Chem. Res., 40 (2001) 1650.
[2] R.J. Li and E.S. Lee, Fuzzy Sets and Systems, 53 (1993) 275.
[3] B.W. Wang, Master's thesis, Dept. of Chem. Eng., Taiwan University (2002).
Process Systems Engineering 2003
B. Chen and A.W. Westerberg (editors)
© 2003 Published by Elsevier Science B.V.
A multi-period optimization model for refinery inventory management under demand uncertainty

Hong Chen, Xiaorong He, Bingzhen Chen, Tong Qiu
(Department of Chemical Engineering, Tsinghua University, 100084, China)

Abstract: Inventory costs account for over 30 percent of supply chain costs [1]. Therefore, the study of inventory management plays an important role in supply chain optimization. The demand for oil products fluctuates greatly from season to season, so in a given period the demand may be much less than the lower processing limit or much more than the maximum throughput. To safeguard against overstocking or excessive shortages for customers, it is essential to reconcile and optimize inventories by a multi-period programming method. In this paper, a new model for the multi-period optimization of the production and inventory management of a refinery is presented. Good results are obtained using industrial data.

Keywords: multi-period, inventory management, supply chain, refinery, demand uncertainty
1. INTRODUCTION
Multi-period optimization in the chemical industry has recently received considerable attention [2]. This kind of problem involves process plants where demands vary from period to period due to market or seasonal changes. If the actual customer demand is higher than the sum of the output and the original inventory, some customers will be unable to purchase the products, resulting in a loss of potential margin and a low level of product availability [3]. Thus safety inventory and multi-period optimization can help a supply chain improve product availability. In a refinery, the outputs of the different oil products are not mutually independent, but closely correlated. For instance, when a refinery produces 500 t of diesel oil, the quantity of gasoline cannot be chosen freely but is restricted within a corresponding bound for a given crude oil. Consequently, the inventory management of a refinery should take the market prediction and the production planning into account simultaneously. The rest of the paper is organized as follows. Section 2 gives the multi-period model. Section 3 presents the results of the experiments. Finally, Section 4 presents some conclusions from the research.
2. MULTI-PERIOD MODEL
In this paper, a new model for the multi-period optimization of the production and inventory management of a refinery is presented, which integrates production planning and stock management based on the market forecast of the oil products. In the proposed model, the demanded quantities of all products are predicted from historical data, and then the production plan and the inventories are optimized simultaneously. The multi-period
model is formulated as follows:

\min \; z = \sum_{i=1}^{Cyc} \left( \sum_{j=1}^{M} \varphi_{i,j} \, c_{i,j} \, I_{i,j} + \sum_{j=1}^{N} C_{i,j} \, IS_{i,j} \right) \qquad (1)

subject to

a_{j,1}^{k} \le y_{i,j}^{k} \le a_{j,2}^{k}, \qquad k = 1,2,\ldots,K; \; j = 1,2,\ldots,J_k \qquad (2)

R_{j,1} \le r_{i,j} \le R_{j,2}, \qquad j = 1,2,\ldots,K \qquad (3)

V_{j,1} \le IS_{i,j} \le V_{j,2}, \qquad j = 1,2,\ldots,N \qquad (4)

u_{j,1} \le \beta_{i,j} \le u_{j,2}, \qquad j = 1,2,\ldots,M \qquad (5)

\Pi(r_i, y_i, IS_i, IS_{i-1}, \beta_i) = 0 \qquad (6)

P_{i,j} = f_j(r_i, y_i, \beta_{i,j}), \qquad j = 1,2,\ldots,M \qquad (7)

I_{i,j} = I_{i-1,j} + P_{i,j} - DF_{i,j}, \qquad j = 1,2,\ldots,M \qquad (8)

with i = 1, 2, \ldots, Cyc in each case.
Equation (1), the objective function, represents the inventory cost of finished stocks and semi-products and the shortage penalties of finished stocks, where

\varphi_{i,j} = \begin{cases} 1, & I_{i,j} > 0 \\ -SP, & I_{i,j} < 0 \end{cases}

SP is a given coefficient corresponding to the required level of product availability and to the ratio of potential margin to inventory cost. Equations (2)-(5) represent process constraints, such as the processing capabilities of the units, the bounds on side-draw yields and the capacities of the semi-product storages. Equation (6) represents the material balance equations. The variables y_i, r_i, IS_i, IS_{i-1} and β_i denote all y_{i,j}^k, r_{i,j}, IS_{i,j}, IS_{i-1,j} and β_{i,j} in period i, respectively. Equations (7) and (8) are state equations. The demand forecast DF_{i,j} is computed separately. The optimized inventory RI can then be computed from equation (9):
RI_{i,j} = RI_{i-1,j} + P_{i,j} - S_{i,j}, \qquad i = 1,2,\ldots,Cyc; \; j = 1,2,\ldots,M \qquad (9)
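To show how the state equations (8)-(9) and the penalized objective (1) fit together, the following sketch propagates the inventory of one finished product through a few periods and accumulates its cost term. The numerical values for production, forecast, realized demand, unit cost and the penalty SP are invented for illustration only and are not the refinery data used in the paper.

```python
import numpy as np

# Illustration of equations (1), (8) and (9): inventory propagation and the
# shortage-penalized inventory cost for a single finished product.
# All numerical data below are illustrative assumptions.
Cyc = 3                                      # number of periods
P = np.array([50000.0, 60000.0, 55000.0])    # planned output per period (ton)
DF = np.array([48000.0, 70000.0, 52000.0])   # demand forecast per period (ton)
S = np.array([47000.0, 69000.0, 53000.0])    # realized customer demand (ton)
c = 3.0                                      # unit inventory cost ($/ton)
SP = 16.0                                    # shortage penalty coefficient

I_prev, RI_prev, cost = 0.0, 0.0, 0.0
for i in range(Cyc):
    I = I_prev + P[i] - DF[i]                # equation (8): forecast-based inventory
    phi = 1.0 if I > 0 else -SP              # shortage penalty switch of equation (1)
    cost += phi * c * I                      # finished-stock term of objective (1)
    RI = RI_prev + P[i] - S[i]               # equation (9): inventory after real sales
    print(f"period {i + 1}: I = {I:9.1f} t, RI = {RI:9.1f} t")
    I_prev, RI_prev = I, RI

print(f"inventory-plus-penalty cost over {Cyc} periods: {cost:,.0f}")
```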
Many of the constraints are upper and lower limits on variables of the model, so a Genetic Algorithm combined with some heuristic rules can be used to solve the problem.

3. PRACTICAL APPLICATION
The process technique data, outputs and sales data come from a refinery. The three-period case model includes four finished stocks, such as gasoline, diesel oil, etc., and four
semi-products, such as heavy cut, straight-run diesel oil, etc. The horizon is one month per period. It is assumed that the maximum prediction error of the current period is 5 percent and that of the later periods is 10 percent. For each multi-period optimization, only the result of the first period is saved. That is, after the first multi-period optimization for periods 1 to 3, the result of period 1 is saved and RI_1 is calculated using equation (9); then the next calculation, for periods 2 to 4, continues, and so on. The optimal outputs and the customer demand of the finished stocks from period 1 to period 9 are shown in Fig. 1.
Fig. 1. Customer Demand and Optimal Outputs. P1: diesel oil; P2: gasoline; P3: liquefied petroleum gas; P4: fuel oil. CD: Customer Demand (Ton); Out: Optimal Outputs (Ton).

Fig. 1 shows that the optimal outputs can safeguard against excessive shortages for customers; the multi-period optimization makes it possible to prepare for the mid-season demand in advance. However, shortages for customers cannot be prevented completely, on account of the prediction error. As shown in Table 1, the inventory of gasoline in period 7 is -2805.95 Ton, which is about -3.5 percent of the real customer demand in that period. The industrial and optimal inventories are shown in Table 1:
Table 1. Comparison of Inventories

         Inventory of P1 (t)    Inventory of P2 (t)    Inventory of P3 (t)    Inventory of P4 (t)
Period    Ind.       Opt.        Ind.       Opt.        Ind.       Opt.        Ind.      Opt.
1         4120.45    16700.94    23366.2    5156.26     2775.15    2818.28     2509      2397.97
2         4972.6     32447.44    22237.8    13243.5     2820.3     8484.07     3326.5    2184.66
3         47267.3    71695.72    55577.6    44957.4     4050.9     11182.03    4720      4867.54
4         43879.4    54826.68    56217.2    31256.47    2041.2     7047.47     5589.5    2989.21
5         30496.85   32159.61    45137.3    10035.97    3479.7     7413.62     3509      644.06
6         35384.35   26281.39    52185.9    7061.34     2446.5     2913.18     4089      1785.63
7         30291      23134.44    41507.7    -2805.95    2219.7     2993.37     5675      4741.2
8         30291      17424.78    42205.8    1214.59     1173.9     618.03      5065      3752.88
9         30709.6    10021.45    39984.1    10239.8     1446.9     3689.3      5179.5    3316.69
Sum       257412.6   284692.5    378419.6   203521.7    22454.25   47159.35    39662.5   26679.84

Here Ind. denotes the industrial inventories and Opt. the optimal inventories. P1-P4 are the same as in Fig. 1. When the Sum of the inventory of P2 is calculated, the inventory of
P2 in period 7, i.e., -2805.95, is first multiplied by φ_{7,3} (= 16). Assume that the inventory costs of P1, P2 and P4 are $3/Ton and that of P3 is $3.4/Ton. Then the total inventory cost in industry is

(257412.6 + 378419.6 + 39662.5) × 3 + 22454.25 × 3.4 = 2.10 × 10^6

and the total inventory cost of the optimization is

(284692.5 + 203521.7 + 26679.84) × 3 + 47159.35 × 3.4 = 1.71 × 10^6.

The saving in inventory cost is 2.10 × 10^6 - 1.71 × 10^6 = 0.39 × 10^6, and 0.39 × 10^6 / 2.10 × 10^6 = 18.6%. That is, about $390,000 of inventory cost can be saved in 9 months through the multi-period optimization.

4. SUMMARY AND CONCLUSIONS
In this paper, we proposed a multi-product, multi-period model for refinery inventory management under demand uncertainty. In this model, the decision variables are all process variables, such as y, r, IS, β, etc., so the optimal result can be used in process control easily; that is, the production planning is taken into account simultaneously. One practical example is supplied, demonstrating that the model is effective in reducing the total inventory cost and satisfying the customer demand.

Nomenclature
Cyc       number of optimization periods
M         number of finished products
N         number of semi-products
K         number of units
φ_{i,j}   coefficient of shortage penalty
c_{i,j}   inventory cost of finished stock j in period i, Yuan/ton
C_{i,j}   inventory cost of semi-product j in period i, Yuan/ton
I_{i,j}   inventory of finished stock j in period i according to demand forecast
β_{i,j}   blending ratio of finished stock j in period i
P_{i,j}   output of finished stock j in period i
IS_{i,j}  inventory of semi-product j in period i
DF_{i,j}  demand forecast of finished product j in period i
y_{i,j}^k sum of yields above side-draw j of unit k in period i
r_i^k     feed rate of unit k in period i
S_{i,j}   customer demand of finished product j in period i
REFERENCES
[1] Sui Minggang and Wei Yi, Summary: Study of Supply Chain Inventory Costs and its Development Trend, Logistics Technology, 2000, 104(5): 28-30.
[2] A. Ortiz-Gómez, V. Rico-Ramirez and S. Hernández-Castro, A mixed-integer multiperiod model for the planning of oilfield production, Computers and Chemical Engineering, 26 (2002), 703-714.
[3] Sunil Chopra and Peter Meindl, Supply Chain Management: Strategy, Planning, and Operation, Beijing: Tsinghua University Press (photocopy authorized by Prentice Hall Inc.), 2001, 179-185.
Process Systems Engineering 2003
B. Chen and A.W. Westerberg (editors)
© 2003 Published by Elsevier Science B.V.
Multi-Objective Decision Processes under Uncertainty: Applications, Problem Formulations and Solutions

Lifei Cheng, Eswaran Subrahmanian, Arthur W. Westerberg*
Department of Chemical Engineering and the Institute for Complex Engineered Systems, Carnegie Mellon University, Pittsburgh, PA 15213-3890, USA

Abstract
Operating in a changing and uncertain environment, firms must make strategic and operational decisions while trying to satisfy many conflicting goals. Problems of this type arise in many important decision contexts in various industries and pose challenges for both practitioners and researchers due to their complexity. This paper is a contribution to the creation of a general framework for constructing and solving proper formulations for this class of problems. We formulate a firm's decision making in the presence of uncertainty as a multi-stage decision process. Decision makers periodically review the state of the system, which includes the internal process and external environment, and choose decisions according to certain decision rules. This formulation, along with the definition of the multiple optimality criteria, such as expected profit and risk exposure, leads to a multi-objective Markov decision problem, in which one searches for decision policies that optimize multiple objectives. We investigate two major methodologies from different research streams to formulate and solve this class of problems: optimal control and stochastic programming. We show that the two methodologies are equivalent in that the optimal decisions found by stochastic programming are the same as the corresponding decisions prescribed by the optimal policy found by optimal control. Both solution approaches suffer from the "curse of dimensionality", but in different ways: the former has an immense state space while the latter has a large sample space. We discuss and compare the complexity and efficiency of both methods. We examine approximation schemes for each to allow one to approximately solve large-scale realistic problems, which are computationally prohibitive otherwise. Finally, we propose and illustrate guidelines to aid in selecting the more appropriate approach for a specific problem.
Keywords: Decision under uncertainty; Multiple criteria; Markov decision problem; Stochastic optimal control; Multi-stage stochastic programming; Curse of dimensionality; Approximation approaches

1. INTRODUCTION
Faced with a changing and uncertain environment, firms must make decisions everyday at both the strategic level and operational level. For instance, depending on the firm's budget planning process, it may decide to expand capacity once a year. During each month, the firm must decide how much to produce to replenish the inventory and satisfy customers' demands. In this paper, we are particularly concerned with a class of decision problems in which: (a) Decision makers are faced with the life cycle context, in which the external environment is rapidly changing over time and involves a substantial amount of uncertainty; (b) They must make decisions at different times and levels, and these decisions must be coordinated because of the interconnections among them; (c) Decision makers have multiple conflicting goals in their minds. For instance, decision makers would like to maximize expected profit while minimizing risk exposure. The difficulty lies in the conflicts (at least partly) or incommensurability among these objectives.
Problems of this type are called sequential decision making under uncertainty and arise in various decision contexts in different industries. Applications include capacity planning (Eppen et al. 1989, Rajagopalan et al. 1998 and Eberly and Van Mieghem 1997), inventory control (Porteus 1971, Kapuscinski and Tayur 1998) and other situations where decisions are made in stages in response to uncertainty that evolves over time. Generally speaking, there are two modeling approaches in the Operations Research discipline to address the problem of "sequential decision under uncertainty": one is "multistage stochastic programming" (Birge 1997) and the other is "stochastic optimal control" (Bertsekas 1995). Our aim in this paper is to develop a general framework for formulating this class of problems and efficient solution strategies for solving them. In this paper, we formulate this class of problems as generic multi-objective Markov decision processes. The prohibitive computational requirement of a rigorous dynamic programming algorithm necessitates approximation approaches. We develop a simulation-based optimization framework and a multistage stochastic programming model to solve large-scale problems.

2. A MOTIVATING EXAMPLE
In this section, we provide a small problem that we will carry from formulation through solution to illustrate the issues. Consider a chemical company that is designing a reaction process operating over a time horizon of four years. A catalyst α is already available, while a better catalyst β is still under development and is expected to be available in the near future. There are two types of uncertainties with which the company is concerned: the future product demand and the arrival time of the new catalyst β. Based on historic data, the company predicts that the product demand in each month follows a normal probability distribution with a growing trend. The time when the new catalyst β becomes available is expected to follow a discrete probability distribution. The sales price for the product is likely to drop once β becomes available, which would reduce the cash flow of an existing process using α. We also model the downward pricing pressure and investment losses due to the new catalyst β by letting the purchase price and salvage value of the reactor depend on the current best catalyst. Operating in such an uncertain and changing environment, the company must seek solutions to the capacity and production planning such that various objectives can be satisfied. For example, the company would like to maximize expected profit while minimizing the risk involved.
3. PROBLEM FORMULATION
We formulate the problem as a finite-horizon discrete-time Markov decision process, in which decision makers periodically make decisions based on the information available.
3.1. Markov decision processes
We consider a firm that employs different resources or technology types to produce a single product over a time horizon of N years. The firm has the option to change the
capacity level of each resource at the beginning of each year t ∈ {1, ..., N}. The firm reviews the inventory level and makes production plans at the beginning of each month τ ∈ {1, ..., M}. The following summarizes the sequence of events during each year: (a) At the beginning of each year t < N, the firm chooses a new capacity level K_t ∈ ℝ_+. (b) At the beginning of each month τ during this year, based on the current inventory level I_{tτ}, the firm decides the level of production needed to replenish the inventory. Demand in month τ arrives and is satisfied to the extent possible. Excess inventory is carried to the next month and unsatisfied demand is backlogged. (c) In the last year N, all remaining resources are salvaged and products are scrapped. The capacity and production planning become a decision process, in which decision makers periodically review the state of the system x_{tτ}, e.g., the capacity level and inventory level, and choose capacity and production decisions u_{tτ} according to control policies μ_{tτ} that prescribe the control actions for each possible state. The sequence of policies at each stage, π = {μ_10, μ_11, ..., μ_NM}, is referred to as a strategy. The decision process described above is a Markov decision process. The decision process under a given strategy then becomes a controlled Markov chain, in which the system transits from state to state according to given transition probabilities.
3.2. Multiple optimality criteria
At each stage of the decision process, the capacity and inventory levels evolve after the decision is implemented, and the firm receives a profit depending on the current state, the decision choice and the demand realization. In other words, there is a reward sequence (often referred to as a Markov reward process) associated with the trajectory (state sequence) of the Markov chain: (a) capacity related contributions v_{t0}; (b) production related contributions v_{tτ}; (c) terminal value v_{N+1,0}. The total reward, discounted to the beginning of the first year, given that the initial state is (K_0, I_0) and strategy π is applied, can be expressed as
v_\pi(K_0, I_0) = \sum_{t=1}^{N} \alpha^{(t-1)M} v_{t0} + \sum_{t=1}^{N} \sum_{\tau=1}^{M} \alpha^{(t-1)M+\tau} v_{t\tau} + \alpha^{NM} v_{N+1,0} \qquad (1)
It is the decision maker's incentive to choose a strategy such that these outcomes are as "good" as possible, or optimal. This necessitates performance measures, or optimality criteria, to compare alternative decisions. In this work, we consider two optimality criteria: the expected total discounted profit and the expected downside risk.
(a) Expected profit: F^1_\pi(K_0, I_0) = E[v_\pi(K_0, I_0)]   (2)
(b) Expected downside risk: F^2_\pi(K_0, I_0) = E[\max(\Theta - v_\pi(K_0, I_0), 0)], where Θ denotes a profit target   (3)
The problem then becomes a multi-objective Markov decision problem, which searches for Pareto optimal policies that optimize the multiple objectives, i.e., maximize the expected profit and minimize the downside risk:
\max_{\pi} \; \left[ F^1_\pi(K_0, I_0), \; -F^2_\pi(K_0, I_0) \right] \qquad (4)
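As a small illustration of how the two criteria (2) and (3) can be estimated for one fixed strategy, the sketch below draws profit realizations by Monte-Carlo sampling. The lognormal profit model, the target level and the sample size are assumptions made only to keep the fragment self-contained; they are not part of the formulation above.

```python
import numpy as np

# Monte-Carlo estimates of expected profit (2) and expected downside risk (3)
# for one fixed strategy.  The profit distribution and the target level are
# illustrative assumptions.
rng = np.random.default_rng(0)

def sampled_profit(n):
    """Stand-in for realizations of v_pi(K0, I0) under demand/catalyst uncertainty."""
    return rng.lognormal(mean=4.0, sigma=0.5, size=n)

target = 50.0                          # profit target used in the downside-risk criterion
v = sampled_profit(100_000)

expected_profit = v.mean()                          # criterion (2)
downside_risk = np.maximum(target - v, 0.0).mean()  # criterion (3)

print(f"expected profit        F1 ~ {expected_profit:8.2f}")
print(f"expected downside risk F2 ~ {downside_risk:8.2f}")
```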
4. SOLUTION STRATEGIES
In this section, we discuss several methodologies to formulate and solve this class of problems, methods that come primarily from two research streams in Operations Research, i.e., optimal control and stochastic programming.
4.1. Stochastic optimal control
The optimal control approach finds the control policies, e.g. capacity and inventory policies, that prescribe a control decision for each possible state the system could occupy, such that the multiple objectives are optimized, as shown in Figure 1. We developed a rigorous multi-objective dynamic programming algorithm in a previous work to propagate the Pareto optimal frontier recursively backward in time (Cheng et al. 2001).
4.2. Multistage stochastic program
Multi-stage stochastic programming models allow decisions in each time stage that are based upon the uncertainty realized so far. The uncertainty information is modeled as a multi-layered scenario tree, as illustrated in Figure 2. Each scenario represents a sequence of realizations of the random variables. At each node, the decision makers must make a decision based on past observations. The optimization problem is to make decisions corresponding to the nodes on the scenario tree so as to optimize the multiple objectives.
Fig 1. Stochastic optimal control
Fig 2. Multi-stage stochastic programming
4.3. Comparisons between two methods
We consider a two-period inventory control problem as depicted in Figure 2. The decisions sought in the optimal control approach are the decision policies, which prescribe the decision one should select as a function of the state, i.e., u_t = μ_t(x_t), where the decision is the production and the state is the current inventory level. In a multistage stochastic program, the decisions at each stage depend on the past observations of the random variables, i.e., u_t = u_t(D_1, ..., D_{t-1}), where D_t is the realization of demand in period t. In the limit, the decisions evaluated from the optimal policy for the state corresponding to each node on the scenario tree should be the same as the optimal
decision for that node found by the multi-stage stochastic program, assuming that the optimal solution is unique (a rigorous proof will be provided in the full version of the paper). Therefore, these two methods are essentially equivalent from a mathematical perspective. However, the control policy μ_t is defined over the entire state space, x_t ∈ S_t, i.e., all possible inventory levels. The state space can be prohibitively large when the states, the inventory levels, are continuous and multi-dimensional, which is often called the "curse of dimensionality." On the other hand, multistage stochastic programming defines a decision variable for each node on the scenario tree. The size of the problem can become explosively large when the number of realizations of each random variable and the number of periods are large. Therefore, multi-stage stochastic programming also suffers from the "curse of dimensionality," resulting from a large sample space. We shall see that these methods are not competitive; instead, they are complementary, having different favorable and unfavorable features. Problem-specific requirements may imply that one approach is preferable. In general, for long-term strategic planning problems, where the knowledge level of future uncertainty is relatively low and decisions are made less frequently, it is reasonable to develop a stochastic programming model based on a simple scenario tree. On the other hand, for operational planning problems, in which we have knowledge of the underlying probability distributions estimated from available data and the decisions must be made frequently, e.g., daily, the optimal control approach works more effectively, as it finds the control policies for the entire state space and does not require an explosively large scenario tree. We are currently working on combining these two approaches to solve large-scale problems, using the advantages of both to find the optimal first-stage decisions in a computationally efficient manner.

5. RESULTS AND DISCUSSIONS
We developed a simulation-based optimization framework to find the Pareto optimal curve (Cheng et al. 2002). A multi-objective evolutionary algorithm is exploited to solve the multi-objective optimization. The algorithm maintains a population of solutions at each iteration that converges to the Pareto optimal frontier while preserving a diverse set of non-dominated solutions. As illustrated by Figure 3, the solutions are improved at each generation and converge to the Pareto optimal frontier after about 500 generations in a single run.
Fig 3. Results from simulation-optimization (convergence to the Pareto optimal frontier)    Fig 4. Results from stochastic programming (Pareto optimal curve of expected profit versus expected risk)
We also developed a multi-objective multi-stage stochastic program to solve a similar problem, in which the demand in each period has two realizations, sampled from an underlying probability distribution. We use the epsilon-constraint method to solve the multi-objective optimization by successively tightening the constraint on the expected downside risk while maximizing the expected profit only. Figure 4 depicts the whole spectrum of the Pareto optimal frontier, which provides the maximum information about the best potential for system performance.
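A minimal sketch of the epsilon-constraint sweep used to trace such a frontier is given below. The candidate policies are represented here by a random cloud of (risk, profit) points rather than by the stochastic program itself; that simplification, and all numbers in it, are assumptions made only to keep the illustration self-contained.

```python
import numpy as np

# Epsilon-constraint sweep: maximize expected profit subject to
# expected downside risk <= eps, for a series of eps values.
# The (risk, profit) pairs below stand in for evaluations of candidate
# policies; in the paper each point would come from the stochastic program.
rng = np.random.default_rng(1)
risk = rng.uniform(10.0, 25.0, size=500)
profit = 30.0 + 1.5 * risk + rng.normal(0.0, 2.0, size=500)  # assumed trade-off

for eps in np.linspace(risk.min() + 0.5, risk.max(), 6):
    feasible = risk <= eps
    best = int(np.argmax(np.where(feasible, profit, -np.inf)))
    print(f"eps = {eps:5.2f}:  best profit = {profit[best]:6.2f} "
          f"at risk = {risk[best]:5.2f}")
```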
6. CONCLUSIONS
This paper is concerned with the study of a class of problems: multi-objective decision making under uncertainty. Some important applications belonging to this problem class are capacity planning, inventory control, logistics and supply chain optimization. We formulated this class of problems as generic multi-objective Markov decision processes, in which we seek Pareto optimal policies that optimize multiple objectives. We have investigated several methods from different research streams, i.e., optimal control and stochastic programming, to solve these problems. Rigorous solutions in both methodologies are computationally prohibitive due to the "curse of dimensionality": optimal control suffers from a large state space while stochastic programming suffers from a large sample space. We proposed approximation approaches for each to solve large problems. We developed a simulation-based approach to find parameterized policies. We also developed a multi-stage stochastic programming model, defined on a scenario tree obtained by random sampling from the underlying probability distributions, and exploited the epsilon-constraint method to find all solutions in the Pareto optimal set. We showed that both approaches are mathematically equivalent, i.e., they result in the same optimal solutions. The distinct features of both methods determine which one is more favorable and efficient for a specific problem. In general, multi-stage stochastic programming is more suitable for solving long-term strategic planning problems, while stochastic optimal control works better for short-term operational planning problems.

References
1. Bertsekas, D.P., Dynamic Programming and Optimal Control, Vol. I, II, Athena Scientific (1995).
2. Birge, J.R. and F. Louveaux, Introduction to Stochastic Programming, Springer (1997).
3. Eberly, J.C. and J.A. Van Mieghem, "Multi-factor Dynamic Investment under Uncertainty," J. Economic Theory, 75 (1997), 345-387.
4. Cheng, L., E. Subrahmanian, A.W. Westerberg, "Design and Planning under Uncertainty: Issues on Problem Formulations and Solutions," to appear in Computers and Chemical Engineering, 2001.
5. Cheng, L., E. Subrahmanian, A.W. Westerberg, "Multi-Objective Decisions on Capacity Planning and Production-Inventory Control," Working Paper, 2002.
6. Eppen, G.D., R.K. Martin and L. Schrage, "A Scenario Approach to Capacity Planning," Oper. Res., 37, 4 (1989), 517-527.
7. Kapuscinski, R. and S. Tayur, "A Capacitated Production-Inventory Model with Periodic Demand," Oper. Res., 46, 6 (1998), 899-911.
8. Porteus, E.L., "On the Optimality of Generalized (s,S) Policies," Management Sci., 17, 7 (1971), 411-426.
9. Rajagopalan, S., M.R. Singh and T.E. Morton, "Capacity Expansion and Replacement in Growing Markets with Uncertain Technological Breakthroughs," Management Sci., 44, 1 (1998), 12-30.
Process Systems Engineering 2003
B. Chen and A.W. Westerberg (editors)
© 2003 Published by Elsevier Science B.V.
Simulation Based Approach for Improving Heuristics in Stochastic Resource-Constrained Project Scheduling Problem

Jaein Choi,a Jay H. Leea and Matthew J. Realffa,*
aCenter for Product and Process Systems Engineering, School of Chemical Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332
*[email protected]

The Resource-Constrained Project Scheduling Problem (RCPSP) is a significant challenge in highly regulated industries, such as pharmaceuticals and agrochemicals, where a large number of candidate new products must undergo a set of tests for certification. In this study, we propose a novel way of addressing the uncertainties in the RCPSP, including the uncertainties in task durations and costs as well as the uncertainties in the results of tasks (success or failure), by using a discrete time Markov chain. The resulting stochastic optimization problem can be solved using dynamic programming, but the computational cost renders this impractical. Instead, we develop a new way to combine heuristic solutions through dynamic programming in state space. As a result, a near-optimal policy for the problem, which can take account of information states in decision making and improve the heuristics, is obtained rather than a fixed solution obtained by the previous MILP formulations.

1. Introduction
One of the greatest challenges in highly regulated industries, such as pharmaceuticals and agrochemicals, is the process of selecting, developing, and efficiently manufacturing new products that emerge from the discovery phase. A large number of candidate new products in the agricultural and pharmaceutical industries must undergo a set of tests related to safety, efficacy, and environmental impact to obtain certification. In general, the value of a product in the market decreases as the time to introduction of the product increases, due to incoming new competitive products and fixed patent periods. Hence a company has to manage its various resources (manpower, lab space, capital, pilot facilities, etc.) to ensure the best return on its new product pipeline, with the added complication that the outcome of tasks is uncertain. This type of problem is called the Resource-Constrained Project Scheduling Problem (RCPSP), in which the probability of success of each task is embedded. Besides the uncertainty in the successful outcome of a task, there are several additional stochastic parameters in real applications of the RCPSP, such as duration uncertainty and resource (cost) requirement uncertainty. Most of the previous work [1-5], including an extensive review on the RCPSP [6], has considered only a subset of the uncertainties in the problem. A notable exception is [7,8], where a rich set of uncertainties in the problem is addressed within a simulation and optimization framework. In this study, we introduce a novel way of addressing the uncertainties in the RCPSP, based on a discrete time Markov chain, which enables us to model the probabilistic correlation of the uncertain parameters observed from the historical data of a series of tasks in a project. (For example, the probability of success of a previous task may not be independent of those for current or future tasks.) Furthermore, a novel solution method, Dynamic Programming in a subset of the states, developed and illustrated in [9], is tailored to the problem to obtain high quality solutions.

2. Problem Description
The problem inputs are a set of M potential products that are in various stages of the company's R&D pipeline. Each potential product is required to pass a series of tests. For each task in a project, several possible parameter modes are given. A parameter mode is defined as the set of parameter values for the duration, the cost and the probability of success of the task. These values may represent the actual values or the mean values for the parameters. Transition probabilities are then specified among the parameter modes for successive tasks. Furthermore, only limited resources are available to complete the tasks. In the exemplary formulation studied in this paper, the resources are represented as different Laboratories (Lab.) where the tasks can be performed. The reward of each project is introduced as an exponentially decaying function of time, i.e. the reward is R(1 - λ)^k at time k, where R is the discrete-time reward at time k = 0 and 0 < λ < 1 is a discounting factor of the reward with time.

3. Dynamic Programming Formulation
A DP formulation requires the definition of the state, the action (decision), the state transition rules, and the objective function (cost-to-go).
3.1. Definition of State
Consider an RCPSP with M projects and N types of available resources (Laboratories). The following definition of the state X will be used:

X = [s_1, s_2, \ldots, s_M, t_1, t_2, \ldots, t_M, z_1, z_2, \ldots, z_M, L_1, L_2, \ldots, L_N, k]^T \qquad (1)
In equation (1), s_i for i = 1, 2, ..., M represents the current status of project i: which tasks are finished (succeeded or failed) and which one is on-going. Because a finite number of tasks is involved in a project, s_i can be represented as an integer variable. For example, there are 10 possible states (circled numbers) in a project with 3 tasks, as shown in Figure 1; s_i = 4, for instance, represents that task I1 has been successfully finished.
Figure 1. Possible States of a Project with 3 Tasks

t_i for i = 1, 2, ..., M represents how long the on-going task in project i has been performed so far. z_i for i = 1, 2, ..., M represents the information state of project i. The other state variables, L_j for j = 1, 2, ..., N, represent the binary status of the N resources, i.e., whether they are occupied (0) or available (1) at the current time. Finally, the time k is added as a state variable in order to account for the time-varying value of the reward of each project.
3.2. Decision
With the state defined as in equation (1), the decision (action) U can be defined as in equation (2):
U = [\delta_1, \delta_2, \ldots, \delta_M]^T \qquad (2)
δ_i is a binary variable which represents whether or not to start a task in project i, for i = 1, 2, ..., M. A decision can be made only when an appropriate resource is available, that is, L_j = 1 for some j = 1, 2, ..., N. Otherwise, the decision remains the null vector, U = [0, 0, ..., 0]. Because of the resource constraint, at time k the number of available resources must be equal to or greater than the number of tasks selected to be performed, i.e., \sum_{j=1}^{N} L_j \ge \sum_{i=1}^{M} \delta_i.
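The state vector of equation (1), the decision vector of equation (2) and the resource-availability condition above can be represented directly in code. The sketch below uses illustrative field names and a small three-project, two-laboratory instance, all of which are assumptions, to check whether a proposed decision respects Σ_j L_j ≥ Σ_i δ_i.

```python
from dataclasses import dataclass
from typing import List

# Direct representation of the state X = [s, t, z, L, k] of equation (1)
# and the decision U = [delta_1, ..., delta_M] of equation (2).
# Field names and the small instance below are illustrative assumptions.

@dataclass
class State:
    s: List[int]    # status of each project (which tasks finished / on-going)
    t: List[int]    # elapsed duration of the on-going task of each project
    z: List[int]    # information (parameter-mode) state of each project
    L: List[int]    # 1 if laboratory j is available, 0 if occupied
    k: int          # current time

def feasible(state: State, delta: List[int]) -> bool:
    """Resource constraint: sum_j L_j >= sum_i delta_i."""
    return sum(state.L) >= sum(delta)

# Three projects, two laboratories: Lab 1 free, Lab 2 occupied.
x = State(s=[1, 4, 0], t=[2, 0, 0], z=[1, 2, 0], L=[1, 0], k=5)

print(feasible(x, [0, 1, 0]))   # start one task  -> True
print(feasible(x, [1, 1, 0]))   # start two tasks -> False (only one lab is free)
```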
3.3. State Transition Rules
In a discrete time system, the state at time k + 1, X(k + 1), can be derived from the state at time k, X(k), and the control action (decision) at time k, U(k), as shown in equation (3):
X(k + 1) = f(X(k), U(k)) \qquad (3)
The state transition rules for the problem are given by a procedure rather than as an explicit function f.
1. State Transition Rule: when L_j(k) = 0 for all j = 1, 2, ..., N
In this case, only the "null" decision, U(k) = [0, 0, ..., 0], can be made, because all the resources are occupied with the current on-going tasks. The state variables t_i(k), for i = 1, 2, ..., M, become t_i(k + 1) = t_i(k) + 1 if the corresponding tasks are still going on at time k + 1. The information state variables z_i(k), for i = 1, 2, ..., M, remain the same except when an on-going task is completed: if any task in process is completed at time k + 1, the corresponding z_i(k + 1) is updated to reflect the new information. The state variables for resource availability, L_j(k) for j = 1, 2, ..., N, also remain the same at time k + 1, except when one of the on-going tasks is completed at time k + 1.
2. State Transition Rule: when L_j(k) ≠ 0 for some j = 1, 2, ..., N
In this case, the state evolves according to the decision, U(k) ≠ [0, 0, ..., 0], as well as the current state X(k). Depending on the availability of resources at time k, L_j(k) ≠ 0, the decision U(k) initiates a new task at time k. At the same time, t_i(k) = 0 evolves to t_i(k + 1) = 1 as the corresponding task is started at time k. The information state evolves in the same manner as explained in case 1.
3.4. Objective Function: Cost-to-Go
The objective of the RCPSP is the maximization of the final reward of the problem after finishing all projects. This objective can be translated into a 'cost-to-go' value, which, in the DP context, represents the expected cost to be incurred from the current state to the terminal state. As described in Section 2, the value of the reward of each project decreases with time; this reward decrease can be considered as an increase of the cost-to-go. Therefore, the 'cost-to-go' J(X(k)) at the current state X(k) is defined as follows:
J(X(k)) = E{ (future cost to complete all remaining projects, including the discounts of the rewards to be incurred) - (current value of the rewards for the unfinished projects, if the corresponding project is successfully completed) }   (4)
To obtain initial values of the cost-to-go in equation (4), simulations must be performed with the suboptimal heuristics introduced in Section 4, and the cost-to-go values must be evaluated for all points of the state trajectories visited by the heuristics.
Policies : Heuristics
To apply the algorithmic framework developed in [9], developing reasonable heuristics for the problem is very important. 4.1. M o t i v a t i n g E x a m p l e Let us consider a small size example with 3 projects and 2 types of resource, L a b o r a t o r y 1(Lab1) and Laboratory 2(Lab2). For each project, the following underlying Markov chain and uncertain p a r a m e t e r sets are given to describe uncertainties in duration, cost and success probability of the tasks in the project. One Markov chain is assigned for each project, with finite number of states as shown in Figure 4.1. All rewards of the projects decrease 2% of current value per week, hence R(k) - R(0)(1 - 0 . 0 2 ) k. 4.2. H e u r i s t i c 1 : H i g h S u c c e s s P r o b a b i l i t y T a s k F i r s t In RCPSP, the result(success or failure) of the task is a very i m p o r t a n t factor affecting affects the final reward as well as the remaining part of the scheduling solution. Decisions are required only case of resource conflicts, when a higher n u m b e r of tasks are ready to be performed t h a n the n u m b e r of available resources. Heuristic 1 resolves conflicts by choosing the task with the highest success probability to be performed first. 4.3. H e u r i s t i c 2 : S h o r t D u r a t i o n T a s k F i r s t One simple way to increase the final reward of the projects in R C P S P is to finish the projects as quickly as possible in order to minimize the reward decrease with time. Heuristic 2 considers this time value of the project in a greedy way by performing the task with shortest duration first in case of resource conflict.
442 I Task 1 I Lab1 Duration 111
Cost 111
Success Prob. 11 I
Project 1
.~
I
I Task2 0.3
I Task3 I L.~X
/
/.,,.
Reward 1 = 5000
Duration 211
Cost 211
Success Prob. 211
0,2
Duration 221 ~ Cost 221 Success Prob. 221
I Task 1 I '"~ Duration 112
Cost 112
7 ~
x
~
0.2
I Task 2 I L~
0.2
Duration 122 ~ Cost 122 o2 Success Prob. 122 N~.t
Success Prob, 11
Project 2
Duration 231 Cost 231 Success Prob. 231
I Task 3 I'~' = ~ t
Duration 132 Cost 132 Success Prob. 132
0.
Reward 2 = 7000
Duration 212
Cost 212
Duration 312
I Task, I Lob' Cost t t3
03 0~2
~_~____~
Duration 232 Cost 232
Duration 3 2 2 r "~\ Duration 3 3 2 Cost 322 ~.7 "~ Cost 332 Success Prob. 322 ~ . - , Success Prob. 332 0.t
Cost 312
Duration I t3
Duration 222 Cost 222
. . . .
Success Prob. 31
Project 3
~=
Duration 121 o3 Duration 131 Cost 131 Cost 121 #n'~ ' '- / "~ Success Prob, 121 v . . ~ . / Success Prob. 131
I Task2 IL~ = ~
Ou[ation 213 Cost 2t 3
Success Prob. 2t 3
Durationt23 Cost 123
Duration 223 Cost 223
.......
I Task3 I Lab2
__ 03 ~ \02
~
Durationt33 Cost 133
0~'5 \ / ~ Duration233 ~ Coat 233 ............. 0. 5
__ 03 ~ \02
I Task, It"b2 .m
Duration143 Cost t43
Duration 243 Cost 243
~
.............
Reward 3 = 13000 Duration 313
Cost 313
Success Prob. 313 ~ p . ~ - 4 p
Duration 323
Cost 323
/0(.10
Success Pmb. 323 ~
~
Duration 3,33
Cost 333
0,~ /~.10
~1
9
Duration343 Cost 343
Success Prob. 333 .(..B,4,@..~.-.~'SuccessProb. 343
Figure 2. Uncertain Parameter Changing for Project 1,2 and 3 in the Motivating Example
4.4. Heuristic 3: High Reward Project First
Heuristic 3 performs the tasks of the project with the highest reward prior to the other tasks. This is a greedy decision aimed at collecting the high reward quickly, with the smallest reward decrease. This heuristic may work well if the project with the high reward is completed successfully, but if the other projects are delayed too long, their rewards can decrease significantly.

5. Simulation-Based Learning: Improving the Heuristics
The algorithmic framework [9] is applied to this problem to obtain a (near-)optimal policy. The general steps of applying the framework are similar to those shown in [9]:
1. Heuristic simulation
2. Identification of the subset of the states and the first cost-to-go approximation
3. Bellman iteration in the subset
4. On-line decision making.

5.1. Heuristic Simulation
The purpose of heuristic simulation is two-fold. First, the simulation is performed in order to obtain a meaningful subset of the states for the 'DP in the subset of the states'. Obtaining a reasonable subset is
critical for solving the problem, because DP over the entire state space is computationally infeasible for the given problem. Second, by simulation we can obtain the initial 'cost-to-go' values for the states in the subset, which will be used in the Bellman iteration step. For the simulation, 10,000 cost-model realizations are performed according to the underlying Markov chain. The heuristics introduced in section 4 were used for the simulation. The simulation results are summarized in Table 1.

Table 1: Simulation Results: Performance of the Heuristics for 10,000 Realizations
                               Heuristic 1    Heuristic 2    Heuristic 3
Mean Value of the Solutions      3375.44        3391.63        3515.60
Max. Value of the Solutions     14262.69       14262.69       15185.86
Min. Value of the Solutions     -3300.00       -3300.00       -3300.00

According to the results shown in Table 1, Heuristic 3 is better than the others in terms of overall performance over the 10,000 realizations. However, none of the heuristics is absolutely dominant, as shown in Table 2.

Table 2: Infinite Norm of the Solution Difference Between Pairs of Heuristics through 10,000 Realizations
(pairwise infinite-norm differences: 1125.818, 2201.256, 1172.238, 2502.642, 2502.642, 1095.698)
* Solution obtained by Heuristic 1
The results shown in Table 2 imply that it may be possible to improve the solutions given by the three heuristics by searching over the subset of the states obtained by the heuristics, so as to optimally combine the state trajectories from the different solutions.

5.2. Identification of the subset of the states and the first cost-to-go approximation
As a result of the heuristic simulation in 5.1, 37311 states are visited by the heuristics, which is about 0.003% of the entire state space. (Note that the total number of states of the problem is approximately 1.220 x 10^9 according to the definition of the state in equation (1).) For each state in the subset, the expected cost-to-go is obtained according to the 'cost-to-go' definition in (4).

5.3. Bellman Iteration in the Subset
The 'cost-to-go' obtained at the previous step is used as the initial 'cost-to-go' for the Bellman iteration step. We iterate the following equation (5) for each state X(k) in the subset, until J^i converges.

    J^{i+1}(X(k)) = min over u(k) of E{ φ(X(k), u(k)) + J^i(X(k+1)) | X(k), u(k) }        (5)
Here φ(X(k), u(k)) represents the current cost incurred by the decision u(k) in the state X(k). The converged cost-to-go values, J*(X(k)), are used for on-line decision making. For the states in the subset, 48 iterations are performed to meet the convergence criterion, ||J^{i+1} − J^i||∞ / ||J^i||∞ < 0.01, and the computational time for the iterations is 76982 seconds (about 21 hours) on a Pentium III at 800 MHz with 512 MB RAM. During the Bellman iteration, if a state transition leads to states that are not in the subset, the state transition probabilities are renormalized over the visited states only.

5.4. On-line decision making
The 'converged' cost-to-go values obtained in the previous step are used for on-line decision making. However, the decision cannot be made from the 'cost-to-go' values alone, because for some states not all successor states are in the subset. This insufficient coverage of the subset is due to an insufficient amount of simulation. In most cases, if the problem has a complicated stochastic nature, a 'sufficient amount of simulation' may take too long to obtain a subset with all the relevant states. Therefore, for more reliable on-line decision making, a systematic method for dealing with an insufficient number of states in the subset has to be developed. In this work, we suggest the following two approaches to on-line decision making.

5.4.1. Method 1: On-line decision making with a cost-to-go barrier
A fixed high cost-to-go value is assigned to all states outside the subset, thereby making a decision leading to a state outside the subset highly unlikely.
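To make the combination of equation (5) and Method 1 concrete, the sketch below performs value iteration restricted to a dictionary of visited states and assigns a large barrier cost to successors outside the subset (a simplification of the renormalization described above); the state encoding, transition model and stage-cost function are hypothetical placeholders rather than the authors' implementation.

```python
# Sketch: value iteration restricted to a visited subset of states (Eq. (5)),
# with a fixed "barrier" cost for successors that fall outside the subset.
# The model objects (decisions, transitions, stage_cost) are placeholders.

BARRIER = 1.0e6  # large cost-to-go assigned to states outside the subset

def bellman_on_subset(J, decisions, transitions, stage_cost, tol=0.01, max_iter=1000):
    """J: dict mapping each visited state to its initial cost-to-go estimate."""
    for _ in range(max_iter):
        J_new, diff, norm = {}, 0.0, 1e-12
        for x in J:
            best = float("inf")
            for u in decisions(x):
                # transitions(x, u) -> iterable of (next_state, probability)
                expected = sum(p * J.get(x_next, BARRIER)
                               for x_next, p in transitions(x, u))
                best = min(best, stage_cost(x, u) + expected)
            if best == float("inf"):   # terminal state: no decision, no cost-to-go
                best = 0.0
            J_new[x] = best
            diff = max(diff, abs(best - J[x]))
            norm = max(norm, abs(J[x]))
        J = J_new
        if diff / norm < tol:          # relative infinity-norm convergence test
            break
    return J
```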
5.4.2. Method 2: On-line decision making with a guiding heuristic
In this approach, we allow the state to step outside the subset. We use an on-line heuristic whenever a decision has to be made for states that are not in the subset. The best heuristic in terms of the mean value of the reward (Heuristic 3 in section 4.4) is used for the decision. Once the state comes back into the subset, the decision making is switched to the minimization of the cost-to-go.

5.4.3. Computational Results
Computational results of the proposed on-line decision-making policies are summarized in Table 3. The on-line decision-making policies (Methods 1 and 2) are tested with the states and the converged cost-to-go values. With the 37311 states obtained from 10,000 realizations, the performance of both on-line decision-making policies is worse than that of the best heuristic (Heuristic 3), because the simulation did not provide enough states. However, by increasing the number of realizations to 100,000 and obtaining a larger subset of the states, the proposed policies led to solutions better than any of the heuristics alone.
Table 3: Computational Results: Comparison of Heuristics vs. On-Line Decision Making
# of Realizations   # of States in the Subset   H1 (Mean)a   H2 (Mean)   H3 (Mean)   On-Line 1b   On-Line 2c
10,000              37311                        3375.44      3391.63     3515.60     3360.50      3510.20
100,000             67140                        3381.21      3394.74     3532.62     3648.64      3657.33
a Mean value of rewards with Heuristic 1 (4.2)
b On-line decision making with cost-to-go barrier (5.4.1)
c On-line decision making with a guiding heuristic (5.4.2)
REFERENCES
1. V. Jain and I.E. Grossmann, Resource-Constrained Scheduling of Tests in New Product Development, Industrial and Engineering Chemistry Research, Vol. 38, No. 8, 3013-3026 (1999).
2. C.W. Schmidt and I.E. Grossmann, Optimization Models for the Scheduling of Testing Tasks in New Product Development, Industrial and Engineering Chemistry Research, Vol. 35, No. 10, 3498-3510 (1996).
3. C.W. Schmidt and I.E. Grossmann, The Exact Overall Time Distribution of a Project with Uncertain Task Duration, European Journal of Operational Research, Vol. 126, No. 3, 614-636 (2000).
4. G.E. Blau, B. Mehta, S. Bose, J.F. Pekny, G. Sinclair, K. Kuenker and P. Bunch, Risk Management in the Development of New Products in Highly Regulated Industries, Computers and Chemical Engineering, Vol. 24, No. 2-7, 659-664 (2000).
5. C.T. Maravelias and I.E. Grossmann, Simultaneous Planning for New Product Development and Batch Manufacturing Facilities, Industrial and Engineering Chemistry Research, Vol. 40, No. 26, 6147-6164 (2001).
6. P. Brucker, A. Drexl, R. Mohring, K. Neumann and E. Pesch, Resource-Constrained Project Scheduling: Notation, Classification, Models and Methods, European Journal of Operational Research, Vol. 112, No. 1, 3-41 (1999).
7. D. Subramanian, J.F. Pekny and G.V. Reklaitis, A Simulation-Optimization Framework for Addressing Combinatorial and Stochastic Aspects of an R&D Pipeline Management Problem, Computers and Chemical Engineering, Vol. 24, 1005-1011 (2000).
8. D. Subramanian, J.F. Pekny and G.V. Reklaitis, A Simulation-Optimization Framework for Research and Development Pipeline Management, AIChE Journal, Vol. 47, No. 10, 2226-2242 (2001).
9. J. Choi, M.J. Realff and J.H. Lee, An Algorithmic Framework for Improving Heuristic Solutions, Part II: A New Version of the Stochastic Traveling Salesman Problem, Computers and Chemical Engineering, Submitted (2002).
Process Systems Engineering 2003
B. Chen and A.W. Westerberg (editors)
© 2003 Published by Elsevier Science B.V.
Hybrid methods using genetic algorithm approach for crude distillation unit scheduling
Dhaval Dave and Nan Zhang*
Department of Process Integration, UMIST, Manchester, M60 1QD, United Kingdom
Abstract
This paper presents a novel approach for solving a special kind of large-scale mixed integer linear programming (MILP) problem with a large integrality gap. The virtues of both stochastic and deterministic methods are utilised in order to achieve near-global solutions within practical solution time. A two-level optimisation method is proposed, where the upper-level optimisation determines key integer decision variables using a genetic algorithm and the lower-level optimisation determines the rest of the variables using a deterministic method. The capabilities of the algorithm were tested on the crude distillation unit scheduling problem. The hybrid approach generated satisfactory solutions with considerably less computational time than deterministic methods.

Keywords
genetic algorithm, mixed integer linear programming, crude distillation unit, scheduling

1. INTRODUCTION
The vast majority of important applications in science and technology are characterized by the existence of integer variables. These problems, due to their combinatorial nature, are considered difficult. This paper is concerned with the optimisation of mixed integer linear programming problems, which have the inherent advantage of a linear structure. Unlike Linear Programming (LP) with the simplex algorithm, no single good IP algorithm has emerged yet. Different algorithms prove better with different types of problems, often by exploiting the structure of special classes of problem. It seems unlikely that a universal Integer Programming (IP) algorithm will ever emerge [1]. The most common and successful approach is the LP-based branch and bound method, which has been implemented in popular codes (OSL, CPLEX, XPRESS, etc.). A recent review on MILP is available [2]. As the name suggests, branch and bound methods branch on binary variables and employ lower and upper bounds of the objective function over sub-regions of the search space. In each sub-region, the lower bound is obtained from the solution of the corresponding relaxed convex problem. The projection of the lower bound to the original problem gives the starting point for the original non-convex problem. The global optimum lies between the highest lower bound and the lowest upper bound over all sub-regions in the search space. Certain sub-regions are dynamically refined, while others are excluded from consideration based on optimality and feasibility criteria. The bounds are developed over a range of values of the variables involved and become stricter as the search is confined to smaller sub-regions.
* To whom correspondence should be addressed. Email: [email protected]
This method is well understood to guarantee global optimality for MILP problems. However, it may take a long time to verify optimality. The algorithm becomes sluggish for formulations with large integrality gaps and with larger numbers of discrete variables. This limitation provided the motivation to find an alternative approach to overcome the sluggishness. In recent years, Evolutionary Algorithms (Genetic Algorithms (GA), Simulated Annealing (SA), evolutionary strategies, etc.) have emerged as an increasingly popular tool for global optimisation. These algorithms differ from conventional approaches, since only the objective function values are required. Genetic Algorithms are known to offer significant advantages over traditional methods by using several search principles and heuristics simultaneously, the most important of which are a population-wide search and a continuous balance between convergence and maintenance of diversity [3]. Although GAs are famous for their extensive search and global optimisation, they also suffer from excessively slow convergence before providing an accurate solution, because of their 'blind' search approach and because they do not exploit local information (the mathematical structure of the problem). Some attempts have been made to accelerate GA performance by amalgamating it with deterministic methods such as the Newton method [4] or interval analysis [5]. In the context of MILP problems with large integrality gaps, there exists an inherent trade-off between global optimality (with deterministic methods) and practical solution time (with tabu search heuristics and Excel-based programming). In the present work, we propose a hybrid approach that utilises the virtues of both stochastic and deterministic methods.

2. HYBRID GENETIC ALGORITHM
The general structure of the algorithm for solving MILP problems involves two levels of optimisation. The top-level optimisation determines the production sequence or feeding sequence using the GA, and the bottom-level program then determines the rest of the variables using a deterministic method. It is not necessary for all the integer variables to be determined at the top level; the bottom-level optimisation problem may contain both integer and continuous variables. In such cases, the bottom-level problem is an MILP and can be solved with branch and bound (B&B) based deterministic methods. The key idea is to separate out the integer decisions that contribute a large number of variables and equations and, most importantly, the integrality gap. Fig. 1 illustrates the structure of the hybrid genetic algorithm in detail. The GA is used to determine the key decisions, since this step involves optimisation in the space of integer variables, for which the GA is inherently suited. After initialising the key integer variables, the rest of the variables (continuous, or continuous + integer) are determined by deterministic methods (simplex or B&B-based LP). The solution of the bottom-level problem provides the objective function value. Since the GA requires only function values for its search, the key integer variables can then be updated by GA operators such as reproduction, crossover and mutation. The key decisions are manipulated by these operators to maximise/minimise the average function value. The procedure is continued until a termination criterion is satisfied (generally a number of generations or the progress of the average function value). It should be noted that the bottom-level optimisation sometimes yields an infeasible solution; in such cases, large penalty values are added to the objective function to avoid the spread of infeasibility.
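As a rough outline of this two-level structure, the sketch below couples a GA over the key integer decisions with a placeholder lower-level solve; the lower-level solver, the penalty value and the GA settings are assumptions for illustration (the defaults merely echo the settings reported later in Table 1), not the authors' code.

```python
import random

# Hypothetical two-level hybrid GA skeleton: the GA searches over key integer
# decisions (e.g. a feeding/production sequence); a deterministic lower-level
# solve evaluates each candidate. solve_lower_level() is a placeholder for an
# LP/MILP solve (simplex or branch and bound) and must be supplied by the user.

PENALTY = 1.0e6  # large value returned when the lower-level problem is infeasible

def evaluate(sequence, solve_lower_level):
    feasible, objective = solve_lower_level(sequence)
    return objective if feasible else PENALTY

def hybrid_ga(n_steps, n_choices, solve_lower_level,
              pop_size=100, generations=20, p_mut=0.25):
    pop = [[random.randrange(n_choices) for _ in range(n_steps)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda s: evaluate(s, solve_lower_level))
        parents = scored[:pop_size // 2]            # selection (minimisation)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_steps)      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:             # mutate one position
                child[random.randrange(n_steps)] = random.randrange(n_choices)
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda s: evaluate(s, solve_lower_level))
```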
(Figure 1, flowchart: Start → initialise key integer decision variables → determine the rest of the integer and continuous variables using a deterministic method (simplex or B&B) → apply GA operators (crossover and mutation) → new set of key integer decision variables, and repeat.)
Figure 1 Hybrid genetic algorithm structure

3. APPLICATION OF HYBRID GENETIC ALGORITHM
A typical refinery operates with 4-5 different kinds of crude oil. The problem of crude oil scheduling involves transfers from the pipeline to the crude tanks, internal transfers among tanks and charges to the distillation units (Fig. 2) [6,7]. As discussed previously, determination of the charging schedule and of the crude oil charging change-overs requires a large number of constraints and equations and can be efficiently encoded by the GA. A typical GA works with strings of binary-coded variables. For the case of the crude distillation unit, each bit of the string can represent the type of crude oil being charged at a certain time point, and the length of the string can be set to the total number of time points used in the model (Fig. 2). This string is then improved by the hybrid genetic algorithm as described above. Different units of a refinery produce several streams, whose major specifications are based on their physical and/or chemical properties, such as the composition of key components, density and/or viscosity, among others. These streams are usually blended to satisfy final product specifications. A typical system is shown in Fig. 3; it includes charging pipelines from C distillation columns to the storage area, S storage tanks (generally S = 2C) and finally L oil pipelines that may transfer P grades of products [8]. In this case, the application of the GA involves L strings, the length of each string can be the total number of time points used in the model, and each bit of a string represents the grade of product transferred at the corresponding time point.
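The string representation described above can be pictured as in the following sketch, where the crude names, the number of time points and the decoding rule are purely illustrative assumptions.

```python
# Illustrative chromosome for CDU charging: one gene per time point,
# each gene holding the index of the crude charged in that period.
CRUDES = ["A", "B", "C"]           # hypothetical crude types
N_TIME_POINTS = 8                  # hypothetical scheduling horizon

chromosome = [0, 0, 2, 2, 2, 1, 1, 0]   # one candidate charging sequence

def decode(chromosome):
    """Translate gene indices into a readable charging schedule."""
    return [(t, CRUDES[g]) for t, g in enumerate(chromosome)]

print(decode(chromosome))
# [(0, 'A'), (1, 'A'), (2, 'C'), (3, 'C'), (4, 'C'), (5, 'B'), (6, 'B'), (7, 'A')]
```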
4. CASE STUDY - SCHEDULING OF CRUDE DISTILLATION UNIT
The capabilities of the hybrid genetic algorithm (HGA) were then tested on a crude oil scheduling problem available in the literature. The problem [6] involves the optimal operation of crude oil unloading, its transfer from the storage tanks to the charging tanks, and the charging schedule for each distillation unit. A uniformly discretised-time MILP model was used for an industrial-size problem involving 3 crude vessels, 3 storage tanks, 3 charging tanks and 2 distillation columns. The results obtained from the HGA include the optimal CDU charging schedule and the various flows and continuous variables. Because of space limitations the optimal schedule is not shown. It should be noted that the final value of the objective function is the same as that obtained using only deterministic methods. Table 1 shows a comparison between the performance of the HGA and deterministic methods. It can be clearly seen that the HGA required fewer linear programming (LP) calculations than the solution procedure involving only deterministic methods. The consistency of the HGA's performance was also checked by running the HGA with different initial populations. It was found that the HGA gave the global optimum solution 38 times out of 50 runs. The worst solution obtained by the HGA (= 310) was also not far from the global optimum value (= 297.15).
Figure 2 String representation of crude distillation scheduling problem
Figure 3 String representation of refinery product blending problem
Table 1 Comparison of the results
                      HGA*      Deterministic Method#
LP calculations       2000      >20000
Objective value       297.15    297.15
* Population size = 100, number of generations = 20, crossover and mutation probabilities 1 and 0.25 respectively
# GAMS [9], Solver = OSL

5. CONCLUSION
A hybrid genetic algorithm has been proposed to solve mixed integer linear programming problems involving large integrality gaps. The capabilities of the algorithm were demonstrated by a case study on a crude distillation unit scheduling problem. The HGA gave satisfactory results within practical solution time.

REFERENCES
[1] H.P. Williams, Model Building in Mathematical Programming, 3rd edition (revised), John Wiley and Sons, 1990.
[2] E.L. Johnson et al., INFORMS Journal on Computing, 12 (2000), 122-138.
[3] D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, MA, 1989.
[4] J.M. Renders and S.P. Flasse, IEEE Transactions on Systems, Man and Cybernetics - Part B: Cybernetics (1996), 26(2), 243-258.
[5] D.G. Sotiropoulos et al., Nonlinear Analysis, Theory, Methods and Applications (1997), 30(7), 4529-4538.
[6] H. Lee et al., Industrial & Engineering Chemistry Research (1996), 35, 1630-1641.
[7] N. Shah, Comp. Chem. Engg. (1996), 20 (Suppl), S1227-S1232.
[8] J.M. Pinto et al., Comp. Chem. Engg. (2000), 24, 2259-2276.
[9] A. Brooke et al., GAMS - A User's Guide, Scientific Press, San Francisco, CA, 1992.
Process Systems Engineering 2003
B. Chen and A.W. Westerberg (editors)
© 2003 Published by Elsevier Science B.V.
Reverse Problem Formulation Based Techniques for Process and Product Synthesis and Design
M.R. Eden a, S.B. Jørgensen a, R. Gani a and M.M. El-Halwagi b
a CAPEC, Computer Aided Process Engineering Center, Department of Chemical Engineering, Technical University of Denmark, DK-2800 Lyngby, Denmark
b Chemical Engineering Department, Texas A&M University, College Station TX 77843, USA

Abstract
In this paper, a new technique for model reduction, based on rearranging the part of the model representing the constitutive equations, is presented. The rearrangement of the constitutive equations leads to the definition of a new set of pseudo-intensive variables, where the component compositions are replaced by reduction parameters in the process model. Since the number of components dominates the size of the traditional model equations, a significant reduction of the model size is obtained through this new technique. Interesting properties of this new technique are that the model reduction does not introduce any approximations to the model, that it does not change the physical location of the process variables, and that it provides a visualization of the process and operation that otherwise would not be possible. Furthermore, by employing the recently introduced principle of reverse problem formulations, the solution of the integrated process/product design problem becomes simpler and more flexible.
Keywords
Model reduction, model analysis, visualization, constitutive equations
1. INTRODUCTION
As the trend within the chemical engineering design community moves towards the development of integrated solution strategies for the simultaneous consideration of process and product design issues, the complexity of the design problem increases significantly. Mathematical programming methods are well known, but may prove rather complex and time consuming when applied to large and complex chemical, biochemical and/or pharmaceutical processes. Model analysis can provide the insights required to decompose the overall problem into smaller (and simpler) sub-problems, as well as extending the application range of the original models. In principle, the model equations representing a chemical process and/or product consist of balance equations, constraint equations and constitutive equations [1]. The nonlinearity of the model, in many cases, is attributed to the relationships between the constitutive variables and the intensive variables. The model selected for the constitutive equations usually represents these relationships; therefore it seems appropriate to investigate how to rearrange or represent the constitutive models. In the following, a novel framework for model reduction is presented. The method is based on decoupling the constitutive equations from the balance and constraint equations.
2. REVERSE PROBLEM FORMULATION CONCEPT
By decoupling the constitutive equations from the balance and constraint equations, the conventional process/product design problems may be reformulated as two reverse problems [2]. The first reverse problem is the reverse of a simulation problem, where the process model is solved in terms of the constitutive (synthesis/design) variables instead of the process variables, thus providing the synthesis/design targets. The second reverse problem (reverse property prediction) solves the constitutive equations to identify unit operations, operating conditions and/or products by matching the synthesis/design targets. An important feature of the reverse problem formulation is that, as long as the design targets are matched, it is not necessary to resolve the balance and constraint equations.
Fig. 1. Decoupling of constitutive equations for reverse problem formulation.

The model type and complexity are implicitly related to the constitutive equations; hence decoupling the constitutive equations from the balance and constraint equations will in many cases remove or reduce the model complexity. Furthermore, since the constitutive equations (property models) usually contain composition terms, it is beneficial to solve for the constitutive variables directly, thus removing the composition dependency from the problem.

3. COMPOSITION FREE DESIGN METHODS
By rearranging the constitutive equations in a systematic manner, the composition terms can be eliminated from the balance equations. The principal requirement of this new model reduction technique is the choice of a model for the constitutive equations in which the constitutive variables, which are calculated through a set of reduction parameters, are linear functions of the component compositions. The well-known cubic equations of state, such as the Soave-Redlich-Kwong or the Peng-Robinson equations of state, satisfy this requirement. Customized constitutive models may also be generated for this purpose. Michelsen [3] presented a composition-free method for simple flash calculations, where the composition-dependent terms of a cubic equation of state are lumped by solving directly for the constitutive variables. This method was extended by Gani and Pistikopoulos [4] to composition-free distillation design. A novel way of representing the constitutive (property)
variables of a system is the concept of property clustering [5, 6], where the compositions are eliminated from the process model by characterizing the process streams using physical and chemical properties. The clusters are tailored to possess the two fundamental properties of inter- and intra-stream conservation, allowing for consistent additive rules. Eden et al. developed cluster-based models for fundamental processing units such as mixers, splitters and reactors, through which most sections of process flowsheets can be modeled [7]. The cluster-based balance models are derived from the original composition-based balance models by systematically removing the composition dependency using property relationships. As long as the stream properties can be obtained, the process can be represented. The clustering approach utilizes property operators defined as:

    ψj(P_MIX) = Σs xs ψj(Pjs),   xs = Fs / Σs Fs,   Ωjs = ψj(Pjs) / ψj(Pj,ref)        (1)

An Augmented Property index (AUP) for each stream s is defined as the summation of all the dimensionless property operators:

    AUPs = Σ(j=1..NP) Ωjs        (2)

The property cluster for property j of stream s is defined as:

    Cjs = Ωjs / AUPs        (3)

The mixture cluster and AUP values can be calculated through the linear mixing rules given by Eqs. (4)-(5):

    C_j,MIX = Σs βs Cjs,   βs = xs AUPs / AUP_MIX        (4)

    AUP_MIX = Σ(s=1..Ns) xs AUPs        (5)
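As a concrete illustration of Eqs. (1)-(3), the following sketch evaluates dimensionless property operators, AUP values and cluster coordinates for two example streams; the operator functions, reference values and stream data are assumptions made for illustration only.

```python
# Illustrative evaluation of Eqs. (1)-(3): property operators -> AUP -> clusters.
# Operator functions and reference values below are hypothetical placeholders.

OPERATORS = {            # psi_j applied to a raw property value
    "OM": lambda p: p,                # linear operator (assumed)
    "k":  lambda p: p,                # linear operator (assumed)
    "R":  lambda p: p ** 5.92,        # assumed nonlinear operator for reflectivity
}
REFERENCE = {"OM": 0.01, "k": 0.001, "R": 1.0}   # psi_j(P_j,ref), assumed

def clusters(stream_properties):
    """Return (AUP_s, {property: C_js}) for one stream, per Eqs. (1)-(3)."""
    omega = {j: OPERATORS[j](p) / OPERATORS[j](REFERENCE[j])
             for j, p in stream_properties.items()}      # dimensionless operators
    aup = sum(omega.values())                             # Eq. (2)
    return aup, {j: w / aup for j, w in omega.items()}    # Eq. (3)

fiber = {"OM": 0.000, "k": 0.0012, "R": 0.82}   # example stream data (assumed)
broke = {"OM": 0.115, "k": 0.0013, "R": 0.90}
print(clusters(fiber))
print(clusters(broke))
```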
In Eq. (4), βs represents the cluster "composition" of the mixture, i.e. a pseudo-intensive variable, which is related to the flow fractions (xs) through the AUP values. An inherent benefit of the property clustering approach is that the design strategies developed apply to a wide range of problems, e.g. first-principles models, data-based models and simple correlations.

4. CASE STUDY - RECYCLE OPPORTUNITIES IN PAPERMAKING
To illustrate the usefulness of constitutive or property based modeling, a case study of a papermaking facility is presented. Wood chips are chemically cooked in a Kraft digester using white liquor (containing sodium hydroxide and sodium sulfide as main active ingredients). The spent solution (black liquor) is converted back to white liquor via a recovery cycle (evaporation, burning, and causticization). The digested pulp is passed to a bleaching system to produce bleached pulp (fiber). The paper machine employs 100 ton/hr of the fibers. As a result of processing flaws and interruptions, a certain amount of partly and completely manufactured paper is rejected. These waste fibers are referred to as broke. The reject is passed through a hydro-pulper followed by a hydro-sieve, with the net result of producing an underflow, which is burnt, and an overflow of broke, which goes to waste treatment. It is worth noting that the broke contains fibers that may be partially recycled for papermaking.
(Figure omitted: block labels only - wood chips, Kraft digester, white liquor, black liquor/chemical recovery cycle, pulp, paper machine, paper product, broke, reject, hydro-pulper, hydro-sieve, waste.)
Fig. 2. Schematic representation of pulp and paper process.

The objective of this case study is to identify the potential for recycling the broke back to the paper machine, thus reducing the fresh fiber requirement and maximizing the resource utilization. Three primary properties determine the performance of the paper machine and thus the quality of the produced paper [7, 8, 9]:
- Objectionable Material (OM) - undesired species in the fibers (mass fraction)
- Absorption coefficient (k) - measure of absorptivity of light into paper (m2/g)
- Reflectivity (R∞) - defined as a reflectance compared to an absolute standard (fraction)
In order to convert raw property data to cluster values, property operator mixing rules are required [5, 6]. The property relationships can be described using the Kubelka-Munk theory [7]. According to Brandon [8], the mixing rules for objectionable material (OM) and absorption coefficient (k) are linear, while a non-linear empirical mixing rule for reflectivity has been developed [9].

Table 1 Properties of fibers and constraints on paper machine feed
Property              Operator     Fibers    Broke     Paper machine         Reference
OM (mass fraction)    OM           0.000     0.115     0.00 - 0.02           0.01
k (m2/g)              k            0.0012    0.0013    0.00115 - 0.00125     0.001
R∞                    (R∞)^5.92    0.82      0.90      0.80 - 0.90           1
Flowrate (ton/hr)     -            100       30        100 - 105             -
From these values it is apparent that the target for minimum resource consumption of fresh fibers is 70 ton/hr (100 - 30), assuming that all the broke can be recycled to the paper machine. The problem is visualized by converting the property values to cluster values using Eqs. (1)-(3). The paper machine constraints are represented as a feasibility region, which is calculated by evaluating all possible combinations of the property values in the intervals given in Table 1. The resulting ternary diagram is shown in Fig. 3, where the dotted line represents the feasibility region for the paper machine feed. The relationship between the cluster values and the corresponding AUP values ensures uniqueness when mapping the results back to the property domain.
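The mixing calculation used in the next step (Eqs. (4)-(5)) can be sketched as follows; the AUP values of the streams and of the target mixture point are hypothetical inputs chosen only to show how flow fractions are recovered from a cluster target.

```python
# Sketch of the reverse use of Eqs. (4)-(5): given the AUP values of two streams
# and of a target mixture point, recover the flow fraction of each stream.
# The numerical inputs are hypothetical placeholders.

def flow_fractions_from_aup(aup_stream1, aup_stream2, aup_mix):
    """Solve AUP_MIX = x1*AUP_1 + (1 - x1)*AUP_2 for x1 (binary mixture)."""
    x1 = (aup_mix - aup_stream2) / (aup_stream1 - aup_stream2)
    return x1, 1.0 - x1

def cluster_composition(x_s, aup_s, aup_mix):
    """beta_s = x_s * AUP_s / AUP_MIX, the pseudo-intensive cluster 'composition'."""
    return x_s * aup_s / aup_mix

# Hypothetical example with a binary fiber/broke mixture:
aup_fiber, aup_broke, aup_target = 2.0, 3.2, 2.2
x_fiber, x_broke = flow_fractions_from_aup(aup_fiber, aup_broke, aup_target)
print(x_fiber, x_broke)                                      # flow fractions
print(cluster_composition(x_fiber, aup_fiber, aup_target))   # beta for the fiber
```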
Fig. 3. Ternary problem representation.
Fig. 4. Optimal feed identification.
Since the optimal flowrates of the fibers and the broke are not known, a reverse problem is solved to identify the clustering target corresponding to maximum recycle. In order to minimize the use of fresh fiber, the relative cluster arm for the fiber has to be minimized, i.e. the optimum feed mixture will be located on the boundary of the feasibility region for the paper machine. The cluster target values to be matched by mixing the fibers and the broke are identified graphically and represented as the intersection of the mixing line and the feasibility region in Fig. 4. From the calculation of the feasibility region, the cluster and AUP values for the mixing point are known. Using these results, the stream fractions can be calculated from Eq. (5). The resulting mixture is calculated to consist of 83 ton/hr of fiber and 17 ton/hr of broke.
Fig. 5. Identification of property interception targets.
Direct recycle can reduce the fiber usage from 100 ton/hr to 83 ton/hr, i.e. it does NOT achieve the minimum fiber usage target of 70 ton/hr. Therefore the properties of the broke will have to be altered to match the maximum recycle target. Assuming that the feed mixture point is unchanged, and since the fractional contributions of the fibers and the intercepted broke are 70% and 30% respectively, the cluster "compositions" (βs) can be calculated from Eq. (4). The cluster values for the intercepted broke can then be readily calculated from Eq. (4), and the resulting point is shown in Fig. 5. This reverse problem identifies the clustering target, which can then be converted to a set of property targets as given in Table 2. Note that for each mixing point on the boundary of the feasibility region, a clustering target exists for the intercepted broke, so the reverse problem formulation technique is actually capable of identifying all the alternative product targets that will solve this particular problem.

Table 2 Properties of intercepted broke capable of matching maximum recycle target
Property              Original Broke    Intercepted Broke
OM (mass fraction)    0.115             0.067
k (m2/g)              0.0013            0.011
R∞                    0.90              0.879

Solution of the second reverse problem, i.e. identification of the processing steps required for performing the property interception described by Table 2, is not presented in this work. Most processes for altering or fine-tuning paper properties are considered proprietary material; however, the interception can be performed chemically and/or mechanically [7, 8].

5. CONCLUSIONS
Decoupling of the constitutive equations from the balance and constraint equations allows a conventional forward design problem to be reformulated as two reverse problems. First the design targets (constitutive variables) are identified, and subsequently the design targets are matched by solving the constitutive equations. By employing recent property clustering techniques, a visualization of the constitutive (property) variables is enabled. A case study illustrates the use of property clusters as well as the benefits of reverse problem formulations.

REFERENCES
[1] B.M. Russel, J.P. Henriksen, S.B. Jorgensen and R. Gani, Comp. & Chem. Eng., 24 (2000).
[2] R. Gani and M.R. Eden, Proceedings of WWDU 2002 (2002).
[3] M.L. Michelsen, Ind. Eng. Chem. Process Des. Dev., 25 (1986).
[4] R. Gani and E.N. Pistikopoulos, Fluid Phase Equilibria, 194-197 (2002).
[5] M.D. Shelley and M.M. El-Halwagi, Comp. & Chem. Eng., 24 (2000).
[6] M.R. Eden, S.B. Jorgensen, R. Gani and M.M. El-Halwagi, Proceedings of Distillation and Absorption (2002).
[7] C.J. Biermann, Handbook of Pulping and Papermaking, Academic Press (1996).
[8] C.E. Brandon, Pulp and Paper Chemistry and Chemical Technology, 3rd Edition, Volume III, James P. Casey (Ed.), John Wiley & Sons, New York, NY (1981).
[9] W.R. Willets, Paper Loading Materials, TAPPI Monograph Series, 19, Technical Association of the Pulp and Paper Industry, New York, NY (1958).
Process Systems Engineering 2003
B. Chen and A.W. Westerberg (editors)
© 2003 Published by Elsevier Science B.V.
Analyzing Chemical Process Design Using an Abstraction-Decomposition Space
C. Foltz and H. Luczak
Chair and Institute of Industrial Engineering and Ergonomics, RWTH Aachen University, Bergdriesch 27, D-52062 Aachen, Germany, e-mail: {c.foltz I h.luczak} @iaw.rwth-aachen.de ABSTRACT Process Systems Engineering is concerned with the understanding and development of systematic procedures for the design and operation of chemical process systems [ 1]. Progress in PSE is closely related to computer-based tools for improved decision making. While support tools emerged in different, isolated areas in the past [2], this procedure is no longer feasible to meet today's challenges of process and plant design [3]. In this paper an approach from Cognitive Systems Engineering [4] is adopted. Therewith it is possible to analyze chemical process design in total and to derive design implications for support tools considering both the functional constraints of the work domain chemical engineering and the cognitive constraints of the people involved. 1. INTRODUCTION To compete in the global markets process industries are forced to develop safer, more operable, more reliable, and more integrated processes and plants as cost-effectively and quickly as possible [5,6]. Meeting this needs by providing methods, tools, and people is a compelling aspect of Process Systems Engineering [ 1]. Consequently, the integration of process design, process control, and process operability is one important step [7]. Finally, a holistic view [8] or life cycle perspective [9] on chemical engineering has to be taken. Beyond doubt, information systems play a special role in supporting decision making in all areas of chemical product, process, and plant design [ 1,9]. However, many tools are not suited for the 'normal' process engineer [ 10]. Furthermore, different authors state that today's information systems support is not sufficient. For example, incompatible model formats [ 11 ] - still existing despite of the efforts of CAPE-OPEN - or a high effort for mathematical modeling [12] require new computer tools to integrate information [13] and to bridge the gaps between existing ones [2]. Taking a closer look a lot of questions arise. Which information should be integrated and how? What are the most important gaps to be bridged? Which functions should new tools have? How can the use of old tools be integrated with new ones? ...? To answer these questions and to derive design implications for support tools a careful description and analysis of the process of chemical process design is essential. Moreover, this
458 approach should not only focus on technological aspects but also consider individual and organizational needs in a prospective manner. 2. APPROACHES TO WORK ANALYSIS Work or task analysis is a crucial step for information system design [ 14]. Hence, a technique meeting the above sketched requirements has to be chosen from many different types of work analysis techniques [14,15]. As far as system design is concerned a categorization highlighting both similarities and differences of work analysis techniques is useful [ 15]. First, normative models ('The one best way?') prescribe how a work system should behave. They can be found in textbooks about chemical process design principles [16,17]. The emphasis is on identifying what workers should be doing to get the job done. However, the relatively well-ordered transformation from problem formulation to equipment does not describe in sufficient detail what engineers are really doing. In particular, normative models describe novice rather than expert performance [4]. To avoid misunderstandings, normative models are important for curriculum purposes but not sufficient to derive implications for computer support design. Second, descriptive models ('What workers really do') seek to understand how workers actually behave in practice. This goal is accomplished by conducting field studies. However, as far as chemical engineering is concerned these studies are very rare [18,19]. Moreover, a very great effort is necessary to gain insight into a complete development process from the cradle to the grave. Although exactly this is demanded by the above mentioned holistic view, it would only be an unique example. Even a development process with a similar problem formulation may differ significantly from the other. Furthermore, results from those studies are limited in at least two ways. Current practice is, on the one hand, always tied to existing technology, i.e. it is device-dependent and contains workaround activities that are caused by inappropriate computer support [ 15]. On the other hand, workers are adaptive, so the introduction of a new design results in new practices. This interdependence is known as task-artifact cycle in the Human-Computer Interaction literature [20]. In summary, descriptive techniques are important and useful in understanding what workers really do and what they would like to do. Nevertheless, there are serious limitations in extracting design implications from descriptive models. Third, formative models ('Workers finish the design') focus on identifying requirements, both technological and organizational, that need to be satisfied if a device is going to support work effectively. The workers will be given some responsibility 'to finish the design' locally as a function of the situated context. Therefore, these approaches overcome the difficulties which occur when normative or descriptive approaches for system design are used [ 15]. Based on the work of Rasmussen [4] Vicente [15] proposed a formative approach comprising five steps. Table 1 visualizes in a more illustrative than definitive or exhaustive way the five conceptual distinctions of Cognitive Work Analysis. The first concept, work domain, represents the system being controlled, independent of any particular worker, automation, task, goal, or interface, i.e. the work domain shows the possibilities for action.
Table 1 Relationship between the five phases of Cognitive Work Analysis [15]
1. Work Domain: What information should be measured? What information should be derived? How should information be organized?
2. Control Task: What goals must be pursued and what are the constraints on those goals? What information and relations are relevant for particular classes or situations?
3. Strategies: What frames of reference are useful? What control mechanisms are useful?
4. Social-Organizational: What are the responsibilities of all of the actors? How should the actors communicate with each other?
5. Worker Competencies: What knowledge, rules, and skills do workers need to have?
The second concept, control task, are the goals that need to be achieved, independently of how they are achieved or by whom. In other words, the focus is on identifying on what needs to be done, independently of the strategy (how) or actor (who). The third concept, strategies, are the generative mechanisms by which particular control tasks can be achieved, independently of who is executing them. They describe how control tasks goals can be effectively achieved, independently of any particular actors. The forth concept, social organization and cooperation, deals with the relationship between actors, whether they be human workers or automation. This representation describes how responsibility for different areas of the work domain may be allocated among actors, how control tasks may be allocated among actors, and how strategies may be distributed across actors. Finally, the fifth concept, worker competencies, represents the set of constraints associated with the workers themselves. Different jobs require different competencies. Thus, it is important to identify the knowledge, rules, and skills that workers should have to fulfil particular roles in the organization effectively. 3. WORK DOMAIN ANALYSIS FOR CHEMICAL PROCESS DESIGN Chemical process design is characterized as a complex, iterative, and creative activity typically starting as an ill-defined problem [ 19]. In order to create new processes and plants or to retrofit existing ones an interdisciplinary team [2] develops and uses different models [21,9]. Decisions in chemical process design arise from goals and constraints incorporated in these models constituting knowledge about the process and the plant respectively. Therewith, this knowledge describes the work domain the process designer is acting on. To develop a representation of the work domain of chemical process design an abstraction-decomposition space (ADS) will be used [4]. The abstraction hierarchy (AH) or functional means-ends dimension supports knowledge-based reasoning and decision making in terms of functional relationships among the information objects. Each level in the hierarchy represents the goals or ends for the functions of the level below and potential resources or
460 means for the level above. In other words, the AH spans the gap between purpose and material form [15]. Allowing a many-to-many relationship between the levels there is room for choice or decision making. To form a two-dimensional problem space, orthogonal to the AH a decomposition dimension specifies the part-whole relationship of a system. Each of these levels represents a different level of granularity. Therewith, two main strategies for problem solving - abstraction and aggregation - can be covered [4,15]. Certainly, very often a change of the decomposition level is coupled with a change in the abstraction level, nevertheless, these two dimensions are conceptually separate. For chemical process design the part-whole dimension contains the following five levels: system (e.g., plant), subsystem (e.g., separation system), functional unit (e.g., distillation column), subassembly (e.g., valve tray), and component (e.g., float valve). Similarly, the functional means-ends dimension discriminates five levels (Fig. 1): 9 Functional Purpose textual description of what is desired like "production of Polyamide6, residue of water less 0.01% ..." for the system or "allow part load" for a float valve on the component level 9 Abstract Function chemical reaction paths, basic functions (react, separate, etc.), physical property data 9 Generalized Function mostly unit operations like continuous stirred tank reactor, plug flow reactor for "react" or distillation column, evaporator for "separate" but also new combined operations; assumptions are necessary due to lack of some data in advance; calculations with linear mass- and energy balances; short-cut methods 9 Physical Function rigorous process models in different refinements; physico-chemical phenomena included; complex mass-, momentum, and energy balances; assumptions about data (e.g., recycling rate, physical properties) are replaced 9 Physical Form 2D-drawings and 3D-models of all equipment, plant layout Indeed, there are further important aspects like controllability, operability, costs, safety, reliability, etc. which impose constraints on process design. However, they are secondary objectives always related to the functional representation. Hence, they can be characterized as supplementary layers linked to the proposed problem space. 4. CONCLUDING REMARKS The work domain analysis is step one of five in the cognitive work analysis. However, it is a very important one providing an event-independent and device-independent description of the work domain. The presented description does not violate existing systematic chemical process design principles [16,17]. In contrary, combining the ADS with this design principles (see control task analys!s and strategies analysis in Table 1) allows both to derive design implications for new information systems and to detect deficiencies of existing computer systems. Hence, support systems build on this framework will neither limit human problem solving nor force the developer to follow a certain design procedure. Instead, problem solving is aided
Fig. 1. Abstraction-decomposition space (ADS) for describing and analyzing chemical process design
through the development of adequate mental models. In addition, a developer is free to choose a strategy that suits him best, depending on his basic knowledge and individual experience. Finally, it should be mentioned that the ADS has relevance for the implementation of Concurrent Engineering, which has been shown for mechanical design [22].

ACKNOWLEDGEMENT
The authors thank B. Bayer, W. Marquardt, M. Mühlfelder, L. Schmidt, and R. Schneider for many fruitful discussions and gratefully acknowledge the financial support of the Deutsche Forschungsgemeinschaft (DFG) within the Collaborative Research Center 476 IMPROVE.
[1] [21
[31
[4]
I.E. Grossmann and A.W. Westerberg, Research Challenges in Process Systems Engineering, AIChE Journal, 46 (2000) 9, 1700-1703. M. Nagl and W. Marquardt, SFB-476 IMPROVE: Informatische UnterstiRzung tibergreifender Entwicklungsprozesse in der Verfahrenstechnik, in: M. Jarke, K. Pasedach, K. Pohl (eds.), Informatik '97 - Informatik als Innovationsmotor, Springer, Berlin, 1997, 143-154. R. Schneider and W. Marquardt, Information Technology support in the chemical process design life cycle, Chemical Engineering Science, 57 (2002) 10, 1763-1792. J. Rasmussen, A.M. Pejtersen and L.P. Goodstein, Cognitive Systems Engineering, John Wiley & Sons, New York, 1994.
462 [5] [6] [7]
[8] [9] [ 10] [ 11 ]
[ 12]
[13] [ 14] [ 15] [16] [17] [ 18] [ 19] [20]
[21] [22]
[23]
P.I. Barton and C.C. Pantelides, Modeling of Combined Discrete/Continuous Processes, AIChE Journal, 40 (1995), 966-979. A.E. Fowler (2000), Process Development in the New Millennium...Hands on or Modeled, In-sourced or Out-Sourced, Solo or Shared, in: [23], 1-4. J. van Schijndel and E.N. Pistikopoulos, Towards the Integration of Process Design, Process Control, and Process Operability: Current Status and Future Trends, in: [23], 99-112. H. Schuler, Prozel3ftihrung, Chemie Ingenieur Technik, 70 (1998) 10, 1249-1264. W. Marquardt, L. von Wedel, B. Bayer (2000), Perspectives on Lifecycle Process Modeling, in: [23], 192-214. H.H. Mayer and H. Schoenmakers, Integrated use of CAPE tools - an industrial example, in: [23], 466-469. G. Schopfer, L. von Wedel and W. Marquardt, An Environment Architecture to Support Modeling and Simulation in the Process Design Lifecycle, A/ChE Annual Meeting, Modeling and Computations for Process Design, Los Angeles, Nov 12-17, 2000, 8 p. M. Jarke and W. Marquardt, Design and Evaluation of Computer-Aided Process Modeling Tools, in: J.F. Davis, G. Stephanopoulos and V. Venkatasubramanian (eds.), International Conference on Intelligent Systems in Process Engineering, AIChE Symposium Ser. 312, Vol. 92, 1996, 97-109. R. Batres and Y. Naka, Process Plant Ontologies based on a Multi-Dimensional Framework, in: [23], 433-437. H. Luczak, Task Analysis, in: G. Salvendy (ed.), Handbook of Human Factors, 2 nd Edition, John Wiley & Sons, New York, 1997, 340-416. K.J. Vicente, Cognitive Work Analysis, Lawrence Erlbaum, Mahwah, 1999. L.T. Biegler, I.E. Grossmann and A.W. Westerberg, Systematic Methods of Chemical Process Design, Prentice Hall PTR, Upper Saddle River, 1997. W.D. Seider, J.D. Seader and D.R. Lewin, Process Design Principles, John Wiley & Sons, New York, 1999. C. Hales, Analysis of the Engineering Design Process in an Industrial Context, 2 nd Edition, Gants Hill, Eastleigh, 1991. A.W. Westerberg, E. Subrahmanian, Y. Reich, S. Konda, and the n-dim group, Designing the Process Design Process, Comp. Chem. Eng., 21 (1997) Suppl., S 1-$9. J.M. Carroll, W.A. Kellog and M.B. Rosson, The task-artifact cycle, in: J.M. Carroll (ed.), Designing Interaction: Psychology at the Human-Computer Interface, Cambridge University Press, Cambridge, 1991, 74-102. S. Sundquist, J. lime, L. M~i~itt~i, and I. Turunen, Using models in different stages of process life-cycle, Comp. Chem. Eng., 24 (2000) No. 2-7, 1253-1259. M. Helander, Models of Design for Concurrent Engineering, in: P.T. Kidd and W. Karwowski (eds.), Advances in Agile Manufacturing: Integrating Technology, Organization and People, lOS Press, Amsterdam, 1994, 21-26. M.F. Malone, J.A. Trainham, B. Carnahan (eds.), Fifth International Conference on Foundations of Computer-Aided Process Design, AIChE Symposium Ser. 323, Vol. 96, 2000.
Process Systems Engineering 2003 B. Chen and A.W. Westerberg (editors) 9 2003 Published by Elsevier Science B.V.
463
OPTIMAL GRADE TRANSITIONS POLYETHYLENE REACTORS
FOR
A. Gisnas, B. Srinivasan, and D. B o n v i n
Laboratoire d 'A utomatique Ecole Polytechnique Fgdgrale de Lausanne CH-1015 Lausanne, Switzerland
A b s t r a c t : In fluidized-bed gas-phase polymerization reactors, several grades of polyethylene are produced in the same equipment by changing the operating conditions. Transitions between the different grades are rather slow and results in production of a considerable amount of off-specification polymer. In this paper, the problems of minimizing the transition time and the amount of off-spec material are considered. It is shown that, in most cases, both the optimal steady-state operation and the optimal grade transitions are determined by operational and process constraints. K e y w o r d s : Polymerization, Polyethylene, Grade transition, Dynamic optimization.
1. INTRODUCTION Polyethylene is widely used today in a multitude of products and is produced continuously in gasphase fluidized-bed reactors (Choi and Ray, 1985). The variety of polyethylene products call for the production of various polymer grades, which can be accomplished by changing the operating conditions of the reactor. Often, a considerable amount of off-specification polymer is produced during grade transitions (Debling et al., 1994). The goal of this work is to analyze and characterize the grade transition problem from the point of view of minimizing the transition time or the amount of off-spec polymer. The grade transition problem has been studied extensively. Debling et al. (1994) tested different grade transition operations using the simulation package POLYRED. McAuley and MacGregor (1992) and Wang et al. (2000) calculated optimal grade transition strategies in a gas-phase fluidized-bed reactor by applying the control vec1 Anders Gisnas was an exchange student from the Norwegian University of Science and Technology, Trondheim.
tor parameterization (CVP) method to approximate each manipulated variable profile by a series of ramps. Takeda and Ray (1999) also used the CVP method to find optimal grade transitions for a slurry-phase loop reactor. However, in all of the aforementioned works, the cost function that is optimized corresponds to the integral squared error from a pre-defined transition trajectory and not to an economic objective. In this work, the optimal grade transition problem will be formulated using either the transition time or' the amount of off-spec material as the economic objective function. Another important aspect will be to interpret the various intervals that constitute the optimal solution in terms of the objectives and the constraints of the optimization problem. The paper is organized as follows. Section 2 provides a brief description of the process and a simplified mathematical model. In Section 3, various grades are defined and the steady-state operating points that maximize production are computed. In Section 4, the optimal grade transition problem is cast into a dynamic optimization framework, and the optimization results are discussed. Finally, conclusions are drawn in Section 5.
464 2. PROCESS DESCRIPTION In this study, the polymerization of ethylene in a fluidized-bed reactor with a heterogeneous Ziegler-Natta catalyst is considered (Choi and Ray, 1985; Kiparissides, 1996). A schematic diagram of the reactor system is shown in Figure 1. Ethylene, hydrogen, nitrogen (inert), and catalyst are fed continuously to the reactor. The gas phase consisting of ethylene, hydrogen and nitrogen provides the fluidization of the polymer bed and transports heat out of the reactor through a recycling system. A compressor pumps the recycle gases through a heat exchanger and back into the bottom of the reactor. The fresh feeds are mixed with the recycle stream before entering the reactor. The single pass conversion of ethylene in the reactor is usually low ( 1 - 4%) and hence the recycle stream is much larger than the inflow of fresh feeds. Excessive pressure and impurities are removed from the system in a bleed stream at the top of the reactor. Fluidized polymer product is removed from the base of the reactor through a discharge valve. The rate at which the product is removed is adjusted by a bed-level controller that keeps the bed level or, equivalently, the polymer mass in the reactor at the desired set point. bleedvalve position, Vp ,b
[Figure 1 near here: schematic of the gas-phase polyethylene reactor, showing the compressor, heat exchanger, bleed valve (position Vp), polymer mass Bw, polymer product outflow Op, and the ethylene (FM), hydrogen (FH) and nitrogen (FI) feeds.]
Fig. 1. Gas-phase polyethylene reactor.

A simplified first-principles model (McAuley et al., 1995; McAuley and MacGregor, 1991) of a fluidized-bed polyethylene reactor can be derived under the following assumptions: (i) The gas and solid phases in the fluidized bed are well mixed; (ii) The temperature in the reactor is uniform and perfectly controlled at its set point; (iii) The time lag associated with the recycle flow through the heat exchanger and recycle lines can be neglected; (iv) The feed rates and valve positions can be changed instantaneously. The balance equations read:

Vg d[H2]/dt = FH/mwH - b[H2]                                              (1)
Vg d[I]/dt = FI/mwI - b[I]                                                (2)
Vg d[M]/dt = FM/mwM - b[M] - kp Y[M]                                      (3)
dY/dt = FY - Op Y/Bw - kd Y - kf [H2]Y + kh kf Bw [H2]Y / (kh Bw + kp mwM Y)   (4)
dBw/dt = kp [M] Y mwM - Op                                                (5)

where [H2], [I] and [M] are the molar concentrations of hydrogen, nitrogen (inert) and monomer, respectively. Y is the number of moles of active catalyst sites in the bed. FH, FI, FM and FY are the fresh feeds of hydrogen, nitrogen, ethylene and catalyst, respectively, b is the pressure-dependent volumetric bleed rate at the top of the reactor, and Vg the volume of the gas phase. Bw denotes the polymer mass in the reactor bed, and Op is the outflow rate of polymer product from the reactor. mwi are the molecular weights of the different species, i = {H, I, M}. kp is the reaction propagation constant, kh the ethylene site reactivation constant, kd and kf the deactivation constants for the catalyst and hydrogen sites, respectively. The pressure-dependent bleed rate is set via the bleed valve position, Vp:
P = ([M] + [H2] + [I]) R T      [atm]       (6)
b = Vp Cv √(P - Pv)              [m3/h]      (7)
where P is the pressure of the gas phase in the reactor, T the temperature, Cv the valve coefficient, and R the gas constant. The specification of polyolefin products is often characterized in terms of the melt index number, i.e. the amount of melted polymer that can be squeezed through a standard orifice in 10 minutes. It is an inverse measurement of viscosity that depends on the molecular-weight distribution, temperature, and the shear rate. The instantaneous and cumulative melt indices, M I i and MIc, are calculated as:
MIi = kT (k1 + k2 [H2]/[M])^3.5                                           (8)
d(MIc^(-1/3.5))/dt = (MIi^(-1/3.5) - MIc^(-1/3.5)) / τ                    (9)

where τ = Bw/(kp [M] Y mwM) is the solid-phase residence time and thus the time constant for the cumulative melt index MIc. kT is the chain transfer rate
constant, and k1 and k2 are melt index constants. The distinction between the instantaneous and cumulative melt indices is necessary since the polymer chains are produced very quickly compared to the residence time of the polymer in the reactor. The numerical values of the model parameters and the operating conditions used in this study are given in Table 1.

Parameter                 Value
Cv (atm^-1/2 kmol/h)      27
k1                        0.4
k2                        0.33
kd (h^-1)                 0
kf (m3/kmol h)            316.8
kh (m3/kmol h)            3600
kp (m3/kmol h)            306000
kT                        0.166
mwM (kg/kmol)             28.05
mwH (kg/kmol)             2.016
mwI (kg/kmol)             28.00
R (atm m3/kmol K)         0.0821
Pv (atm)                  17
T (K)                     360
Vg (m3)                   500
Table 1. Model parameters and operating conditions.

3. STATIC OPTIMIZATION OF GRADES

During steady-state production of polyethylene, the goal is to maximize the outflow rate of polymer while meeting operational and safety requirements. The optimization problem can be formulated mathematically as follows:

max_{Fi, Op, Vp}  Op
s.t.  r.h.s. of equations (1)-(5) = 0
      equations (6)-(9)
      MIc = MIc,ref
      Bw = Bw,ref
      Pmin ≤ P ≤ Pmax
      Fi,min ≤ Fi ≤ Fi,max,  i ∈ {H, I, M, Y}
      Vp,min ≤ Vp ≤ Vp,max
      Op,min ≤ Op ≤ Op,max                                   (10)

The optimal operating conditions for two grades (A and B) are shown in Table 2 along with the values of the upper and lower bounds used in Problem (10). Though, in principle, Vp can be manipulated between 0 and 1, industrially it is preferable to have a non-zero bleed at steady state to handle impurities. So, Vp,min = 0.5 is used here.

Clearly, increasing FM increases the production of polyethylene, and thus FM is maximum. The pressure is on its lower bound so as to minimize the waste of monomer through the bleed, which fixes FI. FY is maximum to increase productivity, FH is determined from the melt index requirement, Op is set to keep the polymer mass at its reference value, and the bleed rate Vp is minimum. Thus, the six decision variables are determined by six active constraints, an indication that the objective function exhibits no curvature.

                     Lower    Upper      A        B       Active
                     Bound    Bound                       Constraint
MIc,ref (g/10min)      -        -      0.009    0.09        -
Bw,ref (10^3 kg)       -        -      70       70          -
P (atm)               20       25      20       20         Pmin
FH (kg/h)              0       70      1.1      15         MIc,ref
FI (kg/h)              0      500      495      281        Pmin
FM (10^3 kg/h)         0       30      30       30         FM,max
FY (10^-3 kmol/h)      0       10      10       10         FY,max
Vp                    0.5       1      0.5      0.5        Vp,min
Op (10^3 kg/h)        21       39      29.86    29.84      Bw,ref
Table 2. Optimal operating conditions and active constraints for grades A and B, as well as upper and lower bounds used in Problem (10).

4. DYNAMIC OPTIMIZATION OF GRADE TRANSITIONS

4.1 Grade Belt
Before addressing the dynamic optimization problem, it is important to introduce the concept of a grade belt. In practice, it is common to specify a polyethylene grade as a range of acceptable values around the desired nominal value. This range corresponds to a grade belt for the cumulative melt index: (1 - γ)MIc,ref ≤ MIc ≤ (1 + γ)MIc,ref, with γ a specified parameter (here, γ = 0.2). The time at which MIc enters the desired grade belt is denoted by tbelt. Through the definition of the sign function s:

s = +1 for increasing MIc (transition A → B)
s = -1 for decreasing MIc (transition B → A)                 (11)

tbelt is defined as the time for which

MIc(tbelt) = (1 - sγ) MIc,ref                                (12)

4.2 Formulation of the Optimization Problem
The optimization is carried out under the following constraints:
- Reactor operation must satisfy safety and operational requirements. In contrast to the static optimization problem, the polymer mass is allowed to vary within bounds.
- The instantaneous melt index should not go past the exterior limit of the grade belt, s MIi(t) ≤ s(1 + sγ)MIc,ref.

The optimization of two different objective functions is considered:
(1) Minimization of the time needed to get to the grade belt, i.e. Jtime = tbelt.
(2) Minimization of the amount of off-spec material during grade transition, i.e. Joff = ∫0^tbelt Op dt.

It is common practice not to manipulate the monomer and inert feed rates. Thus, throughout the transition, FM is kept at its upper bound and FI at its final steady-state value. Among the other decision variables, two sets of inputs are considered:
(1) Ufeed = {FY, FH} - only the feed rates are manipulated,
(2) Uall = {FY, FH, Op, Vp} - the outflow and bleed rates are also adjusted.

The four problems ((Jtime, Ufeed), (Joff, Ufeed), (Jtime, Uall), (Joff, Uall)) are studied for both transitions A → B and B → A. The optimization problems can be stated mathematically as follows:

min_{u(t), tbelt}  Jtime or Joff
s.t.  equations (1)-(9)
      MIc(tbelt) = (1 - sγ) MIc,ref
      s MIi(t) ≤ s(1 + sγ) MIc,ref
      umin ≤ u(t) ≤ umax,  u ∈ Ufeed or Uall
      Pmin ≤ P(t) ≤ Pmax
      Bw,min ≤ Bw(t) ≤ Bw,max                                (13)
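To make the model side of Problems (10) and (13) concrete, the following short Python sketch codes the balances (1)-(5), the pressure and bleed relations (6)-(7) and the instantaneous melt index (8) as a right-hand-side function that can be integrated for constant inputs. It is only an illustration: the parameter values are those of Table 1 as extracted above, while the inputs and the initial state are hypothetical placeholders rather than the grade A or grade B operating points.

import numpy as np
from scipy.integrate import solve_ivp

# Parameters (Table 1)
Cv, k1, k2 = 27.0, 0.4, 0.33
kd, kf, kh, kp, kT = 0.0, 316.8, 3600.0, 306000.0, 0.166
mwM, mwH, mwI = 28.05, 2.016, 28.00
R, Pv, T, Vg = 0.0821, 17.0, 360.0, 500.0

def reactor_rhs(t, x, u):
    """x = [[H2], [I], [M], Y, Bw]; u = (FH, FI, FM, FY, Op, Vp)."""
    H2, I, M, Y, Bw = x
    FH, FI, FM, FY, Op, Vp = u
    P = (M + H2 + I) * R * T                        # eq (6), pressure [atm]
    b = Vp * Cv * np.sqrt(max(P - Pv, 0.0))         # eq (7), bleed rate
    dH2 = (FH / mwH - b * H2) / Vg                  # eq (1)
    dI = (FI / mwI - b * I) / Vg                    # eq (2)
    dM = (FM / mwM - b * M - kp * Y * M) / Vg       # eq (3)
    dY = (FY - Op * Y / Bw - kd * Y - kf * H2 * Y
          + kh * kf * Bw * H2 * Y / (kh * Bw + kp * mwM * Y))   # eq (4)
    dBw = kp * M * Y * mwM - Op                     # eq (5)
    return [dH2, dI, dM, dY, dBw]

def melt_index_inst(x):
    """Instantaneous melt index, eq (8)."""
    H2, _, M, _, _ = x
    return kT * (k1 + k2 * H2 / M) ** 3.5

# Hypothetical constant inputs and initial state (placeholders, not a real grade)
u = (10.0, 400.0, 25e3, 0.01, 25e3, 0.5)   # FH, FI, FM, FY, Op, Vp
x0 = [0.05, 0.36, 0.30, 0.1, 70e3]         # [H2], [I], [M], Y, Bw
sol = solve_ivp(reactor_rhs, (0.0, 5.0), x0, args=(u,), max_step=0.01)
print("final MIi:", melt_index_inst(sol.y[:, -1]))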
Parameter        Min    Max
Bw (10^3 kg)      56     84
Vp                 0      1
Table 3. Additional bounds used in the dynamic optimization.

The bounds are given in Tables 2 and 3. Only the problem of reaching the grade belt is detailed here. Once the system is inside the grade belt, though the material that is produced is acceptable, the solution is not necessarily optimal in terms of the production rate. Further transition to the optimal operating point via tracking is required, but this is not addressed in this paper.

4.3 Optimization Results
The numerical approach is described elsewhere (Gisnas, 2002) and is not given here.

Solution with Ufeed = {FY, FH}
For the input set Ufeed, the optimal solutions are identical for both objective functions Jtime and Joff. The structure of the solution for the two transitions is discussed next:
- Transition A → B: As shown in Figure 2, the optimal solution consists of two intervals. Initially, both the hydrogen and catalyst feed rates are maximum to increase [H2] and thus MIi as quickly as possible (8). Once MIi reaches the upper limit of the grade belt, FH is reduced to keep MIi on that limit until MIc is inside the grade belt.
- Transition B → A: The optimal solution in this case also consists of two intervals (Figure 3). Initially, the hydrogen and catalyst feed rates are minimum to decrease [H2] and the consumption of monomer, respectively, thereby allowing MIi to decrease rapidly. This, however, increases [M] and with it also the pressure. Once the pressure limit is reached, catalyst is added to promote the reaction, thereby decreasing [M] and keeping the pressure at its limit.

[Figure 2 near here: time profiles of the instantaneous and cumulative melt indices, the grade belt limits and the manipulated feed rates for the transition A → B.]
Fig. 2. Optimal profiles for the transition A → B with the input set Ufeed = {FY, FH}.

Solution with Uall = {FY, FH, Op, Vp}
For the input set Uall, the optimal solution is the same for both objective functions, except for the outflow rate Op in the transition A → B. The structure of the solution for the two transitions is discussed next:
[Figure 3 near here: time profiles of the instantaneous and cumulative melt indices and the manipulated feed rates for the transition B → A.]
Fig. 3. Optimal profiles for the transition B → A with the input set Ufeed = {FY, FH}.
- Transition A → B: With the objective function Jtime, the optimal policy has four intervals (Figure 4). Initially, FH and FY are maximum and the bleed valve is completely closed. At the same time, Op is maximum to reduce the polymer mass in the reactor as quickly as possible. Once Bw has reached its lower limit, Op is adjusted to keep it there. The third interval starts when the pressure reaches its upper limit. The bleed valve is then opened to about 60% to keep the pressure there. Eventually, MIi reaches the upper limit of the grade belt. A reduction of FH is then necessary to keep MIi on that limit. Also, the bleed valve opens completely. When Joff is optimized, the only difference is that the polymer mass is not reduced to its minimum, but to some intermediate compromise-seeking value. The reason for this will be discussed later.
- Transition B → A: As shown in Figure 5, only three intervals are needed. For all three, the bleed valve is fully open and FH is turned off to reduce [H2] as quickly as possible. FY is set to zero initially in order to reduce MIi. Furthermore, Op is maximum to reduce the polymer mass. Once Bw has reached its lower limit, Op is adjusted to keep it there. The third interval starts when the pressure reaches its upper limit. The pressure is kept there by increasing FY. In this case, the optimal transitions using Jtime and Joff are identical.

[Figure 4 near here: time profiles of the melt indices and of the four manipulated variables for the transition A → B.]
Fig. 4. Optimal profiles for the transition A → B with the input set Uall = {FY, FH, Op, Vp}.

[Figure 5 near here: time profiles of the melt indices and of the four manipulated variables for the transition B → A.]
Fig. 5. Optimal profiles for the transition B → A with the input set Uall = {FY, FH, Op, Vp}.

A compromise-seeking interval exists for the manipulated variable Op in the transition A → B when the objective function Joff is used. This can be explained as follows. Increasing Op has two opposing effects on Joff: (i) A smaller polymer mass results in a shorter solid-phase residence time and thus a quicker transition and less off-spec material; (ii) Removing off-spec material from the reactor to reduce Bw corresponds to a wasted opportunity. If that material had not been removed, it could have been mixed with better quality polymer to produce useful product. A compromise between these two effects is sought through some intermediate value of Bw and a corresponding value of Op in the A → B transition. In the B → A transition, however, since the reduction in transition time dominates the loss of material, this compromise does not exist.
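The two objectives compared below can be evaluated directly from any simulated trajectory. The fragment given here is a schematic helper, assuming MIc(t) and Op(t) are available on a time grid (for instance from an integration such as the sketch after Problem (13)); it finds the grade-belt entry time according to Eq. (12) and accumulates the off-spec material Joff = ∫0^tbelt Op dt by the trapezoidal rule. The test trajectory at the bottom is purely hypothetical.

import numpy as np

def transition_costs(t, MIc, Op, MIc_ref, s, gamma=0.2):
    """Return (t_belt, J_off) for a simulated transition.
    t, MIc, Op: 1-D arrays on the same time grid.
    s = +1 for A -> B (MIc increasing), -1 for B -> A (MIc decreasing)."""
    target = (1.0 - s * gamma) * MIc_ref           # grade-belt boundary, eq (12)
    inside = s * MIc >= s * target                 # samples at or past the boundary
    if not inside.any():
        return np.inf, np.trapz(Op, t)             # belt never reached
    k = int(np.argmax(inside))                     # first sample inside the belt
    t_belt = t[k]
    J_off = np.trapz(Op[: k + 1], t[: k + 1])      # off-spec material up to t_belt
    return t_belt, J_off

# Hypothetical illustration: a first-order rise of MIc from grade A to grade B
t = np.linspace(0.0, 10.0, 1001)
MIc = 0.009 + (0.09 - 0.009) * (1.0 - np.exp(-t / 2.0))
Op = np.full_like(t, 30e3)                         # constant outflow, kg/h
print(transition_costs(t, MIc, Op, MIc_ref=0.09, s=+1))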
4.4 Summary of the Results
The results for the transitions A → B and B → A are summarized in Tables 4 and 5, respectively. For all policies, the times to reach the grade belt and the amounts of off-spec material are compared.
Ufeed does a fairly good job for the A → B transition. Including Vp and Op as additional manipulated variables reduces the transition time tbelt by less than 10% and the off-spec material by only 2-3%. Also, a comparison of (Jtime, Uall) and (Joff, Uall) shows that the compromise-seeking value of Op improves the amount of off-spec material only marginally (by about 1%). On the other hand, optimization using Ufeed does not result in a satisfactory B → A transition. This is because there is no simple way of quickly eliminating the hydrogen when only flow rates are used. Thus, opening the bleed valve helps significantly. The improvement using Uall is therefore more than 40% in both transition time and amount of off-spec material.

Policy              tbelt     ∫0^tbelt Op dt
(Jtime, Ufeed)      6.53 h    194 10^3 kg
(Joff, Ufeed)       6.53 h    194 10^3 kg
(Jtime, Uall)       5.95 h    190 10^3 kg
(Joff, Uall)        6.05 h    188 10^3 kg
Table 4. Comparison of optimal transition policies for the transition A → B.
Policy              tbelt      ∫0^tbelt Op dt
(Jtime, Ufeed)      11.54 h    333 10^3 kg
(Joff, Ufeed)       11.54 h    333 10^3 kg
(Jtime, Uall)        6.81 h    200 10^3 kg
(Joff, Uall)         6.81 h    200 10^3 kg
Table 5. Comparison of optimal transition policies for the transition B → A.

5. CONCLUSIONS
Using a tendency model for a gas-phase fluidized-bed reactor, it was possible to show that the optimal steady-state operating conditions are determined by constraints related to the grade specification, the amount of polymer and the pressure in the reactor, and the valve sizes. Similarly, the optimal grade transition problem is completely determined by process constraints, except for one decision variable in one case (the production rate seeks a compromise when the melt index is increased while trying to minimize the amount of off-spec material). In all other cases, the optimal transition corresponds to a bang-bang type solution. The next step is to study how this qualitative knowledge about optimal grade transitions can
be used for on-line implementation. The effects of model uncertainty and disturbances need to be considered. If concentration measurements in the gas phase are available, a control scheme that tracks the active constraints using simple feedback control is a promising way to carry out grade transitions (Srinivasan et al., 2003).
ACKNOWLEDGEMENTS The authors wish to thank Dr. M. Hillestad, Cybernetica AS, Trondheim, Norway, and Dr. T. McKenna, ESCPE-Lyon, France, for their inputs regarding the problem formulation.
6. R E F E R E N C E S Choi, K. Y. and W. H. Ray (1985). The dynamic behaviour of fluidized bed reactors for solid catalysed gas-phase olefin polymerization. Chem. Eng. Sci. 40, 2261-2279. Debling, J. A., G. C. Han, F. Kuijpers, J. Verburg, J. Zacca and W. H. Ray (1994). Dynamic modeling of product grade transitions for olefin polymerization processes. AIChE J. 40(3), 506-520. Gisnas, A. (2002). Optimal grade transitions .for polyethylene reactors. Diploma project, Department of Engineering Cybernetics, Norwegian University of Science and Technology. Kiparissides, C. (1996). Polymerization reactor modeling: A review of recent developments and future directions. Chem. Eng. Sci. 51(10), 1637-1659. McAuley, K. B. and J. F. MacGregor (1991). On-line inference of polymer properties in an industrial polyethylene reactor. AIChE J. 37(6), 825-835. McAuley, K. B. and J. F. MacGregor (1992). Optimal grade transitions in a gas-phase polyethylene reactor. AIChE J. 38(10), 1564-1576. McAuley, K. B., D. A. Macdonald and J. F. MacGregor (1995). Effects of operating conditions on stability of gas-phase polyethylene reactors. AIChE J. 41(4), 868-879. Srinivasan, B., D. Bonvin, E. Visser and S. Palanki (2003). Dynamic optimization of batch processes: II. Role of measurements in handling uncertainty. Comp. Chem. Eng. 27, 27-44. Takeda, M. and W. H. Ray (1999). Optimal grade transition strategies for multistage polyolefin reactors. AIChE J. 45(8), 1776-1793. Wang, Y., S. Seki, S. Ohyama, K. Akamatsu, M. Ogawa and M. Ohshima (2000). Optimal grade transition control for polymerization reactors. Comp. Chem. Eng. 24, 1555-1561.
Integrating pricing policies and risk management into scheduling of batch plants
G. Guillén b, M. Bagajewicz a, S. E. Sequeira b, R. Tona b, A. Espuña b and L. Puigjaner b
a University of Oklahoma, School of Chemical Engineering and Materials Science, 100 E. Boyd St., T-335, Norman, OK 73019, USA
b Universitat Politècnica de Catalunya, Chemical Engineering Department, ETSEIB, Diagonal 647, 08028 Barcelona, Spain
Keywords: scheduling, pricing policies, uncertainty management, risk management

Abstract
In this work a new strategy for integrating pricing decisions and market share information with the schedule of batch plants is introduced. Several parameters such as product demand and the relationship between prices and demand are considered uncertain, and a two-stage stochastic programming model is used. Financial risk is managed.

1. INTRODUCTION
Although most of the methodologies to obtain optimal schedules in batch plants use a variety of objective functions, they always use fixed prices as parameters, and therefore stochastic models can be constructed, even considering demand-price relationships. In this paper, a different approach is taken: prices are considered as decision variables instead of parameters, while demands are obtained as a function of prices. A motivating example is used to illustrate the traditional methodology used to perform pricing and scheduling, highlighting its drawbacks. The integrated pricing policy and scheduling model is then proposed and extended to consider uncertainty. Finally, means to manage financial risk are discussed and illustrated.

2. PRICING
To determine prices, production costs, legal constraints, and the empirical relation between demand and prices, among other parameters, are taken into account. While the objectives pursued by pricing are tied to enterprise goals in a variety of ways, for simplicity we will use benefit.
2.1 Single product pricing
Consider the benefit equation:

B = I - C                                                    (1)

A typical model for income is I = p·q, where the demand q is typically related to the price p by the following demand equation:

q(p) = k·p^(-e)                                              (2)
The parameters k and e are empirical. As a general rule, the elasticity of the demand e is positive (with the exception of Giffen goods). When this coefficient is lower than one, the income grows with the price, and when it is greater than one, the income diminishes with the price.
In general, e is also a function of the price p, but in practice, given the narrow ranges of prices used, this relationship is ignored. In addition, one can linearize the expression and use the following model:

q(p) = q0 - m·p                                              (3)

which is also used in this article. In addition, the classical model for the costs C is:

C(p) = CF + Cv·q(p) = CF + Cv·(q0 - m·p)                     (4)

Equating the first derivative of (1) to zero and solving, one obtains the optimal price p*:

p* = q0/(2m) + Cv/2                                          (5)
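As a quick numerical check of Eq. (5), the snippet below maximizes B(p) = p·q(p) - C(p) with the linear demand model (3) and the cost model (4) for an arbitrary, hypothetical set of coefficients, and compares the numerical optimum with the closed-form price p* = q0/(2m) + Cv/2.

import numpy as np

# Hypothetical coefficients for the linear demand and cost models
q0, m = 1000.0, 20.0        # q(p) = q0 - m*p
CF, Cv = 500.0, 12.0        # C(p) = CF + Cv*q(p)

def benefit(p):
    q = q0 - m * p
    return p * q - (CF + Cv * q)

p_grid = np.linspace(0.0, q0 / m, 100001)     # search over prices with q(p) >= 0
p_num = p_grid[np.argmax(benefit(p_grid))]
p_star = q0 / (2.0 * m) + Cv / 2.0            # closed-form optimum, eq (5)
print(p_num, p_star)                          # both approximately 31.0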
2.2 Simultaneous multiple product pricing
For the case of several products, one may determine the prices individually in a similar way. However, because the resources (machine and utilities availability, capacity, etc.) are finite, they should be included explicitly as constraints. The pricing model is:

max_{pj}  BT = Σj (Ij - Cj)                                  (6)

s.t.  Σj rij·qj(pj) ≤ Ri      ∀ i                            (7)
In addition, box constraints for prices and demands can be added. In constraint (7), rij is the portion of production resource i utilised by product j within the time horizon. In discrete or batch manufacturing environments, the variable cost coefficients Cv, the resource coefficients rij and even the total availability of each resource Ri depend on the way the production is organised. Thus, the solution of the problem requires an iterative procedure where the pricing model is run using coefficients obtained by running a scheduling model, or an integrated model where pricing and scheduling are solved simultaneously. We introduce the scheduling model, discuss the integrated model and solve an example showing the results of both procedures.
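A small numerical sketch of the multi-product pricing model (6)-(7) is given below. It maximizes the total benefit over the prices of two hypothetical products subject to a single shared resource constraint, reusing the linear demand model (3) for each product; all coefficients are invented for illustration and fixed costs are omitted since they do not affect the optimal prices.

import numpy as np
from scipy.optimize import minimize

# Hypothetical data for two products and one shared resource
q0 = np.array([1000.0, 800.0])     # demand intercepts
m = np.array([20.0, 10.0])         # demand slopes: q_j = q0_j - m_j * p_j
cv = np.array([12.0, 18.0])        # variable costs
r = np.array([1.0, 2.0])           # resource usage per unit of product j
R_avail = 800.0                    # total resource availability

def neg_benefit(p):
    q = q0 - m * p
    return -np.sum((p - cv) * q)   # maximize sum_j (I_j - C_j), fixed costs omitted

cons = [{"type": "ineq", "fun": lambda p: R_avail - np.sum(r * (q0 - m * p))}]  # eq (7)
bnds = [(cv_j, q0_j / m_j) for cv_j, q0_j, m_j in zip(cv, q0, m)]               # box constraints
res = minimize(neg_benefit, x0=np.array([45.0, 55.0]), bounds=bnds,
               constraints=cons, method="SLSQP")
print(res.x, -res.fun)             # constrained optimal prices and total benefit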
2.3 Scheduling Model
We consider a multiproduct batch plant that fabricates three different products in three stages. The batch plant operates under a zero-wait policy. The objective function consists of minimising the total cost associated with the fabrication of the amount of products, TSales_p, determined by the pricing model (25):

min  Σp Σt Q_pt·cv_p + Σt Σp α_p·AvInv_pt + Σ(l=1..L-1) Σp Σp' bin_lp·bin_(l+1)p'·PCW_pp'      (8)
s.t.:
Tfin_lj = Tini_lj + top_lj                     ∀ l, j         (9)
Tini_11 = 0                                                  (10)
Tini_lj ≥ Tfin_(l-1)j                          ∀ l, j        (11)
Tini_l(j+1) = Tfin_lj                          ∀ l, j        (12)
Tfin_lJ ≤ H                                    ∀ l           (13)
Σp bin_lp ≥ Σp bin_(l+1)p                      l = 1...L-1   (14)
Σp bin_lp ≤ 1                                  ∀ l           (15)
top_lj = Σp top_pj·bin_lp                      ∀ l, j        (16)
Q_pt = Σl bs_l·bin_lp·bin2_lt                  ∀ p, t        (17)
bs_l = Σp bs_p·bin_lp                          ∀ l           (18)
Tfin_lJ ≥ bin2_lt·T_(t-1)                      ∀ t, l        (19)
Σt bin2_lt = 1                                 ∀ l           (20)
Tfin_lJt = Tfin_lJ                             ∀ l           (21)
Sales_pt ≤ Demand_pt                           ∀ p, t        (22)
Sales_pt ≤ Q_pt                                ∀ p, t        (23)
Sales_pt ≤ Q_pt + InInv_p,t-1                  ∀ p, t ≥ 2    (24)
Σt Sales_pt ≥ TSales_p                         ∀ p           (25)
AvInv_pt = InInv_pt + Q_pt/2                   ∀ p, t        (26)
InInv_pt = InInv_p,t-1 + Q_p,t-1 - Sales_p,t-1   ∀ p, t      (27)
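To illustrate the sales/inventory part of the model, constraints (22)-(27), the sketch below builds a toy linear program with the PuLP modeling package for one product over three periods, with the production amounts Q fixed. The demand data, costs and production profile are hypothetical, and the binary batch-assignment constraints (14)-(21) are left out, so this is only a fragment of the full MILP.

from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpStatus

periods = [1, 2, 3]
Q = {1: 50.0, 2: 40.0, 3: 30.0}          # fixed production per period (hypothetical)
demand = {1: 30.0, 2: 60.0, 3: 45.0}     # hypothetical demand
price, inv_cost = 10.0, 1.0

prob = LpProblem("sales_inventory_fragment", LpMaximize)
sales = {t: LpVariable(f"sales_{t}", lowBound=0) for t in periods}
in_inv = {t: LpVariable(f"in_inv_{t}", lowBound=0) for t in periods}
av_inv = {t: LpVariable(f"av_inv_{t}", lowBound=0) for t in periods}

prob += lpSum(price * sales[t] - inv_cost * av_inv[t] for t in periods)  # objective
prob += in_inv[1] == 0                                    # empty initial inventory
for t in periods:
    prob += sales[t] <= demand[t]                         # eq (22)
    prob += av_inv[t] == in_inv[t] + Q[t] / 2.0           # eq (26)
prob += sales[1] <= Q[1]                                  # eq (23) for the first period
for t in periods[1:]:
    prob += sales[t] <= Q[t] + in_inv[t - 1]              # eq (24)
    prob += in_inv[t] == in_inv[t - 1] + Q[t - 1] - sales[t - 1]   # eq (27)

prob.solve()
print(LpStatus[prob.status], {t: sales[t].value() for t in periods})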
2.4 Sequential iterative approach
A hierarchical iterative procedure consists of assuming certain values of Cv, rij and Ri and determining a set of optimal prices. Next, the scheduling model is solved and a new set of parameters is obtained. This sequential approach may not converge. An example was solved (see below) and, after a few iterations, the solution oscillated between two alternatives, even when damping factors were used. The best of these two alternatives was chosen for comparison.
2.5 Integrated Model
The objective function consists of maximising the profit. The sales of each product must be lower than the market demand (33). The prices are taken as decision variables (29). Constraint (15) of the scheduling model must be removed. The new constraints added to the model are shown below.

max profit                                                   (28)

profit = Σp Σt Sales_pt·price_p - Σp Σt Q_pt·cv_p - Σp Σt α_p·AvInv_pt
         - Σ(l=1..L-1) Σp Σp' bin_lp·bin_(l+1)p'·PCW_pp' - Σp Σt ExtSales_pt·Extprice_p

s.t.:
price_p = Σr Discrete_pr·Price_pr              ∀ p           (29)
Demand_pt = Σr Discrete_pr·Demand_pr           ∀ p, t        (30)
Demand_pr = A_p - B_p·Price_pr                 ∀ p, r        (31)
Σr Discrete_pr = 1                             ∀ p           (32)
Sales_pt + ExtSales_pt ≤ Demand_pt             ∀ p, t        (33)
Sales_pt ≥ 0                                   ∀ p, t        (34)
ExtSales_pt ≥ 0                                ∀ p, t        (35)
2.6 Illustration
An example of a plant that fabricates different products was solved. Table 1 and Figures 1 and 2 compare the results and depict the corresponding Gantt charts. The number of batches produced using the integrated approach is higher, hence the difference in profit. In other words, the solution obtained using the iterative method does not utilise the full capacity of the plant.

Table 1. Objective function
              Iterative Method    Integrated Model
Profit (€)         6.782               11.643
Figure 1. Gantt Chart using the iterative method.
Figure 2. Gantt Chart using the proposed model.
2.7 Integrated Scheduling-Pricing model under Uncertainty
In this model all variables associated with the schedule and the ones associated with the pricing policy of a product are considered as first-stage decisions. Sales of products are taken as second-stage variables. The uncertain parameters are the values of the coefficients that relate the price with the market share. The objective function is now the expected net profit. For the case of the example the expected profit is 10411 €. The Gantt chart is given in Figure 3. The resulting schedule implies a mixed product campaign and not a single product campaign as occurred in the deterministic case. This schedule seems to be more robust.

2.8 Financial Risk
The financial risk associated with a design project under uncertainty is defined as the probability of not meeting a certain target profit (maximisation) or cost (minimisation) level referred to as Ω. Downside risk is used to manipulate risk in the way done by Barbaro and
Bagajewicz [5-6]. In order to show the capability of the proposed formulation of risk management, the problem is modified so as to reduce the risk associated with low targets. The Gantt chart corresponding to one solution with lower risk and consequently lower expected profit (9809 €) is shown in Figure 4. The risk curves of both the stochastic solution (SP) and the one with lower risk are shown in Figure 5. The risk at low expectations (profits under a target of 6500 €) has been reduced to zero.
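Because financial risk is defined here as the probability of not meeting a target profit Ω, both the risk curve and the downside risk used by Barbaro and Bagajewicz can be estimated directly from the scenario profits of a two-stage solution. The snippet below does this for a hypothetical set of equiprobable scenario profits; it only illustrates the definitions and does not reproduce the numbers in Figure 5.

import numpy as np

def risk(profits, probs, omega):
    """Financial risk: probability that profit is below the target omega."""
    profits, probs = np.asarray(profits), np.asarray(probs)
    return float(np.sum(probs[profits < omega]))

def downside_risk(profits, probs, omega):
    """Downside risk: expected shortfall below the target omega."""
    profits, probs = np.asarray(profits), np.asarray(probs)
    return float(np.sum(probs * np.maximum(omega - profits, 0.0)))

# Hypothetical equiprobable scenario profits (in euros) for one schedule
profits = [5800.0, 7400.0, 9200.0, 11500.0, 13100.0]
probs = [0.2] * 5

for omega in (6500.0, 10000.0):
    print(omega, risk(profits, probs, omega), downside_risk(profits, probs, omega))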
Figure 3. Gantt Chart for the stochastic case
Figure 4. Gantt Chart for the new solution
[Figure 5 near here: cumulative risk curves (risk in % versus profit in €, roughly 5200 to 13200) for the stochastic solution (SP) and the reduced-risk solution.]
Figure 5. Risk Curves
3. CONCLUSIONS
In this work a new strategy for integrating pricing decisions and market share information with the schedule of batch plants has been introduced. The starting point is the modelling and forecasting of the relation between product prices and market share. The consequences of incorporating pricing as a decision variable, as opposed to earlier models, are promising. Although there is a significant simplification in the case study used for illustration purposes, it is clearly shown how the integration provides better solutions and avoids oscillatory or non-feasible decisions. In addition, since the market behaviour cannot be perfectly forecasted, several parameters of this scheduling problem, such as product demand, are considered uncertain. The risk management approach used allows this situation, which is commonly found in practice, to be handled properly. The proposed approach is expected to offer better results as the complexity of the production process grows, because the simplifications of the upper level in a sequential approach are then more likely to fail.
4. NOMENCLATURE
t: time periods; p: products; l: batches; j: stages; s: scenarios; r: discrete values
cv_p: unitary variable cost of p
PCW_pp': cost of the wastes generated when changing from p to p'
α_p: unitary inventory cost of p
TSales_p: total sales of p
Sales_pt: sales of p in t
ExtSales_pt: external sales of p in t
Discrete_pr: binary variable (1 if discrete price r is selected for p, 0 otherwise)
Price_pr: discrete price r of p
price_p: price of p
Extprice_p: external price of p
Demand_pr: discrete demand r of p
Demand_pt: demand of p in t
bs_p: batch size of p
bs_l: batch size of batch l
Tini_lj, Tfin_lj: initial and final times of stage j involved in the fabrication of batch l
Tfin_lJt: final time of stage j involved in the fabrication of batch l in period t
top_pj: processing time of p in stage j
H: time horizon
Q_pt: amount of p to be fabricated in t
AvInv_pt: average inventory of p in t
InInv_pt: initial inventory of p in t
bin_lp: binary variable (1 if batch l belongs to p, 0 otherwise)
bin2_lt: binary variable (1 if batch l is finished in t, 0 otherwise)
5. REFERENCES [1 ] Dorward, N. (1987). The price decision: economic theory and business practice. Harper & Row, publishers, London. [2] Hirshleifer, J. (1976). Price theory and applications, Prentice-Hall, Inc., New Jersey. [3] Mas-Collel, A., Whinston, M., and Green, J. (1995). Microeconomics Theory, University Press, Oxford. [4] Eppen G. D., Martin R. K., Scharge L., "A Scenario Approach to Capacity Planning", Operation Research, 37:517-527, 1989. [5] Barbaro A. F. and M. Bagajewicz (2002). Managing Financial Risk in Planning under Uncertainty, Part I: Theory, AIChE J., Submitted. [6] Barbaro A. F. and M. Bagajewicz (2002). Managing Financial Risk in Planning under Uncertainty, Part II: Application, AIChE J., Submitted. ACKNOWLEDGEMENTS Financial support received from European Community (project GROWTH Contract G 1RDTCT-2000-00318) is fully appreciated. Support from the Ministry of Education of Spain for the sabbatical stay of Dr. Bagajewicz is also acknowledged.
Production and Distribution of Polyvinyl Chloride Considering Demands of Warehouses
Soon-Ki Heo a, Hong-Rok Son a, Kyu-Hwang Lee b, Ho-Kyung Lee c and In-Beum Lee a, Eui-Soo Leed aDepartment of Chemical Engineering, Pohang University of Science & Technology, San 31 Nam-Gu Pohang Kyungbuk, 790-784, Korea bLG Chemical Ltd. Research Park, Yeosu, Jeonnam, 555-280, Korea r
Chemical Ltd. Research Park, Yu Seong, Daejeon, 305-380, Korea
dDepartment of Chemical Engineering, Dongguk University, Pildong, Jung-Gu, Seoul, Korea
Abstract: This paper deals with the integrated production and distribution of a chemical company that operates PVC plants with EDC and VCM processes. The problem is composed of two or more multi-site VCM and PVC plants, and several warehouses. A case study follows: there are five domestic PVC plants and two abroad. More than four types of PVC are produced in seven different plants and are supplied to fifteen warehouses. Eleven warehouses are domestic and the others are abroad. The objective is defined as maximizing the net profit of the chemical company, i.e. the total expected selling price minus the production costs, the expected shipping costs and the import costs of insufficient EDC and VCM. A mixed-integer linear programming (MILP) model is presented to obtain an optimal production and distribution of PVC. The effectiveness of its application is illustrated with a case study. The problem is solved with CPLEX 7.0.0.
Key Words: production, distribution, PVC, transportation, warehouse, MILP, CPLEX

1. INTRODUCTION
In recent years, physical distribution systems have attracted interest in the industrial world. DRP (Distribution Resource Planning) is the solution to the requirements of such systems, developed around inventory control, transportation and the positioning of logistic centers. This concept expanded to SCM (Supply Chain Management), which includes global supply as well as transportation between the main body and branch companies. There are several papers on SCM problems. Tsiakis et al. [1] solved a mathematically somewhat large problem which considers suppliers, plants, warehouses, distribution centers and customer zones for some products in Europe.
[Figure 1 near here: supply chain network with supplier, plant and warehouse nodes.]
Fig. 1. Supply chain network

However, from the viewpoint of a single company, this form, which includes several transportation steps, requires high cost. So a company usually manages only the three sectors of supplier, plant and warehouse, as shown in Fig. 1. Therefore, this paper applies a mathematical model to a chemical company in Korea considering its business network and situation. The following sections consist of the production of PVC, an introduction to the business network, the problem definition, the mathematical model, an example and conclusions.

2. PRODUCTION OF PVC
Polyvinyl chloride (PVC) is one of the largest commodity chemicals and a general-purpose plastic material consumed in great quantities around the world. PVC became a major plastic after World War II, replacing flexible products such as elastomers, leather, and so on. Later, application of PVC to rigid products such as wood moldings and pipes was introduced with the development of formulating and processing technology. Nowadays, PVC is generally known to have advantages of low ingredient cost, wide processing versatility, high decorative potential, and so forth. PVC is used to manufacture various types of product, not only highly rigid but also very flexible ones. Usually, chemical companies operate plants for polyvinyl chloride, vinyl chloride monomer (VCM) and ethylene dichloride (EDC) and purchase EDC and VCM from other chemical companies in order to maintain the inventories of material. The general operating steps, shown in Fig. 2, are used in the ethylene-based process for the production of vinyl chloride. Ethylene is chlorinated to EDC by the processes of oxychlorination and direct chlorination. EDC is purified in the EDC purification section and is fed to the cracker, which produces vinyl chloride monomer. Vinyl chloride from the cracker is purified in the vinyl
chloride purification section. The recovered hydrogen chloride and EDC are recycled. The purified VCM is transferred to the PVC plant. PVC is primarily classified into straight and paste PVC, each with normal and special types, as shown in Fig. 3. Each synthetic resin has several grades for various uses.
[Figure 2 near here: manufacturing route Crude oil → Naphtha → Ethylene, with NaCl/Cl2 feeding EDC production, followed by VCM and PVC.]
Fig. 2. Manufacturing process of PVC

[Figure 3 near here: classification of PVC into Straight PVC (normal: HOMO-PVC; special: CO-polymer, blend resin) and Paste PVC (normal: HOMO-PVC; special: CO-polymer).]
Fig. 3. Types of PVC product

3. BUSINESS NETWORK
Fig. 4. Business network for PVC (circle: warehouse, rectangle: plant)

There are many warehouses of this company in many countries in the world. Warehouses
only in Korea and China are treated in this paper. We assume that the demand of each warehouse is deterministic and that all units at the warehouses are to be sold. In Fig. 4, there are three suppliers of ethylene, which is the raw material of PVC, four PVC plants and eleven warehouses in Korea. In China there are two suppliers, two plants and four warehouses. Onsan and Yeochun each have both a supplier and a plant, while Ulsan has all three facilities: supplier, plant and warehouse. Tianjin and Ningbo in China also have both suppliers and plants.
4. PROBLEM DEFINITION
There are several suppliers, plants and warehouses, as introduced above. The amount of material from the ethylene suppliers is unlimited. The capacities are given for all plants. Quantity discounts are applied to the prices of the four PVC types, as shown in Fig. 6. Transportation costs of supplier-plant and plant-warehouse, inventory costs and production costs in plants are given. The demands of each warehouse over 12 months are given for the four types, and these demands must be satisfied. Under these conditions, we must determine how much to produce and to transport, and how much to store at each plant as inventory to meet the demand of later months. The demand of Shanghai, China, one of the 15 warehouses, is drawn in Fig. 5. We omit the other parameters due to the lack of space.
Fig. 5. Demand at Shanghai with respect to 12 months
5. MATHEMATICAL MODEL
The mass balance is similar to that of Lee et al. [2] and a published book [3]. It includes limitations on the plants' capacities. The other equations are omitted due to limited space. The quantity discount model proposed by Tsiakis et al. [1] is applied to this problem.
Z_r = 1 if Q ∈ [Q_(r-1), Q_r], and 0 otherwise                                  (1)
Q_(r-1)·Z_r ≤ Q̂_r ≤ Q_r·Z_r                                                     (2)
Σ(r=1..NR) Z_r = 1                                                              (3)
Q = Σ(r=1..NR) Q̂_r                                                              (4)
C = Σ(r=1..NR) [ C_(r-1)·Z_r + (C_r - C_(r-1))/(Q_r - Q_(r-1)) · (Q̂_r - Q_(r-1)·Z_r) ]   (5)

[Figure 6 near here: piecewise-linear quantity discount model relating cost C to quantity Q over the intervals [Q_(r-1), Q_r].]
Fig. 6. Quantity discount model

[Figure 7 near here: monthly production and transportation results for the Ulsan plant.]
Fig. 7. The case of Ulsan Plant
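The piecewise pricing logic of Eqs. (1)-(5) can be checked with a few lines of code: for a given purchase quantity Q, exactly one interval r is active (Z_r = 1) and the cost is interpolated linearly between that interval's end points. The function below reproduces this calculation directly, without the binary variables needed in the MILP; the breakpoints and costs are made-up numbers for illustration.

def discounted_cost(Q, Q_break, C_break):
    """Piecewise-linear cost of buying quantity Q under quantity discounts.
    Q_break[0] = 0; (Q_break[r], C_break[r]) are the interval end points, r = 1..NR."""
    if not 0.0 <= Q <= Q_break[-1]:
        raise ValueError("Q outside the modelled range")
    for r in range(1, len(Q_break)):                 # find the active interval (Z_r = 1)
        if Q <= Q_break[r]:
            slope = (C_break[r] - C_break[r - 1]) / (Q_break[r] - Q_break[r - 1])
            return C_break[r - 1] + slope * (Q - Q_break[r - 1])   # eq (5), active r only
    return C_break[-1]

# Hypothetical breakpoints (tons) and costs: the unit price drops with volume
Q_break = [0.0, 100.0, 500.0, 1000.0]
C_break = [0.0, 1000.0, 4200.0, 7200.0]    # 10, 8 and 6 per ton in the three intervals
for Q in (50.0, 300.0, 800.0):
    print(Q, discounted_cost(Q, Q_break, C_break))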
6. EXAMPLE AND RESULT
The defined mathematical model is solved by a commercial optimization solver, CPLEX 7.0.0, on a Pentium III 1000 MHz PC. The case of the Ulsan plant, one of the seven plants, is presented in Fig. 7, including production and transportation with respect to each month. The model includes 16,212 continuous variables, 10,080 binary variables and 26,424 constraints. The CPU time is 7.3 s and the objective value is about 4.1e+9.
7. CONCLUSIONS
In this research a previously published mathematical model is used, which includes quantity discount prices with binary variables. In particular, it is applied to an SCM problem based on the business situation of a chemical company in Korea.
NOMENCLATURE
C: cost as a variable
Cr: cost as a parameter at discrete point r
NR: number of discrete points r
Q: quantity as a variable
Qr: quantity as a parameter at discrete point r
Zr: binary variable representing discrete point r
ACKNOWLEDGMENT This work was partially supported by grant 1999-1-307-002-3 from the Korea Science & Engineering Foundation and partially supported by Brain Korea 21 Project.
REFERENCES
[1] Tsiakis, P., Shah, N., Pantelides, C. C., Design of Multi-echelon Supply Chain Networks under Demand Uncertainty, Ind. Eng. Chem. Res. 40, 3585 (2001) [2] Lee, K., Heo, S., Lee, H., Lee, I., Scheduling of Single-Stage and Continuous Processes on Parallel Lines with Intermediate Due Dates, Ind. Eng. Chem. Res. 41, 58 (2002) [3] Elsayed, E.A., Boucher, T.O., Analysis and Control of Production Systems, Prentice Hall, New Jersey (1994)
Decision making in the methanol production chain A screening tool for exploring alternative production chains Paulien M. Herder and Rob M. Stikkelman
Delft University of Technology, PO Box 5015, 2600 GA Delft, The Netherlands Abstract We have executed a study that explored a variety of carbon-based feedstocks such as natural gas, oil, coal, organic waste streams and renewable resources (biomass) for the production of methanol, in order to find and demonstrate the potential of new, sustainable methanol production routes. We modeled the methanol production chain as an assembly of various independent operations, and we have automated the process of assembling and adjusting the methanol chains. We screened and compared various methanol production chains with respect to key performance indicators and their sensitivity towards various future environmental and economic policies. Our work shows that straightforward modeling of processing chains can contribute significantly to decision making in the methanol chain. Keywords: production chain modeling, sustainable fuels, methanol, biomass
1. INTRODUCTION The worldwide transportation sector depends on oil for 98% of its operations. Oil-based fuels contribute considerably to urban air pollution in the form of emissions of CO2, ground-level ozone precursors (NOx), carbon monoxide (CO) and particulate matter (PM), see Table 1. The application of these fuels in conventional combustion engines is also a source of noise pollution. A promising option for improving the sustainability of the road transportation sector is the application of methanol in conventional combustion engines and in fuel cells, as methanol would increase the energy efficiency and it would decrease noise and emission levels. A comparison of emissions from using various fuels in different power plants (internal combustion engines and fuel cells) was taken from [ 1] and it clearly shows the advantages of methanol over diesel or natural gas. The viability of implementing a methanol fuel cell into cars has been demonstrated, among others, by DaimlerChrysler [2], who has developed a series of demonstration models (NECAR). Methanol may be preferred over hydrogen for use in cars, for methanol is easy to handle (liquid) at existing pumping stations. In addition, methanol can be introduced gradually into the fuel distribution system, as conventional internal combustion engines would also be able to run on methanol. This may solve the chicken-and-egg problem of not having enough cars running on methanol fuel cells to justify the implementation of a large-scale methanol distribution system, and not having a large-scale methanol distribution system may hamper the widespread introduction of methanol fuel cells. When methanol is to be applied broadly in the transportation sector, the methanol demand will increase far beyond the current world production levels of approximately 30 million metric tonnes per year for the downstream production of fuel additives, such as
Table 1
Results of emission tests (grams/brake horsepower-hour) (Source: [1])
Fuel        Power Plant                     Hydrocarbons   CO      NOx    Particulate Matter
-           1998 EPA Transit Bus Standards  1.30           15.50   4      0.05
Diesel      DD Series 50                    0.1            0.9     4.7    0.04
CNG         DD Series 50                    0.8            2.6     1.9    0.03
Diesel      Cummins C8.3                    0.2            0.5     4.9    0.06
CNG         Cummins C8.3                    0.1            1       2.6    0.01
Methanol    94 Fuji Fuel Cell               0.09           2.87    0.03   0.01
Methanol    98 IFC Fuel Cell                <0.01          <0.02   0.0    0.0
Methanol    99 XCELLSiS Fuel Cell           <0.02          0.0     0.0    0.0
MTBE, and adhesives and solvents, such as formaldehyde and TAME. Some of the largest plants, i.e. over 2 million tonnes/year, are located in Chile and New Zealand. In order to address this increasing methanol demand, we set out to find and demonstrate the potential of new, sustainable methanol production routes. We have executed a study in which we explored a variety of carbon-based feedstocks such as natural gas, oil, coal, organic waste streams and renewable resources (biomass) for the production of methanol. The goal of this study for the International Methanol Producers and Consumers Association (IMPCA) was to quantitatively model and evaluate various methanol production chains with respect to their economic and environmental performance.

2. MODELING THE METHANOL PRODUCTION CHAIN
2.1 Theoretical background In order to model the methanol production chain, we explored the literature for theoretical approaches to modeling production chains. First, we examined the supply chain modeling body of knowledge. This area mainly deals with questions concerning the pricing and logistics of discrete products in the manufacturing industry, and has its roots in the industrial engineering area. It has been argued [3] that supply chain management is a customer-focused business strategy aimed at improving and optimizing (a) the production and distribution of products and (b) the information exchange among the business entities. The first aim in particular would be useful in our efforts to model the methanol chain. The chemical processing industry, however, has been lagging behind in modeling and optimizing their supply chains using the specific supply chain management body of knowledge, as the processing industry is claimed to have very distinct features [4], such as longer chains, less flexible plants and fewer stakeholders. In addition, the focus of our project was not on the logistics of the chains, but rather on the production and conversion steps, and ignoring the time dependent logistic questions. Following the authors of Ref. [4] we concluded that supply chain modeling could not be simply adopted by the processing industry, and we discarded it for use in this project. We, therefore, turned to a more conventional systems engineering approach to modeling the methanol supply chain, using mass and energy balances for the chain and its subsystems. This approach has been adopted from earlier work [5], which was reported to be a valuable way of modeling and analyzing production networks and chains. The authors used existing
483 process simulators, using mass and energy balance calculations to build and analyze chains. In our work, however, we decided to develop a dedicated tool, based on spreadsheets, because process simulators would not be available for the final users of the tool. Finally, from the Life Cycle Assessment (LCA) literature we were able to extract some important issues that were to be addressed when modeling production (and consumption) chains [6]: system boundaries, data quality and uncertainty, and indicators and allocation. The following section describes how we addressed each of these issues in modeling the methanol production chain. 2.2 Considerations in modeling the methanol chain 2.2.1. System boundaries We modeled the methanol production chain as an assembly of various independent operations (building blocks), covering the entire production chain from feedstock production and harvesting, via various transportation steps, to the end-point being local methanol storage in tanks. We included independent steps such as tree harvesting, sludge production, transport, gasification, methanol synthesis, and storage. We excluded those processing steps that are not directly part of the methanol production chain, such as the production of fertilizers for biomass production or the production of transportation fuel for ships, trucks and trains. Table 2 gives an overview of the independent steps that are currently available in our methanol chain model. It needs to be pointed out that the modular design of the model and the tool, allows new steps to be added at any time.
2.2.2. Data quality and uncertainty We have tumed to various sources of literature to obtain the data for our building blocks. Since we aimed at developing a screening tool, as opposed to a tool that would be correct to the minute detail, we set out to look for general data and did not venture into getting the exact data for each and every variation of the chain steps. For the methanol plant, for example, of which many different technologies exist in the world, we chose to collect data concerning the ICI-LP process as this one is widely used in practice [7]. To ensure that realistic results would be obtained, the data collected and the steady state behavior of the building blocks have been validated through reviewing the individual building blocks by experts from the appropriate fields (e.g., methanol production, transportation, gas winning, etc.) and through modeling and reviewing an existing methanol supply chain by a group of IMPCA experts. 2.2.3. Indicators and allocation A number of economic and environmental performance indicators are calculated for each step and for the entire production chain, based upon the mass and energy balance and on the
Table 2
Independent production and transportation steps for the methanol chain
Raw materials: Coal; Flared gas; Natural gas from well; Bagasse biomass waste; Waste water sludge; Eucalyptus biomass
Preliminary operation: Chipping; Drying
Transport: Trucks; Trains; Barge; Tanker; Pipeline
Conversion: Gasifier; Methanol plant; Synthesis loop
Storage: Biomass storage; Methanol tank
parameters. The performance indicators relevant in the context of this study are:
- Costs/Price: cost of executing the particular step in the chain ($/ton methanol)
- Yield: mass efficiency based upon carbon (mass/mass %)
- Efficiency: energy efficiency (energy/energy %), incl. utilities
- Employment: number of jobs needed for executing the particular step (fte)
- CO2 emissions (ton/ton methanol)
- SO2 emissions (ton/ton methanol)
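The building-block idea described above can be prototyped in a few lines: each step is an input-output block with a carbon yield, an energy efficiency, a cost and emission contributions, and a chain simply composes the blocks and accumulates the indicators per ton of methanol. The numbers below are invented placeholders, not the validated data behind the IMPCA tool.

from dataclasses import dataclass

@dataclass
class Block:
    name: str
    carbon_yield: float      # carbon leaving / carbon entering (mass/mass)
    energy_eff: float        # energy leaving / energy entering, incl. utilities
    cost: float              # $/ton methanol contributed by this step
    co2: float               # ton CO2 / ton methanol
    so2: float               # ton SO2 / ton methanol
    jobs: float              # full-time jobs attributable to this step

def chain_indicators(blocks):
    """Cumulative performance indicators for a chain of blocks."""
    yield_tot, eff_tot = 1.0, 1.0
    cost = co2 = so2 = jobs = 0.0
    for b in blocks:
        yield_tot *= b.carbon_yield
        eff_tot *= b.energy_eff
        cost += b.cost
        co2 += b.co2
        so2 += b.so2
        jobs += b.jobs
    return {"cost": cost, "yield": yield_tot, "efficiency": eff_tot,
            "CO2": co2, "SO2": so2, "employment": jobs}

# Hypothetical natural-gas-based chain (placeholder data only)
chain = [Block("gas winning", 0.99, 0.97, 25.0, 0.02, 0.000, 3.0),
         Block("methanol plant", 0.80, 0.67, 60.0, 0.50, 0.001, 20.0),
         Block("tanker transport", 1.00, 0.98, 30.0, 0.06, 0.002, 5.0),
         Block("storage", 1.00, 1.00, 8.0, 0.00, 0.000, 2.0)]
print(chain_indicators(chain))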
3. METHANOL CHAIN ANALYSIS TOOL We have automated the process of assembling chains from individual processing steps, varying the steps' parameters and calculating the performance indicators by developing a flexible software tool. Mass and energy balances, reflecting a blackbox input-output model, and programmed into a spreadsheet underlying the user interface of the programme, form the basis for each building block of the methanol chain. By introducing manually adjustable parameters to each building block, e.g., natural gas composition, plant capacity, transportation distance or pay and tax rates, we allow users to tune the building blocks to the specifics of any methanol chain. The user can also easily add and store new building blocks. The construction and analysis of the various methanol chains is assisted by an easy to use 'click-and-drag' interface in which the user can select standard chain building blocks from the library, adjust its parameters to the specific chain at hand, and go straight to the chain analysis. The system warns the user in case there are any unspecified parameters. Figure 1 shows the tool's user interface. The upper part of the screen contains the methanol chain constructed, and the left hand side shows the parameters for each particular step. The graph shows the cumulative values of the performance indicators for the whole chain. The values of the performance indicators can also be shown numerically as total values and as values per ton of methanol for the whole chain or for each particular step. 4. RESULTS We have used the tool for screening and comparing various methanol production chains with respect to their key performance indicators, and with respect to the sensitivity of chains towards various future environmental and economic policies, e.g., CO2 taxation. As per
Figure 1. The Methanol Chain Analysis tool screen.
Table 3
Performance of three methanol chains (normalised)
Performance indicator   Chile Chain   Groningen Chain   Eucalyptus Chain
Cost                        100             97               132
Yield                       100            100                58
Efficiency                  100            111                73
Employment                  100             32               776
CO2 emissions               100             58              -267
SO2 emissions               100             53                35
illustration, Table 3 shows the performance of three different methanol production chains. The first chain, labeled 'Chile Chain,' describes an existing chain of producing methanol from natural gas in Chile, by means of the ICI-LP process, and transporting the methanol to the Rotterdam (The Netherlands) harbor by means of the 'Millennium Explorer,' the largest methanol tanker in the world (100,000 dwt). The methanol is finally stored in tanks. The second chain, labeled 'Groningen Chain', concerns the production of methanol from natural gas in the north of the Netherlands. The chain comprises the winning of natural gas, the production of methanol by means of the ICI-LP process and storage of methanol. Finally, the third chain demonstrated in this paper is labeled the 'Eucalyptus Chain.' This chain represents the growing and harvesting of eucalyptus in Australia, storage, transportation and preparation (chipping) of the biomass, gasification of the biomass into syngas, conversion of the syngas into methanol, and transportation to Singapore, where it is stored in tanks. The results in Table 3 show that the performances of the two natural gas based chains are comparable with respect to the cost of the methanol produced. In taking a closer look at the cost structures of the two natural gas based chains, it became clear that for the Chile Chain the main cost component is the transportation of the methanol (22% of total cost), whereas the main cost component for the Groningen Chain is the cost of natural gas (46% of total cost). Despite the two largely different cost structures, the total costs of the two "kinds" of methanol are competitive. Table 4 shows the cost structure of the three sample chains. When we analyzed the performance of the Eucalyptus Chain, we observed that the total costs for the biomass-based chain are 30% higher. The Eucalyptus Chain performs better than the other two chains with respect to the CO2 emissions and job creation. The negative CO2 emissions can be explained by the fact that we modeled CO2 consumption into the growing of eucalyptus, in order to reflect that using biomass gives a net result of zero CO2 emission to the environment. We also varied the cost of CO2 emissions; in so doing we simulated possible future CO2 taxation policies. It was to be expected that the performance of methanol chains based on biomass as their feedstock would economically outperform the conventional feedstock based chains when the level of taxation would be sufficiently high. We found that when the CO2 emission per ton is roughly priced at 20% of the methanol cost, the biomass based chains become competitive over the chains based upon non-renewable resources.

Table 4
Relative cost structure of three methanol chains
Cost item [% of total]     Chile Chain   Groningen Chain   Eucalyptus Chain
Feed stock                      9              46                24
Transport                      22               1                13
Storage                         8               3                14
Pretreatment feedstock          0               0                 5
Conversion                     60              50                44
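The statement that a CO2 price of roughly 20% of the methanol cost makes the biomass chain competitive can be checked with simple break-even arithmetic: the tax level at which the two chain costs are equal is the cost gap divided by the difference in CO2 emissions. The absolute figures used below are hypothetical, scaled only to be consistent with the normalized entries of Table 3, and are meant purely as an illustration of the calculation.

# Hypothetical absolute figures (per ton of methanol) for two chains
cost_gas, co2_gas = 150.0, 0.44      # $/ton methanol and ton CO2/ton methanol (assumed)
cost_bio, co2_bio = 198.0, -1.17     # 132% of the cost and -267% of the CO2 of the gas chain

# Break-even CO2 price p_co2: cost_gas + p_co2*co2_gas = cost_bio + p_co2*co2_bio
p_co2 = (cost_bio - cost_gas) / (co2_gas - co2_bio)
print(p_co2, p_co2 / cost_gas)       # about 30 $/ton CO2, i.e. roughly 20% of the methanol cost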
5. DISCUSSION AND CONCLUSIONS
The methanol chain analysis tool proved to be very useful in assessing roughly the viability of various methanol production and transportation chains. The model contained some uncertainties with respect to the data and parameters introduced. Some of these uncertainties, for example regarding data aggregation, are unavoidable [8] and inherent to using large amounts of data. In addition, spatial and temporal variations could only be modeled roughly into the methanol chains by manually adjusting parameters such as pay and tax rates. The goal of modeling the chains, however, was not to obtain ever more accurate economic and environmental performance indicators, but rather to show the viability and competitiveness of various new methanol production chains. As the tool is a vehicle for discussion within the IMPCA, we have made no attempt at automating the construction of viable methanol chains. We have shown that methanol production chains with biomass feedstocks may well become competitive in the near future, and may be stimulated by the introduction of tougher regulations on CO2 emissions. The Methanol Chain Analysis tool proved that straightforward modeling of supply chains can contribute significantly to actual discussion and decision making in the methanol production and consumption chain.

REFERENCES
[1] Emissions Summary, Advanced Vehicle Development, Georgetown University, fuelcellbus.georgetown.edu/overview3.cfm, October 2002.
[2] Study Cites Low Cost for Methanol Refueling Stations, methanol.org, March 2002.
[3] Hokey Min, Gengui Zhou, Supply Chain Modeling: Past, Present and Future, Computers and Industrial Engineering, 43 (2002) pp. 231-249.
[4] Garcia-Flores, R., Wang, X.Z., Goltz, G.E., Agent-based Information Flow for Process Industries' Supply Chain Modelling, Comp. Chem. Engng, 24 (2000) pp. 1135-1141.
[5] P. Radgen, E.J. Pedernera, M. Patel, R. Reimert, Simulation of Process Chains and Recycling Strategies for Carbon Based Materials Using a Conventional Process Simulator, Computers and Chemical Engineering, 22 Suppl. (1998) pp. S137-S140.
[6] Huisingh, D., Cleaner Production Tools: LCA and Beyond (Editorial), Journal of Cleaner Production, 10 (2002), pp. 403-406.
[7] MethRo Report: Methanol from Natural Gas, Conceptual Design and Comparison of Processes, Final report of the MethRo Collective Design Project, Delft University of Technology, Faculty of Chemical Engineering and Material Science, TwAiO Postgraduate education, Delft, February 1995.
[8] G.W. Sonnemann, M. Schumacher, F. Castells, Uncertainty assessment by a Monte Carlo simulation in a life cycle inventory of electricity produced by a waste incinerator, Journal of Cleaner Production, 11 (2003), to be published.

ACKNOWLEDGEMENTS
The authors wish to thank the International Methanol Producers and Consumers Association (IMPCA), and in particular Herry Eilerts de Haan, for their sharing of expertise and financial support. The authors would also like to acknowledge the valuable contributions to the project of Mariëlle den Hengst, Lydia Stougie and Natascha Hemke.
The Coevolutionary Supply Chain
Ben Hua a, Jinbiao Yuan a, David C.W. Hui b
a South China University of Technology, Guangzhou, 510640, P. R. China. Tel: +86-20-87113744, Fax: +86-20-85511507, Email: [email protected]
b Hong Kong University of Science & Technology, Clear Water Bay, Hong Kong
Abstract The new millennium presents many challenges for the enterprises in supply chains. Competitive advantages no longer result from special designs or proper operations of supply chains, but rather the survival of supply chains. This paper attempts to make a biologic digital supply chain paradigm with coevolutionary mechanisms from a new viewpoint. Keywords
supply chain management, supply chain design, coevolution
1. INTRODUCTION
In the challenging economy and chaotic business environment, enterprises are moving their focus onto their supply chains. Information technology has contributed the most to the emergence and development of supply chain management. Software vendors offer many supply chain management suites. These suites help to manage the whole supply chain rather than to make a good supply chain; i.e., better performance of a supply chain does not come simply from adopting software. There are two methods to make a good supply chain: one is transformation, and the other is evolution. Business Process Reengineering (BPR) is the representative of transformation methods, which often seeks radical redesign and drastic improvement of processes. BPR gurus Michael Hammer and James Champy [1] note in their book Reengineering the Corporation that only about 30 percent of the reengineering projects they have reviewed are successful. One of the primary reasons for this low success rate is that the analyses behind performance estimates are often prepared with flowcharts and spreadsheets. In contrast, evolution methods tend to focus on incremental change and gradual improvement of the processes and units of supply chains, and thus on smooth evolution of the supply chains themselves. There are many indicators showing what a good supply chain is. No single indicator decides by itself whether a supply chain is good, because the processes and units of supply chains always interact to produce collective effects on supply chain performance. We attempt to seek the evolution of collective solutions to these problems with coevolution mechanisms. More importantly, coevolution emphasizes the interactions of its members, just as supply chain management emphasizes the coordination of its units.
2. THE ECOLOGICAL AND BIOLOGICAL SIMILARITY OF SUPPLY CHAINS In an ecosystem there are individuals, populations, communities, and the ecosystem itself at different levels. Correspondingly, we can construct a similar hierarchy for markets: individual units (organizations, firms, companies, etc.), i.e. suppliers, manufacturers, and purchasers; supply chains; and the whole market. Ecological and economic systems share many characteristics [2]. Both are complex networks of component parts linked by dynamic processes. Both contain interacting biotic and abiotic components and are open to exchanges across their boundaries. If we narrow the time and space boundaries to ecological communities on the ecological side and to supply chains on the economic side, many similarities remain. A supply chain is composed of many units, which may be organizations, companies, or enterprises, on the basis of common interests. The units collaborate and jointly interact with the external markets as a whole supply chain to gain advantages in competitive markets. The corresponding units in ecological communities are ecological groups, which form communities based on their survival requirements. In a supply chain, the units are usually sorted into suppliers, manufacturers, and purchasers. They are connected by flows, such as cost flow, information flow and material flow; e.g. cost flow starts from the purchasers, passes through the manufacturers, and ends at the suppliers. Correspondingly, an ecological group is a species with certain characteristics, much like a supply chain unit. These species live together in a community in an ecosystem. They can also be sorted into hosts and parasites, predators and prey, etc., like the suppliers and the purchasers in a supply chain. They are also connected to each other by flows, such as flows of energy [3] and flows of information. Fredrik Moberg [4] identified the ecological goods and services of coral reef ecosystems and studied their delivery; a supply chain correspondingly focuses on the delivery of goods and services. The need for economics to embrace evolutionary change rather than portray economies as mechanistic systems has long been recognized [5-6]. This means that biological and economic systems share many characteristics, as ecological and economic systems do. John Foster [7] studied economic dynamics by moving from biological analogy to economic self-organization. There are many works on community modeling [8-9]. On the other hand, the behavior of social systems is similar to that of ecological ones in many respects [10]. Every commodity in a market shows time-dependent behavior, which is sometimes similar to the life cycle of a biological individual. The growth process of a new technology can be described by the logistic curve, which also describes the growth of a biological species. Industrial structure shows a time evolution, which can be related to the changing landscape of an ecological community, known as ecological succession. The communities in ecosystems undergo coevolution, i.e. the evolution of two or more interdependent species, each adapting to changes in the other; it occurs, for example, between predators and prey and between insects and the flowers they pollinate. Peter T. Hraber and Bruce T. Milne [11] described a model of many-species communities to study the principles of ecological community assembly; this inspired us to study the coevolutionary mechanisms of supply chains and
the principles which govern a supply chain, such as the competitive advantages of a supply chain, which correspond to determining the dominant species in a community [12-13]. There are other interesting problems as well, e.g. studying supply chain structure, which is similar to an ecological community assembly problem [14]. 3. COEVOLUTIONARY COMPUTATION Supply chain management is based on information technology, so computation of coevolution is necessary for studying supply chains from a coevolutionary viewpoint. An evolutionary process explores the possibilities inherent in the medium it is embedded in: evolving populations of replicators constantly explore variations around their current forms, without the limitations of preconceptions. GAs and GPs are traditional computational evolution techniques. Their common feature is that the "fitness function" evaluates the genomes in isolation from the other members of the population. The genomes interact only with the fitness function (and in some cases the data), but not with each other. This arrangement precludes the evolution of collective solutions to problems, which can be very powerful for supply chain performance [15]. André L.V. Coelho, in his study on the emergence of multiagent spatial coordination strategies through artificial coevolution, stated: "Artificial coevolution seems very suited for simulating cooperation and/or competition behavior among multiagent entities." [16] Two basic classes of coevolutionary algorithms have been developed [17]: competitive coevolution, in which the fitness of an individual is determined by a series of competitions with other individuals (see, for example, Rosin and Belew (1996) [18]), and cooperative coevolution, in which the fitness of an individual is determined by a series of collaborations with other individuals (see, for example, Potter and De Jong (2000) [19]). Both types of coevolution have been shown to be useful for solving a variety of problems. Evolution proceeds independently, except for evaluation. Since any given individual from a subpopulation represents only a subcomponent of the problem, collaborators will need to be selected from the other subpopulations in order to assess fitness.

gen = 0
for each species s do
    Pops(gen) = initialized population
    evaluate(Pops(gen))
while not terminated do
    gen++
    for each species s do
        Pops(gen) <- select(Pops(gen-1))
        recombine(Pops(gen))
        evaluate(Pops(gen))
        survive(Pops(gen))

Fig. 1 The structure of a Cooperative Coevolutionary Algorithm (CCA).
Each generation, all individuals belonging to a particular subpopulation have their fitness evaluated by selecting some set of collaborators from the other subpopulations to form complete solutions. Afterward, the CCA proceeds to the next subpopulation, which in turn draws collaborators from each of the other subpopulations. A simple algorithm of this process is outlined in Fig. 1. The competitive coevolutionary algorithm [20-21] differs from the cooperative one in that the fitness of an individual in one population is calculated by direct competition with individuals in another population. Individuals of each population take turns evaluating and being evaluated in order to update the fitness. By continually trying to predominate over each other, the populations alternately become better than one another. Coevolutionary algorithms have been applied in many fields [22-24]; we attempt to apply them to supply chain problems. 4. SUPPLY CHAIN SURVIVAL In ecosystems, the abundance of resources can affect community structure and species abundance, especially through the size of the habitat. Communities can be described as assemblages of interacting populations of species. The structure of a biological community, that is, the number (or diversity) and types of species and the number of individuals (or density) in a population, is determined by the complex interaction of several factors. Supply chains face a corresponding problem. Managing or understanding the flows between buyers and sellers in supply chains involves the design, planning and control of supply chains. For this kind of problem, the beer game is the most famous tool. Here, we propose another simple beer game to discuss the survival of supply chains. In the survival beer game, it is supposed that there is nothing but supply chain units, i.e. we only discuss the survival of suppliers, manufacturers, and customers in supply chains at a strategic level. "Survival of the fittest" is the basis of Darwin's evolutionary theory. When this idea is applied to supply chains, it is interpreted as "survival of the performance". Supply chain performance measures are necessary to understand a supply chain and its characteristics. There are many complex supply chain performance measures; we simplify the game's measures to cost, quality and lead time. Therefore, the survival beer game proposed in this paper is as follows: there are many customers, manufacturers, and suppliers; they interact behind the game, i.e. the performance measures are used to determine the survival of any individual unit among the populations of suppliers, manufacturers, and customers. The outline of the survival beer game, expressed as a cooperative coevolutionary algorithm, is shown in Fig. 2. In the figure, the specifications of the survival beer game are given first. The objective of the proposed survival beer game is to explain the coevolutionary mechanisms of supply chains through a cooperative coevolutionary algorithm.
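Before the paper's own outline in Fig. 2, the following minimal Python sketch shows how such a survival game could be coded as a cooperative coevolutionary loop: three populations of hypothetical units (suppliers, manufacturers, customers) are evaluated through randomly selected collaborators, and fitness is an assumed combination of cost, quality and lead time. All attributes, weights and population sizes are illustrative assumptions, not values from the paper.

import random

SPECIES = ("supplier", "manufacturer", "customer")

def random_unit():
    # A unit is characterized by hypothetical cost, quality and lead-time attributes.
    return {"cost": random.uniform(1.0, 10.0),
            "quality": random.uniform(0.0, 1.0),
            "lead_time": random.uniform(1.0, 30.0)}

def performance(chain):
    # Assumed supply chain performance: low total cost, high average quality, short lead time.
    cost = sum(u["cost"] for u in chain.values())
    quality = sum(u["quality"] for u in chain.values()) / len(chain)
    lead_time = sum(u["lead_time"] for u in chain.values())
    return -cost + 10.0 * quality - 0.2 * lead_time

def evaluate(pops):
    # Cooperative evaluation: each unit is scored with collaborators drawn from the other species.
    fitness = {s: [] for s in SPECIES}
    for s in SPECIES:
        for unit in pops[s]:
            chain = {t: random.choice(pops[t]) for t in SPECIES if t != s}
            chain[s] = unit
            fitness[s].append(performance(chain))
    return fitness

def next_generation(pops, fitness, keep=5):
    new_pops = {}
    for s in SPECIES:
        ranked = [u for _, u in sorted(zip(fitness[s], pops[s]),
                                       key=lambda pair: pair[0], reverse=True)]
        survivors = ranked[:keep]                                  # survive()
        offspring = [dict(random.choice(survivors)) for _ in range(len(pops[s]) - keep)]
        for child in offspring:                                    # recombine(): small perturbation
            child["cost"] *= random.uniform(0.9, 1.1)
        new_pops[s] = survivors + offspring
    return new_pops

random.seed(0)
pops = {s: [random_unit() for _ in range(10)] for s in SPECIES}
for gen in range(20):
    pops = next_generation(pops, evaluate(pops))
print({s: round(max(evaluate(pops)[s]), 2) for s in SPECIES})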
Gen: the generation counter of the iteration
Species: supply chain units, i.e. supplier, manufacturer and customer
Pop: the population of a species, namely suppliers, manufacturers or customers

gen = 0
for each species s do
    Pops(gen) = initialized population
    // Initialize it with various characteristics which will later directly determine the supply
    // chain performance, i.e. create many different suppliers, manufacturers and customers.
    evaluate(Pops(gen))
    // The evaluation depends on collaborators selected from the other subpopulations in
    // order to assess the fitness of the supply chain performance of the original generation
    // (e.g. the population of manufacturers selects collaborators from customers and suppliers).
while not terminated do
    gen++
    for each species s do
        Pops(gen) <- select(Pops(gen-1))
        // Randomly select various units from the last generation of suppliers, manufacturers
        // and customers for the new generation.
        recombine(Pops(gen))
        // Form the new populations of the current generation. (This implies a new supply chain
        // constitution, which is better than that of the last generation.)
        evaluate(Pops(gen))
        // Evaluate the supply chain performance of the current generation through the
        // collaborators selected from the other populations.
        survive(Pops(gen))
        // Determine the survival of the units in all populations and consequently let the
        // supply chain survive the performance.
Fig. 2 The supply chain surviving the performance in the survival beer game by the Cooperative Coevolutionary Algorithm

In the game, if we initialize the populations with real supply chain units, i.e. suppliers, manufacturers and customers, the game will probably help to determine the selection of supply chain partners. In this way, the game can also be used to study narrower problems, such as evaluating the performance of an individual supply chain unit using historical data. This may help a supply chain gain long-term advantages. 5. CONCLUDING REMARKS There are many ecological and biological similarities in supply chains. This led us to consider the coevolutionary mechanisms of supply chains. Artificial coevolution provides full support for the study of supply chain management problems,
such as the
determination of supply chain structure. Studies of coevolutionary supply chains will promote the progress of both the theory and the practice of supply chain management. ACKNOWLEDGEMENT The manuscript benefited greatly from project 79931000 of NSFC and project G2000263 of the State Major Basic Research Development Program. The authors would like to thank Dr. Ming L. Lu from Aspen Technology, Inc., U.S.A., and Dr. X. Z. Wang from
the University of Leeds, United Kingdom, for giving constructive suggestions. We would also like to extend our thanks to the anonymous referees for their very helpful suggestions.
REFERENCES
[1] Michael Hammer and James Champy, Reengineering the Corporation: A Manifesto for Business Revolution, 1993.
[2] Complex systems and valuation, Ecological Economics 41 (2002) 409-420.
[3] Jouni Korhonen, Margareta Wihersaari, Ilkka Savolainen, Industrial ecosystem in the Finnish forest industry: using the material and energy flow model of a forest ecosystem in a forest industry system, Ecological Economics 39 (2001) 145-161.
[4] Ecological goods and services of coral reef ecosystems, Ecological Economics 29 (1999) 215-233.
[5] Matthias Ruth, Evolutionary Economics at the Crossroads of Biology and Physics, Journal of Social and Evolutionary Systems 19(2), 1996, pp. 125-144.
[6] R.B. Norgaard, The process of loss: exploring the interactions between economic and ecological systems, American Zoologist, 34(1), 1994, pp. 145-158.
[7] John Foster, The analytical foundations of evolutionary economics: From biological analogy to economic self-organization, Structural Change and Economic Dynamics 8 (1997) 427-451.
[8] Diego J. Rodriguez, A method to detect higher order interactions in ecological communities, Ecological Modelling 117 (1999) 81-89.
[9] Walter K. Dodds, Geoffrey M. Henebry, Simulation of responses of community structure to species interactions driven by phenotypic change, Ecological Modelling 79 (1995) 85-94.
[10] A Niche Theory of Social Systems, The 2nd Orwellian Symposium, Karlovy Vary, August 1994.
[11] Peter T. Hraber, Bruce T. Milne, Community assembly in a model ecosystem, Ecological Modelling 103 (1997) 267-285.
[12] Tilman D., Biodiversity: Population versus ecosystem stability, Ecology 72(2), 350-363, 1996.
[13] Theoretical comparisons of individual success between phenotypically pure and mixed generalist predator populations, Ecological Modelling 82 (1995) 175-191.
[14] Peter T. Hraber, Bruce T. Milne, Community assembly in a model ecosystem, Ecological Modelling 103 (1997) 267-285.
[15] Paredis J., Coevolutionary computation, Artificial Life Journal 1996;2(4):355-75.
[16] André L.V. Coelho, Daniel Weingaertner, Ricardo R. Gudwin, Ivan L.M. Ricarte, Emergence of multiagent spatial coordination strategies through artificial coevolution, Computers & Graphics 25 (2001) 1013-1023.
[17] R. Paul Wiegand, William C. Liles, Kenneth A. De Jong, An Empirical Analysis of Collaboration Methods in Cooperative Coevolutionary Algorithms, Proceedings of the Genetic and Evolutionary Computation Conference, 2001.
[18] Rosin and R. Belew, New methods for competitive coevolution, Evolutionary Computation, 5(1):1-29, 1996.
[19] Potter and K. De Jong, Cooperative coevolution: An architecture for evolving coadapted subcomponents, Evolutionary Computation, 8(1):1-29, 2000.
[20] Claverie, J.M., De Jong, K., Sheta, A.E., Robust nonlinear control design using competitive coevolution, Proceedings of the 2000 Congress on Evolutionary Computation, 2000.
[21] Moeko Nerome, Koji Yamada, Satoshi Endo and Hayao Miyagi, Competitive Co-evolution Based Game-Strategy Acquisition with the Packaging, Second International Conference on Knowledge-Based Intelligent Electronic Systems, 21-23 April 1998, Adelaide, Australia. Editors: L.C. Jain and R.K. Jain.
[22] Haoyong Chen and Xifan Wang, Cooperative Coevolutionary Algorithm for Unit Commitment, IEEE Transactions on Power Systems, 17(1), Feb. 2002, p. 128.
[23] Qiangfu Zhao, A Co-Evolutionary Algorithm for Neural Network Learning, International Conference on Neural Networks, Vol. 1, 1997, pp. 432-437.
[24] Weicker, K., Weicker, N., Proceedings of the 1999 Congress on Evolutionary Computation, 1999, Vol. 3, p. 1632.
Process Systems Engineering 2003, B. Chen and A.W. Westerberg (editors), © 2003 Published by Elsevier Science B.V.
General Approach to Connect Business and Engineering Decisions in Plant Design
Markku Hurme 1, Mari Tuomaala 1, Ilkka Turunen 2
1 Helsinki University of Technology, Laboratory of Plant Design, P.O. Box 6100, FIN-02015 HUT, Finland
2 Lappeenranta University of Technology, Laboratory of Process Systems Engineering, P.O. Box 20, FIN-53100 Lappeenranta, Finland
Abstract The connection of engineering and business decisions needed to include sustainability criteria (such as safety, health and environment; SHE) in engineering evaluations is discussed in this paper. How SHE aspects are considered in practice depends on the company's approach to valuing environmental issues. At the highest level of involvement the SHE criteria are seen as a competitive advantage and are optimized together with (or as) economic factors. Keywords decision making, business, sustainability
1. INTRODUCTION A common argument in real-life operations is that engineering and plant personnel do not appreciate business and economic facts as they should, or that business factors are not reflected in R&D. On the other hand, the sales staff's understanding of product or process design may be insufficient. Obviously this reflects some kind of problem in the integration of engineering and business decisions. An additional complication is that sustainability aspects also have to be included in the decision making, even though economic factors are the prime tool for controlling the business. Since the company goals, and therefore also the decisions, should be coherent throughout the organization, there is much to do in integrating engineering, economic and SHE aspects in decision making. This paper discusses ways to include these aspects in practical process development and design in a consistent way. 2. SUSTAINABILITY Sustainability may be considered one of the major forces affecting the process industry in the future. Therefore it would be beneficial to find its connection to economic aspects. Sustainability is often defined as 'meeting the needs of the present without compromising the ability of future generations to meet their own needs' [1]. A generally accepted division of sustainability is into 1) economic, 2) environmental and 3) social sustainability.
Discussing the needs of future generations in business may sound very farsighted, since business planning and predictability are often valid for only a couple of years, and the business perspective is becoming even shorter as life cycles shorten. Despite the difference in time perspective, many western companies have committed to sustainability in their strategies by joining the Responsible Care program [2]. It is understandable that 'generations' mean 'new customers', which mean 'new market opportunities' and 'more business'. Therefore the challenge of sustainability, from the business point of view, is to develop and design processes and products that meet the needs of present customers and make continuous, long-lasting business possible from the economic, SHE and social points of view. 3. FROM COMPANY GOALS TO ENGINEERING GOALS Business planning starts from business objectives, which are then translated into a business strategy. Strategy is defined as "an integrated and coordinated set of commitments and actions designed to exploit core competencies and gain a competitive advantage" [3]. Strategy reflects the visions and values of the company. Figure 1 presents one way of thinking about the relations between values, vision and strategy. It can be seen that the values affect all working procedures.
[Figure 1 sketches the chain from VISION through STRATEGY to PROJECTS.]
Fig. 1. From vision to engineering project work.
In order to incorporate sustainability into a business strategy, decisions have to be made on a long-term basis, which exposes a paradox between payout time and 'sustainability time': the company wants a short payout time, but sustainability and long-term business require a long time window. Since the business strategy is the key factor steering all operations in the company, an important question is how the strategy penetrates the organization from the management down to practical engineering decision making, even though the latter is quite engineering-goal oriented. Since there are typically difficulties in incorporating even the company's business goals into everyday work, how can the newer SHE goals be included? A basic requirement is that the company objectives should be clear, since the interpretation of the business strategy into project goals has to be made in every project definition phase, during which the project goals are determined (Fig. 2). These goals vary from project to project. The 4th generation R&D principles emphasize how the goals are set: the organization should be integrated from the customer side, not from the company R&D side, even though these interact. Therefore customer 'pull' leads, not technology 'push'. 4. COMPREHENSIVE PROCESS INTEGRATION AS AN ENGINEERING TOOL At the interface of decision making and design are the engineering tools and approaches. One way to optimize different operations and reach goal settings is process integration. Process integration has now been defined as the 'design, operation and management of industrial processes with system-oriented and integrated methods, models and tools' [1]. The definition
covers the material, energy and information flows of the plant as well as the business and technical processes during the whole life cycle, from process development to plant retrofit. The scope of the definition has come a long way from the first interpretation at the end of the 1970s, when heat recovery pinch analysis gave birth to the field. For clarity, the new, wide concept is called here comprehensive process integration.
[Figure 2 sketches the design workflow from the project strategy through the process concept, process parameters, simulation and evaluation, with a loop for changing parameters and structure, to the go / no go decision and a new project.]
Fig. 2. The phases of engineering design and their connection to decision making.

Comprehensive process integration deals with the integration of 1) different company functions (such as management, business, engineering and operation), 2) geographic areas (different units, plants and society) and 3) the balancing of material and energy, i.e. raw materials and utilities, to optimize plant performance according to the company's strategic goals
over the value chain. Process integration tools are used in the process synthesis and analysis phases as engineering tools (Fig. 2). It is obvious that managing comprehensive process integration also helps to solve the business/engineering integration and decision-making problems mentioned earlier. This thinking also has a profound effect on process engineering. The three basic types of tools required in comprehensive process integration are: 1) methods for evaluating and balancing process performance, 2) methods for decision making and multiobjective optimization, and 3) methods for integrating different aspects: a) business and engineering, b) utilities and other factors of production, c) various design and simulation methods. 5. PROCESS PERFORMANCE CRITERIA Performance criteria are the way to evaluate process performance. If the strategy also includes sustainability demands, a further question is how to measure the sustainability of the engineering decisions and, finally, how these factors can be optimized in decision making. In general, process performance criteria can be divided into three categories [1]: 1) economy 2) safety, health, environment, quality (SHEQ)
3) technological performance (technological novelty, operability, availability and technical performance); a) operability is further divided into controllability and flexibility, b) availability is divided into reliability and maintainability. The three groups obviously have strong interactions. The idea is that one group includes purely economic aspects, the second the classical sustainability aspects (SHE) and the third the technical aspects (described in non-economic terms). 6. EFFECT OF THE LEVEL OF RESPONSIBILITY ON DECISION MAKING The aim of the company is in all cases to make money for the shareholders. The level of inclusion of the wider stakeholder point of view varies, and therefore the measures and strategies vary. Stakeholders are customers, shareholders, environmental groups, the public and regulators. The levels of SHE responsibility are:
legal responsibility, consideration of the expectations of customers and society, self responsibility, and competitive advantage, in order of increasing responsibility.
The company's position on these levels obviously affects its general decision making.
6.1. Legal responsibility At this level the optimization is done merely on an economic basis within the operating window given by legislation and environmental and other regulations. The company pushes the responsibility for SHE aspects to society: the only responsibility is to fulfil the legal requirements; "as long as we follow the laws, everything is ok". The business evaluations are made only on an economic basis, considering only short-term profit. The decision-making problem is solved by constrained economic optimization, treating the emission rates etc. as constraints while maximizing the present value at the given profit rate. In extreme cases this thinking leads to a situation in which polluting industry is located in countries where the level of SHE requirements is low. 6.2. Consideration of expectations of customers Optimization is made on short-term economic principles by considering government regulations but also the non-economic requirements of the customers, such as life cycle and quality system aspects. The customer requirements often originate from end user (consumer) requirements. For instance, the reader of a newspaper does not prefer paper made from rain forest wood as a raw material. Often the question is how to provide feasible life cycle information to the customers. In principle this means that the cheapest raw material or process is not always used, but the optimal one when considering the incremental value with, e.g., feasible LCA properties. The economic optimization that also considers SHE aspects is based on assessing how much the market size and price depend on the extent to which the customer needs are fulfilled. In the calculations the market is divided into market sectors: 1) for requirement A: segment size x price = economic volume, 2) for requirement B: segment size x price = economic volume, etc. The optimization is done by considering whether the extra investment is worthwhile for the extra price and volume gained.
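As a minimal illustration of this segment-based check, the short sketch below compares the extra margin unlocked by meeting requirement A with a simple annual capital charge on the extra investment. All figures (segment size, price premium, margin share, capital charge) are hypothetical assumptions, chosen only to show the form of the calculation.

# Hypothetical screening of whether an extra investment for requirement A pays off.
segment_size = 40_000          # t/yr of product demanded by customers asking for requirement A
price_premium = 12.0           # extra price (EUR/t) those customers are willing to pay
margin_share = 0.6             # fraction of the premium retained as margin (assumed)

extra_economic_volume = segment_size * price_premium      # EUR/yr of extra sales value
extra_margin = extra_economic_volume * margin_share       # EUR/yr retained

extra_investment = 1_500_000   # EUR needed to meet requirement A (assumed)
capital_charge = 0.2           # simple annualization factor (assumed)

worthwhile = extra_margin > extra_investment * capital_charge
print(f"extra margin {extra_margin:,.0f} EUR/yr vs. capital charge "
      f"{extra_investment * capital_charge:,.0f} EUR/yr -> invest: {worthwhile}")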
6.3. Self responsibility Responsible operation in terms of SHE is taken as an essential part of the company values, but it is not seen as the main competitive advantage. Responsibility is observed voluntarily (e.g. the responsible care principles [2]) even though society does not demand it, because it is included in the company values. For example, a multinational company operates using the same principles in all countries. The decision making is done on economic terms, taking into account the responsible care principles as company values. The responsible care program, however, does not set any clear objectives or give tools for decision making, except for the aim of continuous improvement.
6.4. Competitive advantage At this level the operating view is farsighted. Sustainability is seen as a prerequisite for long-term profitable operation. The needs of future generations are seen as today's market opportunities. Sustainability is believed to be the evident trend, which everybody will eventually follow. Therefore the challenge of sustainability from the business point of view is to develop and design products and processes that meet the needs of present and future customers. This makes continuous, long-lasting business possible, since the company benefits from the trend towards SHE and social aspects in society. The aim is to be faster and better than the competitors by integrating sustainability into the business quickly. The challenge of adopting sustainability is equivalent to the problem of early adoption of any developing technology. The general aim is to optimize the competitive advantage. Another aspect to be included is the consideration of the whole value chain: if the value chain is optimal, it presents the largest potential both to the end user and to the producer. Decision making can in principle be done at this level in three ways: 1) multicriteria optimization with weightings, 2) multicriteria decision making by human cognition, or 3) expressing all aspects in financial terms. 1) Decision making is done by evaluating the company preferences (derived from company values and strategies; see Figures 1 and 2). The problem is that the evaluation criteria cannot easily be expressed on a single basis. The preferences can be approximated 1) by giving weights to criteria in a simple weighted score method or 2) by using pair-wise comparisons of criteria in the analytic hierarchy process (AHP) [4]. These methods finally give a single figure as an answer. The problem is that determining the weights is very fuzzy and depends on the case and on the person giving the weights. AHP is a more systematic method, since it is based on pair-wise comparison of criteria and not on an aggregated evaluation. There has also been criticism of these approaches, based on the experience that it is difficult to evaluate the criteria without knowing their influence on the result. 2) Multicriteria decision making by human cognition is supported by visualization, graphics and similar tools that aid the decision situation with graphs, interdependency curves, etc. In principle no formal decision aids are used in this approach, only human cognition. In practice, methods 1 and 3 are often first used to 'tune' the human mind for making the final decision by approach 2. 3) The most logical way would be to express all factors (including SHE) in monetary terms as their estimated financial potential to the business. This is based on the idea that sustainability is a competitive advantage, meaning that the level of its adoption is optimized in monetary terms, in a long-term view, for product and production systems, since the economic potential of sustainability is the driving force. The competitive advantage (which is largely embedded as a future potential) is optimized by multi-objective optimization. The future potential of new technologies means that new technologies are at a different point on the learning curve and therefore have a higher ability to improve and answer new demands in the future than established technologies.
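To make the two formal aids in option 1 concrete, the sketch below computes a simple weighted score and an AHP-style priority vector, using the common geometric-mean approximation of the principal eigenvector. The criteria names, alternative scores, weights and pairwise judgements are illustrative assumptions, not values from the paper.

import numpy as np

criteria = ["economy", "SHE", "technological performance"]

# 1) Simple weighted score method: alternative scores on each criterion (0-10, assumed).
weights = np.array([0.5, 0.3, 0.2])                 # assumed preference weights
scores = {"design A": np.array([8.0, 5.0, 6.0]),
          "design B": np.array([6.0, 8.0, 7.0])}
for name, s in scores.items():
    print(f"{name}: weighted score = {weights @ s:.2f}")

# 2) AHP-style priorities from a pairwise comparison matrix (Saaty 1-9 scale, assumed).
# Entry [i, j] expresses how much more important criterion i is than criterion j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
priorities = np.prod(A, axis=1) ** (1.0 / A.shape[0])   # row geometric means
priorities /= priorities.sum()
for c, p in zip(criteria, priorities):
    print(f"AHP priority of {c}: {p:.3f}")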
How can the sustainability aspects be included in the economic evaluation? A short payout time is still required in a sustainable business. Laying an extra fee on unsustainable raw materials and on waste material and heat would promote sustainable decisions in companies and extrapolate the business towards the future. This would be, for example, an internal fee used in profitability estimates, which allows the company to guide investment towards the future demands of increasing sustainability by simulating the situation with an extra fictive cost. 7. CONCLUSIONS The paper discusses the connection of engineering and business decisions in an environment where sustainability criteria (such as SHE) have to be included in the business and engineering evaluations. The connection of business and engineering decisions has traditionally involved problems, because of the different organizational cultures and because the company aims have not been clear or have not penetrated the organization. The company strategy has a crucial role, since it determines the way the company sees the business. The company strategy has to be interpreted at the beginning of every project as part of the project goal. How this is done is determined by the way the sustainability principles are integrated into the company's goals. At the highest level of commitment, sustainability is seen as a business opportunity. At this level the SHE criteria are understood as a competitive advantage and optimized together with (or as) economic factors. For optimization, process performance criteria are needed. Integrated design methods such as process integration tools are required for reaching the practical engineering goals in design. REFERENCES [1] M. Tuomaala, M. Hurme, I. Hippinen, P. Ahtila, I. Turunen, Proceedings of the 6th World Congress of Chemical Engineering, Melbourne, 2001. [2] www.cefic.be [3] M.A. Hitt, R.D. Ireland, R.E. Hoskisson, Strategic Management, Competitiveness and Globalization. [4] T.L. Saaty, European Journal of Operational Research, 48(1), 9-26, 1990.
Process Systems Engineering 2003, B. Chen and A.W. Westerberg (editors), © 2003 Published by Elsevier Science B.V.
Sharing Benefits of Eco-Industrial Park by Multiobjective Material Flow Optimization Jin-su KANG, Hyeong-dong KIM, and Tai-yong LEE Department of Chemical and Biomolecular Engineering, Korea Advanced Institute of Science and Technology, 373-1 Guseong-dong, Yuseong-gu, Daejeon, 305-701, Republic of Korea
Abstract The goal of an Eco-Industrial Park (EIP) is to improve the economic performance of the participating companies while minimizing their environmental impacts, but various socio-economic problems are obstacles to its realization. From the economic point of view, an EIP has a benefit-sharing problem in which the interests of the companies may conflict depending on the configuration of the material flow network of the ecosystem, so that it is not reasonable to insist on one-sided concessions. That is, making a compromise is the key to the success of an EIP. In this study, the concept of robust optimal design is applied to the construction of an EIP, so that reasonable alternatives are proposed which distribute the benefit to each company without conflicts, at the point of Pareto optimality. An industrial EIP example is addressed to show the performance of the proposed approach.
Keywords
eco-industrial park, material flow network, Pareto optimality
1. INTRODUCTION An eco-industrial park (EIP) or estate is a community of manufacturing and service businesses located together on a common property. Member businesses seek enhanced environmental, economic, and social performance through collaboration in managing environmental and resource issues. The benefit of this arrangement is that wastes or byproducts of one company can be reused in the manufacturing processes of other companies. By working together, the community of businesses seeks a collective benefit that is greater than the sum of the individual benefits each company would realize by optimizing only its individual performance [1-2]. The goal of an EIP is to improve the economic performance of the participating companies while minimizing their environmental impacts. Components of this approach include green design of park infrastructure and plants (new or retrofitted), cleaner production, pollution prevention, energy efficiency, and inter-company partnering. An EIP also seeks benefits for neighboring communities to assure that the net impact of its development is positive. Currently, there are a number of EIPs being planned around the
world. EIPs are expected to be constructed in petrochemical complexes such as Ulsan, Yucheon, and Daesan in Korea. An EIP can not only increase the compatibility of the companies but also reduce wastes; however, systematic problems and the benefit-sharing problem are obstacles to realization. In the benefit-sharing problem, the interests of the companies may conflict depending on how the closed loop of the ecosystem is constructed, so that it is impossible to insist on one-sided concessions. That is, making a compromise is the key to the success of an EIP. An EIP also raises the question of whether the desire to reuse waste streams comes at the expense of adhering to pollution prevention principles calling for the elimination of wastes at the front end of the process. These characteristics lead to formulating the problem as a multiobjective problem for which it is not possible to find a single solution that is optimal for all objectives simultaneously, i.e. a Pareto optimality problem [3]. In this study, the concept of robust optimal design is applied to constructing an EIP, so that reasonable alternatives are proposed which distribute the benefit to each company without conflicts, at the point of Pareto optimality. A decision-making methodology is also proposed which reduces the number of robust design alternatives in order to select the final robust alternative. The robust optimization problem is considered as a multiobjective problem in which each objective is the benefit of one business. Many businesses are assumed to recycle and to donate or sell recovered materials. The solution of the multiobjective problem consists of two phases, namely the analysis phase and the decision-making phase. In the analysis phase, the Pareto optimal set or a subset of it is determined; this is a collection of alternative optima. In the decision-making phase, the preferred solution among all alternative optima is selected so as to achieve a compromise between the companies in the EIP. An industrial EIP example is addressed to show the performance of the proposed approach. 2. MODEL OF MATERIAL FLOW OF AN EIP To investigate the distribution of profits between the industries in an EIP, the multiobjective problem is formulated based on the following assumptions. 1. The wastes from each company are reused by another company, cleaned by the recycling units of the EIP, or thrown away outside. 2. Each company uses 'raw wastes' from other companies directly, 'cleaned wastes' from the recycling units of the EIP, or raw materials purchased from outside. 3. The recycling units of the EIP treat wastes from each company and deliver them to the companies or discard them. Based on these assumptions, the material flow of the EIP can be represented as in Fig. 1 below.
[Figure 1 depicts the material flow network linking the companies, the recycling units of the EIP, raw materials purchased from outside, and wastes discharged outside the EIP.]
Figure 1. Material flow network

3. MATHEMATICAL MODEL
The material flow network of the EIP is formulated as a mathematical model based on the assumptions above.

Waste j:
\sum_i x_{ji} + \sum_k w_{jk} + u_j = U_j    (1)

Raw material k: constraints (2)-(3) require that the raw material need V_k of each company is met by the traded wastes w_{jk}, the recycled materials y_{ik} and the externally purchased materials v_{mk}, with the quality indices Q_l (l = 1 temperature, 2 pH, 3 COD, 4 SS) of the resulting blend held between lower and upper limits.

Recycling unit i:
\sum_j x_{ji} = \sum_k y_{ik} + z_i    (4)

with constraint (5) imposing the corresponding quality limits on the waste fed to recycling unit i.

Discharged water and variable bounds:
Q_l^{(L)} \sum_i z_i \le \sum_i Q_{li}^{(T)} z_i \le Q_l^{(U)} \sum_i z_i    (6)
v_{mk}, \; w_{jk}, \; x_{ji}, \; y_{ik} \ge 0    (7)
Q_l denotes the quality of the water. The economic effect on company n of constructing the EIP is considered as follows:

f_n = g_n + h_n    (8)

where

g_n = \sum_{j \in J_n} \left[ a_j (U_j - u_j) - \left( \sum_i c_{ji} x_{ji} + \sum_{k \in K'_j} e_{jk} w_{jk} \right) \right]    (9)

h_n = \sum_{k \in K_n} \left[ b_k V_k - \left( \sum_i d_{ik} y_{ik} + \sum_{j \in J'_k} e_{jk} w_{jk} + \sum_m b_m v_{mk} \right) \right]    (10)

and g_n is the saving in waste-treatment cost of company n and h_n is the saving in raw-material purchase cost of company n. The overall saving is represented as below.
F = \sum_n w_n f_n    (11)

where w_n is a weighting factor according to the size of the company. From the viewpoint of the EIP it may seem effective simply to maximize F, but then the cost saving of some company may be neglected, which makes it difficult to induce that company's participation. This observation shows the importance of constructing the EIP in such a way that each company feels the need for it. Investment problems of this kind, in which the interests of the individual companies conflict, have been studied for a long time in finance, starting from the portfolio selection theory of Markowitz [4]. In this study, the EIP is formulated as a bi-objective optimization problem with the overall cost-saving objective F and the lowest cost saving of an industry, F*, based on the partial mean model of Fishburn [5]:

max (F, F*)    (12)

subject to (1)-(11), where

F* = \sum_n w_n \min\{ f_n, f^* \}    (13)

and f^* is a target value for companies with low cost savings.

4. RESULT AND DISCUSSION In this study, an EIP is composed of 5 waste flows from 5 companies, 4 reused material flows, and 3 recycling units. The modeling system GAMS [6] was used to implement the model. The objective function (Eq. 12) of the bi-objective problem can be reformulated as follows:

max (1 - \alpha) F + \alpha F*    (14)
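A minimal sketch of this scalarization is given below. It replaces the full material-flow model (1)-(11) by a small hypothetical feasible region over the company savings f_n, linearizes the partial-mean term F* with auxiliary variables t_n <= min(f_n, f*), and sweeps alpha. All weights, bounds and the target f* are assumed values for illustration only, not data from the case study.

import numpy as np
from scipy.optimize import linprog

w = np.array([0.5, 0.3, 0.2])   # company size weights w_n (assumed)
f_target = 200.0                # target value f* for low-saving companies (assumed)

# Hypothetical feasible region over the savings [f_1, f_2, f_3] standing in for (1)-(11):
# total saving limited, and companies 1 and 3 compete for the same waste stream.
A_f = np.array([[1.0, 1.0, 1.0],
                [1.0, 0.0, 1.0]])
b_f = np.array([800.0, 500.0])

def solve(alpha):
    """Maximize (1-alpha)*F + alpha*F* with F = sum w_n f_n and
    F* = sum w_n t_n, t_n <= f_n, t_n <= f*, i.e. the linearized partial mean."""
    # Variable order: [f_1, f_2, f_3, t_1, t_2, t_3]; linprog minimizes, so negate.
    c = -np.concatenate([(1 - alpha) * w, alpha * w])
    A_ub = np.vstack([np.hstack([A_f, np.zeros((2, 3))]),
                      np.hstack([-np.eye(3), np.eye(3)])])   # t_n - f_n <= 0
    b_ub = np.concatenate([b_f, np.zeros(3)])
    bounds = [(0.0, 450.0)] * 3 + [(0.0, f_target)] * 3
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:3]

for alpha in (0.0, 0.5, 0.9):
    f = solve(alpha)
    F, F_star = w @ f, w @ np.minimum(f, f_target)
    print(f"alpha={alpha:.1f}  f_n={np.round(f, 1)}  F={F:.1f}  F*={F_star:.1f}")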
Fig. 2 describes the relationship between the cost saving of each company and the average total cost saving as alpha is changed from 0 to 1. The dotted lines show the cost saving of each company and the solid line shows the average total saving weighted by company size. If we are interested only in maximizing the average total cost saving (alpha = 0), the maximum cost saving among the companies is 402.5 Mwon/yr and the minimum cost saving is 153.58 Mwon/yr. When each company has a similar amount of cost reduction (alpha = 1), obtained by maximizing the least cost saving, the average total cost saving decreases by 9.3 % and the least cost saving increases from 153.58 Mwon/yr to 239.66 Mwon/yr, i.e. by 56 %. In general, the final solution, which is Pareto optimal and satisfies the needs and requirements of the decision maker, should be decided through a compromise between the individual companies and the government, based on the analysis above. However, it is expected that the final solution will be chosen where most companies obtain reasonable satisfaction. In this case, it is desirable to select the solution with an average total cost saving of 258.78 Mwon/yr, which is 0.05 % less than the maximum average total cost saving and gives a 56 % increment in the least cost saving.
[Figure 2 plots the cost saving of each company (Company 1-5, dotted lines) against the average total cost saving F, in Mwon/yr.]
Fig. 2. Benefit of the companies as a function of F

5. CONCLUSION In this study, the economic effectiveness of an EIP is represented by a proposed material flow optimization model. The objective function consists of the average total cost saving in the EIP and the partial mean of Fishburn [5], so that the proposed model can consider cost reduction and benefit distribution together. NOMENCLATURE
a_j    cost of throwing a waste j away outside the EIP
b_k    cost of buying a raw material k from outside the EIP
c_ji   cost of treatment of a waste j in a recycling unit i
d_ik   cost of buying a treated material from a recycling unit i inside the EIP
e_jk   cost of trading j -> k inside the EIP
F      cost reduction inside the EIP
f_n    cost saving of company n
g_n    cost saving of waste treatment in company n
h_n    cost saving of buying raw materials in company n
J_n    set of wastes produced in company n
K_n    set of raw materials needed in company n
U_j    amount of a waste j ∈ J_n produced in company n
u_j    amount of a waste j ∈ J_n thrown away from company n
V_k    amount of a raw material k ∈ K_n needed in company n
v_mk   amount of a raw material k ∈ K_n needed among materials m bought from outside the EIP
w_jk   amount of j -> k trading inside the EIP
x_ji   amount of a waste j treated in a recycling unit i
y_ik   amount of materials used for a raw material k produced in a recycling unit i
z_i    amount of water discharged from a recycling unit i outside the EIP

Indices
i  recycling units
j  wastes
k  raw materials
l  water-quality indices: 1 temperature, 2 pH, 3 COD, 4 SS
m  materials bought outside the EIP
n  companies
ACKNOWLEDGEMENT
This work was partially supported by the Brain Korea 21 Project, by the Korea Science and Engineering Foundation (KOSEF) through the RRC/NMR Project, and by grant No. IMT200000015993 from the Ministry of Commerce, Industry and Energy (MOICE). REFERENCES
[1] R. Chisholm, Developing Networking Organizations: Learning from Practice and Theory, Addison-Wesley, Reading, Massachusetts (1998).
[2] E. Lowe, Designing, Financing and Building the Industrial Park of the Future Workshop, San Diego (1995).
[3] M. Suh and T. Lee, Ind. Eng. Chem. Res., 40 (2001) 5950.
[4] H. Markowitz, The Journal of Finance, 7 (1952) 77.
[5] P.C. Fishburn, The American Economic Review, 67 (1977) 116.
[6] A. Brooke, D. Kendrick and A. Meeraus, GAMS: A User's Guide, The Scientific Press, Redwood City, USA (1988).
Process Systems Engineering 2003, B. Chen and A.W. Westerberg (editors), © 2003 Published by Elsevier Science B.V.
A case study on the integration of process simulation and life cycle inventory for a petrochemical process
L. Kulay a, L. Jiménez b, F. Castells b, R. Bañares-Alcántara b and G. A. Silva a
a Chem. Eng. Dept., University of Sao Paulo, Av. Prof. Luciano Gualberto tr.3 - 380, 05508900 Sao Paulo, Brazil. E-mail: {luiz.kulay, gil.silva}@poli.usp.br
b Chem. Eng. Dept., University Rovira i Virgili, Av. dels Països Catalans 26, 43007 Tarragona, Spain. E-mail: {ljimenez, fcastell, rbanares}@etseq.urv.es
Abstract This paper presents the use of process simulation and the eco-matrix methodology to obtain the environmental impacts of a chemical process. The case study is the i-pentane purification process from the REPSOL-YPF refinery (Tarragona, Spain). First, the model was validated with plant data. Second, the key results establish the life cycle inventory, in which the consumption of natural resources and the releases to the environment are quantified. The environmental aspects included in the eco-vector were divided into two categories: consumption of natural resources and wastes. The environmental loads were evaluated and compared for eight alternative scenarios of electricity generation and steam production. Cogeneration and heat recovery were the best alternatives from the environmental point of view. Although the life cycle approach does not allow the differentiation between local and global impacts, it is a valuable tool for strategic decision-making at a managerial level. Keywords life cycle assessment, simulation, process analysis 1. INTRODUCTION During the last decade many efforts have been made to perform the Life Cycle Assessment (LCA) of products. In this context, several methodologies have been proposed to quantify the environmental loads and translate their effects on the ecosystem and human health. The interest of companies in gaining new markets by offering green products reinforces the application of the environmental evaluation of a product throughout its entire life cycle as a support tool at different levels of decision-making, i.e. managerial, technical and operational. The LCA approach has to be followed by an environmental analysis to estimate the local, regional and global damages. For example, CO2 is used as an indicator for climate change in the global warming potential: one kilogram of CO2 generated by an industrial process in any of the stages of a product life cycle contributes equally to climate change. This is not the case for site-dependent impacts, such as the potential impact of acidification measured as H+. To handle those, weighting factors across the endpoints have to be selected, which is far beyond the objective of this work [1]. This paper presents a methodology in which the evaluation of the environmental inventory of products is carried out to differentiate among scenarios, in order to prioritize the environmental profile of different alternatives of electricity generation and steam production for the chemical process industry.
2. BACKGROUND
According to ISO 14040, the LCA methodology comprises four steps [2]: goal definition and scope, inventory analysis, impact assessment and interpretation. In the first one we state the system boundaries, the objective(s) of the study and the audience. The inventory analysis involves data collection and calculation procedures to quantify the relevant inputs and outputs of a process, called Environmental Loads (EL). Typical EL include the use of natural resources and the releases to air, water and land. One of the most effective approaches to obtain the Life Cycle Inventory (LCI) of a process is the eco-matrix, proposed by Sonneman et al. [1]. An eco-matrix comprises several eco-vectors, whose elements depend on the EL selected. Each process stream (input, output, intermediate or waste; material or energy) has an associated eco-vector whose elements are expressed as environmental loads (EL, i.e. SO2, NOx, ...) per functional unit (i.e. ton of main product). All input eco-vectors (material or energy streams) have to be distributed among the output streams of the process (or subsystem). In this sense a balance of each EL of the eco-vector can be stated similarly to the mass balance (input_i = output_i + generation_i). This is the reason why all output streams are labeled as products or emissions. The eco-vector has negative elements in the streams of emissions and/or waste, corresponding to the pollutants they contain. Figure 1 illustrates an example of a chain of three processes that produces a single product. The third aspect of LCA is the impact assessment, which considers the significance of the potential impacts using the results from the LCI. This step associates inventory data with specific environmental impacts and helps to understand the effect of those impacts on human health, natural resources and the ecosystem. Finally, the results from the inventory analysis and the impact assessment are combined. In the case of LCA, the conclusions should include recommendations or corrective actions. 3. METHODOLOGY The methodology seeks to integrate process analysis and environmental assessment. To compute the LCI, all environmental loads of the process have to be quantified. The use of process simulators to obtain the LCI guarantees a robust approach that allows LCA to exploit their advantages in terms of availability of information and planning. This approach favors both the design of new processes and the optimization of existing installations, and minimizes the uncertainty of the information being used, as average values from the literature are used only for environmental loads at trace level. Nevertheless, the predictions of all models in our case study (for the naphtha plant, utility network and heat recovery system) were validated using plant data.
Fig. 1. Life cycle inventory analysis according to the eco-vector principle.
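To make the eco-vector bookkeeping concrete, the short sketch below propagates eco-vectors through a chain of three processes in the spirit of Fig. 1: the input eco-vectors plus the loads generated in each step are allocated to the product stream and expressed per functional unit. The chosen loads, flows and numbers are illustrative placeholders, not data from the case study.

import numpy as np

# Environmental loads tracked in each eco-vector (order is fixed).
ELS = ["CO2 [kg]", "SO2 [kg]", "NOx [kg]", "water [m3]"]

def process(inputs, generation, product_out):
    """Combine the eco-vectors of all input streams with the loads generated by the process
    itself and allocate the total to the product stream, per functional unit
    (the EL balance input_i = output_i + generation_i, all outputs assigned to the product)."""
    total = sum(flow * np.asarray(ev) for flow, ev in inputs) + np.asarray(generation)
    return total / product_out

# Hypothetical eco-vectors of purchased streams, per ton of stream:
ev_raw = [0.10, 0.001, 0.002, 0.5]
ev_steam = [0.20, 0.002, 0.001, 1.0]

# Three-process chain; flows are tons of each input per ton of that process's product.
ev_p1 = process([(1.2, ev_raw)], generation=[5.0, 0.01, 0.02, 0.0], product_out=1.0)
ev_p2 = process([(1.0, ev_p1), (0.3, ev_steam)], generation=[2.0, 0.00, 0.01, 0.1], product_out=1.0)
ev_p3 = process([(1.0, ev_p2)], generation=[1.0, 0.00, 0.00, 0.0], product_out=1.0)

for name, value in zip(ELS, ev_p3):
    print(f"{name:12s} {value:8.3f} per ton of final product")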
The methodology to compute the LCI is as follows: first, the model of the plant was built using Hysys.Plant® [3], and its predictions were validated with plant data. This model includes some modifications to account for fugitive emissions. The key simulation results were transferred to a spreadsheet (Microsoft® Excel) through macros programmed in Visual Basic [4]. Finally, the environmental loads of the LCI were evaluated in order to provide the environmental profile of the process.
4. CASE STUDY The methodology was applied to the debutanizer and depentanizer columns of a naphtha mixture processed in the REPSOL-YPF refinery located in Tarragona (Spain). A C4-rich naphtha stream (about 28.3 ton/h) is fed to the first column to remove n-butane and lighter components (about 0.50 ton/h). The bottom product is mixed with a C5-rich naphtha stream (about 71.5 ton/h) and fed to the second distillation column, where i-pentane (about 16.3 ton/h) is obtained as the top product. The bottom stream (about 83.0 ton/h) is sent to the blending section of the refinery. The plant has four heat exchangers, two of which (HX-1 and HX-3) recover heat from the process. Both condensers are air coolers, and therefore plant operation requires electric power and steam. The production of these two utilities consumes additional natural resources and generates additional releases to the environment, and thus they need to be included in the overall analysis. For each one of the scenarios, the eco-vector was divided among three different plants: steam production, electricity generation and the naphtha plant. Figure 2 presents a simplified diagram of the system. Despite the fact that emissions are produced in different locations (e.g. those related to the extraction, transport and refining of the raw materials), the eco-vector is unique for each stream, and thus it does not consider site-dependent impacts. The aspects included in the eco-vector were divided into two categories:
• Consumption of natural resources: depletion of fossil fuels (fuel oil, gas oil, coal, natural gas, oil) and consumption of electricity and water. The plant consumes high-pressure steam and produces medium- and low-pressure steam that are used within the petrochemical complex. The eco-vector that corresponds to these streams is also considered. The environmental loads of the process inputs were retrieved from the ETH Report [5] and the TEAM database [6].
[Figure 2 shows the system boundary enclosing the electricity generation, steam production and distillation plants, each with its raw material inputs and waste outputs, together with the C4-rich and C5-rich naphtha feeds and the i-pentane and by-product streams.]
Fig. 2. Simplified block diagram of the plant from an LCA perspective
Table 1. Main characteristics of the scenarios studied

Scenario | Electricity generation              | Steam production
I        | Cogeneration                        | Downgrading of quality of the high pressure steam
II       | Cogeneration                        | Expansion of the steam
III      | Cogeneration                        | Downgrading of quality of the high pressure steam + heat recovery
IV       | Cogeneration                        | Expansion + heat recovery
V        | Expansion of the steam in a turbine | Fuel oil & fuel gas burning
VI       | Spanish energy matrix               | Fuel oil & fuel gas burning + heat recovery
VII      | Spanish energy matrix               | Fuel oil & fuel gas burning
VIII     | Spanish energy matrix               | Downgrading of quality of the high pressure steam + heat recovery
• Generated wastes: releases to air (CO2, SO2, NOx, and volatile organic compounds (VOC), the latter estimated as fugitive emissions), wastewater (chemical oxygen demand, COD) and solids (particulate matter and solids).
The use of different scenarios allows the comparison of alternative environmental performances. The scenarios were configured by considering different sources of steam and types of electricity generation (Table 1). Concerning the electricity generation, three alternatives were explored: cogeneration, expansion of steam, and data supplied by the Spanish energy matrix (that is, the average environmental loads to produce 1 kW of electricity in Spain). Three of the scenarios focus on the environmental impacts of the original process with some variations related to steam production (scenarios VI, VII and VIII). All other cases compare alternatives for possible future implementation, e.g. those considering cogeneration to produce electricity. Lamination was also considered (downgrading of the quality of the high-pressure steam to fit the plant needs). 5. ANALYSIS OF RESULTS The environmental loads for the different scenarios considered were compared and evaluated. Some of the scenarios do not contribute to the consumption of some raw materials and/or environmental loads, as these may not be present in the scenario. Figure 3 shows that most of the consumption of fossil fuels is found in scenarios V (fuel oil and fuel gas) and VII-VIII (coal, crude oil and electricity). The best alternatives in terms of water consumption were obtained for scenarios III, IV, VI and VIII, where heat recovery was considered. The natural gas consumption by the cogeneration process in scenarios I and II must also be highlighted. The best alternative concerning resource consumption results from the use of cogeneration and downgrading of the quality of the high-pressure steam with heat recovery (scenario III). The most significant atmospheric releases were observed in scenario VII (Figure 4). Nevertheless, the releases of NOx and SO2 (scenario V) and VOCs (scenario VIII) must be highlighted. Scenarios I-II show the highest indices of water contamination, while the worst performance regarding solid waste generation was observed in scenario VIII. The study suggests that the best alternative is cogeneration using natural gas together with downgrading of the quality of the high-pressure steam or expansion with heat recovery (scenarios III and VI). Scenario III is the best one if all impacts are considered simultaneously, while the
worst alternative is the one where electricity is obtained from the Spanish matrix and the steam is produced by fuel oil and fuel gas combustion (scenario VII). 6. IMPACT OF THE APPROACH ON BUSINESS DECISION MAKING The consideration of Life Cycle Inventory and Environmental Loads prediction in the early stages of a design or retrofit project would be a valuable addition to the traditional criteria already being used, i.e. cost and technical criteria.
Fig. 3. Percentage of the total EL with respect to raw materials consumption for each scenario
Fig. 4. Percentage of the total EL with respect to waste generation for each scenario
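The figures above report, for each environmental load, the share contributed by each scenario. A small sketch of that normalization is shown below; the numbers are invented placeholders standing in for the inventory results, and only the calculation pattern is of interest.

import numpy as np

scenarios = ["I", "II", "III", "IV", "V", "VI", "VII", "VIII"]
loads = ["natural gas", "water", "CO2", "SO2"]

# Hypothetical inventory results: rows = scenarios, columns = environmental loads
# (per functional unit of i-pentane); placeholders, not the refinery data.
EL = np.array([
    [9.0, 4.0, 11.0, 0.8],
    [8.5, 4.2, 10.5, 0.7],
    [7.0, 2.1,  9.0, 0.5],
    [7.2, 2.0,  9.3, 0.5],
    [0.5, 4.5, 14.0, 2.5],
    [0.4, 2.2, 12.5, 1.8],
    [0.3, 4.8, 16.0, 2.0],
    [0.3, 2.3, 13.0, 1.9],
])

share = 100.0 * EL / EL.sum(axis=0)   # percentage of the total of each EL per scenario
for s, row in zip(scenarios, share):
    print(s, " ".join(f"{v:5.1f}%" for v in row))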
One of the main advantages of the proposed methodology resides in its simplicity of application and the availability of the input information it requires, namely data produced by a commercial simulation system (such as Hysys.Plant®) and easily accessible data on the environmental loads of the process inputs (from the ETH Report [5] and the TEAM database [6]). The analysis of the results from the case study indicates the sort of insights that managers and designers can derive from the application of the methodology.
7. CONCLUSIONS The methodology has proved to be adequate for establishing the environmental product profile and for comparing several alternatives. The LCA approach should be improved with the inclusion of local, regional and global impacts and their specific effects on human health (for example, computing the years of life lost or the acute mortality). These indicators appear to be very useful in deciding among different alternatives. The potential impacts derived from Figures 3 and 4 can be used to compute the damage impact assessment by considering a transport model of the pollutants on a local scale, or the concentration and deposition of acid species on a wide scale. Concerning the case study, the use of cogeneration and downgrading of the quality of the high-pressure steam with heat recovery would significantly improve the environmental product profile. ACKNOWLEDGEMENTS One of the authors (L. Kulay) thanks the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) of the Ministry of Education of Brazil for financial support. We also acknowledge the cooperation of REPSOL-YPF, and Hyprotech (now part of Aspentech) for the use of an academic license of Hysys.Plant®. REFERENCES
[1] G.W. Sonneman, F. Castells and M. Schuhmacher, Framework for the environmental damage assessment of an industrial process chain, Journal of Hazardous Materials B77 (2000) 91.
[2] International Organization for Standardization, Environmental management - life cycle assessment: principles and framework: ISO 14040, Genève, 1997.
[3] AEA Technology, User Manual for Hysys.Plant®, AEA, Calgary, Canada, 1997.
[4] I. Herrera, L. Kulay, L. Jiménez and M. Schuhmacher, 2nd Meeting of the International Environmental Modelling and Software Society, Lugano, Switzerland, 2002.
[5] R. Frischknecht, U. Bollens, S. Bosshart and M. Ciot, ETH Zürich, PSI Villigen, 1996.
[6] Ecobilan Group, TEAM®, Paris, France, 1998.
Process Systems Engineering 2003, B. Chen and A.W. Westerberg (editors), © 2003 Published by Elsevier Science B.V.
Optimal production planning under uncertain market conditions P. Li, M. Wendt and G. Wozny
Institut für Prozess- und Anlagentechnik, Technische Universität Berlin, KWT 9, 10623 Berlin, Germany
Abstract: We propose a dynamic stochastic optimization approach to address production planning problems under uncertain market conditions. The problem is formulated as a dynamic mixed-integer chance-constrained optimization problem which can be relaxed to an equivalent deterministic MILP formulation. Using this approach, a quantitative relationship between profit achievement and the risk of constraint violation can be obtained, through which the sensitive uncertain variables can be identified. An optimal decision with a desirable trade-off can then be made for future purchases, sales and operation.
Keywords: Production planning; market condition; uncertainty; chance constraints; optimization 1. INTRODUCTION In the present economic environment, a company always has some uncertain supplies (inflows), such as feedstocks and utilities, and uncertain product demands (outflows). In production planning, the availability of the uncertain inflows has to be considered. In addition, it is very important to satisfy customers' demands; if their demands are not satisfied, they will resort to other suppliers. A conservative decision is usually made due to overestimation of the uncertainties, which leads to a considerable profit cutback. In other cases, because of profit expectations, an aggressive decision may be taken, which will probably result in constraint violations and lead to unstable operations. A proper production plan consists of decisions with which the profit is maximized while the uncertain inflow availability as well as the uncertain outflow demands are satisfied. Current industrial operations planning is based on the analysis of scenarios with varying loads in comparison to base cases. Moreover, buffer tanks are commonly utilized in the process industry to dampen variations of flows. An extra degree of freedom, the storage stock, can be used for profit optimization: different storage stocks can be allocated in different time periods. The current industrial situation is that buffer tanks tend to be sized via intuition and experience; there are hidden buffers of materials and capacity throughout the system as people protect themselves from uncertainty [1]. Therefore, it is necessary to introduce systematic methods to develop optimal dynamic production policies that enhance the utilization of buffer tank capacities. Most previous studies on process planning under uncertainty used the two-stage solution approach [2-4]. It allows constraint violations which are penalized in the profit function; a proper penalty function is required but is in most cases not available. Batch process planning under chance constraints was considered in [5]. Linear and nonlinear chance-constrained programming was used for single unit operations [6, 7]. In this work, a dynamic chance-constrained MILP problem is formulated for production planning of existing continuous processes under market uncertainty. It is relaxed to an equivalent deterministic MILP problem. The solution provides robust purchase, sales and operation strategies for the future time horizon.
Fig. 1: Flow superstructure of an operation unit (uncertain supplies and demands, purchases and sales to be decided, internal flows from the upstream and to the downstream unit, and waste flows to disposal)
2. PRODUCTION PLANNING PROBLEM
We consider the planning problem of a company that must decide in advance its investment for production over a future time horizon T, e.g. a year, which is divided into sub-periods t = 1,...,I, e.g. months. The company consists of several operation units (n = 1,...,N). Fig. 1 shows the superstructure of the flows of one of the units. Some suppliers can supply as much as demanded, and some consumers will purchase however much the company produces. These flows can be taken as decision variables, including some raw materials R_{t,m} (m = 1,...,M), utilities U_{t,l} (l = 1,...,L), material products P_{t,j} (j = 1,...,J) and energy products Q_{t,k} (k = 1,...,K). The internal flows to be decided are the input and output flows of both materials and utilities for each unit, F^{RIN}_{t,n,jin}, F^{ROUT}_{t,n,jout}, F^{UIN}_{t,n,lin}, F^{UOUT}_{t,n,lout} (jin = 1,...,JIN; jout = 1,...,JOUT; lin = 1,...,LIN; lout = 1,...,LOUT; n = 1,...,N), respectively. In addition, each individual unit can be started up or shut down during the planned time horizon, so I x N binary variables y_{t,n} in {0,1} must be decided (y_{t,n} = 0 means the unit is not in operation and y_{t,n} = 1 means it is in operation).
Moreover, if the downstream unit is shut down, the outflows of the upstream unit become waste flows (W^{R}_{t,n,jout}, W^{U}_{t,n,lout}) and have to be disposed of. Due to varying future market conditions, the demands of some other material products, \tilde P_{t,\tilde j} (\tilde j = 1,...,\tilde J), and some energy products, \tilde Q_{t,\tilde k} (\tilde k = 1,...,\tilde K), in the future periods cannot be determined in advance. The supply of some other raw materials, \tilde R_{t,\tilde m} (\tilde m = 1,...,\tilde M), and utilities, \tilde U_{t,\tilde l} (\tilde l = 1,...,\tilde L), may also be uncertain. These supplies may be so important for the production that the company will purchase whatever amount the suppliers can provide. Such uncertain flows are treated as stochastic variables, i.e. they cannot be determined in advance but their distributions may be available from statistical data analysis. Fig. 2 shows different possible distributions of dynamic stochastic variables over the future time horizon (a - stepwise; b - peak-like; c - oscillating). The aim of the planning is to determine the decision flows under the uncertain supply availability (\tilde R_{t,\tilde m}, \tilde U_{t,\tilde l}) and the uncertain demands (\tilde P_{t,\tilde j}, \tilde Q_{t,\tilde k}), so that the expected total profit of the production over the future horizon is maximized.
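As a purely illustrative sketch (not the authors' implementation), the decision variables just described could be declared in an algebraic modeling layer such as PuLP; the dimensions, variable names and the capacity value below are hypothetical:

    from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary

    I_periods, N_units, M_raw = 5, 3, 2                 # assumed problem dimensions
    prob = LpProblem("production_planning", LpMaximize)

    # on/off binaries y[t][n] and per-unit raw-material allocations R[t][n][m]
    y = LpVariable.dicts("y", (range(I_periods), range(N_units)), cat=LpBinary)
    R = LpVariable.dicts("R", (range(I_periods), range(N_units), range(M_raw)), lowBound=0)

    R_max = 10.0                                        # assumed allocation capacity
    for t in range(I_periods):
        for n in range(N_units):
            for m in range(M_raw):
                # capacity coupling of the type used in (3): no purchase allocation
                # to a unit that is switched off in period t
                prob += R[t][n][m] <= R_max * y[t][n]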
Fig. 2: Possible forms of future profiles of uncertain flows
max \sum_{t=1}^{I} \Big[ \sum_{j=1}^{J} c^{P}_{t,j} P_{t,j} + \sum_{k=1}^{K} c^{Q}_{t,k} Q_{t,k} - \sum_{m=1}^{M} c^{R}_{t,m} R_{t,m} - \sum_{l=1}^{L} c^{U}_{t,l} U_{t,l}
+ \sum_{\tilde j=1}^{\tilde J} c^{P}_{t,\tilde j} \tilde P_{t,\tilde j} + \sum_{\tilde k=1}^{\tilde K} c^{Q}_{t,\tilde k} \tilde Q_{t,\tilde k} - \sum_{\tilde m=1}^{\tilde M} c^{R}_{t,\tilde m} \tilde R_{t,\tilde m} - \sum_{\tilde l=1}^{\tilde L} c^{U}_{t,\tilde l} \tilde U_{t,\tilde l}
- \sum_{n=1}^{N} \Big( \alpha_{t,n} y_{t,n} + \sum_{jin=1}^{JIN} \beta_{t,n,jin} F^{RIN}_{t,n,jin} + \sum_{lin=1}^{LIN} \gamma_{t,n,lin} F^{UIN}_{t,n,lin} \Big)
- \sum_{n=1}^{N} \Big( \sum_{jout=1}^{JOUT} \eta^{R}_{t,n,jout} W^{R}_{t,n,jout} + \sum_{lout=1}^{LOUT} \eta^{U}_{t,n,lout} W^{U}_{t,n,lout} \Big) \Big]     (1)
In (1), the first line represents the part due to the external decision flows, while the second line denotes the part due to the uncertain in- and outflows which cannot be determined a priori. The third line in (1) is the sum of the internal costs of the operation units, while the last line is the total cost of waste disposal. Although the values of the prices (c^P, c^Q, c^R, c^U, c^W) and cost factors (\alpha, \beta, \gamma, \eta^R, \eta^U) in the future horizon may be uncertain, we assume that their expected values are known. The problem is subject to the following mass and energy balances as well as restrictions on the flows. For the purpose of planning, linear balance equations can be used. Inflow allocations:
Mass and energy balances:
R_{t,m} = \sum_{n=1}^{N} R_{t,n,m}, \quad U_{t,l} = \sum_{n=1}^{N} U_{t,n,l}     (2)
Capacity restrictions:
0 \le R_{t,m} \le R^{max}_{t,m}, \quad 0 \le R_{t,n,m} \le y_{t,n} R^{max}_{t,n,m}     (3)
0 \le U_{t,l} \le U^{max}_{t,l}, \quad 0 \le U_{t,n,l} \le y_{t,n} U^{max}_{t,n,l}     (4)
Satisfying uncertain supplies:
\sum_{n=1}^{N} \tilde R_{t,n,\tilde m} - \tilde R_{t,\tilde m} \le 0, \quad \sum_{n=1}^{N} \tilde U_{t,n,\tilde l} - \tilde U_{t,\tilde l} \le 0     (5)
Operation units:
Mass and energy balances:
P_{t,n,j} = \sum_{jin=1}^{JIN} a^{P}_{t,n,j,jin} F^{RIN}_{t,n,jin} + \sum_{m=1}^{M} b^{P}_{t,n,j,m} R_{t,n,m} + \sum_{\tilde m=1}^{\tilde M} \tilde b^{P}_{t,n,j,\tilde m} \tilde R_{t,n,\tilde m}     (6)
Q_{t,n,k} = \sum_{lin=1}^{LIN} a^{Q}_{t,n,k,lin} F^{UIN}_{t,n,lin} + \sum_{l=1}^{L} d^{Q}_{t,n,k,l} U_{t,n,l} + \sum_{\tilde l=1}^{\tilde L} \tilde d^{Q}_{t,n,k,\tilde l} \tilde U_{t,n,\tilde l}     (7)
Capacity restrictions:
0 \le P_{t,n,j} \le y_{t,n} P^{max}_{t,n,j}, \quad 0 \le Q_{t,n,k} \le y_{t,n} Q^{max}_{t,n,k}     (8)
The mass balances for \tilde P and F^{ROUT}, the energy balances for \tilde Q and F^{UOUT}, as well as their capacity restrictions, have the same form as (6), (7) and (8), respectively, and are omitted here. Connections between units (e.g. flows from unit n1 to unit n2): Mass and energy balances:
F^{ROUT}_{t,n1,jout} = F^{RIN}_{t,n2,jin} + W^{R}_{t,n1,jout}, \quad F^{UOUT}_{t,n1,lout} = F^{UIN}_{t,n2,lin} + W^{U}_{t,n1,lout}     (9)
Capacity restrictions:
0 \le W^{R}_{t,n1,jout} \le y_{t,n1} W^{R,max}_{t,n1,jout}, \quad 0 \le W^{U}_{t,n1,lout} \le y_{t,n1} W^{U,max}_{t,n1,lout}     (10)
Logistic restrictions coupling the on/off variables y_{t,n1} and y_{t,n2} of the connected units     (11)
Outflow accumulations:
Mass and energy balances:
P_{t,j} = \sum_{n=1}^{N} P_{t,n,j}, \quad Q_{t,k} = \sum_{n=1}^{N} Q_{t,n,k}     (12)
Capacity restrictions:
0 \le P_{t,j} \le P^{max}_{t,j}, \quad 0 \le Q_{t,k} \le Q^{max}_{t,k}     (13)
Satisfying uncertain demands:
\sum_{n=1}^{N} \tilde P_{t,n,\tilde j} \ge \tilde P_{t,\tilde j}, \quad \sum_{n=1}^{N} \tilde Q_{t,n,\tilde k} \ge \tilde Q_{t,\tilde k}     (14)
Storage of the flows accumulated in buffer tanks:
Feed tanks:
V^{R}_{t,m} = V^{R}_{t-1,m} + R^{IN}_{t,m} - R^{OUT}_{t,m}, \quad \tilde V^{R}_{t,\tilde m} = \tilde V^{R}_{t-1,\tilde m} + \tilde R^{IN}_{t,\tilde m} - \tilde R^{OUT}_{t,\tilde m}     (15)
Product tanks:
V^{P}_{t,j} = V^{P}_{t-1,j} + P^{IN}_{t,j} - P^{OUT}_{t,j}, \quad \tilde V^{P}_{t,\tilde j} = \tilde V^{P}_{t-1,\tilde j} + \tilde P^{IN}_{t,\tilde j} - \tilde P^{OUT}_{t,\tilde j}     (16)
Tanks for waste flows:
V^{WR}_{t,n,jout} = V^{WR}_{t-1,n,jout} + W^{R}_{t,n,jout}     (17)
Volume restrictions:
V^{R,min}_{m} \le V^{R}_{t,m} \le V^{R,max}_{m}, \quad V^{P,min}_{j} \le V^{P}_{t,j} \le V^{P,max}_{j}, \quad V^{WR,min} \le V^{WR}_{t,n,jout} \le V^{WR,max}     (18)
Satisfying tank capacities under uncertain in- or outflows:
V^{R,min}_{\tilde m} \le \tilde V^{R}_{t,\tilde m} \le V^{R,max}_{\tilde m}, \quad V^{U,min}_{\tilde l} \le \tilde V^{U}_{t,\tilde l} \le V^{U,max}_{\tilde l}, \quad V^{P,min}_{\tilde j} \le \tilde V^{P}_{t,\tilde j} \le V^{P,max}_{\tilde j}, \quad V^{Q,min}_{\tilde k} \le \tilde V^{Q}_{t,\tilde k} \le V^{Q,max}_{\tilde k}     (19)
Again, the same relations as (15) for V^{U} and \tilde V^{U}, as (16) for V^{Q} and \tilde V^{Q}, and as (17) for V^{WU}, as well as the corresponding restrictions, are omitted here for brevity. The parameters in the above model include the coefficients (a, b, d) in the balance equations (6)-(7) and the values of the maximum allowable flow and storage capacities in the inequalities. The initial values of the tank storages are also known parameters.
3. HANDLING THE UNCERTAINTIES
Special attention should be given to inequalities (5), (14) and (19), where the uncertain flows appear. We reformulate (5) and (14) as the following single probabilistic (chance) constraints:
\Pr\{ \sum_{n=1}^{N} \tilde R_{t,n,\tilde m} \le \tilde R_{t,\tilde m} \} \ge p_{t,\tilde m}, \quad \Pr\{ \sum_{n=1}^{N} \tilde U_{t,n,\tilde l} \le \tilde U_{t,\tilde l} \} \ge p_{t,\tilde l}     (20)
\Pr\{ \sum_{n=1}^{N} \tilde P_{t,n,\tilde j} \ge \tilde P_{t,\tilde j} \} \ge p_{t,\tilde j}, \quad \Pr\{ \sum_{n=1}^{N} \tilde Q_{t,n,\tilde k} \ge \tilde Q_{t,\tilde k} \} \ge p_{t,\tilde k}     (21)
And for (19), the stored volume is to be probabilistically kept inside the lower and upper limits of the tank capacities under the uncertain flows:
\Pr\{ V^{R,min}_{\tilde m} \le \tilde V^{R}_{t,\tilde m} \le V^{R,max}_{\tilde m} \} \ge p_{t,\tilde m}, \quad \Pr\{ V^{U,min}_{\tilde l} \le \tilde V^{U}_{t,\tilde l} \le V^{U,max}_{\tilde l} \} \ge p_{t,\tilde l}     (22)
\Pr\{ V^{P,min}_{\tilde j} \le \tilde V^{P}_{t,\tilde j} \le V^{P,max}_{\tilde j} \} \ge p_{t,\tilde j}, \quad \Pr\{ V^{Q,min}_{\tilde k} \le \tilde V^{Q}_{t,\tilde k} \le V^{Q,max}_{\tilde k} \} \ge p_{t,\tilde k}     (23)
In (20)-(23), p in (0,1) is a user-defined confidence level to guarantee the reliability of the operation. This means that, due to the existing uncertainties, one cannot ensure 100% success of the planned production, i.e. a risk of the decision has to be taken into account. According to the practical situation, different confidence levels can be assigned to different flows and periods. To describe the uncertain flows, it is natural to assume that they have a multivariate normal distribution with the nominal (base) value as the mean and a known standard deviation (e.g. 10% of the mean). As shown in Fig. 2, an uncertain flow may have different means and deviations in different time periods. Single chance constraints can readily be relaxed to equivalent deterministic inequalities. For example, the first inequalities in (20) and (21) can be written as
\sum_{n=1}^{N} \tilde R_{t,n,\tilde m} \le \Phi^{-1}_{\tilde R_{t,\tilde m}}(1 - p_{t,\tilde m}), \quad \sum_{n=1}^{N} \tilde P_{t,n,\tilde j} \ge \Phi^{-1}_{\tilde P_{t,\tilde j}}(p_{t,\tilde j})     (24)
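A minimal numerical sketch of this single-chance-constraint relaxation (illustrative only; the means, deviations and confidence level below are made-up numbers, and scipy's norm.ppf plays the role of the inverse distribution function \Phi^{-1}):

    from scipy.stats import norm

    p = 0.93                           # user-defined confidence level (assumed)
    mu_s, sigma_s = 5.0, 0.5           # assumed mean/std of an uncertain supply
    mu_d, sigma_d = 60.0, 6.0          # assumed mean/std of an uncertain demand

    # supply-side constraint of type (20): allocations must stay below this bound
    rhs_supply = norm.ppf(1.0 - p, loc=mu_s, scale=sigma_s)
    # demand-side constraint of type (21): production must exceed this bound
    rhs_demand = norm.ppf(p, loc=mu_d, scale=sigma_d)

    print(rhs_supply, rhs_demand)      # deterministic right-hand sides for the MILP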
\Phi^{-1} is the inverse probability distribution function of a normal distribution. For given p values, the right-hand sides of the inequalities in (24) can easily be calculated. To relax the chance constraints (22)-(23), from (15) one can rewrite the first inequality in (22) as follows:
\Pr\{ \sum_{\tau=1}^{t} \tilde R^{IN}_{\tau,\tilde m} \le V^{R,max}_{\tilde m} - V^{R}_{0,\tilde m} + \sum_{\tau=1}^{t} \tilde R^{OUT}_{\tau,\tilde m} \} \ge p_{t,\tilde m}     (25)
\Pr\{ \sum_{\tau=1}^{t} \tilde R^{IN}_{\tau,\tilde m} \ge V^{R,min}_{\tilde m} - V^{R}_{0,\tilde m} + \sum_{\tau=1}^{t} \tilde R^{OUT}_{\tau,\tilde m} \} \ge p_{t,\tilde m}     (26)
For t = 1,...,I, the terms inside the probability expression in (25) can be represented as
T\xi \le Tz + g     (27)
where T is the I x I lower-triangular matrix of ones, \xi = (\tilde R^{IN}_{1,\tilde m},...,\tilde R^{IN}_{I,\tilde m})^T, z = (\tilde R^{OUT}_{1,\tilde m},...,\tilde R^{OUT}_{I,\tilde m})^T and g = (V^{R,max}_{\tilde m} - V^{R}_{0,\tilde m},...,V^{R,max}_{\tilde m} - V^{R}_{0,\tilde m})^T. We define \zeta = T\xi. If \xi follows a multivariate normal distribution, i.e. \xi ~ N(\mu_\xi, \Sigma_\xi), then \zeta ~ N(T\mu_\xi, T\Sigma_\xi T^T), where \mu_\xi and \Sigma_\xi are the mean vector and the covariance matrix of \xi. Since the elements in g and z are either known parameters or decision variables, the inequalities in (25) and (26) can be computed in the same way as (24). Now the dynamic stochastic operations planning problem is transformed into a general deterministic MILP formulation which can be solved with a standard MILP solver.
4. COMPUTATION EXAMPLE
As shown in Fig. 3, we consider a company with three operation units. An inflow \tilde R and an outflow \tilde P are uncertain flows, while another inflow R can be decided. Two tanks are used to dampen the uncertain flows. The data of the units and tanks and the mean values of the flows are given in Tables 1-3, respectively. The standard deviations of both uncertain flows are set to 10% of their mean values. Correlations between the flows are considered as well. Five time periods are considered for the planning. The profit is maximized under the chance constraints that hold the lower and upper limits of the two tanks. As shown in Fig. 4(a), the profit decreases if the required confidence level increases. It is interesting to note that it decreases stepwise at some points where the switch (on or off) strategy of the units has to be changed. This can be seen in Fig. 4(b), where
Y_1 = \sum_{t=1}^{I} y_{t,1}, Y_2 = \sum_{t=1}^{I} y_{t,2} and Y_3 = \sum_{t=1}^{I} y_{t,3} denote the total number of on-periods of the three units, respectively. To achieve a proper trade-off between reliability and profitability, the optimal decision should be chosen at the point just before such a step change. Fig. 4(c) shows the optimal decisions for a confidence level of 0.93.
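As a small illustration of the transformation \zeta = T\xi used above (a sketch only; the mean inflow profile is taken from Table 3, while the diagonal covariance is an assumption rather than the authors' data):

    import numpy as np

    I = 5
    T = np.tril(np.ones((I, I)))                   # lower-triangular accumulation matrix
    mu_xi = np.array([4.0, 5.0, 6.0, 5.0, 4.0])    # mean uncertain inflow per period (Table 3)
    Sigma_xi = (0.1 * mu_xi)**2 * np.eye(I)        # assumed 10% standard deviations, no correlation

    mu_zeta = T @ mu_xi                            # mean of the accumulated inflow
    Sigma_zeta = T @ Sigma_xi @ T.T                # covariance of the accumulated inflow
    print(mu_zeta, np.sqrt(np.diag(Sigma_zeta)))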
Fig. 3: Flowsheet of the example company
Table 1: Data of operation units
Table 2: Data of tanks
Table 3: Mean values of the uncertain flows in the time periods t = 1,...,5: \tilde R = 4.0, 5.0, 6.0, 5.0, 4.0; \tilde P = 50.0, 60.0, 70.0, 60.0, 50.0

Fig. 4: Optimization results for different confidence levels: (a) profit versus confidence level; (b) number of on-periods Y_1, Y_2, Y_3 of the three units versus confidence level; (c) optimal decisions over the time periods at a confidence level of 0.93
5. CONCLUSIONS Production planning under market uncertainty by the dynamic chance-constrained MILP approach has the following features:
(1) By solving the problem with different confidence levels, the decision can be made with a desired trade-off between profitability and reliability.
(2) The solution provides purchase decisions for some raw materials and utilities as well as sales decisions for some products over the planned time horizon. These decisions are robust to changes in the uncertain feed supplies and uncertain product demands. High robustness is highly desirable, since these decisions are usually realized in the form of contracts with external companies and thus variations are rarely allowed.
(3) The solution provides a robust operation strategy for the internal units (in- and outflows, units on and off) over the future time horizon. It implies that hardly any changes to the planned operations will be required, which is advantageous for stable operation. Of course, the operation of internal units can be changed according to the realization of the uncertain flows.
(4) It is possible to identify a priori the uncertain flows to which the solution is sensitive and to recognize their impact on reliability as well as profitability. This leads to guidelines for modifying the process (a form of debottlenecking), e.g. considering a larger tank.
(5) A moving horizon can be used to modify the decided operations. A re-optimization is made at the beginning of each time period to obtain new operation strategies, based on the current information on supplies, demands and tank contents.
The approach presupposes that the uncertainties can be quantified and that a model of the process is available. These two issues are being addressed, i.e. data analysis and modelling are receiving more and more emphasis in the process industry. REFERENCES
[1] D.E. Shobrys and D.C. White, Comput. Chem. Eng., 24 (2000) 163.
[2] M.L. Liu and N.V. Sahinidis, Ind. Eng. Chem. Res., 35 (1996) 4154.
[3] M.G. Ierapetritou and E.N. Pistikopoulos, Ind. Eng. Chem. Res., 35 (1996) 772.
[4] R.L. Clay and I.E. Grossmann, Comput. Chem. Eng., 21 (1997) 751.
[5] S.B. Petkov and C.D. Maranas, Ind. Eng. Chem. Res., 36 (1997) 4864.
[6] P. Li, M. Wendt, H.G. Arellano and G. Wozny, AIChE J., 48 (2002) 1198.
[7] M. Wendt, P. Li and G. Wozny, Ind. Eng. Chem. Res., 41 (2002) 3621.
Complexity Analysis for Hybrid Differentiation in Process System Optimization Xiang LI, Zhijiang SHAO, Jixin QIAN Institute of Systems Engineering, Department of Control Science and Engineering, Zhejiang University, 310027, P. R. China Abstract The hybrid differentiation approach, which employs different differentiation algorithms for different parts of a process model, has recently been presented and developed to achieve high-performance differentiation in process system optimization. However, a convenient and efficient approach to choosing the differentiation algorithms has been lacking. In this paper, a measure of complexity is defined as a criterion for choosing differentiation algorithms, and an approach to evaluate the complexity within an extended automatic differentiation procedure is developed. An ad hoc approach to estimate the complexity of symbolic differentiation is discussed in detail. Numerical results validate the complexity analysis approach and demonstrate the high efficiency of hybrid differentiation.
Keywords: Complexity analysis, hybrid differentiation, process system optimization, redundant computation, similar terms
1 INTRODUCTION
Jacobian evaluation is one of the most time-consuming steps in process system optimization (PSO) [1], and it severely limits improvement of the optimization efficiency. Currently, finite-difference (FD), symbolic differentiation (SD) and automatic differentiation (AD) are the main approaches for Jacobian evaluation, among which AD currently prevails and is widely considered the most promising one for PSO. However, when modeled as simultaneous equations, PSO problems usually have approximately equal numbers of variables and equations because of their few degrees of freedom. As a result, for automatic differentiation the ratio of the time for Jacobian evaluation to the time for residual evaluation is bounded by 3n (n denotes the number of independent variables), while for finite-difference and symbolic differentiation it is roughly n [2]. Thus AD does not always have significant advantages over the other two differentiation approaches in PSO problems. Motivated by the advantages and disadvantages of the three algorithms, the hybrid differentiation (HD) approach, which employs different differentiation algorithms for different parts of a process model, was presented and developed by Li et al. [3] and Shao et al. [4]. It can exploit the model structure much better than a single differentiation approach and so achieve higher efficiency. HD includes three steps. First, the optimization model is partitioned into different modules; here the term module merely refers to a group of code or subroutines and does not mean the blocks corresponding to unit operation model subroutines. Then the best
differentiation approach for each module is selected. Finally, the overall derivative of the model is accumulated from the derivatives of the modules. Among the three steps, the second is the most important but also the most difficult. In Section 2, a measure of complexity is defined for a differentiation algorithm to characterize the number of scalar computations required to evaluate the Jacobian matrix in a scalar architecture, so that the best differentiation algorithm can be found by comparing the complexities of the different differentiation algorithms for each module. An extended AD approach is then presented to evaluate the complexities. In Section 3, an ad hoc approach to detect redundant computations and combinations of similar terms is presented for estimating the complexity of SD. The complexity analysis approach for HD is applied to an optimization problem of an industrial batch polycondensation reactor in Section 4. Numerical results illustrate both the validity of the complexity analysis and the significant advantages of applying HD to this optimization problem.
2 COMPLEXITY ANALYSIS IN AN EXTENDED AD PROCEDURE
In a scalar architecture, the number of scalar computations required for an algorithm to differentiate a module can be used to reflect the efficiency of that algorithm. In this paper it is called the complexity of the differentiation algorithm for the module. Therefore, the best differentiation algorithm for a module can be selected by comparing the complexities of the algorithms for this module. Here we only discuss the complexity analysis for Jacobian evaluation. In the HD framework, symbolic automatic differentiation (SAD) [5], a symbolic variant of AD, is adopted as the SD approach, because it can differentiate the subroutines that are usually used to describe complex chemical process system models, which traditional SD tools cannot. In addition, the AD algorithm incorporating the Jacobian-compressing technique [6,7], called sparse AD in this paper, is often adopted to exploit the sparsity of a model. The sparsity of variables in a model, which is necessary for the Jacobian-compressing technique, can be obtained during a quasi-AD procedure [6]. Therefore, considering that the complexity analysis for FD is easy, the whole complexity analysis approach for the different differentiation algorithms in HD can be unified into a framework similar to that of AD. In operator-overloading AD, all the elementary computations (addition, subtraction, multiplication, etc.) during the evaluation of a process model are overloaded and the results of the elementary computations are regarded as temporary variables. The temporary variables that depend on the independent variables, called active variables [8], are declared to be of a particular class named avariable. The value of an independent or temporary variable, its Jacobian, the related operator, and other useful information are encapsulated in class avariable. During the computation of a module, each operator is overloaded to compute both the resulting variable of the operation and its Jacobian. Class avariable can be extended to a class savariable (super active variable), which contains additional parameters that are indispensable for choosing the differentiation algorithms. As a result, the AD procedure is extended to perform more computational jobs (shown in Fig. 1). Unlike AD, sparse AD and FD, evaluating the complexity of SAD is a tough task because
the redundant computations to be eliminated are hard to detect. These redundant computations, including computations between constants, multiplications between 1 and any quantity, additions between 0 and any quantity, etc., are executed during the generation of the symbolic derivatives and need not be performed during optimization. Furthermore, as each temporary variable denotes an expression in SAD, computations between the variables may lead to combinations of similar terms in the corresponding expressions. Such combinations should be taken into consideration for a more accurate evaluation of complexity.
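To make the operator-overloading idea described above concrete, the following is a minimal forward-mode sketch (an illustration only; the class name and structure are not the avariable/savariable implementation used by the authors):

    class AVar:
        """Active variable: carries a value and its derivatives w.r.t. the independents."""
        def __init__(self, value, deriv):
            self.value = value          # scalar value of the (temporary) variable
            self.deriv = deriv          # list of partial derivatives

        def __add__(self, other):
            other = other if isinstance(other, AVar) else AVar(other, [0.0] * len(self.deriv))
            return AVar(self.value + other.value,
                        [a + b for a, b in zip(self.deriv, other.deriv)])

        def __mul__(self, other):
            other = other if isinstance(other, AVar) else AVar(other, [0.0] * len(self.deriv))
            # product rule propagated at the elementary-operation level
            return AVar(self.value * other.value,
                        [self.value * b + other.value * a
                         for a, b in zip(self.deriv, other.deriv)])

    # two independent variables x1, x2; derivatives seeded with the identity
    x1 = AVar(3.0, [1.0, 0.0])
    x2 = AVar(2.0, [0.0, 1.0])
    y = x1 * x2 + x1
    print(y.value, y.deriv)     # 9.0 and the gradient [x2 + 1, x1] = [3.0, 3.0]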
Fig. 1 Diverse computational jobs in the extended AD procedure (variable calculation, derivative calculation, sparsity evaluation, and complexity evaluation for FD, AD, sparse AD and SAD, all built around class savariable). Note: * denotes the number of scalar computations for calculating the temporary variable itself.

3 COMPLEXITY ANALYSIS FOR SAD
The terms in the expression corresponding to a temporary variable can be grouped into three categories. The first is the nonconstant terms that will incur redundant computations when multiplied by a constant (except 0, 1, -1). The second is the nonconstant terms that will not incur redundant computations when multiplied by a nonzero constant. The third is the constant term. The numbers of terms in each category are p, q and k. In this article, p, q, k are encapsulated in class savariable to count discount, the number of eliminated redundant computations relevant to the temporary variable. Initially, each independent variable is assigned p = 0, q = 1, k = 0. Then p, q, k of each temporary variable can be propagated during the operator-overloading procedure. Suppose that there is a multiplication between a variable z1 and a constant z2 (that is to say, p2 = 0, q2 = 0) with result z3. When the operator is overloaded, the p3, q3, k3 of z3 and the discount of this operation can be evaluated from the p1, q1, k1 of z1 and the k2 of z2 by the algorithm described in Fig. 2. The task of identifying all similar terms present in a system of equations is very difficult [7]. Since linear equations are common in process models and are easy to find, we only consider the combination of linear terms associated with the independent variables. Two parameter lists, named index and coeff, are encapsulated in class savariable to count combination, the number of computations eliminated by combining the linear terms. The parameters in the list index are the indices of the independent variables in the linear
subexpression of the temporary variable, while those in the list coeff are the corresponding coefficients of the independent variables. Initially, each independent variable xi is assigned index = i and coeff = 1. The two lists of each variable are then obtained during differentiation when the relevant operator is overloaded. Since the linear subexpression is part of the temporary variable, updating the lists will probably influence the values of p, q. Suppose that there is an addition between variables z1 and z2 with result z3. The parameter combination relevant to the addition can be obtained from index1 and coeff1 of z1 as well as index2 and coeff2 of z2 by the algorithm described in Fig. 3. The above approach may not find all the redundant computations and all the combinations of similar terms. Therefore the efficiency of SAD will be underestimated. However, since SAD is not as convenient as AD and costs more memory, the user usually will not replace AD by SAD unless the latter greatly surpasses the former. Hence this approach is acceptable.
    if k2 = 0
        if k1 = 0
            discount = 2*p1 + q1
        else
            discount = 2*p1 + q1 + 1
        end
        p3 = 0, q3 = 0, k3 = 0
    elseif k2 = 1 or -1
        discount = 1
        p3 = p1, q3 = q1, k3 = k1*k2
    else
        if k1 = 0
            discount = p1 + 1
        else
            discount = p1 + 2
        end
        p3 = p1 + q1, q3 = 0, k3 = k1*k2
    end

Fig. 2 Partial view of the algorithm for obtaining discount

    for i = 1 to length(index1)
        if index1(i) is found in index2 at position j
            if both coeff1(i) and coeff2(j) are 1 or -1
                if coeff1(i) + coeff2(j) = 0
                    combination = 2
                else
                    combination = 1
                end
            elseif either coeff1(i) or coeff2(j) is 1 or -1
                if coeff1(i) + coeff2(j) = 0
                    combination = 3
                elseif coeff1(i) + coeff2(j) = 1 or -1
                    combination = 2
                else
                    combination = 1
                end
            elseif neither coeff1(i) nor coeff2(j) is 1 or -1
                if coeff1(i) + coeff2(j) = 0
                    combination = 4
                elseif coeff1(i) + coeff2(j) = 1 or -1
                    combination = 3
                else
                    combination = 2
                end
            end
        end
    end

Fig. 3 Partial view of the algorithm for obtaining combination

4 NUMERICAL RESULTS
The test problem is an industrial polycondensation process optimization problem which has been discussed in detail by Li et al. [3]. The model contains a polycondensation process submodel and a flash distillation submodel, of which the former is continuous and the latter is discontinuous. The optimization was performed on a Celeron 400 MHz PC with the Microsoft Windows 2000 operating system. The Successive Quadratic Programming (SQP) algorithm was realized with Optimization Toolbox 2.1.1 in MATLAB. AD, sparse AD and SAD were realized with XADMAT [5], which was developed by Li et al. based on ADMAT [6]. FD was
executed in a forward-difference approach. Tables 1-2 display the differentiation results for the two submodels by SAD, AD, sparse AD and FD. The estimated complexities approximate the actual ones and thus reflect the computing time. The complexities in parentheses are estimated without considering the combination of similar linear terms. The considerable difference between the two complexities indicates that the combination of linear terms occurs frequently during the differentiation and should be taken into consideration. The flash distillation submodel cannot be differentiated by SAD and sparse AD because the discontinuous module does not have a unique symbolic Jacobian and sparsity pattern. Sparse AD achieved much higher efficiency than AD because this equation-oriented model is highly sparse. SAD performed much better than sparse AD because it exploited the sparsity more effectively, avoided redundant computations and combined similar terms. According to Tables 1-2, HD should employ SAD for the polycondensation submodel and FD for the flash distillation submodel. The results of the optimization based on FD, AD and HD are shown in Table 3, which highlights the overwhelming advantage of HD.

Table 1. Complexity and efficiency of each differentiation algorithm in the polycondensation submodel
                        SAD (actual / estimated)    AD         Sparse AD    FD
Complexity              24174 / 33045 (40517)       2251272    116181       2143743
Computing time (s)      0.24                        76.24      11.41        194

Table 2. Complexity and efficiency of each differentiation algorithm in the flash distillation submodel
                        SAD (actual / estimated)    AD         Sparse AD    FD
Complexity              / (not applicable)          1709778    /            882659
Computing time (s)      /                           53.30      /            2.49
Table 3. Results of polycondensation process optimization with diverse differentiation algorithms
Differentiation algorithm    Iterations    Time for one differentiation (s)    Differentiation time (s)    Optimization time (s)    Ratio*
FD                           9             196.17                              1702.1                      2083.5                   81.69%
AD                           13            136.82                              1778.7                      2254.4                   78.90%
HD                           21            2.73                                53.16                       519.67                   10.23%
Note: * denotes the ratio of differentiation time to optimization time.

5 CONCLUSION
In a scalar architecture, the complexity defined here for choosing differentiation algorithms for the modules of a process model reflects the actual computing time. The approach presented in this paper for estimating this complexity is effective. Integrating HD into an SQP algorithm can dramatically reduce the time for differentiation and significantly improve the efficiency of optimization.
ACKNOWLEDGEMENTS This research was supported by the National Natural Science Foundation of China (No. 20276062) and partially supported by the National High Technology Research and Development Program of China (No. 2002AA412110).
REFERENCES
1 Wolbert, D., Joulia, X., Koehret, B., Biegler, L. T., Flowsheet optimization and optimal sensitivity analysis using analytical derivatives, Computers and Chemical Engineering, 18(11), 1083-1095 (1994).
2 Tolsma, J. E., Barton, P. I., On computational differentiation, Computers and Chemical Engineering, 22(4/5), 475-490 (1998).
3 Li, X., Shao, Z., Zhong, W., Qian, J., Polycondensation process optimization based on hybrid automatic differentiation, Journal of Chemical Industry and Engineering (China), 53(11), 1111-1116 (2002).
4 Shao, Z., Li, X., Qian, J., Hybrid differentiation algorithm in chemical process system optimization, Journal of Chemical Industry and Engineering (China), accepted.
5 Li, X., Zhong, W., Shao, Z., Qian, J., Applying extended automatic differentiation technique to process system optimization problems, Proceedings of the American Control Conference 2001, 4079-4084 (2001).
6 Verma, A., Structured Automatic Differentiation, Ph.D. Thesis, Cornell University (1998).
7 Tolsma, J. E., Barton, P. I., Efficient calculation of sparse Jacobians, SIAM Journal on Scientific Computing, 20(6), 2282-2296 (1999).
8 Griewank, A., Juedes, D., Utke, J., ADOL-C: a package for the automatic differentiation of algorithms written in C/C++, ACM Transactions on Mathematical Software, 22(2), 131-167 (1996).
Multi-objective programming in refinery planning optimization Shaojun Li a, Hui Wang a, Yongrong Yang b, Feng Qian" aEast China University of Science and Technology, Shanghai 200237, China bDepartment of Chemical Engineering, Zhejiang University, Hangzhou 310027, China
Abstract The aim of this paper is to present a multi-objective genetic algorithm approach to optimize production planning in circumstance of uncertain price. The characteristic feature of this approach proposed is that various objectives are synthetically considered in the solving process. According to multi-objective programming, uncertain parameters in a mathematical model are considered triangular possibility distributions. Thus the uncertain optimizing problem can be transfer to a three objectives optimizing problem. An example was used to show the feasibility of the approach. Keywords multi-objective, planning, genetic algorithm 1 INTRODUCTION Oil refinery is one of the most complex chemical industries, which involves many different and complicated processes with various possible connections. The objective in refinery operation is to generate as much profit as possible by converting crude oils into valuable products such as gasoline, jet fuel, diesel, and son on. From the managerial level, managers need to decide which crude oils to process, which products to produce, which operating route to use, which operating mode is the best for each process, etc. But there are many related problems remain unsolved in supporting high-level decision-making, because more undermined factors are involved in this level. These factors, such as the randomness of arriving orders, the uncertainty in a competitive environment, present a special challenge for effectively modeling the real-world situation by using traditional mathematical programming technology. In this paper a multi-objective programming approach is used to solve the uncertain problem, especially the price uncertainty, in process planning for oil refinery plant. 2 MATHMATICAL MODEL OF PRICE UNCERTAINTY The aggregated production planning is a top-level long-range planning problem in a manufacturing enterprise. Other planning activities such as the annual sales planning, resource planning and raw material planning depend on the aggregated production planning. Due to the uncertainty of market, especially, the uncertain price of products and raw materials, the aggregated production planning is very difficult to be made. The mathematical model of refinery production planning in the circumstance of price uncertainty can be built as follows [ 1],
Max
(ZCiXi-ZC'jXj-ZC'kXk) i
s.t.
j
EXi =ZXj i
j
(1)
k
(2)
524 y ' Xik < M k
(3)
i
X, =~X,j
(4)
J X i = YifXf
(5)
b~
(6)
<_ b i <_ b~
a) <_a i < a2i (7) The objective function (1) corresponds to the maximization of profit over the time horizon ,-.., represented by the difference between the revenue C i X i due to product sales and the overall cost (cost of raw material purchases C j X j , inventory holding cost and operating cost C k X k ). Fixed costs are generally not a concern to the planner as these costs will be essentially the same, no new device is considered here. Constraint (2) determines the balance of raw materials and products. Constraint (3) determines production capacity of every unit; M is the maximum production capacity. The material balance for each chemical is given by (4). Constrain (5) describes the production ratio of every unit, Yif is the production ratio of product i in the device f. While constraint (6) and (7) represent sales and purchasing limits. Uncertainty in parameters has been most often modeled by probability distributions, usually derived from evidence recorded in the past. However, when there is lack of evidence available or lack of certainty in evidence or simply when it does not exist, the standard probabilistic reasoning methods are not appropriate. In this case, uncertainty in parameters can be specified based on the experience and managerial subjective judgment. It is convenient in these cases to express uncertainty in parameters using various imprecise linguistic terms, such as customer demand is about C~ possible value), but definitely not less than C L (most pessimistic value) and not greater than C v (most optimistic value). Thus the uncertain in parameters can be denoted as C = (cL , cM , C v ).
Lai and Hwang [2] referred to portfolio theory and converted the fuzzy objective with a triangular possibility distribution into three crisp objectives. According to their method, the equation (1) can be represent as Z = max{(~rX}. The production planning making process becomes simultaneously to maximize three objectives, (cL)Tx, (cM)Tx and (cU)Tx. Thus the production-planning optimizing problem in the circumstance of uncertain price environment becomes a multi-objective programming problem as follows, I maxZl = (C L ) T X maxZ 2 =(cM) TX. (8) maxZ 3 = (cU) r X 3 MULTI-OBJECTIVE P R O G R A M M I N G IN PRODUCTION PLANNING The solution of a multi-objective optimization problem is dependent upon the preference of decision maker, which could be represented by a utility function that aggregates all objective functions into a scalar criterion. In most decision situations, a global utility function is not known explicitly and only local information about the utility function could be elicited. This leads to interactive procedures facilitation tradeoff analysis. As for the solution approach, the conventional optimization techniques, such as
525 gradient-based and simplex-based methods are difficult to extend to the multi-objective case, and it is often difficult to find the optimal network due to the non-convexity of the mathematical representation of the problem. Genetic algorithm,-a stochastic optimization technique based on the concepts of natural evolution, has been recognized to be possibly well suited to multi-objective optimization problems because multiple solutions can be searched in parallel. The ability to handle complex problems with discontinuities, multimodality and disjoint feasible spaces reinforces the potential effectiveness of GA in multi-objective search and optimization. The simultaneous optimization of multiple, possibly competing objective function differs from the single function optimization in that it seldom admits a single perfect solution, but rather a set of alternative solutions, called as non-dominated solution set or Pareto set. These solutions are optimal in the wider sense that no other solutions in the search space are superior to them when all objectives are considered. Genetic algorithm seems particularly suitable to solve multi-objective optimization problems because it deals simultaneously with a set of possible solutions which allows to find an entire set of Pareto optimal solutions in a single run of the algorithm, instead of having to perform a series of separate rims as in the case of the traditional mathematical programming techniques. Additionally, genetic algorithm is less susceptible to the shape or continuity of the Pareto front, whereas these two issues are a real concern for mathematical programming techniques. Genetic algorithm has features that make them attractive for the approximation of the Pareto set: it is population-based, requires only objective function evaluations, uses probabilistic transition rules which make it less prone to local optimum entrapment and allow for several types of parallel implementations. Multi-objective genetic algorithm is developed on the basis of genetic algorithm, and it can trade-off between the multiple objectives directly. The population evolution depends on genetic operations acting on the whole population such as selection, crossover and mutation, and these genetic operations have key effect on the performance of the algorithm. The intrinsic, parallel and effective use of the global information are the main characteristics of this algorithm. The steady state non-dominated sorting genetic algorithm [3] is applied in this paper as the solution approach to the multi-objective optimization problem. It was developed by the combining of steady-state idea in single-objective genetic algorithm and the fitness assignment strategy of non-dominated sorting genetic algorithm, and has been test to be efficient in solving the MILP or MINLP problems with multiple objectives. In order to use multi-objective genetic algorithm solve the model above, the three objectives need to be transferred as follows. At first, the positive ideal solution (PIS) and negative ideal solution (NIS) of the three objective functions should be obtained. There are
Z~Is =maxICLf XIXeW}, ZlNIS=minkCL) r X[ X~ tiff}, Z~IS = maxtCM) r X I X ~ W } , Z~Is =minICM)r X[ X ~ qJ} Z3ms =maxICV~XlXeW }, Z~1S= min~CV) r X[XEW}. The linear membership function of these objective functions can be computed as
(9)
526 1 l,z, :
Zi - zimS z#,S _ z#,S
Z~ < Z ~ Is
(i = 1,2,3).
Z # ,s < Z i < Z # ,s
0
(10)
Z i > Z # Is
Thus the objective function become the formula as follow, (11)
max(#zl ), max(#z 2), max(#z3 ). 4 EXAMPLE
We use an example [4] to explain the using of multi-objective programming in petroleum refinery. Fig.1 is a simplified representation of a refinery that is essentially a primary distillation unit and a middle distillate cracker. It processes crude oil and produces gasoline (x2), naphtha (x3), jet fuel (x4), heating oil (xs) and fuel oil (x6). The primary unit splits the crude into naphtha (13 wt% yield), jet fuel (15%), gas oil (22%), cracker feed (20%), and residue (30%). Gasoline is blended from naphtha and cracked blend stock in equal proportions. Naphtha and jet fuel products are straight run. Heating oil is a blend of 75% gas oil and 25% cracked oil. Fuel oil can be blended from primary residue, cracked feed, gas oil and cracked oil in any proportions. Yields for the cracker (wt% on feed) are flared gas, 5%, gasoline blend stock, 40% and cracked oil, 55%. This information along with the flow diagram of Fig. 1 describes the physical system. All the variables, which are in the same units, t/d, are first assigned to process streams to represent the flow rate in each. In the example, the feed rates to the primary unit and cracker, averaged over a period of time, can be anything from zero to the maximum plant capacity. The constraints are as follows, Primary unit: xl ~< 15000(t/d), (12) Cracker: x14~2500(t/d). (13) Mass balances: mass balance constraints are in the form of equalities. There are three types of such constraints: fixed plant yields, fixed blends or splits and unrestricted balances. The fixed yields and fixed blends already state above. The restricted balances are as follows, Naphtha: X3+Xll--X7, (14) Gas oil: x12+x13-x8, (15) X2 gasoline "7
[ xll
~x
16
x3naphtha X4
XI2
I
jet v
X18 X14 XI 7 Xln
X15 l'I IIXi3 .,..._[
v tI XI9 iX 6
~-I FUELOIL BLENDING I
Figure 1. Flow diagram of a refinery
f,,,~l
527 Cracker feed: Cracked oil: Fuel oil: Gasoline; Naphtha: Jet fuel: Heating oil:
x14+x15=x9, x18+x19--x17, x6=x10+x13+x15+x19 . x2 ~<2700(t/d), x3 <~ 1100(t/d), x4<~ 2300(t/d), xs<~ 1700(t/d),
(16) (17)
(18) (19) (20) (21) (22)
Fuel oil: x6 ~<9500(t/d). (23) The prices of raw materials and products are denoted as follows ( s crude oil (7.0, 7.5, 8.0); gasoline (17, 18.5, 20); naphtha (6, 8, 10); jet fuel (10, 12.5, 15); heating oil (12, 14.5, 17); fuel oil (5.5, 6.0, 6.5). The operating cost of primary distillation unit is 0.5s operating cost of cracker is 1.5s The positive ideal solutions are got by solving the formula (9), and the negative ideal solution are got by selecting the minimum objective values in the circumstance of other two objectives getting their positive ideal solutions. Thus the three positive and negative ideal solutions are (23387.5, 18593.7), (6326.7, -4787.5) and (47737.5, 30860.8). Thus the objective functions can be denoted as follow, max (18.5x +8.0x 3 + 12.5x 4 + 14.5x 5 + 6 . 0 x 6 - 8 X I - 1.5X14 ) m a x ( 1 7 . 0 x 2 +6.0X 3 + 10.0X4 + 12.0X5 + 5.5X6 8.5Xl -1.5X14) (24) max (20.0x 2 + 10.0x 3 + 15.0x 4 + 17.0x 5 +6.5x 6 -7.5x I -1.5x14 ) st :
(12) - (23), Xa=0.15Xl, X9=0.20Xl,
X7=0.13XI, XS=0.22Xl, xl0=0.30xz,
X20=0.05XI4, X17--0.55X14,
XI6=0.40XI4,
xll:x16=l:l,
Xl2:Xl8=3:l f uI =
1 - 18593.7 23387.5-18593.7 OBJ1
/f 23387.5 < OBJ1 18593.7 < OBJ1 < 23387 5 if OBJ1 < 18593.7
/f
0
1
U2 --
O B J 2 + 4787.5
6326.7 + 4787.5 0
u3 =
f OBJ3 - 130860.8 47737.5 - 30860.8 0
/f
/f 6326.7 < O B J 2 -4787.5 < O B J 2 < 6326.7 if
OBJ2 <-4787.5
/f 47737.5< OBJ1 /f 30860.8 < OBJ3 < 47737.5 if OBJ1 < 30860.83
This model was solved by the steady state non-dominated sorting genetic algorithm (NSGA) mentioned above. The population of NSGA is 100 and the Pareto solution is captured no less than 150 generations. The crossover probability and mutation probability are 1 respectively. Figure 2 gives the Pareto solution of 200 generation. In the figure, the hollow triangles denote the solutions trade off between objective 1 and objective 2; the solid triangles denote the solutions trade off between objective 1 and objective 3. The figure gives not one
528 18
20 !
9
24
22
50 48 46
obj 2 ( x 1000)
zxz~
44 42
obj 3
40 ( • 1000) 38
A&
36 ~A
34 32 30
. . . . 18
19
, 20
2
.
,
22
.
;
2
.
24
obj 1 ( • 1000) Figure 2 The optimal results using multi-objective GA
and only solution, but a group of Pareto solutions. Thus the decision maker can choose one solution as the result by contrasting the different solutions. From the figure we can find that the objective 3 increases slowly but the objective 2 and the objective 1 decrease quickly when the objective 3 is bigger 46000. So we can deem that the solution in the circle scope is better than others. Certainly the decision-makers can choose every solution in the figure according to their preferences. The solutions in the circle scope have bigger flexibility. 5 CONCLUSIONS Point to the uncertain price of raw materials and products, multi-objective programming is introduced into production planning of refinery in this paper. It gives a new method to solve the optimal problem under uncertain environment by using multi-objective genetic algorithm. The production planning got by using multi-objective genetic algorithm can provide strong guidance for decision-maker to make long-range planning. An example shows the effectiveness of this method. REFERENCES [ 1] Minglong Liu, Nikolaos V Sahinidis, Process planning in a fuzzy environment, European Journal of Operational Research, 1997,100:142-169 [2] Lai Y J, C L Huang, A new approach to some possibility linear programming problems, Fuzzy sets and System, 1992, 49:121-133 [3] Gao Ying, Shi Lei, & Yao Pingjing (2001), Waste minimization through process integration and multi-objective optimization, Chinese Journal of Chemical Engineering, 9(3): 262-266 [4] Ravi, V., & Reddy, P.J. (1998), Fuzzy linear fractional goal programming applied to refinery operations planning, Fuzzy Sets and Systems, 96(2), 173-182
Process SystemsEngineering2003 B. Chen and A.W.Westerberg(editors) 9 2003 Publishedby ElsevierScienceB.V.
529
Multi-scale ART2 for State Identification of Process Operation Systems Xiuxi Li, Yu Qlan, Qiming Huang, and Yanbin Jiang School of Chemical Engineering, South China University of Technology, Guangzhou, 510640, China
Abstract
Adaptive Resonance Theory (ART) has shown great potential for operational data analysis and state identification. When dealing with real time data, however, ART2 may lose significant information and results in degradation of classification performance. This paper proposed a multi-scale ART2 (MS-ART2). Wavelet analysis is used as the pre-processing to improve classification performance of ART2. Base on the processed signal, patterns are better classified. Finally, application of the MS-ART2 to Tennessee Eastman challenge process for state identification is presented as a case study. Key words
Adaptive Resonance Theory, Wavelet analysis, State identification
1. INTRODUCATION Adaptive Resonance Theory (ART)[1,2] is a clustering-based, autonomous learning model. ART has been instantiated in a series of neural network models referred as ART1, ART2 and ART3. Among them, ART2 is designed for continuous-valued vectors and has shown great potential for operational data analysis and state identification. ART2 may be trained unsupervised when it is used for continuous signal input. It is well known that it is very difficult to get training data for the purpose of process fault identification and diagnosis. ART2 has been reported to identify and respond to the objects quickly and automatically. In addition, ART2 is recursive. It acquires new knowledge and retain stable while the existing knowledge is not corrupted. This property is proved very useful for monitoring where on-line data are continuously received. However, ART2 is limited in de-noise. ART2 could only de-noise the low frequency noise. Thus its performance is highly depressed when there is strong noise in input signal. In this paper, a MS-ART2 is proposed to improve the de-noise performance of conventional ART2 by combining wavelet analysis with ART2. 2. EFFECT OF NOISE TO THE PERFORMANCE OF ART2 To illuminate the effect of noise to the performance of ART2, the leak detection of oil pipe is studied as a case. It is very important for refinery that oil pipe transport in safe and *Corresponding author, Tel: +86(20)87112046, Email: [email protected]
530 efficient. Once leakages occur, not only does economic loss, but also environment would be polluted. Figure 1 shows the schematic diagram of the leak detection of oil pipe [3]. pl(t) is a pressure signal at the upstream pump station and p2(t) is at downstream pump station. L is length of pipe, and XL is position of leak. QL is quantity of leak. Under stable state, both ofp](t) and p2(t) are smooth signal with invariable mean value. When there is a leak in the pipe, however, there will be a suction pressure wave with definite wave speed spreading the upstream and downstream terminals. The mean of pl(t) and p2(t) will be changed while suction wave arrival at the two terminals of pipe. In this case, there is a leak at the sampling time 500. The plot of pressure signal p](t) is shown in Figure 2, where X-coordinate is sampling time and Y-coordinate is pressure. Let the length of time window be 5. That means the input pattern is (xi§ x i§ x i+3, x i+4, x i+5). 200 groups data are obtained. The vigilance threshold R is 0 (all input patterns are regarded as one type). Figure 3 shows the similarity of data computed with ART2. From Figure 3, it is noticed that the plot changes very sharp on the 100th input pattern. It shows that there is a leakage at that time. It matches the fact that there is a leak at the sampling time 500.
~] Downstream i pump station
upstream pump station
Q~
pl(t)
pE(t)
Figure 1 Diagnosis system for pipe leak 35i
. . . . . . . . . . . .
34St
33'5I 0.92
O.9
32 5I 3=0-;~o-~o~;-~o-;oo-
i-oo e ~ - 7 o o
800
9oo ~ooo
Fig. 2 Signal of pressure at normal low noise
o.%
2o
40
6'o
9o
loo
12o
14o
16o
18o
200
Fig. 3 Similarity of data from ART2
34 5
33 5 I
33! 32 5 t 32!-
100 200 300 400 500 600 700 800 900 1000
Fig. 4 Signal of pressure at high noise
Fig. 5 Similarity of data at high noise from ART2
531 In order to examine the ART2 performance under high noise, white noise is added on The plot ofpl(t) with noise is shown in Figure 4. Figure 5 shows the result of pressure ART2. Although the plot on the 100th sampling changes dramatically, which means a leakage may occur, the performance of pattern recognition of ART2 do decrease because false alarms occur on such as the 25th, 73rd input patterns and so on. This example shows that ART2 performance degrades when there are strong noises in the input signals. This is because noise cannot be well removed due to inappropriate de-noise mechanism of ART2. The de-noise mechanism is actually an activation functionflx).
pl(t).
f(x)={;
xO
where 0 is a threshold. If an input signal is less than 0, it will be considered as a noise component and set to zero. This is inappropriate for removing noise components contained in process dynamic transient signals that are often of high frequencies and in certain magnitude. 3. MS-ART2 NEURAL N E T W O R K To improve performance of ART2, Many improved methods were proposed by many researchers. A mechanism was proposed by Pao[4] to replace the data pre-processing part of ART2 with more efficient noise removal and dimension reduction methods. Wang[5,6] has proposed an integrated framework ARTnet, in which wavelet analysis is used to replace the data pre-process unit of ART2. In this study, a MS-ART2 neural network is proposed. Wavelet analysis is used as the data pre-process unit of ART2. In MS-ART2, wavelet analysis has two functions. The first function is to remove noise for original signal, and another is feature extraction. After wavelet transform, the low frequency part represents the trend of processing signal, while the high frequency part represents the details of signal changing. The signal transformed by wavelet is classified in three groups: low-frequency, high-frequency, and de-noise signal. They input into ART2 separately to training the neutral network to get the structure information of the ART2. Figure 6 shows the structure of proposed MS-ART2 neutral network. In post-processing system, classified results from ART21, ART22, and ART23 will be analyzed, compared, and stored.
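A minimal sketch of the wavelet pre-processing step described above (illustrative only; the wavelet family, decomposition level and threshold below are assumptions rather than the authors' settings, and PyWavelets is used purely as an example library):

    import numpy as np
    import pywt

    signal = np.random.randn(1024) + np.linspace(33.0, 34.0, 1024)   # synthetic noisy pressure trace

    coeffs = pywt.wavedec(signal, 'db4', level=3)        # multi-scale decomposition
    approx, details = coeffs[0], coeffs[1:]              # low-frequency trend / high-frequency details

    # soft-threshold the detail coefficients to obtain a de-noised signal
    thr = 0.5 * np.std(details[-1])
    den_details = [pywt.threshold(d, thr, mode='soft') for d in details]
    denoised = pywt.waverec([approx] + den_details, 'db4')

    # the three groups fed separately to ART21, ART22 and ART23
    low_freq_input, high_freq_input, denoised_input = approx, details, denoised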
low frequencywavelet Coefficient
signal
Wave,etI transform
~[ ART21 I
high frequency wavelet___~l Coefficient I ART22 wavelet de-noised
Postprocessing system
signal-----~! ART23
Figure 6 Structure of proposed multi-scale ART2 neutral network
532 The procedure of MS-ART2 neural network is as follows. The first step is training. The sample is transformed by wavelet to get the low frequency, high frequency coefficient of wavelet transform, and the de-noise signal. They input into ART2 separately to training the neutral network to get the structure information of the ART2 network of the sample. Then, the networks trained by samples are stored in the post-processing system. The second step is testing processing. The new sample by wavelet transforms, we could get the information of similarity degree of sample IN. Then the information of matching degree of IN is compared with the information stored in the post-processing system. It is considered as an existing pattern, if all information matches. Otherwise, the system will treat it as a new pattern and save it. 4. CASE STUDY To verify and illustrate the performance of MS-ART2 for state identification of process operation, data generated from the Tennessee Eastman (TE) simulation [7,8] were used. Tennessee Eastman process, which was developed by Downs and Vogel [9], consists of five major unit operations: a reactor, a condenser, a vapor-liquid separator, a recycle compressor, and a product stripper. The process has 41 measurements, including 22 continuous process measurements and 19 composition measurements, and 12 manipulated variables. Some disturbances are programmed for researching the characteristics of the control system, listed in Table 1. The variables selected here are 11 manipulated variables and 22 measurements available every second. There are 12 groups of samples available, as listed in Table 2. Table 1 Process disturbance for the Tennessee Eastman process Case Disturbance iDV(1) A/C feed ratio, B composition constant IDV(2) B composition, A/C ratio constant IDV(3) D feed temperature IDV(4) Reactor cooling water inlet temperature IDV(5) Condenser cooling water inlet temperature IDV(6) A feed loss IDV(7) C header pressure loss - reduced availability IDV(8) A, B, C feed composition IDV(9) D feed temperature IDV(10) C feed temperature IDV(11) Reactor cooling water inlet temperature IDV(12) Condenser cooling water inlet temperature IDV(13) Reaction kinetics IDV(14) Reactor cooling water valve IDV(15) Condenser cooling water valve IDV(16) Unknown
Type Step Step Step Step Step Step Step Random variation Random variation Random variation Random variation Random variation Slow drift Sticking Sticking Unknown
533 Table 2 Fault patterns to be diagnosed Sample Fault T1 IDV[ 1] T2 IDV[6] T3 IDV[7] T4 IDV[13] T5 IDV[14] T6 IDV[7]*0.8 T7 IDV[7]*0.6 T8 IDV[7]*0.4 T9 IDV[7]*0.2 T10 IDV[6]+IDV[9] Tll IDV[6]+IDV[10] T 12 IDV[ 16]
Sampling number 1000 1000 350 1000 1000 1000 1000 1000 1000 1000 1000 1000
Fault time 200 200 100 200
Result IDV[ 1] IDV[6] IDV[7] IDV[13] IDV[14] IDV[6] IDV[6] IDV[6] IDV[6] IDV[6] IDV[6] New Fault
200
200 200 200 200 200 200 200
The precision of pattern classification is decided by clustering size of ART2. Clustering size is determined by vigilance threshold p, which is often empirically tested and selected. The key to choose p is to balance the computation complexity and the degree of losing alarm. From the simulation, it is appropriate that the vigilance threshold p is 0.95 here. The diagnostic results are listed in the last column of Table 2. The detail analysis of sample T1 is given as follow. To have a clear picture of classified results, only two measurements are discussed here. Figure 7 shows original plots of two variables of sample T1, where X-coordinate is sampling point and Y-coordinate is normalized data. The top chart is feed flow of component A and the bottom is of component D. In Figure 7, real-lines classify whole figure into three parts, corresponding the normal state, the transition state, and the abnormal state, respectively. The broken-lines are process trend determined by classification of MS-ART2. Normal ,f
,
A,
Transitions 2 3
Y-'--r"rc-~r .
o
"~ 0.8
:
:/
i l
: .~/~:
0.4 ~~___L__~___~
1 ----~ .....
i i i
! ,
,
i
,
,
i
!
,
,:, ,:, ,,,
296
100
200
300
448
400
~
i ,4__ 1
,
!
^ [
1
i
,
536
500
/
++
~
**
~**
b
0.6
[;'
D
, 0.4
.
0.2
656
600
L§
1
:
LI t~[,I LhJ,;
o.8
Oo
. . .
:
O.8
0
Abnormal
I
0
700
800
900
Figure 7 Plots of two variables of Pattern T 1 and its classification from MS-ART2
1000
0
0.2
0.4
0.6
0.8
Figure 8 Classification of T1 using MS-ART2 -*- Normal state, , Transition 1, [> Transition2, + Transition3, 9Abnormalstate, [] center
534 MS-ART2 classifies the whole process into normal state (sampling location 1 ~ 296), transition state 1 (sampling location 297 ~ 448), transition state 2 (sampling location 449 ~ 536), transition state 3 (sampling location 537 ~ 656), and abnormal state (sampling location 657 - 1000). The result of MS-ART2 is shown in Figure 8. States transition from normal to abnormal occurs at 296, and fault matures at 657. It perfectly shows the overall process of fault from occurrence, transition, and maturity. 5. CONCLUSIONS In this paper, a MS-ART2 is proposed by combining wavelet analysis with ART2. Wavelet analysis is used as the data pre-process unit of ART2. Using multi-scale characteristic and signal singularity detection functions of wavelet transform, better effect of signal de-noise is achieved. The signal transformed by wavelet is classified in three groups" low-frequency, high-frequency, and de-noise signal. They input into ART2 separately to training the neutral network. Post-processing system is used to process and store the patterns form ART2. Finally, application of the MS-ART2 neural network to Tennessee Eastman challenge process for state identification is presented. It shows that MS-ART2 is effective for detecting and identifying faults and abnormal events. ACKNOWLEDGEMENTS Financial support from the National Natural Science Foundation of China (No. 29976015), the China Excellent Young Scientist Fund, China Major Basic Research Development Program (G20000263), and the Excellent Young Professor Fund from the Education Ministry of China are gratefully acknowledged. REFERENCES
[ 1] G. A. Carpente and S. Grossberg. Computer Vision, graphics and Image Processing, 37(1), 54~ 115, 1987 [2] G. A. Carpenter and S. Grossberg. Applied Optics, 26(23), 4919~4930, 1987 [3] Hao Wu, Ph. D Thesis, Tsinghua University, 1996 [4] Y.H. Pao, Adaptive Pattern Recognition and Nueral Networks, Addison-Wesley,1989 [5] X.Z. Wang, B.H. Chen, S.H. Yang, and C. McGreavy, Computers and Chemical Engineering, 23(7), 899-906,1999 [6] X.Z. Wang, B.H. Chen, S.H. Yang, and C. McGreavy, Computers and Chemical Engineering, 23(7), 945-954,1999 [7] Y. Qian, Q.M. Huang, W.L. Lin and X.X. Li, Computers and Chemical Engineering, 24(2-7), 457-462, 2000 [8] Y. Qian, Q.M. Huang, X.X. Li and Y.B. Jiang, Proceeding of Process System Engineering of Asia, Kyoto, Japan, p.27-34,2000 [9] J.J. Downs and E.F.Vogel, Computer and Chemical Engineering, 17(3), 245-255,1993
ProcessSystemsEngineering2003 B. Chen and A.W. Westerberg(editors) 9 2003 Published by ElsevierScienceB.V.
535
Application of a space- time CE/SE (Conservation Element/Solution Element) method to the numerical solution of chromatographic separation processes Young-il Lim* and Sten Bay Jorgensen CAPEC (Computer-Aided Process Engineering Center), Technical University of Denmark 2800 Kgs. Lyngby, Denmark
Abstract For solving partial differential equations (or distributed dynamic systems), the method of lines (MOL) and the space-time conservation element and solution element (CE/SE) method are compared in terms of computational efficiency, solution accuracy and stability. Several representative examples including convection-diffusion-reaction PDEs are numerically solved using the two methods on the same spatial grid. Even though the CE/SE method uses a simple stencil structure and is developed on a simple mathematical basis (i.e., Gauss' divergence theorem), accurate and computationally-efficient solutions are obtained in a stable manner in most cases. However, a remedy is still needed for PDEs with a stiff source term. It seems to be out of date to use the MOL for solving PDEs containing steep moving fronts because of the dissipation error caused by spatial discretization and time consuming computations. It is concluded that the CE/SE method is adequate to capturing shocks in PDEs but for diffusion-dominated stiff PDEs, the MOL with an ODE time integrator is complementary to the CE/SE method. Keywords: PDEs (Partial Differential Equations), Numerical analysis, Distributed dynamic system, Chromatographic separation process, CE/SE method, MOL (Method of Lines). 1. INTRODUCTION In the numerical analysis for solving partial differential equations (PDEs), a reliable numerical method is required, where reliability reflects computational efficiency, solution accuracy/stability and robustness in use. Normally, solution accuracy is enhanced at the cost of computational time. An adequate numerical scheme should be selected to avoid spurious oscillatory (or unstable) solutions. In the framework of the method of lines (MOL), PDEs are converted to an ODE system in the temporal space after spatial discretization. The ODE (or DAE) time integrator (eg., predictorcorrector Gear-types) achieves a high accuracy with respect to time through adjusting time step-size (At) adaptively to stiffness of the DAE system considered. For the numerical solution of practical chemical processes, Lim et al. [1], [2] have implemented high resolution unwinding schemes (e.g., WENO scheme tal) in the framework of the MOL. However, numerical dissipation caused by spatial discretization can be still substantial in the presence of steep shock fronts when the number of mesh points is insufficient. Moving mesh methods using the MOL are examined to capture the steep fronts but require long computational time Corresponding author, phone: +45 4525 2802, email: [email protected].
536 even with a small number of mesh points because of strong coupling and non-linearity between original physical PDEs and mesh equations added for mesh calculation EaJ. The chromatographic separation can be modeled by convection-dominated parabolic PDEs (Partial Differential Equations) for mass conservation in the mobile phase, ODEs (Ordinary Differential Equations) for the solute adsorption rate in the stationary phase, and eventually AEs (Algebraic Equations) for the adsorption isotherm between the two phases. Thus, they lead to a nonlinear and coupled PDAE (Partial Differential Algebraic Equation) system which is often solved, after discretization of spatial derivatives, by DAE (Differential Algebraic Equation) time-integrator in the framework of the MOL. However, the solution procedure may be inadequate for multi-component, multi-column and multi-dimensional systems since the DAE system obtained from spatial discretization of the PDAEs is steep in the axial direction, large in the Jacobian matrix size, nonlinear between state variables, iterative in the solution procedure and numerically-dissipative. Therefore, a new numerical method is needed. For the numerical solution of conservation laws (e.g., PDEs), Chang t51 proposed a new method, the so-called space-time Conservation Element and Solution Element or the CE/SE method for short, which is mathematically simple but is accurate even at discontinuities and is computationally efficient. The CE/SE method using Gauss' divergence theorem enforces both local and global flux conservation in space and time to achieve numerically non-dissipative feature without involving already established techniques. While the MOL with a DAE time integrator has an implicit feature with variable time steps and it takes much computational time on fine meshes, the CE/SE method is an explicit time-marching scheme in a simple stencil structure and reduces computational time. The present study addresses an application of the CE/SE method for the numerical solution of chromatographic separation problems described by PDEs or PDAEs. In preliminary tests, several PDEs involving steep moving fronts are solved to compare the CE/SE method with the MOL. In the next section, the CE/SE method is introduced and extended to the packedbed chromatographic separation problem. Numerical studies follows in section 3. 2. SPACE-TIME CE/SE M E T H O D S
Space-time CE/SE methods have been used to obtain highly accurate numerical solutions for 1D, 2D and 3D conservation laws involving shocks, boundary layers or contacting discontinuities ESl't61. Motz et al. t71 successfully applied this method to solve a population balance equation described by a hyperbolic integro-PDE. The Courant number insensitive Scheme II E81 has recently been proposed for the Euler equation (or convection PDEs for mass, momentum and energy conservation). Stiff source term treatment for convection-reaction PDEs [91 are also presented for the space-time CE/SE method. The extension to a PDAE system is derived from the original CE/SE method E51and the Scheme II t81. Consider a packed-bed chromatographic model: (3C 3n 3C 3 ~(Dax 3C
[-ff +o~-y;+v~ ~=~
/
~n
,
u = k(~ - ~ )
-gz l
(1)
~
n* = g ( C )
where, C, n and n* are the liquid, solid and equilibrium concentrations, respectively, vL, Dax and k are the liquid interstitial velocity, axial dispersion coefficient and mass transfer coefficient, respectively, t and z are the independent variables for the time and axial direction.
537 , u= (Ci and p =(-~ [,k(g(C)-n) 0 theorem, the above equation is equal to flux conservation as follows: For simplicity, let f =
! vLC-D~ aC! z
the divergence
"
V.h:p where V = -~z~
(2) and
h=
. The integral form of Eq. (2) in a space-time E2 (two
dimensional Euclidean space) is expressed, using the Green's theorem, as follows: definmon
F - ~S(v)-umdz+fmdt=IvPmdV,m=l,2 S(V) is the boundary of an arbitrary space-time
(3)
region V in E2. The key idea of the where CE/SE method is i) to construct a non-overlapping rectangular computational domain staggered in both space and time (see Fig. 1), ii) to approximate urn, fm and Pm within the solution element (SE) through a first-order Taylor expansion, iii) to perform the line integral of Eq. (3) about the CE+ At a position 'j' and a time level 'n' within SE(j,n) which is the shaded crossbar in Fig. 1 (a), u(j,n) and f(j,n) are approximated for points B, D and F (subscript m of Um and fm is omitted for simplicity hereafter): u(j,n)=uj" +u~j(z-zj)+u,j(t" " t" ) (4) n f(],n)= f; +f~jCz-zy)+ ft~Ct-t")
(5) where, the subscripts 't' and 'z' mean partial derivatives with respect to time (t) and space (z), respectively. On the basis of Eq. (4)-(5), known values at points C, E and an unknown value at a point A, counter-clockwise line integrals are performed through A-)B, B--)C, C O D and D--)A for F. as well as A-)D, D--)E, E--)F and F--)A for F+ (see Fig. 1 (b)). After the line integration about the entire boundary S(CE~_), fluxes F• are obtained: #.,,,o.
.
F• = ~s(cE,j-udz+ fdt=uj-%•
._.2
--~-LUzj•
-Ax
.
~
j•
(6)
For the source term in Eq. (3), the source term flux (P+) is obtained within V(CE+): ,
4 p~ (7) where the source term effects are hinged on the mesh point (zj,t") at the new time level [9]. As a result, the space-time flux balance is given:
F•177 = u7 - uj•
_+-T[u~j+,/2+Uzj.-Az [fj~l//e2- f~ ]+--~utj+l/2 At2
+ft~ --~-Pj =0 (8)
Fig. 1 SE (solution element) and CE (conservation element) at jth position and n th time level [51.
538 By summation of the two equations in Eq. (8), an unknown value u~' at point A can be obtained from four known values, Uj+l/2 ,-// and Uzj.t.1/2 ,-1/2 , at points C and E in Fig. 1" 12u~ -- Atp~ ]-I [Uj+I/2 ,-,/2 + U j,-lie ,-1/e _,,-1/2] 0 - l ~ 2 --$j+1/2 + S j - 1 / 2 = where, ,-l/e
Az ,-1/2 At r
At e f,-1/2
(9)
The Scheme 11[8] is employed to accurately
calculate these spatial derivatives ( Uzj~i/2 ,-1/e ) at the previous time level. Here, Az and At remain unknown as the computational parameter to be determined by the user. The decision of the two computational parameters depends on the system considered. For a system containing steep moving fronts, a small spatial step size (Az) is desirable and a small CFL number (or small At) is preferred for a stiff system with respect to time. To solve Eq. (9), Newton's iteration is carried out and 2-3 steps are needed for a satisfactory convergence. 3. NUMERICAL STUDIES The present CE/SE method and MOL with conventional upwinding scheme and WENO schemes TMare tested on several well-known examples such as pure convection, convectiondiffusion, convection-reaction and diffusion-reaction problems, which involve a steep front. It is normally hard to obtain accurate solutions of these problems, using conventional numerical schemes. Uniform 201 spatial mesh points are employed for all cases.
3.1 Pure convection and chromatographic adsorption problems A pure convection problem with an initial condition tl], ut=-Uz, is solved. In Table 1 and Fig. 2 (a), numerical performance is shown. A cbxomatographic adsorption equation [1] in Eq. (1) is also solved and its numerical solution is depicted in Fig. 2 (b). For the two convection problems, the Scheme II of the CE/SE method gives excellent results (i.e., fast computation and accurate solution). Table 1 Accuracy, temporal performance and stability evaluation for linear convection equations. Pure convection Chromatographic adsorption Numerical method Accuracy (Li error) CPU time (s) Accuracy(Li error) CPU time (s) ODE 1St-orderupwind 0.2696 0.8 0.1307 0.9 integrator 5th-orderWENO 0.0421 5.7 0.0252 9.5 CE/SE SchemeII (CFL=0.1) 0.0341 1.5 0.0185 1.2
Fig. 2. Numerical solutions of linear convection equations.
539 3.2 Burgers' equation with mild and severe conditions The famous Burgers' equation is solved with a mild condition [1] and a severe condition [4], respectively (see Table 2 and Fig. 3). For the severe case, the CE/SE method shows an overshooting solution near a shock and the shock position is different from that of the WENO scheme with the MOL. This problem should be discussed further. 3.3 Convection-reaction PDE A convection-reaction PDE t~] and the Fisher's equation [~] are treated. In Table 3 and Fig. 4, numerical results are shown. In the two cases involving a stiff source term, the CE/SE method uses a Newton's iterative calculation. For Fisher's equation, the CE.SE method with the Newton method often fails to converge and shows bad wave propagation.
4. C O N C L U S I O N In this paper, the MOL with high accuracy upwinding schemes is compared with the spacetime CE/SE method in terms of accuracy, computational performance and stability. The CE/SE method is extended to the chromatographic separation problem. For preliminary tests, six convection/diffusion/reaction PDEs are solved using the two methods. The CE/SE method shows good performance for most cases but some enhancement is still needed for the Burgers' equation with a severe condition and the Fisher's equation. It seems to be out of date to use the MOL for solving steep PDEs because of dissipation error caused by spatial discretization and time consuming tasking. However, an implicit CE/SE will be needed for diffusion-dominated stiff PDEs (eg., Fisher's equation). In this case, the MOL with ODE time integrator works much better, to our knowledge. It is thus concluded that the CE/SE method is adequate for capturing shocks but the MOL is complementary to the CE/SE method for diffusion-dominated stiff PDEs.
Table 2 Accuracy, temporal performance and stability evaluation for Burgers' equations. mild Burgers' equation Severe Burgers' equation Numerical method Accuracy (LI error) CPU time (s) Accuracy (LI error) CPU time (s) ODE ISt-orderupwind 0.00345 5.4 0.0083 10.7 3rd-order upwind 0.00035 10.6 0.0014" 130.1 integrator 5th-orderWENO 0.00072 203.5 -(reference solution) 93.0 CE/SE SchemeII (CFL=0.1) 0.00023 9.6 0.0535" 9.4 *unstable numerical solution.
Table 3 Accuracy, temporal performance and stability evaluation for PDEs with stiff source terms. Convection-reaction Fisher equation Numerical method Accuracy (Ll error) CPU time (s) Accuracy(L1 error) CPU time (s) ODE 1St-orderupwind 0.05769 0.9 0.00256 1.9 5th-order upwind 0.00663~ 5.8 0.04880" 3.8 integrator 5t~-orderWENO 0.00671 19.5 0.04857 7.0 CE/SE SchemeII (CFL=0.1) 0.00549 10.9 0.06916 4.9 *unstable numerical solution.
540
Fig. 3 Numerical solution of the Burgers' equation.
Fig. 4 Numerical solution of PDEs involving a stiff source term. REFERENCE [ 1] Lim, Y. I., J. M. Le Lann & X. Joulia, Accuracy, temporal performance and stability comparisons of discretization methods for the solution of Partial Differential Equations (PDEs) in the presence of steep moving fronts, Comp. Chem. Eng., 25 (2001), 1483-1492. [2] Lim, Y. I., J. M. Le Lann, X. M. Meyer, X. Joulia, G. B. Lee and E. S. Yoon, On the solution of Population Balance Equations (PBE) with accurate front tracking methods in practical crystallization processes, Chem. Eng. Sci., 57 (2002), 177-194. [3] Jiang, G. & C. W. Shu, Efficient implementation of weighted ENO schemes, J. Comp. Phy., 126 (1996), p202-228. [4] Lim, Y. I., J. M. Le Lann & X. Joulia, Moving mesh generation for tracking a shock or steep moving front, Comp. Chem. Eng., 25 (2001), 653-663. [5] Chang, S. C., The method of space-time conservation element and solution element--A new approach for solving the Navier-Stokes and Euler equations, J. Comput. Phys. 119 (1995), 295324. [6] Chang, S. C., X. Y. Wang, & C. Y. Chow, The space-time conservation element and solution element method: A new high-resolution and genuinely multidimensional paradigm for solving conservation laws, J. Comput. Phys. 156 (1999), 89. [7] Motz, S., N. Bender & E. D. Gilles, Simulation of two-dimensional dispersed phase systems, ESCAPE-12 (2002), 937-942, Hague, Netherlands. [8] Chang, S. C., Courant number insensitive CE/SE schemes, 38 th AIAA joint propulsion conference, AIAA-2002-3890 (2002), Indianapolis, USA. [9] Yu, S. T. & S. C. Chang, Treatment of stiff source terms in conservation laws by the method of space-time CE/SE, AIAA 97-0435, 35 th Aerospace Sciences Meeting (1997), Reno, USA.
Process Systems Engineering 2003 B. Chen and A.W. Westerberg (editors) 9 2003 Published by Elsevier Science B.V.
541
Using Tabu Search to Solve MINLP Problems for PSE B. Lin, l S. Chavali, 2 K. Camarda 2 and D. C. Miller 1'*
~Department of Chemical Engineering, Rose-Hulman Institute of Technology, 5500 Wabash Avenue, Terre Haute, IN 47803, USA 2Department of Chemical and Petroleum Engineering, University of Kansas, 1530 W. 15th, Room 4006, Lawrence, KS 66045, USA Abstract Tabu search is used to solve a MINLP formulation of a heat exchanger synthesis problem and a molecular design problem. The results are compared with those obtained by more traditional optimization algorithms. A particularly interesting feature of TS is that various sub-optimal solutions can be easily located and recorded. Keywords Tabu search, stochastic optimization, MINLP, heat exchanger network synthesis, computer-aided molecular design
1. INTRODUCTION Tabu Search (TS) 1'2 is a relatively new, stochastic optimization method, that until recently, had seen little application in chemical engineering. Within other disciplines, TS has been used to solve scheduling problems, 3 the traveling salesman problem," constraint satisfaction problems, 5 and general zero-one Mixed Integer Programming problems. 6 TS has recently been applied to chemical engineering problems.'2.79 " When compared with simulated annealing, TS seems to provide superior performance 2'8 because it makes use of "memory", which enables it to escape from local optima. A wide range of important PSE problems can be formulated as an MINLP. These problems are often NP-complete; hence, the existence of a general-purpose deterministic optimization approach is highly unlikely, l~ Stochastic methods are often used to solve these problems; however, most existing stochastic techniques are only suitable for solving small to medium-scale problems. 11 Thus, as we enter a new millennium with the hope of solving an ever wider array of more complicated problems, new and improved approaches are required. This paper describes the results of applying TS to several MINLP problems that are of general interest in process systems engineering. 2. TABU SEARCH TS is a meta-heuristic approach that guides a local search procedure to explore the solution space beyond local optima. The algorithm explores the feasible region by a sequence of
542 moves. 12 TS starts from a group of initial solutions, determines the best one and uses that as the starting point to generate a set of neighbor solutions, N(x), by modifying the current solution, x. The best one among them, x ', is selected and the next iteration begins. Unlike traditional local search algorithms, TS performs a "guided search" by taking advantage of "memory", which consists of historical information about the search process. This helps to ensure that all the regions of the search space are investigated and helps to minimize the likelihood of becoming stuck in local optima. Several issues must be addressed in implementing TS. These include how the memory (in the form of Tabu lists) is managed. In addition, strategies to manage diversification (broadly searching the feasible regions) and intensification (performing a more thorough search of a local region) must be identified. Lin2 provides a detailed set of guidelines for implementing the algorithm and details of our specific implementation. 3. HEN S Y N T H E S I S
Recently, Furman and Sahinidis 13 showed that stage-wise simultaneous problems such as HEN synthesis are NP-hard indicating that deterministic methods cannot solve them within polynomial time. Thus, stochastic optimization methods will be increasingly important to tackle such problems as the scale of the desired optimization increases. Below, we solve the HEN problem as presented by Yee and Grossmann. 14 This is a highly nonlinear, nonconvex MINLP problem. Because the binary variables play a more important role in determining the annual cost than temperature and heat duty, we employ the Tabu lists only for the binary variables. There are 16 binary variables. 12 of them define the matches between hot and cold streams (zij k) and the other 4 define the heat exchange between streams and utilities. 8 of the 12 heat duties (Q;:) need to be determined in order to solve equality constraints. Since Qok is determined based on z;: and its bounds, 8 binary variables, Z ijk, must be determined first. Thus, the total number of free variables is 3/: =8 +8 = 16. To analyze the HEN synthesis problem, the number of neighbors at each iteration were chosen to be N: and 10. N:, and the number of iterations were set to 100 and 500. For each combination of parameters, TS performed 100 runs. The probability with Nlveigh --N: and M = 100 is 25%, and with NNeigh 10. N: and M = 500 is 100%; however, the probability increases at the cost of much longer computation time. =
4. COMPUTER-AIDED M O L E C U L A R DESIGN Designing molecules with desired properties is an important new application in PSE. Computer-aided molecular design (CAMD) reduces experimental time and costs by discerning promising candidate molecules based on predicted properties. The property prediction of molecules can be performed either by group contribution or topological indices. Topological indices take the molecule connectivity into consideration and provide more accurate property prediction than simple group contribution methods. 15'16 Representing a molecule with an adjacency matrix and correlating nonlinear structure-property relation using topological indices, 17 CAMD can be formulated as an MINLP problem. We examine here the
543 Table 1 Basic groups and atomic connectivity values
Atom
~--
1
2
I/
I/
I
/I
3
4
~OH
Mo~
5 CI
O
6 "
~
CH3
7 ~CH2
8 "
~
NH2
6i
5
6
1
1
2
1
2
1
6i v
0.13889
0.17143
5
0.77777
6
1
2
3
NMax.i
2
2
3
3
3
3
3
3
"
Note: The value of NMax. i denotes the maximal number of the group allowed in a molecule. For this problem, the maximal number of groups is ~ N~o~, i = 22. i=l
design of a molybdenum catalyst for an epoxidation reaction. In this problem, 8 basic groups (see Table 1) are used to describe the search space. Molecule structure is expressed with a hydrogen-suppressed graph. Topological indices are calculated with the reciprocal square root of atomic connectivity indices of basic groups as shown in Table 2.18 Valence connectivity is similarly calculated as shown in Table 3. Two constraints must be satisfied to guarantee the feasibility of the obtained molecules: valency of each basic group and connectivity to ensure a single molecule. To limit the search space, an upper bound on the maximal number of groups in a molecule is specified. To build an initial feasible solution, a basic group is first selected as the source group. Then, other groups are connected to it until all bonds of each group in the molecule are appropriately connected. Neighbor solutions of TS are constructed by replacement operations. A group is first picked up from the current solution. Depending on the position of this group, different replacement actions will be taken: if the first or the last group on the main-chain is selected, the whole molecule will be replaced; if the selected group lies in the middle of the main-chain, all groups on its side-chains and the groups after it will be replaced; if a group on a side-chain is selected, only this side-chain will be replaced. The objective is to minimize the deviation of the estimated value of the density:
Table 2 Zero order, first order and second order simple connectivity indices oth
order
l
0
Z = ~ wi6( ~
9
i=1
1st order
IX
)1 =
aijk = 1 if group i is bonded with j with k type of bond
~ a i j k (6,6j
-
(i,j)" j>i
1
2 "a order
2Z =
~ y u k ( S ~ 6 j 6 k ) -~ (i,j ~ ) ; k>j>i
w; = 1 if group i exists in the molecule - 0 otherwise
0 otherwise
Yuk = 1 if group i is bonded with group k through groupj = 0 otherwise
544 Table 3 Zero order, first order and second order valency connectivity indices zero order First order 1
oZv = ~ w ~ ( r v ) - 7 i=1
s>
1
~zv = ~%k(S~S~)-- i (i ,j ) "j>i
22"v=
Second order
.ov vov
YijktOi (i,j,k) ; k>j>i
Pm ~ P m target p
target
1
j Ok
(1)
m
where density is calculated by
.~ +40901. lZ +1784. IZv - 7 2 0 4 6 - 2 X - 6 0 7 92Zv 24695. oZ 902' +649. o2"v. ozv -12271.12". 12"_ 65.4.12"v. 12"v - 1793- 2Z 922" +8.9.22"v. 2Zv _ 72323. !Z 92Z
p = - 5 5 3 5 1 +75800. ~ -
(2)
The target density value is 4172 kg/rn3. The number of neighbor solutions generated at each iteration is equal to the product of the number of basic groups and the maximal number of groups in a molecule, which is 8 * 22 -- 176. The number of iterations is 200. The length of Tabu List is def'med to be the same as the maximal number of groups in a molecule. The 5 best molecules obtained from 100 runs with TS approach are shown in Table 4. The probability that a molecule is found is determined based on 100 runs. Nearoptimal solutions may not be the final solution of a run. They are simply the best neighbor at a given iteration and its probability is simply assigned as 1%. Therefore, the summation of probability is not equal to 1. The best solution, with the objective function value of 0.000111, corresponds to a density of 4172.46, which is only found once in 100 runs. Solution 3 (0.000238) has 7 basic groups and is found 80% of the time. The corresponding density value of 4172.99 kg/rn3 is very close to the best solution Since the difference between the best solution and the 10th best solution (not shown) is less than 0.3%, this shows that TS can successfully determine several promising catalyst molecules for further experimental verification. It is especially useful to identify and record near-optimal solutions since the density correlation is only good to about 4%. Thus, near-optimal solutions are likely to be strong candidates for synthesis as the optimal one. Furthermore, many other factors, such as ease of synthesis, have not been taken into account within the optimization formulation. Thus, a user would like to have multiple options to choose from. Currently, molecules with good property values and simple structures are most frequently found by TS. Molecules 3 and 5 are found with the highest probability. Thus, we believe that the implementation of a new Tabu list procedure will be required to prevent the loss of potential molecules with more complicated structure.
545 Table 4 Results of molybdenum catalyst design with TS approach
[
o.c,i/ c?~...
Structure
Ill,lo
OH ---O - - O - -
o ~ M o
i'/c, 01"7.. 3
1
N H 2 - - MO ~
I OH
CI--CHf--Mo-- cl
I OH
Mo ~
/I
OH CI
Mo - -
Mo--O.----O--CH~.---CHE-.-CH~r--- OH
CI
OH
I
I
O gH, 4
O--CH~--OH
MO
/I NH2 NH2
CH 3
Obj. Value
, Prob
18
0.000111
1%
13
0.000115
1%
7
0.000238
80%
20
0.000571
1%
16
0.000659
1%
7
0.001361
20%
!,
OH 3
2
3
CH~--- NH 2
CHs
N
OH -- CH 2--CH ~--- Mo ~
c, j,
/
C,
Mo
C 21-I ~ - O ---NH 2
CH3
I#"3
OH
CI --- O "-----M o - - NH2
5
I
OH
TS provided the results in Table 4 in only 90 seconds on a Pentium III 1.0 Ghz CPU, 1024 MB memory with Redhat Linux 7.1 with the TS algorithm compiled using gcc compiler with option -04. In comparison, Using Outer Approximation via the DICOPT solver in GAMS, only an integer feasible solution was found alter 20 minutes on a Sun Ultra 10 workstation. The structure (shown below) resulted in an objective function of 5.67. CH3/QH J / CH3 - - M o - -
/I el OH
ill2//TM o --Mo----
I el
NH2
546 5. CONCLUSIONS In conclusion, TS has been shown to be effective in solving traditional MINLP formulations such as HEN synthesis. In addition, TS shows very promising results for computer-aided molecular design. Our preliminary results indicate that it can more rapidly generate good, feasible solutions that meet all the constraints. Because of the structure of the algorithm, various near-optimal solutions can easily be identified and stored. This is especially important because the property correlations have limited accuracy. Thus, near-optimal solutions may be as promising as the global optima. By identifying a range of potential target molecules, TS avoids missing potentially useful molecules and allows the user to use other criteria (such as ease of synthesis) to perform a final ranking of the candidates. Thus, TS will serve as an important optimization strategy for the important problems of the new millennium. REFERENCES
[1] [2] [3] [41
[5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16] [17] [18]
Glover, F., Computers and Operations Research, 1986, 5,533-549. Lin, B., "Application of Tabu Search to Optimization in Chemical Engineering", Ph.D. Thesis, Michigan Technological University, 2002. Dowsland, K. A., Eur. J. ofOper. Res., 1998, 106, 393-407. Gendreau, M.; Laporte, G.; Semet, F., European Journal of Operational Research, 1998, 106, 539-545. Nonobe, K.; Lbaraki, T., Eur. J. of Oper. Res., 1998, 106, 599-623. Lokketangen, A.; Glover, F., Eur. J. of Oper. Res., 1998, 106, 624-658. Wang, C.; Quan, H.; Xu, X., Computers Chem. Engng., 1999, 23,427-437. Lin, B.; Miller, D. C., "Application of Tabu Search to Model Identification", AIChE Annual Meeting, Los Angeles, CA, 2000 Lin, B.; Miller, D. C., "Improvement to the Performance of Tabu Search", AIChE Annual Meeting, Reno, NV, 2001 Floudas, C. A. Nonlinear and Mixed-Integer Optimization; Oxford University Press, 1995. P6m, R.; Harjunkoski, I.; Westerlund, T., Computers. Chem. Engng., 1999, 23,439448. Glover, F.; Laguna, M. Tabu Search; Kluwer Academic Publishers: Boston, 1997. Furman, K. C.; Sahinidis, N. V., Computers Chem. Engng., 2001, 25, 1371-1390. Yee, T. F.; Grossmann, I. E., Simultaneous Optimization Model for Heat Exchanger Network Synthesis, in Chemical Engineering Optimization Models with GAMS; Grossmann, I. E., Ed.; CACHE, 1991; Vol. 6. Raman, V. S.; Maranas, C. D., Comp. Chem. Engng., 1998, 22, 747-763. Siddhaye, S. S.; Camarda, K. V.; Topp, E.; Southard, M. Z., Comp. Chem. Engng., 2000, 24, 701-704. Trinajstic, N. Chemical Graph Theory; CRC Press, 1983. Kier, L. B.; Hall, L. H. Molecular Connectivity in Chemistry and Drug Research; Academic Press: New York, 1976.
Process SystemsEngineering2003 B. Chen and A.W. Westerberg(editors) 9 2003 Publishedby ElsevierScienceB.V.
547
Continuous-Time Scheduling of Tanker Lightering in Crude Oil Supply Chain Xiaoxia Lin, Emmanuel D. Chajakis and Christodoulos A. Floudas * Department of Chemical Engineering, Princeton University, Princeton, NJ 08544, USA Abstract This paper address the scheduling of marine vessels for the crude oil tanker lightering process. A novel continuous-time mathematical formulation is developed based on the concept of event points proposed in earlier work on scheduling of chemical processes [ 1-5]. A sequence of event points is introduced for each vessel and binary variables are defined to determine whether or not at each event point the vessel is to start a task consists of docking/undocking, pumping oil, and traveling. The model leads to a Mixed-Integer Linear Programming (MILP) problem, which can be solved to global optimality efficiently. A case study will be presented and the computational result demonstrates the effectiveness of the proposed approach. Keywords marine petroleum transportation, crude oil supply chain, tanker lightering, ship scheduling, continuous-time formulation, MILE 1. INTRODUCTION Lightering is the process of transferring crude oil from a discharging tanker to smaller vessels, to make the tanker "lighter". It is commonly practiced in shallow ports and channels where draught restrictions prevent some fully loaded tankers from approaching the refinery discharge docks. As illustrated in Figure 1, when a tanker with a full load of crude oil is either anchored near the mouth of a bay or still offshore, approaching the bay, one or more smaller vessels (e.g., barges) come alongside it and pump crude oil off into their tanks. As soon as enough crude oil has been pumped off the tanker, both the lightering vessels and the tanker sail up to the refinery discharge docks. Lightering can enhance the responsiveness of the crude oil supply process by taking advantage of the pumping facilities of multiple (versus a single) docks and by achieving swifter crude oil distribution among several refineries in the discharging area. Furthermore, it can dramatically reduce costly tanker demurrage and decrease overall logistics costs of the crude oil supply system. A lightering fleet typically consists of a number of vessels with different characteristics and a wide range of capacities. Tankers usually arrive in irregular time intervals and the estimated time of arrivals for each tanker is constantly updated. When too many tankers arrive within a short period of time, the lightering fleet enters bottleneck periods. During this congestion, it is difficult even for the most experienced scheduler to find manually the optimal combination of vessel-tanker assignments, timings and lightering volumes. The two primary components of lightering costs are tanker demurrage and lightering fleet operating costs. They are greatly affected by the quality of lightering fleet schedules and hence the very challenging task of creating "good" fleet schedules is of major importance in minimizing lightering costs. A good *To whom all correspondenceshould be addressed; Tel: +1 (609) 258-4595; Fax: +1 (609) 258-0211; E-mail: [email protected].
548
Figure 1: The lightering process. lightering fleet schedule is characterized by reasonable fleet utilizations and provides tradeoffs between tanker demurrage and fleet operating costs that result in minimal total system costs. There has been relatively little published work on ship scheduling in the literature. The only previous lightering fleet scheduling optimization effort was reported in [6], which combined simple MILP models and heuristics. In this work, we employ the original concept of event points featured in a novel continuous-time formulation for process scheduling to develop an effective mathematical model for the lightering fleet scheduling problem. 2. PROBLEM STATEMENT
The scheduling problem for tanker lightering studied in this work is defined as follows. Given: (i) the arrival time and lightering requirement of each tanker, (ii) the capacity, available time, pumping rate, travel speed of each lightering vessel, (iii) the demurrage rate for tankers, and the voyage cost rate for lightering vessels, and (iv) other considerations, such as whether or not each vessel has heating capability for crude oil of certain types; then the objective is to determine (i) tanker-vessel assignment, (ii) the lightering volume for each assignment, (iii) the timing of lightering and travel for each vessel, and (iv) the service time for each tanker; so as to minimize the overall cost which consists of tanker demurrage costs and lightering vessel voyage costs. 3. MATHEMATICAL FORMULATION A new continuous-time formulation for the tanker lightering scheduling problem has been developed based on the novel concept of event point introduced by Floudas and coworkers for the scheduling of chemical processes, which was first proposed in [ 1, 2] and further extended in [3-5]. In the context of tanker lightering, we define event points as a series of time instances along the time axes of each lightering vessel at which the vessel starts performing a task. The timings of these event points are unknown and continuous variables are introduced to associate them with tasks and units, which allows the lightering vessel to perform the lightering operations at potentially any time in the continuous domain of the horizon. A task performed by a lightering vessel is defined to consist of the whole sequence of operations carried out by the vessel to serve a tanker once, including: (i) mounting the tanker, (ii) pumping crude oil from the tanker onto the lightering vessel, (iii) dismounting the tanker, (iv) traveling from the lightering location to the refinery port, (v) docking the refinery port, (vi) pumping crude oil off the
549 lightering vessel to the refinery, (vii) undocking the refinery port, and (viii) traveling from the refinery port back to the lightering location. To model the lightering process, the following notation is defined. Indices: t tankers; v lightering vessels; n event points representing the beginning of a task. Sets: T tankers; T,, tankers which can be lightered by vessel (v); V lightering vessels; Vt lightering vessels which can lighter tanker (t); N event points within the time horizon. Parameters: reqt lightering requirement of tanker (t); rtt,v round trip time of vessel (v) when serving tanker (t); t~ arrival time of tanker (t)' capt,v capacity of vessel (v) when lightering tanker (t); pnlv, pu2v rate of pumping crude oil from the tanker to vessel (v) and from vessel (v) to the refinery, respectively; t~ earliest available time of vessel (v); vcv fixed cost per voyage of vessel (v); dt time for mounting/dismounting at a tanker and docking/undocking the refinery port; dr demurrage rate; H time horizon. Variables: z(t, v, n) binary assignment of vessel (v) to lighter tanker (t) at event point (n); v(t, v, n) amount of crude oil that vessel (v) lighters from tanker (t) at event point (n); ts(t, v, n), tf(t, v, n) the time at which vessel (v) starts and finishes performing the task when serving tanker (t) at event point (n), respectively; td(t) time tanker (t) finishes being lightered. Based on the above notation, the constraints and objective function of the model are formulated as follows. Allocation constraints ~z(t,v,n)
< 1
Vv C V , n E N.
(1)
tET~
For each lightering vessel (v) at each event point (n), at most one tanker can be served. Capacity constraints v(t, v, n) < capt,v . z(t, v, n)
Vt E T, v E Vt, n E N.
(2)
If a lightering vessel (v) serves a tanker (t) at an event point (n), that is, z(t, v, n) = 1, the lightering volume can not exceed the capacity of the vessel. Otherwise, if the vessel does not serve the tanker, that is, z(t, v, n) = 0, the constraint enforce that the lightering volume be zero. Lightering requirement constraints ~
v ( t , v , n ) = reqt
Vt E T.
(3)
vEV~ nEN
For each tanker (t), the sum of the amount of crude oil lightered by all the suitable lightering vessels at all event points should be equal to the lightering requirement. Available time of vessels ts(t,v,n) > t a
Vt E T, v E Vt, n C N.
(4)
For each lightering vessel (v), it can start serving a tanker (t) at event point (n) only after it becomes available.
550 Arrival time of tankers ts(t,v,n) > t~
(5)
Vt E T , v E Vt, n E N.
For each tanker (t), it can be served only after it arrives. Duration constraints 1 1 tf (t,v,n) = t s ( t , v , n ) + (rtt,v + dr). z ( t , v , n ) + v ( t , v , n ) . (p---~ + p--~v), Vt E T , v E Vt, n E N.
(6)
The duration of the task that lightering vessel (v) performs when serving tanker(t) at event point (n) is equal to the sum of the amount of time spent on mounting/dismounting the tanker, docking/undocking the refinery port, pumping on/off crude oil, and traveling from the lightering location to the refinery and back. Service time of tankers dt . z(t, v, n) + v(t, v, n) _ H . [1 - z(t, v, n)] ta(t) > t s ( t ' v ' n ) + -2 pul--------~
Vt E T, v E Vt, n E N.
(7)
If tanker (t) is lightered by vessel (v) at event point (n), that is, z(t, v, n) = 1, then the time at which this tanker finishes being served is no earlier than the time when the vessel finishes mounting the tanker, pumping on crude oil, and dismounting the tanker. The last type of constraints, denoted as sequence constraints, connect the timings of different tasks. They can be classified into the following two sets: Sequence constraints: Same lightering vessel for the same tanker t s ( t , v , n + 1) > t f ( t , v , n )
(8)
Vt E T , v E Vt,n E N.
A lightering vessel (v) can only lighter a tanker (t) at event point (n + 1) after it finishes the task while serving the same tanker at the previous event point (n). Sequence constraints: Same lightering vessel for different tankers ts(t,v,n) > t f ( t ' , v , n ') - H . [2 - z ( t , v , n ) - z(t',v,n')]
Vv E V,t ~: t' E Tv,n > n' E N.
(9)
If a lightering vessel (v) serves tanker (t) at event point (n) and tanker (t') at an earlier event point (n'), i.e. z(t, v, n) = 1 and z(t', v, n') = 1, the constraint enforces that the task at event point (n) start no earlier than the time at which the task at event point (n') finishes. If either of the two tasks does not take place, the constraint is relaxed. Objective: Minimization of total cost dr. Ira(t)- t~] + ~ tET
vEV
woo. ~
~
z(t, v, n)
(10)
n E N tET
The objective is to minimize the total cost consisting of the tanker demurrage costs, which is proportional to the number of hours each tanker stays at the lightering location before finishing being lightered, and the fleet voyage costs, which is fixed per trip for each lightering vessel. A more detailed mathematical model that addresses several classes of lightering problems can be found in [7]. 4. C O M P U T A T I O N A L STUDY The mathematical formulation described above results in a Mixed-Integer Linear Programming (MILP) problem. We apply it to the following case study. A fleet of four lightering vessels is
551 Table 1: Data of tankers in the case study. Tanker number Arrival time Destination Refinery (hrs) (distance (miles)) 1 16.0 R1 (67) 2 21.0 R2 (88) 3 35.0 R1 (67) 4 38.0 R2 (88) 5 47.0 R1 (67) 5' 47.0 R2 (88) 6 55.0 R3 (49) 7 117.0 R1 (67) Table 2: Lightering fleet data in the case study. Vessel Capacity Speed number (thousand barrels) (miles/hr) R1 R2 R3 to refinery to anchorage 1 370 370 245 8.0 11.0 2 225 225 225 7.0 9.5 3 260 260 195 9.0 12.0 4 90 90 90 7.0 9.5
Lightering requirement (thousand barrels) 335.0 340.0 215.0 222.0 185.0 135.0 177.0 320.0
Pumping rate (thousand barrels/hr) to vessel to refinery 60.0 20.0 38.0 15.0 50.0 20.0 16.7 7.3
Cost ($000/trip) 6.20 4.50 8.20 19.00
available to lighter seven tankers in a horizon of seven days. Data of the tankers and the vessels are shown in Table 1 and Table 2, respectively. Each lightering vessel has different capacities for different refineries due to the different depths of the refinery ports. Note that Tankers 5 and 5' refer to the same tanker which carries crude oil for two different refineries. Furthermore, in this case study, each lightering vessel is available from the beginning of the horizon and is capable of lightering all the tankers involved. The scheduling model is formulated and solved with GAMS/CPLEX on an HP J-2240 workstation. Three event points are introduced to model the lightering process and the MILP model consists of 60 binary variables, 191 continuous variables and 620 constraints, which is solved to optimality in 30 CPU seconds. The optimal schedule is shown in Figure 2, which also includes the arrival time, lightering requirement, and departure time of each tanker. Below the name of each vessel are its maximum capacity and voyage cost, respectively. Each task by the vessel is represented with a sequence of bars each designating a specific operation of the vessel. The tanker being served, the lightered volume, the corresponding event point, the starting time and finishing time of the whole task by the vessel are also labeled in the Gantt chart. The schedule requires the lightering fleet to take nine trips in total. Each of the two larger vessels, Vs 1 and Vs3, lighters three tankers, Tk2, Tk4, Tk7 in sequence, and Tkl, Tk6, Tk5' in sequence, respectively. Vs2 lighters two tankers, Tk3 and Tk5, sequentially. Vs4, with the smallest capacity and highest voyage cost, is assigned to lighter only one tanker, namely Tkl. The resulting total cost is $190,405, with a demurrage cost of $119,205 and a fleet voyage cost of $71,200. Extensive computational studies on a variety of lightering problems can be found in [7]. 5. CONCLUSIONS The scheduling problem for tanker lightering is introduced and addressed. A novel continuoustime mathematical model is developed based on the concept of event points proposed in earlier
552 I mount tanker l pump on oil
U dismount tanker 9 travel to refinery 9 dock port I pump off oil
~ undock port
9 travel to anchorage
Figure 2: Gantt chart of the solution in the case study. work on the scheduling of chemical processes. A sequence of event points is introduced for each lightering vessel and binary variables are defined to determine whether or not the vessel is to start a task at each event point, while the task consists of mounting/dismounting a tanker, pumping on/off oil, traveling between the lightering location and the refinery, and docking/undocking the refinery. The formulation leads to a Mixed-Integer Linear Programming problem which can be solved to optimality efficiently. A case study is presented and demonstrates the effectiveness of the proposed approach. ACKNOWLEDGMENTS The authors gratefully acknowledge support from the National Science Foundation. REFERENCES [1] M.G. Ierapetritou and C.A. Floudas, Effective Continuous-Time Formulation for ShortTerm Scheduling: 1. Multipurpose Batch Processes, Ind. Eng. Chem. Res., 37 (1998) 4341. [2] M.G. Ierapetritou and C.A. Floudas, Effective Continuous-Time Formulation for ShortTerm Scheduling: 2. Continuous and Semi-Continuous Processes, Ind. Eng. Chem. Res., 37 (1998) 4360. [3] M.G. Ierapetritou, T.S. Hen6, and C.A. Floudas, Effective Continuous-Time Formulation for Short-Term Scheduling: 3. Multiple Intermediate Due Dates, Ind. Eng. Chem. Res., 38 (1999) 3446. [4] X. Lin and C.A. Floudas, Design, Synthesis and Scheduling of Multipurpose Batch Plants via an Effective Continuous-Time Formulation, Comp. Chem. Engng., 25 (2001) 665. [5] X. Lin, C.A. Floudas, S. Modi and N.M. Juhasz, Continuous-Time Optimization Approach for Medium-Range Production Scheduling of a Multiproduct Batch Plant, Ind. Eng. Chem. Res., 41 (2002) 3884. [6] E.D. Chajakis, Sophisticated Crude Transportation, OR/MS Today, 24 (1997). [7] X. Lin, E.D. Chajakis and C.A. Floudas, Scheduling of Tanker Lightering via a Novel Continuous-Time Optimization Framework, Ind. Eng. Chem. Res., in press (2003).
Process Systems Engineering 2003 B. Chen and A.W. Westerberg (editors) 9 2003 Published by Elsevier Science B.V.
553
Dynamical Supply Chains Analysis Via a Linear Discrete M o d e l - A Study of z-transform Modeling and Bullwhip Effects Simulation Pin-Ho Lin a, Shi-Shang Jang b, David Shan-Hill Wong b aChemical Engineering Department, Nanya Institute of Technology, Tao-Yuan, Taiwan bChemical Engineering Department, National Tsing-Hua University, Hsin-Chu 30043, Taiwan
Abstract: In this work, a model of a supply chain system is derived using material and information balances and discrete time analysis. Transfer functions for each unit in the supply chain are obtained by z-transform. The stability of the linear system is studied. We proved that intuitive operation of supply chain system with demand forecasting causes bullwhip effect. Furthermore, we show that by implementing a PI or a cascade inventory position control, and properly synthesizing the controller parameters we can effectively suppress the bullwhip effect.
Keywords:
z-transform, bullwhip effect, cascade control, frequency analysis.
1. INTRODUCTION Supply chain management has attracted many attentions among process system engineering researchers recently. One of such areas is the analysis of the logistic problem of a supply chain using system control theory. A supply chain system is nothing but material balances of products and information flow [I]. In this work, a supply chain model is analyzed using z-transform. The objective of this work is to build up a dynamic model for the supply chain system and study the effects of the ordering strategy and forecasting the demand to the system dynamics. Analytical forms of the closed loop transfer functions are obtained. The causes of bullwhip [21 become quite apparent using the model and stability analysis [31. A PI and a cascade control structures are proposed and controllers are synthesized and tuned accordingly, to eliminate the bullwhip effect. 2. DISCRETE DYNAMIC MODEL Consider a simple supply chain that has no branch as shown in Figure 1. There are three
554 Material flow ~ ~ .
Information f l o w . . . . . . .
Yrw
.
.
Upw
Ywo
.
.
.
.
.
.
.
UwD
.
D.C. . .
.~
YaR
YRc
UoR
URc
Figure 1. A simple supply chain.
Uk1
Yjk
v.,
Ujl
Figure 2. The block diagram of node j of a supply chain logistic echelons: warehouse (W), distributing center (D) and retailer (C) between the producer (P) and customer (C).The material/information balances of a node can be modified based on the previous work [11. The z-transform of the modified discrete model is given by: z
Ij(z) = ~2-i_1 (z-LYij (Z) - Yjk (Z))
(1)
Ipj (z) = ~ _ 1 (Yij(z)- Yjk (z))
(2)
z
Oj (z) : ~--~_1 (U kj(Z)-- Yjk (Z))
(3)
U ji (z) = K j (SPj (z)- IPj (z))
(4)
I o Yjk (Z) = z-'Oj(z) [ z-'Ij(z)
o 0 _
(5)
where jie {WP,DW, RD,CR},ikE {WD,DR,RC},je {W,D,R}. Block diagram of the above equations is shown in Figure 2.
555 3. STABILITY ANALYSIS
Infinite Supply and High Stock In this case, the closed loop transfer function can be derived as the following equation: IPj(z) = Kj x SPj(z)- Ukj(z ) z - 1+ K j
(6)
with a characteristic equation: Hj(z)=z+Kj- 1=0
(7)
Therefore, then the ultimate gain of the feedback loop Kju is equal to 2. Infinite Supply and Low Stock In this case the following closed loop transfer function and characteristic equation are obtained: Kj(z TM -1) z-] Kj (zL+1 _ 1) SPj(z)
IPj (z) =
(8)
Z L+I +
z-1 Kj(z TM -1) n j(z)
"-- z TM ~-
--"
z-1
0
(9)
It can be shown that whenever Kj > 1, there exists at least one Izl _>1. Therefore, then the ultimate gain of the feedback loop Kju is equal to 1. 4. BULLWHIP EFFECT
Bullwhip effect for a node of Figure 1 is defined by Uji(z)/Uki(Z)>l, where Uji(z) is the order of a node, Uki(Z), is the demand from its down stream. When there is sufficient supply and high stock, we get: Uji(z) = Kj x ( z - l ) Kj SPj(z)+ ~ z-l+Kj z-l+Kj
Ukj(z )
(10)
If the set point of inventory position of a node, as shown in Figure 2, is fixed, mathematical manipulations find that the condition, Uji(z)/Uki(Z)> 1, is met only if Kj> 1. If Kj
1- Kj x F(z)x (L + 2) z-l+Kj
Ukj(Z)
(11)
556 40 (a) ,m ~ o
tm
Kj=O.7
319 -
,.--1tr ~-t.,
r .
:::~ .
.~ .
~ .~" : : : : . ' : . . .
:.
.:
:
:-: ~ - .,~. .-
:.: ,'-
...-" -
E
a~ 119 190 40
~'
..'"
.:
::--~.,
::.--: ~... - . : . - ~ "
.
_ ~ [ 10
20
30
40
50
60
70
,
,
,
,
,
,
.
319
419
519 time
619
719
...... 80 .
Demand Order
90 .
100
.
:30
c) -o
219
E a, D
119
O
Figure 3.
0
10
20
80
919
1190
Comparison the demand and order of a unit in a supply chain using P-only controllers with different gains.
Kj x ((L + 2)x F(z)x ( z - 1 ) + 1) Uji(Z ) --
z-l+Kj
Ukj(Z )
(12)
F(z) is the forecaster used to predict the current demand. Chen et al. [4] suggested the use of exponential filter F(z)=a/(z + a - 1 ) By viewing a Bode plot of the magnitude ratio IUkj(z)l/IUji(z)l using the exponential filter with or=0.1, and Kj =1 (intuitive ordering strategy), bullwhip effect appears in most frequencies. However, if one sets Kj-0.7, bullwhip can be suppressed for most frequencies. Figure 3 demonstrates the case that with a demand forecaster and Kj = 1, the bullwhip effect is very clear as analyzed above. 5. CONTROLLER SYNTHESIS The objective of a supply chain node is firstly, to satisfy the demand of the down streams, and secondly, to keep the smoothness of the operation. We assume that the closed loop control of inventory position should speed up the magnitude ratio IUkj(z)l/IWji(z)l without bullwhip effect and thus raise the customer satisfaction (reduce the backorder). Standard textbook [5] suggest the following two factors are considered: (i) Bandwidth (ii)Resonance Peak for the closed loop Bode plot of IUkj(z)l/IUji(z)l. Simple Feedback with Demand Forecast As derived above, intuitive inventory position P-only control (Kj=I), yields bullwhip, bullwhip, but due to the fact that P-only control algorithm yields offset in case of long term set point change by the demand forecasting. The customer satisfaction is hence very low since the backorder is very high as shown in Figure 4.
557 60 ooz~ 4 0 -o o= :20 E c:3 >~
"
"
O0
150 o~ ~: 1 0 0
0
400
"
:
10
20
.
(b)
0
10
.
20
.
(c)
30
.
~
.
.
.,.- . . . . . . . . .
~
10
.....
' 50
.
"" .....
""
....
70
* 60
.
,
50
1130 ] I
point
I. . . .
80
tory
90
1 O0
4 " ...........................
|
60
time
90 Sei
. *" . . . . .
i
40
80 [:" .... L._.~_
70
.
" ....................
,
30
60
. . . . :, ' " " ~ -. . . . . . . . . . . . . .
,
.
,
20
50
.
* 40
.
~l::'""
O0. . . . . . . .
"
30
~ -~ 200
o3
40
.
" ......
l
70
t
80
90
1O0
Figure 4. Simulation result of a supply chain unit with a P only controller(Kj=0.7) and stochastic demand from down stream
'•
40
,-
~20"
"
o~ o
O0
"~ 200
10
:~.
"
"
.-
- J
..
"
20
30
(b)
40 9
"
"
50
60
70
80
50
60
70
80
'
.
90
100
-. . . .
I
lO0
r.~
......
0
10
150~ 100 (c)
'
20
30
40
:&.,i . . . . -='=t~.%
.
.
.
Set
90
point
1O0 'J
O3 0
0
. . . . . . . . q. . . . . . . . . . , - - ; : 10 20
, 30
i 40
&l':~ . . . . . . , . . . . . . . . . . , ' " : . . . . . . r . . . . . . . . . ~. . . . . . . . . . 50 60 70 813 90 1130
tlme
Figure 5: Dynamic simulation result of a supply chain unit with demand forecasting and a PI-controller with Kj=0.67 and x Ij=3.3 To avoid this offset, a PI controller can be used: Cj(z) = Kj x 14
x~.j z - 1
(13)
Figure 5, gives the dynamic simulation of using the above tuning parameters. Cascade Control An obvious alternative to be used is a cascade control scheme. In the cascade scheme, the set point of the inventory position is raised (or reduced) if the filtered "long-term" trend of the difference between actual inventory position and the demand however, Figure 5 shows the case that lower controller gain (Kj=0.7) gives no is less than (or greater than) zero. However, this target is only loosely pursued in the inner loop. Figure 6 shows that the cascade control works very well without bullwhip effect. The cascade control scheme results in much faster response than pure PI. The period with backorder and the magnitude of back order are both smaller, hence custom satisfaction is also higher compared with the case of PI.
Figure 6. Dynamic results of cascade control with outer loop Kcj = 1.0 and τcj = 5, and inner loop control gain Kj = 1.0 for L = 3
6. CONCLUSIONS
The stability of the system was investigated and the bullwhip effect was analyzed. The study proves that the bullwhip effect is inevitable if the standard heuristic ordering policy is employed with demand forecasting. Several alternative ordering policies were formulated as P-only, PI and cascade control schemes. By implementing a PI controller, the bullwhip effect of a supply chain unit can be suppressed while long-term trends in customer demand are still tracked. The cascade control scheme not only provides efficient control of the inventory position of a supply chain unit without causing bullwhip effect, but also gives a faster response and a higher customer satisfaction.
REFERENCES
[1] Perea-López, E., Grossmann, I.E., Ydstie, B.E. and Tahmassebi, T. Dynamic modeling and decentralized control of supply chains. Industrial & Engineering Chemistry Research, 40, 3369-3383 (2001).
[2] Lee, H.L., Padmanabhan, V. and Whang, S. The bullwhip effect in supply chains. Sloan Management Review, 38, 93-102 (1997).
[3] Towill, D.R. Dynamic analysis of an inventory and order based production control system. International Journal of Production Research, 20, 671-687 (1982).
[4] Chen, F., Ryan, J.K. and Simchi-Levi, D. The impact of exponential smoothing forecasts on the bullwhip effect. Naval Research Logistics, 47, 269-286 (2000).
[5] Coughanowr, D.R. and Koppel, L.R. Process Systems Analysis and Control. McGraw-Hill International (1965).
Process Systems Engineering 2003, B. Chen and A.W. Westerberg (editors) © 2003 Published by Elsevier Science B.V.
559
Making decisions under uncertainty - applications of a hierarchical approach in industrial practice
Hu Liu a*, K. Karen Yin a†, and G. George Yin b‡
a Department of Wood and Paper Science, University of Minnesota, St. Paul, MN 55108, USA
b Department of Mathematics, Wayne State University, Detroit, MI 48202, USA
*Fax: (612) 625-6286; Email: [email protected]
†Corresponding author. Tel.: (612) 624-1761; Fax: (612) 625-6286; Email: [email protected]
‡Tel.: (313) 577-2496; Fax: (313) 577-7596; Email: [email protected]
Abstract This work is concerned with production planning of manufacturing systems under uncertainty. The dynamic systems are formulated by differential equations with Markovian disturbances. The random demand and random production capacity are modeled by two finite-state continuous-time Markov chains. To obtain the optimal feedback policies requires minimizing a discounted cost function, for which numerical solutions may be infeasible for large-scale systems. To address the issue of the "curse of dimensionality", we resort to a hierarchical approach and seek nearly optimal solutions. Application examples are provided for illustration.
Keywords Planning; Markov chain; Optimal policy; HJB equation; Hierarchical approach.
1. INTRODUCTION
Operating an enterprise requires making timely decisions in production planning and scheduling, process control, and inventory management subject to uncertainty from various sources such as raw material variation, customer demand fluctuation, and unpredictable equipment failure. To better understand and more effectively deal with uncertainty requires sound mathematical models capable of capturing the salient aspects of the system and the unique feature of each major event while permitting the use of efficient algorithms to handle large-scale systems. In a recent work [1], the scheduling of a papermachine operation was formulated as a stochastic optimal control problem: the random demand and random production capacity were modeled by two continuous-time finite-state Markov chains, and numerical solutions for the optimal policy were obtained by minimizing the discounted surplus and production costs. The computation needed in numerically solving such dynamic programming equations increases with the number of Markovian states. In many cases, the computational requirements to obtain an optimal policy are staggering to the point that a numerical solution becomes infeasible. This is the so-called curse
of dimensionality [2]. In this work, we resort to a hierarchical approach for solving such problems. A general description of the system is given first, followed by the problem formulation in Section 3. Section 4 provides application examples for illustration.
2. THE PRODUCTION SYSTEM
Consider a manufacturing system that produces r different products. Let u(t) ∈ R^r denote the production rates that vary with time and the random machine capacity. With the total surplus (the inventory/shortage level) x(t) ∈ R^r and the random demand rates z(t) ∈ R^r, the system is given by a differential equation, which states that the rate of change of the surplus is the difference between the rate of production and the rate of demand. Our objective is to seek the optimal production rate, u*(·), to minimize a discounted cost function, subject to the system dynamics, the machine capacity α(t), and other operating conditions. Owing to random breakdown and repair, the machine capacity is modeled by a continuous-time finite-state Markov chain α(·) = {α(t) : t ≥ 0} with state space C = {α1, ..., αc}. The demand process z(·) = {z(t) : t ≥ 0} is another finite-state Markov chain having state space Z = {z1, ..., zd}. The generators of the Markov chains α(·) and z(·) are given by Q^c = (q^c_ij) ∈ R^(c×c) and Q^d = (q^d_ij) ∈ R^(d×d), respectively. Recall that for any functions φ on C and ψ on Z,

Q^c φ(·)(i) = Σ_{j≠i} q^c_ij [φ(j) - φ(i)],   Q^d ψ(·)(i) = Σ_{j≠i} q^d_ij [ψ(j) - ψ(i)].    (1)
For additional properties of continuous-time finite-state Markov chains, see [3-5]. We consider the optimal control problem under a joint stochastic process β(t) = (α(t), z(t)), the capacity and demand pair. Note that β(·) is also a Markov chain, which has state space

M = {(α1, z1), ..., (αc, z1), ..., (α1, zd), ..., (αc, zd)} = M1 ∪ ... ∪ Md,    (2)

and a generator Q, an m × m matrix (m = c × d). Note that Mi is given by Mi = {(α1, zi), ..., (αc, zi)}.
In view of the well-known result in stochastic control [6], an application of the dynamic programming principle [5, Appendix A.5] leads to the conclusion that the value function v(·) given by

v(x, β) = inf_{u(·)∈A} J(x, u(·), β),   β ∈ M,    (3)

satisfies a system of partial differential equations known as the HJB (Hamilton-Jacobi-Bellman) equations, where A is the set of admissible controls; x and β = (α, z) are the initial surplus and the initial (demand, capacity) pair, respectively.
3. HIERARCHICAL CONTROL AND NUMERICAL PROCEDURE
Note that unlike the usual situation in controlled diffusion, instead of one HJB equation, we need to solve m HJB equations. Due to the large state space of the joint stochastic process/3(t), the computation required to solve the HJB equations is intensive, which often renders numerical solution infeasible. Considering that in many manufacturing systems,
the rates of changes of the random events involved are markedly different [5], Sethi and Zhang developed hierarchical approaches that led to multilevel decisions [7]. They showed that such results are asymptotically optimal. We adopt the idea of hierarchical decision making in this work.
3.1. The Hierarchical Approach
In many manufacturing processes such as papermaking, machine breakdowns and repairs take place much more frequently than changes in demand. To reflect the different transition rates, we introduce a small parameter ε > 0 into the system by assuming that the generator of β(t) is Q = Q^ε = (q^ε_ij) ∈ R^(m×m), and that
Q^ε = (1/ε) Q̃ + Q̂ ≝ (1/ε) diag(Q^c, ..., Q^c) + [ q^d_ij I_c ]_{i,j=1,...,d},    (4)

where Q̃ and Q̂ are of the same order of magnitude. Observe that the introduction of ε separates the system into two time-scales, in which Q̃/ε dictates the chain's fast changing part and Q̂ governs its slowly varying part. Note that Q̂ is nothing but the Kronecker product Q^d ⊗ I_c, and Q̃ is the block diagonal matrix diag(Q^c, Q^c, ..., Q^c). Assume that Q^c is irreducible. Then the equilibrium distribution of Q^c, ν = (ν1, ν2, ..., νc), is the unique solution of

ν Q^c = 0  and  Σ_{i=1}^{c} νi = 1.    (5)
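A small numerical sketch of the two-time-scale generator in Eq. (4) is given below. Q^c is the capacity generator quoted later in Eq. (12); the 3-state demand generator Q^d used here is an assumed, illustrative one, not the one of Section 3.3.

```python
import numpy as np

Qc = np.array([[-1.2, 1.2],
               [0.0375, -0.0375]])        # capacity generator (c = 2 states), from Eq. (12)
Qd = np.array([[-0.5, 0.3, 0.2],
               [0.1, -0.4, 0.3],
               [0.2, 0.2, -0.4]])         # demand generator (d = 3 states), assumed values
c, d, eps = Qc.shape[0], Qd.shape[0], 0.1

Q_tilde = np.kron(np.eye(d), Qc)          # block-diagonal diag(Qc, ..., Qc): the fast part
Q_hat = np.kron(Qd, np.eye(c))            # Kronecker product Qd (x) I_c: the slow part
Q_eps = Q_tilde / eps + Q_hat             # generator of the joint chain beta(t) = (alpha(t), z(t))

assert np.allclose(Q_eps.sum(axis=1), 0.0)   # every row of a generator sums to zero
print(Q_eps.shape)                            # (c*d, c*d)
```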
Now, we are in a position to give a precise formulation of the problem, in which we designate the quantities involved by the small parameter ε; i.e., both β^ε(t) ~ Q^ε and x^ε(t) are ε dependent, and the surplus is given by

dx^ε(t)/dt = β^ε(t)(u(t), -I_d)' = α^ε(t) u(t) - z(t),   x^ε(0) = x,   β^ε(0) = (α(0), z(0)) = β = (α, z),    (6)

where I_d is the d × d identity matrix. Let the cost function be

J^ε(x, u(·), β) = E ∫_0^∞ e^{-ρt} [h(x^ε(t)) + π(u(t), β^ε(t))] dt,   β ∈ M,    (7)
where ρ > 0 is the discount rate, u(t) = (u_i(t), i ≤ r) is the "normalized" production rate satisfying 0 ≤ u_i(t) ≤ 1, h(·) is the holding cost, π(·) denotes the production cost, and x and β are the initial surplus and the (demand, capacity) pair, respectively. Note that for notational simplicity, we have normalized u(·) so that u_i(t) ≤ 1; thus the effective production rates become α^ε(t)u(t). Due to the use of Markov chains, we have a total of m value functions (refer to Eq. (3)). Using a dynamic programming approach, it can be shown that the value functions satisfy the HJB equations (see [6] and [7]):

ρ v^ε(x, β) = min_{u(·)∈A^ε} { [β(u, -I_d)'] · ∇v^ε(x, β) + [h(x) + π(u, β)] } + Q^ε v^ε(x, ·)(β),   β ∈ M,    (8)

where A^ε denotes the set of admissible controls (see, e.g., [7]), ∇v^ε(x, β) represents the gradient of v^ε(x, β) with respect to x, and a·b denotes the inner product of the vectors a and b.
Since ε is small, the Markov chain β^ε(·) jumps more frequently within the states in M_i and less frequently from M_i to M_j for i ≠ j. Naturally, we aggregate all the states in M_i into a single state i. That is, we approximate β^ε(·) by an aggregated process, say β̄^ε(·), defined as β̄^ε(t) = i if β^ε(t) ∈ M_i. It can be shown [5] that as ε → 0, β̄^ε(·) converges weakly to β̄(·), a Markov chain with state space M̄ = {1, ..., d} generated by

Q̄ = diag(ν, ..., ν) Q̂ diag(1_c, ..., 1_c) ∈ R^(d×d),    (9)

where 1_c denotes a column vector of dimension c with all components being one, and diag(A^1, ..., A^l) denotes a block diagonal matrix having matrix entries A^1 through A^l of suitable dimensions. In our setup, diag(ν, ..., ν) ∈ R^(d×m) and diag(1_c, ..., 1_c) ∈ R^(m×d). We can show that, corresponding to the original problem, there is a limit problem. Moreover, the value functions v^ε(x, β) associated with the original problem converge to the value functions of the limit problem under "suitable aggregation." More specifically, define F^0 = {(U^1, ..., U^d) : U^i = (u^{1i}, ..., u^{ci}), for i = 1, ..., d}, and denote by Ā^0 the set of admissible controls for the limit problem. The limit problem is given by

P^0:  min J^0(x, U(·), i) = E ∫_0^∞ e^{-ρt} Ḡ(x(t), U(t), β̄(t)) dt,
      subject to  dx(t)/dt = f̄(x(t), U(t), β̄(t)),  x(0) = x,  β̄(0) = β̄ ∈ M̄ = {1, ..., d},
      v̄(x, i) = inf_{U(·)∈Ā^0} J^0(x, U(·), i),    (10)

where

f̄(x, U, i) = Σ_{j=1}^{c} ν_j [(α^j, z^i)(U^{ji}, -I_d)'],  i ∈ M̄,
Ḡ(x, U, i) = Σ_{j=1}^{c} ν_j [h(x) + π(U^{ji}, (α^j, z^i))],  i ∈ M̄.

The associated HJB equations take the form

ρ v̄(x, i) = min_{U(·)∈Ā^0} { f̄(x, U, i) · ∇v̄(x, i) + Ḡ(x, U, i) } + Q̄ v̄(x, ·)(i),  i ∈ M̄.    (11)
Note that in the above, we have used the index notation convention. Although the limit problem still involves stochastic processes, the total number of states of the limit Markov chain in the limit problem is much lower than that in the original one. In fact, there are m = d x c states for the Markov chain/3~(t), whereas only d states for the limit fl(t). Thus the total number of equations need to be solved in the limit HJB equations becomes d. It has also been shown that an optimal or a near-optimal decision of the limit problem is asymptotically optimal to the original problem when e is small. Therefore we can use the optimal or near-optimal solution of the limit problem to construct controls for the original problem resulting in near-optimal controls of the original problem. Interested readers are referred to [5, Ch. 9] for proofs and more detailed discussion. Note that in obtaining the asymptotic properties, c --+ 0. When one uses the results in practice, ~ is simply a small real number; e.g., e = 0.1 is often considered small enough.
563 3.2. N u m e r i c a l P r o c e d u r e and A s y m p t o t i c O p t i m a l S o l u t i o n Using hierarchical approach enables us to reduce the given stochastic problem to a simpler one. The next step requires constructing a procedure to solve the limit problem 7)~ According to the numerical methods developed in [8], we first discretize the value function of the limit problem v(x, ~) on IRr by a sequence of functions vLX(x,~) on IRk (A > 0), and its partial derivatives v~(x,/3) by the corresponding finite difference quotients. Then write the discretized version of the HJB equation, and rearrange it into an iteration form vn+l(x,~ )Lx _ 7" (v~(x, ~)) . Starting from an initial value x and an arbitrary initial guess v0~, applying the value iteration procedure leads to the approximate solution of the limit HJB equation. Using the optimal or nearly optimal controls of the limit problem, we can then proceed to design controls for the original problems, yielding near optimality. 3.3. An A p p l i c a t i o n E x a m p l e This section illustrates the use of the hierarchical approach outlined above for planning a single-product production in a papermaking process. Using the weekly papermachine operation data collected from a large paper manufacturer during a 82-week period, we obtain the generator Qc and Qd of the random capacity and the random demand processes Qd
Q^d = (1/82) [ -72   17   38   17
                10  -65   38   17
                10   17  -44   17
                10   17   38  -65 ],      Q^c = [ -1.2      1.2
                                                   0.0375  -0.0375 ].    (12)
Using e = 0.01, the generator Q~ for the joint Markov chain/3~(t) = ( a ~ ( t ) , z ( t ) ) is then obtained in accordance with (4). Subsequently, the generator Q of the "aggregated" Markov chain/3 was derived according to Eq. (9). In this example, it turned out that -~ = Qd. Since we are dealing with single machine and single product, all the vectors concerned become scalars. The holding cost and the production cost functions are h(x) = 0.01x + + 0.7x-, where x + = max{0, x} and x- = m i n { 0 , - x } for x E R, and 7r(u,/3)= 0.5u, respectively. The control set is F0 = {(U 1, U 2, U 3, U 4) 9 U i = (u ~', u 2i) = (0, u2i), for i = 1 , . . . , 4 } , since when the production capacity a(t) = a 1 = 0 the production rate is zero (i.e., when the machine is down, nothing can be produced). The optimal control (u *ji,j = 1 , . . . , 4, i = 1, 2) for the limit problem then can be found by solving (11) numerically; see [8]. Fig. 1 displays the dependence of production rate u j~ and the corresponding value function v on surplus x, demand rate z i, and the capacity a j. The optimal control is of threshold type (see [7, p. 41]) or specified by turnpike sets. The idea can be explained as: To reach point B from point A, one gets on the turnpike as soon as he or she can and stay there as long as he or she can. In the current problem, the limit problem becomes one that is equivalent to finding optimal controls due to random demand. Although the optimal control can be characterized by the turnpike sets, their closed-form solution is generally not available unless the Markov chain has only two states. Therefore numerical solution is needed. .. Using u .3', the optimal control of the limit problem, we can construct a control for the original stochastic production system as defined by (6) and (7):
u^ε(x, α, z) = Σ_{i=1}^{4} Σ_{j=1}^{2} u^{*ji} χ_{{α = α^j, z = z^i}},

where χ_A is the indicator function of the set A, with χ_A = 1 if x ∈ A and χ_A = 0 otherwise. It can be shown, as in [5], that u^ε(t) = u(x(t), α(t), z(t)) is asymptotically optimal.
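The aggregation step of this example can be checked numerically. The sketch below takes the generators of Eq. (12), solves Eq. (5) for the equilibrium distribution ν, forms the aggregated generator of Eq. (9), and confirms the remark in the text that Q̄ = Q^d here; only the use of a least-squares solve for ν is an implementation choice.

```python
import numpy as np

Qc = np.array([[-1.2, 1.2],
               [0.0375, -0.0375]])
Qd = np.array([[-72, 17, 38, 17],
               [10, -65, 38, 17],
               [10, 17, -44, 17],
               [10, 17, 38, -65]]) / 82.0
c, d = 2, 4

# Eq. (5): nu solves nu*Qc = 0 with its components summing to one.
A = np.vstack([Qc.T, np.ones(c)])
nu, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)
print("nu =", nu)                                   # approximately [1/33, 32/33]

# Eq. (9): Qbar = diag(nu, ..., nu) * Qhat * diag(1_c, ..., 1_c).
Q_hat = np.kron(Qd, np.eye(c))
D_nu = np.kron(np.eye(d), nu.reshape(1, c))         # d x (d*c)
D_one = np.kron(np.eye(d), np.ones((c, 1)))         # (d*c) x d
Q_bar = D_nu @ Q_hat @ D_one
print("Qbar equals Qd:", np.allclose(Q_bar, Qd))    # True, as stated in the text
```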
Figure 1. Dependence of the production rate (a and b) and the value function (c and d) on surplus, demand and the capacity
Since the hierarchical approach can significantly reduce the computation requirement (by 30%-50% for the examples we worked on), it is a promising approach for solving large-scale problems.
4. SUMMARY
This paper is concerned with production planning under random demand and random production capacity processes. Rather than seeking the exact optimal solution that requires costly computation, we adopt hierarchical approach to seek near-optimal solutions. Using examples from the papermaking process, problem formulation and numerical procedure are presented. Our ongoing research focuses on near-optimal controls for paper industry using Markov decision processes. REFERENCES
1. K.K. Yin, H. Liu, and G. Yin, Computers & Chemical Engineering, submitted.
2. D. Bertsekas, Dynamic Programming: Deterministic and Stochastic Models, Prentice-Hall, New York, 1987.
3. S.M. Ross, Introduction to Probability Models, Academic Press, New York, 2000.
4. M.H.A. Davis, Markov Models and Optimization, Chapman & Hall, New York, 1993.
5. G. Yin and Q. Zhang, Continuous-time Markov Chains and Applications: A Singular Perturbation Approach, Springer-Verlag, New York, 1998.
6. W.H. Fleming and R.W. Rishel, Deterministic and Stochastic Optimal Control, Springer-Verlag, New York, 1975.
7. S. Sethi and Q. Zhang, Hierarchical Decision Making in Stochastic Manufacturing Systems, Birkhäuser, Boston, 1994.
8. H.J. Kushner and P.G. Dupuis, Numerical Methods for Stochastic Control Problems in Continuous Time, Springer-Verlag, New York, 1992.
Process Systems Engineering 2003 B. Chen and A.W. Westerberg (editors) 9 2003 Published by Elsevier Science B.V.
565
Finding candidates for multidimensional Attainable Regions
Elitsa Mitova a, David Glasser a, Diane Hildebrandt a, Brendon Hausberger a
a School of Process and Materials Engineering, University of the Witwatersrand, Private Bag 3, WITS 2050, Johannesburg, South Africa
1. INTRODUCTION - THE ATTAINABLE REGION
The Attainable Region is defined as the set of all physically realizable products from a reactor system [1]. In this paper we consider reactor systems where the only fundamental processes occurring are chemical reaction and mixing. A Candidate Attainable Region (AR^c) is a region that is attainable but does not necessarily contain all realizable products. In order to ensure that the region is in fact attainable, the following Necessary Conditions [2] must be met:
1. It only includes points that are attainable, such as the feed point.
2. It is convex. Any concavities are filled using mixing, provided the space obeys linear mixing and mixing is an allowed process.
3. The reaction vector along the boundary of the AR cannot point out of the region. If such a point existed, the region could be expanded by allowing that point to undergo reaction in a Plug Flow Reactor (PFR).
4. No negative extension of a reaction vector in the complement of the AR can intersect the AR boundary or the AR itself.
It should be noted that at present we do not have a sufficiency condition, although recent work has shown some promise, and so if a region is constructed that satisfies all of the above conditions, then this region is only a Candidate Attainable Region (AR^c). The above conditions do, however, give some direction as to possible extensions of the AR^c.
2. GEOMETRY OF REACTION AND MIXING
Let C be a point in space representing an attainable state in our reactor system. We choose the elements of C so that they obey linear mixing laws (this may include variables such as mass fraction, concentration in a constant-density system, residence time and enthalpy, and excludes such variables as volume, mass, etc.). This means that if we mix a stream of state C with another stream of state C*, the state of the resultant mixture, C^R, will lie along the vector represented by C* - C. This is called the mixing vector between points C* and C and is denoted v. In other words, the resultant point C^R will lie on the straight line between points C and C*, and its value is given by the Lever Arm Rule, namely: C^R = βC + (1 - β)C*, where β is a unitless scalar that is a measure of the mixing ratio. The reaction vector, denoted R(C), gives the instantaneous direction of change if material of state C is allowed to react for a differential residence time τ. The corresponding change in C, dC, is thus given by: dC = R(C) dτ.
566
3. GEOMETRY OF SOME IDEALIZED REACTORS
In the systems discussed in this paper the only processes occurring are reaction and mixing. The general equation for a reactor wherein only these two processes occur is:

dC/dτ = R(C) + αv,    (1)

where v is the mixing vector C* - C as described above and α is a non-negative scalar with units of time⁻¹, referred to as the feed control policy.
3.1. The Plug Flow Reactor (PFR)
No mixing occurs in a Plug Flow Reactor and so α in Eq. (1) above is zero. The defining equation for a PFR is therefore:

dC/dτ = R(C),   C(τ = 0) = C⁰.    (2)

The curve described by a PFR in state space is therefore a trajectory such that the reaction vector R(C) is tangent to the curve at each point C. Trajectories are directional by nature (i.e. they progress in the direction of the reaction vector), cannot cross each other, and there exists one unique PFR trajectory for any given initial feed point C⁰.
3.2. The Continuously Stirred Tank Reactor (CSTR)
A CSTR locus consists of the stationary points of Eq. (1), i.e. dC/dτ = 0. At these points, the reaction vector R(C) and the mixing vector v are collinear and point in opposite directions. The defining expression for a CSTR is therefore:

0 = R(C) + αv = R(C) + α(C* - C),    (3)

where C* is the reactor feed (equal to C⁰), and from the mass balance it can be shown that α = τ⁻¹, τ being the residence time of the reactor. A more general form of the above equation, where the feed residence time is also taken into consideration, is:

C - C⁰ = (τ - τ⁰)R(C),   C(τ⁰) = C⁰.    (4)

As τ increases from τ⁰ to ∞ the locus is traced from the feed point C⁰ to the equilibrium point C^e.
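A CSTR locus such as the one in Eq. (4) can be traced numerically by root finding on a grid of residence times. The sketch below is only a minimal illustration with τ⁰ = 0 and a made-up first-order rate field (A → B), not the kinetics used later in this paper.

```python
import numpy as np
from scipy.optimize import fsolve

def R(C):
    cA, cB = C
    k = 1.0
    return np.array([-k * cA, k * cA])     # placeholder kinetics: A -> B, first order

C0 = np.array([1.0, 0.0])                  # feed point
locus, C_guess = [], C0.copy()
for tau in np.linspace(0.0, 20.0, 41):     # solve C - C0 = tau * R(C) for each tau
    C_guess = fsolve(lambda C: C - C0 - tau * R(C), C_guess)
    locus.append(C_guess.copy())
print(np.round(locus[-1], 3))              # tends toward the equilibrium point Ce as tau grows
```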
3.3. The Differential Sidestream Reactor (DSR) A DSR structure is a Plug Flow Reactor with differential feed addition along the length. The state of these addition streams may vary along the length of the reactor (in which case the feed streams are referred to as a moving mixing point) or it may stay constant (referred to as a fixed mixing point). In either case, a DSR is a smooth connector [3] between a reaction surface and a mixing surface, as shown in Fig. 1. This means that a DSR curve will
567
Fig. 1. Smooth Connector

lie along the boundary between a reaction surface and a mixing surface, which are tangent to each other along that DSR curve. For a three-dimensional system, this curve must therefore lie on the surface described by [2]:

φ = (R × v)·(dR(v) - dv(R)) = 0,    (5)

where dX is the Jacobian matrix of vector X with respect to state C. The resulting structure is referred to as the φ-surface. The defining equation for a DSR is given by:

dC/dτ = R(C) + αv.    (6)

The feed control policy (α) to ensure that the structure remains in the φ-surface in 3-D is given by:

α = -(∇φ·R)/(∇φ·v).    (7)
In four dimensions, the feed control policy is defined as follows [4]:

α = - det[ v, R, (dv(R) - dR(v)), (d(dv(R) - dR(v))(R) - dR(dv(R) - dR(v))) ] / det[ v, R, (dv(R) - dR(v)), (d(dv(R) - dR(v))(v) - dv(dv(R) - dR(v))) ].    (8)
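For a concrete system the 3-D policy of Eq. (7) can be evaluated numerically without deriving the Jacobians by hand. The sketch below uses central finite differences; the rate field R, the mixing point C* and the test state are placeholders chosen only to make the example runnable, and are not the Van de Vusse system treated next.

```python
import numpy as np

def R(C):
    cA, cB, cC = C
    return np.array([-cA, cA - 0.5 * cB, 0.5 * cB])    # hypothetical smooth rate field

C_star = np.array([1.0, 0.0, 0.0])                     # fixed mixing (feed) point

def jac(f, C, h=1e-6):
    return np.column_stack([(f(C + h * e) - f(C - h * e)) / (2 * h) for e in np.eye(3)])

def phi(C):
    v = C_star - C
    w = jac(R, C) @ v - (-R(C))        # dR(v) - dv(R); dv = -I for a fixed mixing point
    return float(np.dot(np.cross(R(C), v), w))          # Eq. (5)

def grad(f, C, h=1e-6):
    return np.array([(f(C + h * e) - f(C - h * e)) / (2 * h) for e in np.eye(3)])

C = np.array([0.6, 0.3, 0.1])
g = grad(phi, C)
alpha = -np.dot(g, R(C)) / np.dot(g, C_star - C)        # Eq. (7)
print(round(alpha, 4))
```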
4. THE 4-D ATTAINABLE REGION FOR VAN DE VUSSE-TYPE KINETICS
We are given reaction kinetics as follows: A ↔ B → C, 2A → D, B ↔ E, with

rB = k1·CA - k2·CB - k4·CB + k5·CE - k6·CB,
rD = k3·CA², and
rE = k4·CB - k5·CE.
Using the structures described above in conjunction with the Necessary Conditions, it was attempted to construct an AR in CA-CB-CD-CE-space for a parameter set of k1 = k4 = k6 = 1 min⁻¹, k2 = k5 = 0.01 min⁻¹, k3 = 10 litre·mol⁻¹·min⁻¹, and a feed of [CA, CB, CD, CE] = [1 0 0 0]. A similar problem was examined previously [5] wherein the reaction B ↔ E did not occur. The candidate AR boundary proposed in CA-CB-CD-space can be seen in Fig. 2. It was believed that if the problem was re-examined in four dimensions (CA-CB-CD-CE-space), a similar structure would emerge. A number of general reactor networks were examined, of which two were considered as possible AR^c boundary structures and are shown in Figs. 3-4.
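With the kinetics and parameters above, the PFR trajectory of Eq. (2) from the feed point can be integrated directly, which is a useful first step when sketching a candidate region. The rate of A is not stated explicitly in the text; the stoichiometric balance used below (A consumed by A ↔ B and by 2A → D) is an assumption.

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3, k4, k5, k6 = 1.0, 0.01, 10.0, 1.0, 0.01, 1.0

def rates(tau, C):
    cA, cB, cD, cE = C
    rB = k1 * cA - k2 * cB - k4 * cB + k5 * cE - k6 * cB
    rD = k3 * cA ** 2
    rE = k4 * cB - k5 * cE
    rA = -(k1 * cA - k2 * cB) - 2.0 * rD      # assumed balance for A
    return [rA, rB, rD, rE]

sol = solve_ivp(rates, (0.0, 10.0), [1.0, 0.0, 0.0, 0.0], dense_output=True, rtol=1e-8)
print(np.round(sol.sol(np.linspace(0.0, 10.0, 5)).T, 4))   # points along the PFR trajectory
```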
Fig. 2. Three-dimensional Candidate AR for Van de Vusse reaction kinetics

In order for Network (II) to lie on the AR boundary, we need the following condition to hold along the CSTR locus [6]:

det[ v, dR(v), (dR)²(v), (dR)³(v) ] = 0.    (9)
The CSTR points where Eq. (9) holds are termed critical CSTR points. This condition was solved along the proposed CSTR locus and it was found that it only holds at isolated points along that locus, and hence Network (II) cannot be a generalised AR boundary structure. This reactor network can, however, be used as an alternative process at the isolated points at which Eq. (9) holds, namely for r = 0, 0.35s or oo. In this case, it was found that the critical CSTR point corresponding to r = 0.35s can also be achieved by a critical DSR of infinite volume. The CSTR point at r = oo is nothing more than the point [0 0 0 0], i.e. infinite dilution, and the CSTR point of r = 0 is simply the initial feed point, [ 1 0 0 0]. Network (II) is therefore a sub-case of Network (I). The optimal reactor structure for this class of problems will therefore correspond to Reactor Network (I) as shown on Fig. 3.
569
Fig. 3. Proposed Reactor Network (I)
Fig. 4. Proposed Reactor Network (II)
5. CONCLUSIONS Since no sufficiency condition has been found as yet for the Attainable Region, and no simple constructive algorithms exist, it is often problematic to construct the AR for a given system. Even though the AR Necessary Conditions [2] can be used as a guideline for AR construction, there is still no guarantee that the candidate AR constructed is in fact complete. In this paper we have shown that, when attempting to construct multidimensional AR's, it is useful to examine similar problems in lower dimensions first, as some of the structures will remain in the AR boundary in higher dimensions.
NOMENCLATURE
C     State vector/point
C*    Mixing point
C⁰    Feed point
C^e   Equilibrium point
R     Reaction vector
dR    Jacobian of vector R
α     Feed control policy
β     Mixing ratio
v     Mixing vector
τ     Residence time
REFERENCES
[1] D. Glasser, D. Hildebrandt and C. Crowe, Ind. Eng. Chem. Res., 26 (1987), 1803.
[2] D. Glasser and D. Hildebrandt, Chem. Eng. Sci., 45 (1990), 2161.
[3] M. Feinberg and D. Hildebrandt, Chem. Eng. Sci., 52 (1997), 1637.
[4] M. Feinberg, Chem. Eng. Sci., 55 (2000), 2455.
[5] E. Mitova, D. Glasser, D. Hildebrandt and B. Hausberger, PRES-01, (2001).
[6] M. Feinberg, Chem. Eng. Sci., 55 (2000), 3553.
570
Process Systems Engineering 2003, B. Chen and A.W. Westerberg (editors) © 2003 Published by Elsevier Science B.V.
An Integrated Optimization of Production Scheduling and Logistics by a Distributed Decision Making" Application to an Aluminum Rolling Processing Line Tatsushi Nishi* and Masami Konishi* *Department of Electrical and Electronic Engineering, Okayama University 3-1-1 Tsushima-Naka, Okayama 700-8530, Japan Abstract A distributed decision making system for integrated optimization of scheduling and logistics is proposed in this paper. The proposed decision making system consists of several sub-systems. Each sub-system has its own objective function and optimizes the decision variables independently. The proposed approach is applied to an aluminum rolling processing line. In the proposed system, all of the decision variables such as arrival date of raw materials, lot-sizing, production scheduling and allocation of products to the warehouses are optimized simultaneously by repeating the local optimization at material resource planning sub-system, scheduling sub-system for each production process and warehouse planning sub-system. The effectiveness of the proposed approach is demonstrated. Keywords supply chain optimization, planning and scheduling, logistics, warehouse management, augmented Lagrangian decomposition 1. Introduction Many process industries have successively achieved great improvements of planning and scheduling and global logistics planning based on the use of the concept of Supply Chain Management t~l. Integrated optimization of scheduling and logistics is one of the challenging problems for Business Decision Making in PSE area from the viewpoint of global optimization. This paper introduces an integrated optimization of scheduling and logistics in a production plant within one company. Generally, scheduling decisions involve the production sequence of operations and starting time of operations. Logistics decisions include the material resource planning, the warehouse management and distribution planning. One of the conventional approaches for scheduling and logistics planning is a hierarchical decomposition scheme in which planning system is concerned with the long term decisions such as production goal including logistics decisions, on the other hand, scheduling is concerned with the short term decisions so as to satisfy the production goal set by the planning system. In the hierarchical approach, the decision at the scheduling system does not affect the decision at the planning system, though the decisions at the planning system must be treated as the constraint at the scheduling system. It becomes difficult to derive the production plan taking the precise decisions into account. Moreover, the integrated optimization of scheduling and logistics is difficult to achieve because the optimization model becomes increasingly large and accurate information will be needed to optimize the integrated optimization model though the information will not always be obtained in actual situations. *Correspondingauthor,fax:+81 86 251 8111,tel: +81 86 251 8125, Email:[email protected]
571 In most of production facilities, production instructions are given to each production section. The production management system may be developed at each section. The decisions created at each section should be optimized individually while collecting the information and improving the model of each management system. From the above viewpoints, Hasebe et al. proposed an autonomous decentralized scheduling system that has no supervisory system controlling the entire plants with regard to creating schedules E21. The system consists of a database and some scheduling sub-system belonging to the respective production section. Each sub-system independently generates a schedule of production section without taking into account for the schedule of other production section. The data exchange among the sub-system and the generation of the schedule at the sub-system are repeated until a feasible schedule for entire plant is derived. The effectiveness of the system for flowshop problems with storage and due-date penalties is discussed by Nishi et al. (2000) [3]. The idea of decentralized scheduling can be applied to supply chain planning problems. Nishi et al. (2000) proposed a decentralized supply chain optimization system for multi-stage flowshop problems t41. In this paper, the decentralized supply chain optimization system is applied to aluminum rolling processing line. An integrated optimization method of scheduling and logistics by the distributed decision making system is proposed.
The effectiveness of the integrated
optimization by the proposed system is investigated.
2. Scheduling and Logistics in Aluminum Rolling Processing Line The outline of the aluminum rolling line is shown in Fig. 1. The plant consists of batch casting, hot rolling, cold rolling and warehouses. The characteristic of the plant is that the repetitive operations are included in the cold rolling process. For the supply chain optimization, scheduling, material resource planning, and warehouse planning should be taken into account in the optimization model. The decision variables for the scheduling of the plant consist of the production sequence of operations, lot size, starting time of operations. The decision variables for the logistic decision consist of material resource plan, allocation of the products to the warehouses, and the amount of inventory for the products. Table 1 shows the constraints and the objective functions at each section. These complicated constraints at each facility make it difficult to optimize the entire decisions simultaneously. Table 1 scheduling and logistics problem for the plant
Fig. 1 Aluminum Rolling Processing line
Section        Constraints                           Objective function
Procurement    Timing and amount                     Material costs
Casting        Working hour, lot size                Setup costs
Hot rolling    Processing temperature, setup time
Cold rolling   Repetitive operation                  Due date penalty
Warehouse      Maximum inventory                     Handling costs
572
3. Integrated Optimization Method of Scheduling and Logistics by Distributed Decision Making
A distributed asynchronous decision making algorithm has been proposed by Androulakis et al. [5]. The local optimization problems for the scheduling problem (SP) and the warehouse planning problem (WP) are formulated in a general representation as:

(SP)  min f(t_i^m, w_i^m, p_{i,t}^SP)  subject to (scheduling constraints),    (1)

where t_i^m represents the starting time of the operation for product i at machine m, w_i^m represents a binary variable which indicates the allocation of product i to machine m, and p_{i,t}^SP represents the production amount of product i in time period t which is determined by scheduling.
The warehouse planning problem (WP) is formulated as:
(WP)  min g(x_{i,t}, y_i^k)  subject to (warehouse planning constraints),    (2)

where x_{i,t} represents the amount of inventory for product i in time period t and y_i^k represents the allocation of product i to warehouse k. The global optimization problem taking into account both scheduling decisions and warehouse planning decisions is formulated as:
(GP)  min f(t_i^m, w_i^m, p_{i,t}^SP) + g(x_{i,t}, y_i^k)  subject to (scheduling constraints), (warehousing constraints), and p_{i,t}^SP = p_{i,t}^WP,    (3)

where p_{i,t}^WP represents the amount of products transported into the warehouse from the production section.
The material balance equation shown by Eq. (3) is relaxed in our
approach. Thus, the global optimization problem (GP) is decomposed and reformulated into two sub-problems by using augmented Lagrangian decomposition. By embedding a linear penalty function in each objective function, the WP sub-problem can be formulated as a MILP problem:

(RSP)  min f(t_i^m, w_i^m, p_{i,t}^SP) + λ_{i,t} p_{i,t}^SP + α | p_{i,t}^SP - P̄_{i,t}^WP |,    (4)
(RLP)  min g(x_{i,t}, y_i^k, p_{i,t}^WP) - λ_{i,t} p_{i,t}^WP + α | p_{i,t}^WP - P̄_{i,t}^SP |.    (5)
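The coordination of (RSP) and (RLP) described here and in the following paragraph can be illustrated with a toy scalar version: the two sub-problems are re-solved in turn, each penalised (linearly, with weight α) for deviating from the other's tentative production amount P̄, while λ is updated along the sub-gradient of the relaxed balance. The quadratic stand-in objectives below replace the real MILP scheduling and warehouse models.

```python
from scipy.optimize import minimize_scalar

def solve_rsp(lam, alpha, p_wp_bar):     # toy (RSP)
    obj = lambda p: (p - 8.0) ** 2 + lam * p + alpha * abs(p - p_wp_bar)
    return minimize_scalar(obj, bounds=(0.0, 20.0), method="bounded").x

def solve_rlp(lam, alpha, p_sp_bar):     # toy (RLP)
    obj = lambda p: (p - 4.0) ** 2 - lam * p + alpha * abs(p - p_sp_bar)
    return minimize_scalar(obj, bounds=(0.0, 20.0), method="bounded").x

lam, alpha, p_sp, p_wp, step = 0.0, 0.1, 10.0, 0.0, 0.2
for it in range(100):
    p_sp = solve_rsp(lam, alpha, p_wp)
    p_wp = solve_rlp(lam, alpha, p_sp)
    lam += step * (p_sp - p_wp)          # sub-gradient update of the multiplier
    alpha *= 1.1                          # gradually increase the penalty coefficient
    if abs(p_sp - p_wp) < 1e-4:
        break
print(it, round(p_sp, 3), round(p_wp, 3))
```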
The Lagrangian multipliers for each problem are updated in the direction of sub-gradient of the dual function. The sub-problems (RSP) and (RLP) are iteratively solved until the value of penalty function becomes zero by gradually increasing the penalty coefficient a . Thus, the convergence of the proposed method can be attained. Here, P represents the tentative value derived by solving each sub problem in the previous iteration. The scheduling system also consists of some scheduling sub-systems for production processes. The outline of the total system is explained in the following section. 4. Distributed Optimization System The system structure of the distributed optimization system is explained in this section. The total system consists of the database for the entire plant, the material resource planning sub-system (MRP), the scheduling sub-system (SS) and the warehouse planning sub-system (WP). The purpose of the MRP sub-system is to decide the material order plan so as to
573 minimize the sum of the material costs and inventory holding costs of raw material. The SS sub-system determines the production sequence of operations, batch size, the starting time of operation so as to minimize the changeover times and due-date penalties minus the number of products within a given time horizon. The warehouse planning sub-system (WP) determines the allocation of products to the warehouse and the amount of inventory so as to minimize the cost of handling in the warehouse and the sum of the penalty for product shortage. In the proposed system, the data exchange and re-optimization at each sub-system are repeated several times until a feasible solution is derived. The main difference of the proposed system is that the tentative master production schedule (TMPS) derived at each sub-system is exchanged among the sub-system. Each sub-system generates a solution of its own aiming at minimizing the objective function for itself and the penalty of the difference of TMPS. Database for the entire plant
Fig.2 Structure of the distributed optimization system
Fig.3 The optimization model
The following data is exchanged among the sub-systems. (1) The tentative earliest starting time for each job (TEST): exchanged among SS sub-systems The ending time of job at the immediately preceding production process (2) The tentative latest ending time for each job (TLET): exchanged among SS sub-systems The starting time of job at the immediately following production process (3) The tentative master production schedule for each product (TMPS): exchanged among MR_P, SS, WP sub-systems Fig. 4 shows the TEST and TLET for hot rolling process obtained from other sections. Fig. 5 shows the data exchange algorithm. Each square illustrates the steps of the data exchange in MRP sub-system
Fig. 4 TEST and TLET for Hot-rolling
Fig.5 Data exchange among the sub-systems
574 the iterations. Each arrow represents the flow of the data. The dotted arrow indicates TMPS. The thick arrows indicate TEST and TLET exchanged among the SS sub-systems. 5. Numerical Experiments The proposed system is applied to an aluminum rolling processing line. The total time horizon is divided into 4320 time unit (3 days). The total planning period for MRP and WP system consists of 480 time unit (8 hours) in which the total time horizon is divided into 9 time periods. The four kinds of products are produced from two types of material. The demand for the products is given in each time period. The scheduling system for each section is optimized using Simulated Annealing method [4]. The condition for each process is shown. Material Resource Planning: The lead-time for material transportation is negligible. The delivery date of the raw material is limited. The problem can be formulated as MILP problem. Casting Process: There are four parallel batch machines. It is possible to process two unit of batch at a time with the same product. The working time for a day is restricted to 20 hours. Hot-roiling Process: There is single batch machine. The range of processing temperature is given for each product. The changeover time depends on the difference of temperature. Cold-rolling Process: Repetitive operation is necessary. The production path for each product is given a priori. The changeover time depends on the product width of products. Warehouse: There are two storage areas Wl and W2. The capacity of the warehouse is limited. The transportation time to the storage space is negligible. It is possible to transport a product to a warehouse at a time period. The handling cost depends on the amount of delivery to the customers at a time period. The MILP problem is solved by using CPLEX(iLog). The intermediate result and final result are shown in Figs. 6, 7. The results obtained by MRP sub-system are omitted in this paper due to the space limitation. From the results of these figures, a feasible schedule for the entire plant can be derived after 18 times of data exchange. Casting
Fig. 6 The result after one data exchange
Fig. 7 Final result after 18 data exchanges
575
6. Effectiveness of the integrated optimization by the proposed system
In order to demonstrate the effectiveness of the integrated optimization by the proposed system, we developed a hierarchical planning system in which the warehouse planning and scheduling are executed sequentially. Table 2 shows the comparison of the results obtained by both systems. Both the penalty of product shortage and the inventory holding cost of the proposed system are smaller than those of the hierarchical decision making system.

Table 2 Comparison of the proposed system and the hierarchical planning system
                              The proposed system    The hierarchical system
Number of jobs                49                     58
Handling costs                29800                  25000
Inventory holding costs       10800                  11900
Penalty of product shortage   17000                  38000
Total                         57600                  74900
7. Conclusion
We have proposed an integrated optimization of scheduling and logistics for an aluminum rolling processing line. The distributed optimization system gradually generates a feasible schedule through repeated data exchanges and optimization at each sub-system. In the proposed system, the tentative master production schedule derived at the MRP and WP sub-systems and the starting and ending times of operations at each production section are exchanged concurrently. Thus, the tentative master production schedule at each sub-system is gradually adjusted. By adopting this structure, the proposed system can flexibly accommodate various modifications caused by unforeseen changes. The effectiveness of the proposed system is demonstrated by comparing the results with those derived by the hierarchical planning system.
REFERENCES
[1] D. Taylor, Global Cases in Logistics and Supply Chain Management, 1997.
[2] S. Hasebe, T. Kitajima, T. Shiren, Y. Murakami and I. Hashimoto, Autonomous Decentralized Scheduling System for Single Production Line Processes, AIChE Annual Meeting, Paper 235c, 1994.
[3] T. Nishi, A. Sakata, S. Hasebe and I. Hashimoto, Autonomous Decentralized Scheduling System for Just-in-Time Production, Comp. Chem. Eng., 24, Nos. 2-7, 345-351, 2000.
[4] T. Nishi, M. Konishi, S. Hasebe and I. Hashimoto, Autonomous Decentralized Supply Chain Optimization System for Multi-Stage Production Processes, Proceedings of 2002 Japan-USA Symposium on Flexible Automation, 131-138, 2002.
[5] I.P. Androulakis and G.V. Reklaitis, Approaches to Asynchronous Decentralized Decision Making, Comp. Chem. Eng., 23, 341-355, 1999.
[6] T. Nishi, M. Konishi, S. Hasebe and I. Hashimoto, Machine-Oriented Decentralized Scheduling Method using Lagrangian Decomposition and Coordination Technique, Proceedings of the 2002 IEEE Int. Conf. on Robotics & Automation, 4173-4178, 2002.
Process Systems Engineering 2003 B. Chen and A.W. Westerberg (editors) 9 2003 Published by Elsevier Science B.V.
576
An adaptive interval algorithm to identify the globally optimal process structure A.R.F. O'Grady, I.D.L. Bogle and E.S. Fraga Centre for Process Systems Engineering, Department of Chemical Engineering, UCL (University College London), Torrington Place, London, WC1E 7JE, U.K.
Abstract With the rapid development of new processes and technology, particularly in the pharmaceutical industry, the potential gains from developing novel process designs are increasing. The consideration of as many structural alternatives as possible and as early as possible is important. This paper presents an algorithm that can identify the globally optimal process structure. The algorithm has been implemented within the Jacaranda automated design framework. Through the use of interval analysis, an adaptive discretisation procedure has been designed. The procedure does not impart any preconceived ideas of possible process structures from the user and hence may be more likely to yield novel results. A case study demonstrates the ability to generate novel and potentially unexpected results and with improved efficiency. Keywords Adaptive optimisation, process design, interval analysis, process synthesis 1 INTRODUCTION Automated design tools are intended for use at the early, conceptual stage of process design. At this stage, little is known about the structure of potentially good designs and one should minimise the number of a priori decisions about these structures. Automated design tools for process design are often based on the definition of a mixed integer non-linear programme (MINLP) from a given superstructure Ill. In defining the superstructure, decisions are made about possible alternatives, decisions that can limit the quality of subsequent solutions identified by the MINLP solution procedure. This paper describes an algorithmic procedure that is able to identify the globally optimal process structure given a feed stream, product specifications and a set of units that can be used. A superstructure is not defined so possible alternatives are not necessarily limited by the imagination of the user of the system. The economic benefits of considering as many alternatives as possible at this stage could be much greater than in post synthesis process optimisation. The procedure is based on an implicit enumeration (IE) approach. IE is used to generate and traverse a search graph where nodes correspond to unit operations and edges to process streams. The IE method has been implemented in Java within the Jacaranda [2] generic framework for automated design using object oriented design principles. In the Jacaranda framework, unit design variables are discretised to yield a finite search space. The advantage of the use of discretisation is the ability to tackle problems with complex mathematical models without concern for linearity or convexity issues. The disadvantage is the possibility
577 of missing good solutions that fall between the discrete values considered by the search procedure. Discretisation requires the user to specify how to map continuous quantities (e.g. unit operating conditions, component flows, and stream enthalpies) to discrete space. Appropriate values for the discretisation parameters can often be determined from the engineering aspects of the design problem. For example, limits on contaminants for waste products may provide upper limits on the coarseness of component flow discretisations. However, in general, the user may not be able to rely on such insight to determine the discretisation parameters. Furthermore, solutions presented to the user provide no indication of the effect of the discretisations on their quality. This paper presents a scheme that adaptively discretises unit variable values so that the optimal structure for a problem may be isolated. It is important to note that at the early stages of design, it is the process structure that is of paramount interest to the user rather than the unit design variable values. Therefore, a problem, in the early stages of design, is said to be solved when it is mathematically proved that one structure is superior to the others. For a given process stream being fed to a unit, IE is used to cycle through the discrete design options. Adaptive discretisation of the unit design variables is then used to isolate the optimal process structure. The algorithm is called recursively on outputs from the units until a product specification is met. The adaptive procedure allows an efficient search and it is not necessary for the user to define a level of discretisation. This efficiency and ease of use makes such tools more accessible to the non-expert user. The globally optimal solution is assured by the application of Interval analysis [31 to provide bounding information. Intervals are used to span the distance between discrete levels. Interval arithmetic ensures that the interval result of an operation on two intervals contains all achievable real values. This is useful as continuous real variables can be divided into discrete interval sections. As a result, the global optimum can be bounded. Hansen [41 provides a detailed explanation of some interval methods and their application to global optimisation. Interval analysis has been previously applied to process engineering problems. Schnepper et al. t51 used interval analysis to find all the roots of several chemical engineering problems with mathematical certainty. Byrne et al. [6] used interval methods to globally optimise given flowsheet structures. Our own previous work I7'81 used interval analysis to provide bounds on flowsheet structures, allowing the user to identify the optimal process structure by using successively finer discretisations on subsequent runs. In these earlier works, a user specified, uniform discretisation of the unit design variable was applied for each new stream encountered. This work served as proof of concept for the use of interval derived bounding information in a more efficient adaptive method. 2. M E T H O D O L O G Y The adaptive algorithm is based on the recursive algorithm that forms the core of Jacaranda t21. In the new procedure, increased efficiency is achieved by using variable levels of discretisation across unit variables. The choice of where finer discretisation takes place is based on bounding information provided from previous designs and recursive solution of their sub-problems. 
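Since the bounding argument rests on interval arithmetic, a bare-bones interval type is sketched below. It only shows the containment property that the search relies on; outward rounding, division and monotonic functions, which a real interval library would provide, are deliberately omitted and the flow and recovery ranges are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi, self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def contains(self, x):
        return self.lo <= x <= self.hi

flow = Interval(0.90, 1.00)            # e.g. a discretised component-flow range
recovery = Interval(0.97, 0.999)       # a key-recovery range
product = flow * recovery
print(product, product.contains(0.95 * 0.98))   # True for any point-valued combination
```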
Hence, the procedure adaptively discretises variables to increase efficiency and to remove the need for the user to specify a discretisation level for each unit variable. The concept of the box TM has been used to describe the ranges of unit variables and discrete choices for a particular unit design.
578
2.1 Boxes Each side of a box consists of a range of values of a particular variable. This range is represented by an interval. The number of discrete design options for a unit defines the dimensionality of the associated boxes. For example, a distillation column may be have three degrees of freedom. Two sides of a box describing such a column design could be design pressure and key recovery fraction. Their ranges would be represented by intervals. The third dimension of the box would be the light key component. A box may be split along any of its continuous variables. The split occurs at the midpoint of the interval value of the variable. This operation results in two new boxes. They are identical apart from each representing one half of the original interval values of the split variable. In the new algorithm boxes are used as a basis for unit designs. As a result the box is assigned with cost bounds for the objective function. When the output streams from this unit design are solved then the costs of the solutions are added to give a cost interval for the box. For each process stream, boxes are stored in a binary tree based on the value of their lower cost bound. Another important piece of information associated with a box is the downstream structure of the solutions to the streams that the unit design produces. This allows two boxes to be compared on the basis of the structure that they represent.
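The box bookkeeping just described can be sketched as follows: interval-valued design variables, a cost interval, storage ordered by the lower cost bound (a heap stands in for the binary tree) and splitting along the longest side. Field names and the fathoming threshold are illustrative, not Jacaranda's actual API.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Box:
    lower_cost: float
    upper_cost: float = field(compare=False)
    variables: dict = field(compare=False, default_factory=dict)   # name -> (lo, hi)
    structure: tuple = field(compare=False, default=())            # downstream flowsheet

    def split(self):
        # halve the widest continuous variable at its midpoint
        name, (lo, hi) = max(self.variables.items(), key=lambda kv: kv[1][1] - kv[1][0])
        mid = 0.5 * (lo + hi)
        halves = []
        for rng in ((lo, mid), (mid, hi)):
            vs = dict(self.variables)
            vs[name] = rng
            halves.append(Box(self.lower_cost, self.upper_cost, vs, self.structure))
        return halves

tree = []
heapq.heappush(tree, Box(120.0, 300.0, {"pressure": (1.0, 5.0), "recovery": (0.97, 0.999)}))
best_upper = 300.0
box = heapq.heappop(tree)                   # the box with the lowest lower bound
for child in box.split():
    if child.lower_cost <= best_upper:      # otherwise the box cannot contain the optimum
        heapq.heappush(tree, child)
print(len(tree), tree[0].variables)
```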
2.2 Box splitting algorithm The algorithm in Fig. 1. is applied to the initial process stream and then recursively for each unit output stream created. Firstly, the problem stream is checked against the set of previously solved problems. If the problem is new then for each unit available, an enumeration is used to create a box for each possible discrete design alternative. Each side of an initial box is an interval representing the range of values allowed for that particular unit design variable. The procedure shown in Fig. 2. is then called for each box created. A design is attempted based on the design variable intervals of the box. If the design is successful, then the procedure is called recursively on the resulting process streams. The lower cost bound of the box is then compared with the current lowest upper bound for all the boxes encountered for this problem. If the lower bound on the cost is greater than the best upper bound then the box cannot contain the optimal solution and the box is discarded. Otherwise it is stored. If the design is unsuccessful then there are two possible courses of action. If the whole box is infeasible then it is rejected. For example in distillation this could occur because the minimum reflux ratio is below zero. However, a design may fail because the design variable intervals are too wide leading to the situation where some of the box is feasible and some of the box is infeasible. This can also occur due to dependency. In this case the box is assigned with a lower bound of-infinity and stored. This ensures that such boxes remain active and will be subsequently selected for splitting. After the processing of the initial boxes, a loop begins that continues while the stopping criteria are not met. In order to evaluate the stopping criteria, the procedure first checks if the unit design associated with the box is a product. If not then the lower cost bound of each of the stored boxes is compared with the current lowest upper bound of all the boxes encountered for this sub-problem. If the lower bound of the box is greater than the best upper bound then the box cannot contain the optimal solution and is removed. If the box cannot be removed, its structure is compared with the structure of the box with the lowest lower bound. If all the boxes represent the same structure then it must be the optimal structure for the processing of the stream. If there is no diversity in structure or the stream is a product, the
solution is stored in a data structure that is globally available and indexed by a key unique to each sub-problem [9]. If the criteria are not met, the box with the lowest lower bound of cost is selected and split along its longest side. The two resulting boxes are then processed by the procedure shown in Fig. 2.
procedure solve(problem p)
  boolean done = false
  Initialise an empty binary tree to store boxes
  if p already processed
    done = true
  else
    for Each available unit
      for Each discrete alternative
        Create a new box
        Process box
      end for
    end for
    while done = false
      if stopping criteria met
        Store solution globally
        done = true
      else
        Retrieve box with lowest lower bound from the tree
        Split box along longest side
        Process resulting boxes
      end if
    end while
  end if
end procedure solve

procedure Process box(box b)
  Design unit based on b
  if design successful
    for Each product
      Create a new problem, p
      solve(p)
      Retrieve solution
      Update b with cost and structure
    end for
    if b is good enough
      Store b in tree
    end if
  else
    if Fails due to dependency
      Set infinite costs for b
      Store b in tree
    end if
  end if
end procedure Process box
Fig. 1. Box splitting algorithm
Fig. 2. Processing and storage of boxes
3. CASE STUDY: BENZENE RECYCLE PROBLEM
The stream, defined in Table 1, must be purified to achieve 98% purity for benzene, with such a stream to contain less than 0.5% chlorobenzene. This stream appears in the separation section of a chlorobenzene process and the benzene is to be recycled back to the reactor.
Table 1 Composition of feed stream
Component           Flowrate (kmol/s)
Benzene             0.97
Chlorobenzene       0.01
Di-chlorobenzene    0.01
Tri-chlorobenzene   0.01
Pressure            1 atm
Temperature         313 K
The other components are to be removed as waste, with the requirement that any waste stream contain less than 10 mol% benzene and less than 10 mol% chlorobenzene. Any output stream which consists of >90% chlorobenzene will also be accepted as a valid product stream. Semi-sharp distillation columns, with key recovery of between 97 and 99.9%, are to be used for the separation. The column designs are based on the Fenske-Underwood-Gilliland equations. The capital cost of heat exchangers is also taken into account based on the energy requirements of condensers and reboilers. The aim is to identify the flowsheet structure with the lowest capital cost for this separation. The adaptive scheme solved this problem in 26 seconds using an 850 MHz Pentium III computer with Java 1.4. The solution yielded is shown in Fig. 3. Below each column, the light and heavy key components are shown. In the solution, distillation with benzene and chlorobenzene as the keys is carried out in two columns. These two separations are coarser than the very fine split required from one column. The algorithm has shown, through comparing cost bounds of solutions, that using the three-column structure will incur the least capital cost.
Fig. 3. Optimal structure for the benzene problem: three columns with light/heavy key splits B/ClB, B/ClB and ClB/Cl2B, producing the benzene recycle and the waste stream.
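The column designs evaluated inside each box rely on the Fenske-Underwood-Gilliland shortcut method mentioned above. As a small illustration of the kind of design calculation embedded in each box evaluation, the sketch below computes the Fenske minimum stage count for a light/heavy key split; the recovery values and relative volatility used are illustrative assumptions, not values taken from the case study.

```python
import math

def fenske_min_stages(rec_lk_dist: float, rec_hk_bott: float, alpha_lk_hk: float) -> float:
    """Fenske equation for the minimum number of theoretical stages at total reflux.

    rec_lk_dist  -- fractional recovery of the light key in the distillate (e.g. 0.99)
    rec_hk_bott  -- fractional recovery of the heavy key in the bottoms (e.g. 0.99)
    alpha_lk_hk  -- relative volatility of the light key to the heavy key
    """
    ratio = (rec_lk_dist / (1.0 - rec_lk_dist)) * (rec_hk_bott / (1.0 - rec_hk_bott))
    return math.log(ratio) / math.log(alpha_lk_hk)

# Illustrative call: 99% recovery of both keys with an assumed alpha of 2.4
print(round(fenske_min_stages(0.99, 0.99, 2.4), 1))
```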
The maximum number of box splits before a solution was isolated was found to be 18. The finest discretisation in the percentage key recovery variable, as a fraction of the allowed key recovery range, was found to be 1.5×10⁻⁶. This fraction corresponds to a width of about 1/600,000th of the possible range. Using the uniform discretisation technique, it would not be possible to solve the problem to this level of discretisation. In order to highlight further the efficiency of the adaptive algorithm, the problem was attempted using 18 discretised key recovery intervals for each discrete unit choice. This is the finest uniform discretisation possible.

Table 2
Comparison of adaptive and uniform discretisations
                       Adaptive algorithm    Uniform discretisation
Number of problems     884                   246836
% problem reuse        26                    96
Time (s)               27                    3650
A comparison between the adaptive algorithm and uniform discretisation is shown in Table 2, demonstrating the efficiency attained by the adaptive approach. Further insight may be gained by the discovery of the regions where it is necessary for the adaptive procedure to discretise most finely. This is the subject of ongoing research.
4. DISCUSSION
The process structure identified by the adaptive algorithm, in the case study presented above, demonstrates the potential benefits of the new adaptive procedure. A novel design that is not immediately obvious to the designer may be uncovered. If alternatives were to be incorporated into a superstructure, it is likely that this superior alternative would be omitted, as only one split between any two components would be defined. The case study has also demonstrated the enhanced efficiency of the procedure. There is a substantial improvement in solution time when compared with the finest achievable level of uniform discretisation, a level orders of magnitude coarser than the finest level reached by the adaptive algorithm. In the new scheme, it is unnecessary for the user either to specify a level of discretisation or to examine the results before deciding if finer discretisations are necessary. In addition, the globally optimal solution structure is assured. Overall, the accessibility of the system to a non-expert user has been improved while retaining quality assurance. These factors are extremely important if automated process design tools are to be routinely used in industry.
REFERENCES
1) H. Yeomans and I.E. Grossmann, Computers Chem. Engng., 23 (1999) 709.
2) E.S. Fraga, M.A. Steffens, I.D.L. Bogle and A.K. Hind, An object orientated framework for process synthesis and optimization, Foundations of Computer-Aided Process Design, M.F. Malone, J.A. Trainham and B. Carnahan (eds.), AIChE Symposium Series 323 (2000), 446.
3) R.E. Moore, Interval Analysis, Prentice Hall, Englewood Cliffs, New Jersey, 1966.
4) E. Hansen, Global Optimization Using Interval Analysis, Marcel Dekker, New York, 1992.
5) C.A. Schnepper and M.A. Stadtherr, Computers Chem. Engng., 20 (1996) 187.
6) R.P. Byrne and I.D.L. Bogle, Ind. Eng. Chem. Res., 39 (2000) 4296.
7) A.R.F. O'Grady, I.D.L. Bogle and E.S. Fraga, Chem. Pap., 55 (2001) 376.
8) A.R.F. O'Grady, I.D.L. Bogle and E.S. Fraga, Interval analysis for identification of potential process structures in early design, European Symposium on Computer Aided Process Engineering-12, J. Grievink and J. van Schijndel (eds.), Computer-Aided Chemical Engineering, Elsevier, 10 (2002) 271.
9) E.S. Fraga, Chem. Eng. Res. Des., 74 (1996) 249.
Process Systems Engineering 2003, B. Chen and A.W. Westerberg (editors), © 2003 Published by Elsevier Science B.V.
A Hybrid CLP and MILP Approach to Batch Process Scheduling
Benjamin Roe a,*, Nilay Shah a and Lazaros G. Papageorgiou b
a Centre for Process Systems Engineering, Imperial College, London, SW7 2AZ, U.K.
b Centre for Process Systems Engineering, UCL (University College London), London WC1E 7JE, U.K.
Abstract In this work a novel hybrid CLP/MILP algorithm for scheduling production in complex multipurpose batch processes is presented. The scheduling problem is decomposed into two sub-problems: first an aggregate planning problem is solved using an MILP model, and then a sequencing problem is solved using CLP techniques. The CLP model avoids the complexity of explicitly stating material balance constraints by instead using precedence constraints between batches to ensure the schedule is feasible. The algorithm used is summarised and encouraging computational results for a scheduling example are shown.
1. INTRODUCTION Scheduling complex multipurpose processes is a computationally hard problem. Much research in the area has concentrated on using pure MILP as an optimisation method. While mixed integer linear programming(MILP) is a flexible method, the computational effort needed to solve large-scale problems can be prohibitive. The combinatorial nature of the scheduling problem suggests that a decomposition of the general problem into a number of sub-problems has the potential to outperform traditional single-level methods. Each sub-problem can then be solved using the method that is most effective for that specific class of problem thus leading to a hybrid approach. The work of Harjunkoski and Grossmann[ 1] has shown that a hybrid formulation, combining constraint logic programming(CLP) and MILP, has potential as a scheduling method in multistage flowshop scheduling. The formulation decomposes the problem into two sub-problems: a machine assignment optimisation sub-problem, solved using MILP, followed by the generation of a feasible schedule subject to this assignment using CLP. In this paper the objective is to present a more general hybrid MILP/CLP algorithm for scheduling multipurpose batch processes. 2. OVERALL ALGORITHM The overall structure of the scheduling algorithm is shown in Fig. 1. The input to the first stage is a list of deliveries to customers that must be met and the State-Task Network description of the process. The objective is to determine the minimum set of batches required to meet these deliveries and to allocate the batches to the units available. The MILP formulation used in this *Author to whom all correspondenceshould be addressedemail:[email protected]
Figure 1. Overall algorithm structure
stage is described in detail in Section 3. The second stage of the algorithm then performs the actual scheduling of these batches; the process of assigning a start time to each batch so as to create a feasible schedule subject to all process constraints. This stage is described in Section 4. 3. MILP AGGREGATE FORMULATION
The main constraints in the MILP formulation are the material balances. Total production must be greater than or equal to demand:
\sum_{i \in I_s} \sum_{j \in U_i} \hat{f}^{p}_{is} E_{ij} \;-\; \sum_{i \in I_s} \sum_{j \in U_i} \hat{f}^{c}_{is} E_{ij} \;+\; C^{I}_{s} \;\ge\; d_s \qquad \forall s \qquad (1)
The total extent of each task is limited by the number of the task performed and the unit size:
V_j^{min} N_{ij} \;\le\; E_{ij} \;\le\; V_j^{max} N_{ij} \qquad \forall i,\; j \in U_i \qquad (2)
In order to produce an optimal assignment of batches to units, an estimate of the makespan of the schedule is required in the aggregate model. The time taken for the tasks on a unit is the sum of the duration of the tasks assigned to it and the changeovers required between the tasks. Only modelling sequence-independent changeovers, the minimum possible changeover time can be used as an estimate. The following constraints are required to model the makespan:
W_{fj} \;\le\; \sum_{i \in F_f \cap T_j} N_{ij} \;\le\; M\, W_{fj} \qquad \forall f, j \qquad (3)

D_j \;\ge\; \sum_{i \in T_j} l_i N_{ij} \;+\; C_j \Big( \sum_{f} W_{fj} - 1 \Big) \qquad \forall j \qquad (4)

L \;\ge\; D_j \qquad \forall j \qquad (5)
where W_{fj} is a binary variable indicating whether a task in family f occurs on unit j, and L is an estimate of the schedule makespan, taken as the maximum of the utilisation times D_j of every unit. The objective function is therefore to minimise L, the makespan. The second sum in constraint (4) is the minimum duration of changeovers required given the tasks to be performed on unit j, assuming that all tasks in each family are performed consecutively.
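A compact way to see how constraints (2)-(5) fit together is the PuLP sketch below. The data dictionaries are invented placeholders, demand is represented by a crude minimum-batch-count stand-in for constraint (1), and the changeover term follows the "number of families present minus one" reading of constraint (4) given above, so this is a schematic of the formulation rather than the authors' implementation.

```python
from pulp import (LpProblem, LpMinimize, LpVariable, lpSum,
                  LpInteger, LpBinary, PULP_CBC_CMD)

# Illustrative data (placeholders, not the paper's case study)
units = ["U1", "U2"]
tasks = {"T1": ["U1"], "T2": ["U1", "U2"]}       # task -> suitable units
family = {"T1": "F1", "T2": "F2"}                # task -> family
dur = {"T1": 1, "T2": 1}                         # task durations (periods)
cj = {"U1": 2, "U2": 2}                          # changeover time per unit
vmin, vmax = 50.0, 100.0                         # unit capacities
demand_batches = {"T1": 3, "T2": 4}              # crude stand-in for Eq. (1)
M = 100
families = sorted(set(family.values()))

prob = LpProblem("aggregate_planning", LpMinimize)
N = {(i, j): LpVariable(f"N_{i}_{j}", lowBound=0, cat=LpInteger)
     for i in tasks for j in tasks[i]}
E = {(i, j): LpVariable(f"E_{i}_{j}", lowBound=0) for i in tasks for j in tasks[i]}
W = {(f, j): LpVariable(f"W_{f}_{j}", cat=LpBinary) for f in families for j in units}
D = {j: LpVariable(f"D_{j}", lowBound=0) for j in units}
L = LpVariable("L", lowBound=0)

prob += L                                        # minimise the makespan estimate
for i in tasks:                                  # stand-in demand requirement
    prob += lpSum(N[i, j] for j in tasks[i]) >= demand_batches[i]
for (i, j) in E:                                 # Eq. (2): extent vs. number of batches
    prob += E[i, j] >= vmin * N[i, j]
    prob += E[i, j] <= vmax * N[i, j]
for f in families:                               # Eq. (3): family-on-unit indicators
    for j in units:
        nf = lpSum(N[i, j] for i in tasks if family[i] == f and j in tasks[i])
        prob += W[f, j] <= nf
        prob += nf <= M * W[f, j]
for j in units:                                  # Eq. (4)-(5): unit time and makespan
    prob += D[j] >= lpSum(dur[i] * N[i, j] for i in tasks if j in tasks[i]) \
                    + cj[j] * (lpSum(W[f, j] for f in families) - 1)
    prob += L >= D[j]

prob.solve(PULP_CBC_CMD(msg=False))
print("makespan estimate:", L.value())
```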
4. CLP FORMULATION
The solution to the MILP stage consists of a list of N batches, B^T, to be performed. The MILP solution defines the task i_n, the unit j_n and the extent E_n for every batch. The total extent of each task, E_ij, is divided into N_ij batches by using the full capacity of the unit j as the batch size. The second stage takes this aggregate production plan and performs detailed scheduling. For each batch, a sufficient amount of every feed state for the batch must be produced before it in the schedule. The set of preceding batches for each batch, B^p_n, is determined by using a simple C++ program. The rest of the sequencing problem is solved using CLP. The constraints that must be satisfied by a feasible schedule are unit allocation, available storage, renewable resource utilisation and unit cleaning constraints. Product demands are modelled as deliveries that occur at a certain time. These deliveries are included in the CLP model simply as zero-duration batches that do not use a unit but consume the required state and have a fixed time at which they must occur. Precedence constraints are posted for delivery batches just as they are for process batches. The selection of the two jobs between which an additional constraint is imposed is performed in a backtracking depth-first search tree manner. Fig. 2 shows an example of the ordering constraints imposed during search - one candidate schedule is generated at each node of the search tree.
i. Post initial constraints. For normal batches T_n^{s,min} = 0 and T_n^{s,max} = H - d_n, allowing each batch to occur at any time in the problem horizon without overlapping the end of the horizon. For delivery batches, T_d^{s,min} and T_d^{s,max} are defined in the problem description. Using the data above, the following constraints are posted to make up the initial CLP program [2]. These constraints relate to precedences (6), unit utilisation (7) and unit cleaning (8).
T^s_{n'} + l_{i_{n'}} \;\le\; T^s_n \qquad \forall n \in B^T,\; n' \in B^p_n \qquad (6)

\mathrm{disjunctive}\big([T^s_n],\; [l_{i_n}]\big)_{n \in B^U_j} \qquad \forall j \qquad (7)

\mathrm{disjunctive}\big([T^s_n, T^s_{n'}],\; [l_{i_n} + C_j,\; l_{i_{n'}}]\big)_{n' \in B^U_j,\; i_{n'} \notin F_{f_n}} \qquad \forall j,\; i \in T_j,\; n \in B^i \qquad (8)
ii. Generate candidate schedule. Using finite domain search, generate a schedule for the process subject to the constraints posted in steps i and iii.
iii. Repair inventory constraint violation. Calculate inventory levels at each time point in the problem horizon given the candidate schedule from step ii. If no violations of inventory level constraints are found, the schedule is a solution. Otherwise locate two batches near the earliest inventory violation, B_n and B_{n'}, where T^s_{n'} > T^s_n in the current solution, and add B_{n'} to B^p_n. Return to step ii.
The objective function is defined as the total value of product states delivered to customers. Product states can only be delivered to customers if they are produced before the specified delivery time. In order to perform optimisation, the precedence constraints for batches that precede deliveries are replaced with reified constraints (Eqn. 9). A solution that maximises
Figure 2. Ordering search tree
Figure 3. Case study STN
the amount of product delivered can then be found by minimising the sum \sum_{d \in D} \sum_{n \in B^p_d} P_{dn}. This sum is the only objective function used during CLP search; the MILP and CLP formulations are entirely separate.

P_{dn} = \begin{cases} 0 & \Leftrightarrow\; T^s_n + L_{i_n, j_n} \le T^s_d \\ 1 & \Leftrightarrow\; T^s_n + L_{i_n, j_n} > T^s_d \end{cases} \qquad \forall d \in D,\; n \in B^p_d \qquad (9)
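As a small illustration of how the reified constraint (9) behaves, the sketch below evaluates the delivery indicators for a fixed candidate schedule. In a CLP system these are genuine reified constraints that propagate in both directions during search, whereas here they are simply computed from given start times; the batch and delivery data are invented for the example.

```python
# Hypothetical data: batch start times, batch durations and delivery due times
start = {"b1": 0, "b2": 3, "b3": 8}        # T_n^s for each batch
length = {"b1": 2, "b2": 4, "b3": 2}       # processing time of each batch
deliveries = {"d1": (6, ["b1", "b2"]),     # delivery d1 at t=6 needs b1 and b2 finished
              "d2": (9, ["b3"])}           # delivery d2 at t=9 needs b3 finished

def late_indicators(start, length, deliveries):
    """P_dn = 0 if batch n finishes before delivery d, 1 otherwise (cf. Eq. 9)."""
    p = {}
    for d, (due, preceding) in deliveries.items():
        for n in preceding:
            p[d, n] = 0 if start[n] + length[n] <= due else 1
    return p

p = late_indicators(start, length, deliveries)
objective = sum(p.values())                # the sum minimised during CLP search
print(p, objective)
```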
Standard CLP optimisation search proceeds by repeatedly finding a solution and tightening the bound on the objective function value until no further solution is found[3]. As the only variables affecting the objective function value are the Pun binary variables, these are included with the batch start times in the list of variables to be instantiated. Given a constraint on the upper bound of the objective function value, the search procedure first instantiates the Pd,~ variables to meet this constraint and then attempt to instantiate the Ts variables subject to the constraints implied by the values of the binary variables. This procedure can waste much time attempting to find schedules for combinations of values of Pdn which are infeasible. If this is the case, incomplete tree search can increase solution efficiency by ignoring parts of the search space. The most effective method found is to use depth-bounded search followed by bounded-backtrack search. This will explore the search tree completely down to the level at which all Pan variables are instantiated. Below this, only a limited number of backtracks within a branch are allowed before the branch is pruned and search continues with the next branch. As a solution is usually found in relatively few backtracks if one exists, this allows the search to ignore parts of the search tree which do not contain a solution. 5. CASE STUDY The use of the hybrid approach is now illustrated with a process scheduling example. The hybrid algorithm is compared to a standard discrete-time MILP formulation, using the gBSS
Table 1
Unit task suitability data
Unit    Task suitability
A       F_1, K_5_6, T_3
B       F_2b, K_4, T_4
C       F_3, K_8
D       F_2a, K_7
E       T_1, T_2
Table 2
Solution data for case study
Solver       Objective value    Solution time
Pure MILP    49.3               10 hrs
Hybrid       51.3               45 mins
scheduling package [4],[5] using the XPRESS MILP solver. The hybrid algorithm was implemented using the ECLiPSe package developed at IC-PARC [2], also using the XPRESS solver for the MILP stage. The process state-task network and unit data are shown in Figure 3. Production is to be scheduled for a two-year horizon using a weekly time interval; the processing time for every task is approximated as one week. No maximum inventory level constraints are required due to the small scale of production. The tasks related to each product are considered as members of the same family. A cleaning time of 2 weeks is required on a unit between tasks from different families. Many deliveries of each of the three products are possible during the problem horizon. The objective function value is the total amount of all of the product states delivered to customers as described in section 4; this is calculated as the total possible value of deliveries (87.2) minus those not made, as defined in the CLP objective function. The LP relaxed value of the objective function is 78.3. Table 2 compares the objective function value and solution time for the two methods. The solution time for the hybrid algorithm is over an order of magnitude shorter than for the single-level approach. Approximately 99% of the hybrid solution time is spent in CLP search, the majority of this time being spent generating solutions with progressively better objective function values.
6. CONCLUDING REMARKS
The results show that for a process in which the main complexities relate to cleaning and task allocation, the hybrid approach can outperform a traditional discrete-time MILP formulation both in search speed and solution quality. Complex process constraints can be easily modelled using CLP, and the ability of the solver to propagate constraint effects provides a very efficient solution method. The search algorithm used in the CLP optimisation is an important factor in the efficiency of the method - the algorithm used here avoids the generation of multiple degenerate solutions and effectively prunes branches of the search tree that are unlikely to contain a solution. For processes in which inventory level constraints are tighter, the global nature of the pure MILP method allows it to outperform this hybrid approach at present. Further improvements of both the MILP and CLP stages of the method are possible, however, including better handling of inventory level constraints.
NOMENCLATURE
MILP formulation symbols
Sets
U_i        Units that can perform task i
T_j        Tasks unit j can perform
I_s        Tasks that interact with state s
F          Set of all task families
F_f        Set of tasks which belong to family f
Variables
E_ij       Total extent of task i on unit j
N_ij       Number of times task i is performed on unit j
W_fj       Binary variable indicating whether a task in family f occurs on unit j
D_j        Duration of batches on unit j
L          Makespan estimate
Parameters
V_j^min    Minimum capacity of unit j
V_j^max    Maximum capacity of unit j
f^p_is     Fraction of extent of task i produced as state s
f^c_is     Fraction of extent of task i consumed as state s
d_s        Total demand for product state s
C^I_s      Initial inventory of state s
l_i        Duration of task i
H          Number of time points in problem horizon
C_j        Changeover time between tasks of different families on unit j

Additional CLP formulation symbols
Sets
B^T        Set of all batches to be performed
D          Set of all deliveries to be made
B^U_j      Set of batches to be performed on unit j
B^i        Set of batches of task i
B^p_n      Set of batches which must precede batch n ∈ B^T
Variables
T_n^{s,max}    Maximum value of T_n^s
T_n^{s,min}    Minimum value of T_n^s
T_n^s          Start time of batch n
i_n            Task which n is an instance of
f_n            Task family of which batch n is a member
REFERENCES
1. I. Harjunkoski and I.E. Grossmann, Comput. Chem. Eng. 26 (11) (2002) 1533-1552.
2. J. Schimpf, K. Shen and M. Wallace, www.icparc.ic.ac.uk/eclipse (1999).
3. K. Marriott and P.J. Stuckey, Programming with Constraints (1999) 114-116.
4. L.G. Papageorgiou, N. Shah and C.C. Pantelides, Advances in Process Control III, IChemE, Rugby (1992) 161-170.
5. N. Shah, K. Kuriyan, L. Liberis, C.C. Pantelides, L.G. Papageorgiou and P. Riminucci, Comput. Chem. Eng., 19S (1995) S765-S772.
Process Systems Engineering 2003, B. Chen and A.W. Westerberg (editors), © 2003 Published by Elsevier Science B.V.
Risk conscious scheduling of batch processes G. Sand and S. Engell
Process Control Laboratory, Dortmund University, 44221 Dortmund, Germany Abstract We consider real-time scheduling problems of flexible batch processes under the special consideration of uncertainties. Any decision has to be made subject to a certain risk since it affects the future evolution of the process, which is not precisely predictable. This difficulty can be faced by a moving horizon approach with frequent re-optimisations. We propose the use of a model framework from stochastic programming to reflect the uncertainty and the potential of recourses realistically. The framework is applied to a real-world process from the polymer industries, a decomposition algorithm is sketched and numerical results are given.
Keywords real-time scheduling, flexible batch processes, stochastic integer programming 1. INTRODUCTION During the operation of flexible batch plants, a large number of discrete decisions has to be made in real time and under significant uncertainties. Any prediction about the evolution of the demands, the availability of processing units and the performance of the processes is necessarily based on incomplete data. Resource assignment decisions must be made at a given point of time despite the fact that their future effects can not be foreseen completely. The multi-product plant is the most popular flexible plant concept in the chemical industries, especially in the growing market for specialty chemicals, where small volumes of highvalued products in several complicated synthesis steps are produced. Its flexibility enables the production of a variety of products on a single plant and for rapid and cost efficient adaptations of the product supply to the customer demands. To operate such a flexible plant in a dynamically changing environment, a large number of management decisions and control activities is needed, so efficient process management and control systems have a strong impact on the profitability of the plant. For the problems on the lower control levels, standardized solutions exist, and new plants are often highly automated on these levels. Nevertheless, the management of process operations, in particular planning and scheduling activities, can hardly be standardized, since these higher-level problems are dominated by complex interactions which are highly plant specific. Therefore, computer-aided planning and scheduling is still a topic of extensive academic research and so far seldom realized in industry. A large number of publications demonstrate that the theory of mathematical programming (MI(N)LP) provides promising methods to model scheduling problems adequately and to solve them efficiently (for an overview see Ref. 1). An appropriate strategy to schedule highly coupled processes online is based on a moving horizon approach, similar to model predictive control (MPC): The problem is solved for a certain horizon, and the first decisions are applied. Due to modelling inaccuracies or disturbances of the process, the computation must be iterated after some period taking new infor-
mation into account. While this is a "closed loop" strategy with decisions in the feedforward and observations in the feedback branch, the models used are often based on an "open loop" view, which neglects the optimisation potential of re-optimisations subject to feedback information. Undoubtedly, the quality of scheduling decisions can be increased by modelling the uncertain future evolution along with corresponding reactions more realistically.
2. RISK CONSCIOUS MODELLING
2.1. Motivation
In recent years, several publications appeared which address the issue of uncertainty (e.g. Refs. 2-6). However, so far the following essential aspects of uncertainty conscious scheduling models received only little attention:
1. Plant managers who face uncertainty will try to maximize the mean profit, but they will also try to avoid the rare occurrence of very unfavourable situations, e.g. heavy losses. Naturally they aim at a compromise between expected profit and accepted risk.
2. In any iteration, only the first few decisions in the horizon are really relevant. Due to temporal couplings, the remainder of the decisions has to be anticipated, but they are never applied since the solution will be modified in the next iteration.
The second aspect cannot be reflected by open loop models since these models do not differentiate between "here and now" and "recourse" decisions. Compensations to possible disturbances can only be considered if the model reflects different possible scenarios of the process evolution with corresponding degrees of freedom to react to certain realisations. Concerning the first aspect, the maximum expected profit for a number of possible process evolutions can in general not be determined by calculating the expected evolution and solving an optimisation problem for the mean values. Using a scenario based model usually leads to higher expected profit and provides sensitivity information to control the risk.
2.2. Stochastic integer programming
The mentioned aspects exactly fit into the modelling framework of two-stage stochastic integer programming. For linear models and a scenario-based representation of uncertainties, a deterministic equivalent of a two-stage stochastic integer program (2-SSIP) can be written as a large mixed-integer linear program (MILP):
\max_{x,\, y_1, \ldots, y_\Omega} \; c^T x + \sum_{\omega=1}^{\Omega} \pi_\omega\, q_\omega^T y_\omega \qquad (1)
\text{s.t.} \quad T_\omega x + W_\omega y_\omega = h_\omega, \quad x \in X, \; y_\omega \in Y, \quad \omega = 1, \ldots, \Omega.
In this framework the uncertain evolution is represented by a finite number of scenarios ω with corresponding probabilities π_ω. The variables are assigned to 1st and 2nd stage vectors x and y_ω, which belong to polyhedral sets X and Y with integer requirements. The x-vector represents "here and now" decisions which are applied regardless of the future evolution. It therefore is identical for all scenarios. In contrast, the y_ω-vectors model scenario-dependent recourses under the assumption that the respective scenario materializes. The uncertainties may affect any parameter of the model, such that Ω different matrices and right hand sides T_ω, W_ω and h_ω may arise. The classical objective is to maximize the first stage profit plus the expected second stage profit, computed as a weighted sum of x and y_ω subject to the weighting vectors c and q_ω.
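The deterministic equivalent (1) is an ordinary, if large, MILP, and can be written down directly once the scenarios are enumerated. The toy PuLP sketch below illustrates the structure - one copy of the second-stage variables and constraints per scenario, coupled through the shared first-stage variable - using invented two-scenario data; it is not the EPS model of Section 4, and the risk term of Eq. (2) below would simply add one binary indicator per scenario in the same pattern.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpInteger, PULP_CBC_CMD

scenarios = {"low": 0.4, "high": 0.6}      # scenario -> probability (invented)
demand = {"low": 3, "high": 7}             # uncertain right-hand side h_omega
c, q = 2.0, 5.0                            # first- and second-stage profit coefficients
capacity = 8

prob = LpProblem("two_stage_sketch", LpMaximize)
x = LpVariable("x", lowBound=0, upBound=capacity, cat=LpInteger)            # here-and-now
y = {w: LpVariable(f"y_{w}", lowBound=0, cat=LpInteger) for w in scenarios}  # recourse

# maximise first-stage profit plus expected second-stage profit
prob += c * x + lpSum(scenarios[w] * q * y[w] for w in scenarios)
for w in scenarios:
    prob += y[w] <= demand[w]   # simplified scenario constraint (plays the role of T_w x + W_w y_w = h_w)
    prob += y[w] <= x           # recourse limited by the first-stage decision

prob.solve(PULP_CBC_CMD(msg=False))
print("x =", x.value(), {w: y[w].value() for w in scenarios})
```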
590 2.3. R i s k A v e r s i o n
The expected value criterion does not utilize the full information about the shape of the probability distribution of the objective function over the scenarios. This may lead to results with a high expected profit while a few scenarios give very low values of the objective function. To control the risk that the profit falls below a certain threshold e, Eq. (1) can be extended by a excess probability: f2
\max_{x,\, y_1, \ldots, y_\Omega,\, u_1, \ldots, u_\Omega} \; c^T x + \sum_{\omega=1}^{\Omega} \pi_\omega\, q_\omega^T y_\omega \;-\; \delta \sum_{\omega=1}^{\Omega} \pi_\omega u_\omega \qquad (2)
\text{s.t.} \quad T_\omega x + W_\omega y_\omega = h_\omega, \quad c^T x + q_\omega^T y_\omega \ge \varepsilon - M u_\omega, \quad x \in X, \; y_\omega \in Y, \; u_\omega \in \{0,1\}, \quad \omega = 1, \ldots, \Omega.
The idea is to compute the probability that the profit falls below a threshold e by using binary indicator variables uo in a big-M inequality, and to reduce the expected value proportionally. The parameter 5 weights the risk relative to the mean value. This extension fits into the 2SSIP framework and increases the model size only marginally. From the syntactical point of view, any deterministic MILP-model can be regarded as a single-scenario base-model, which can be extended to a 2-SSIP according to Eq. (1) or (2) under two conditions: Firstly, the uncertainty must only affect the parameters and secondly, the decisions must not affect the probabilities. (E.g. models which represent time by an index cannot reflect temporal disturbances.) If these conditions are fulfilled, the scenarios are able to represent any probability distribution, e.g. tree-structured evolution estimations and empirical distributions. It should be noted that even coarsely approximated uncertainty representations have an advantage over mean value representations, since it is mathematically proven that 2SSIPs lead to better (lSt-stage -) scheduling decisions than obtained for the corresponding mean value problems (Ref. 7). 3. A B E N C H M A R K
PROBLEM
The described modelling methodology was applied to a flexible batch process from the polymer industries: The multi-product batch plant shown in Fig. 1 is used to produce two types (A/B) of expandable polystyrene (EPS) in 5 grain size fractions each. The preparation stage and the polymerisation stage are driven in batch mode whereas the finishing is done continuously. A polymerisation batch is produced according to a certain recipe (out of ten possible ones), which determines the EPS-type and the grain size distribution. The resulting mixture of ,
F
r
li
--a
, I,
IL~J~--44! I~
E
i
I
~.~
..................
,,
c% c% c%
LJ LJL#O
,- ................ 1
~J Preporotion.~
I"
' I
,, ~A2 ~
q
A5
B1
I ~>B3
' ng
Fig. 1. Flowsheet EPS-process
I
B4 B5
591 grain sizes is buffered in one out of two mixing vessels and then continuously fed into the two separation stages, which must be shut down temporarily if a minimal flowrate cannot be maintained. Scheduling decisions to be made are: 1. choice and 2. timing of the recipes of the polymerisations, 3. outflows of the mixing vessels, and 4. start-up- and shut-down-times of the finishing lines. They are subject to resource constraints and non-linear equality constraints describing the mixing process. The objective is to maximize the profit calculated from revenues for satisfying customer demands in time and costs for polymerisations, start-ups/shut-downs of the finishing lines, inventory, and penalties for demand shortages. The uncertainties can be classified into endogenous and exogenous uncertainties, which are or are not linked to process events, respectively. Endogenous disturbances comprise polymerisation times and yields; disturbances in the plant capacity and in the demand are regarded to be exogenous in nature. 4. MASTER SCHEDULING MODELS 4.1. The model family The scheduling problem is decomposed into a detailed scheduling and a master scheduling problem (DS/MS), which are implemented in a cascaded feedback structure (see Ref. 8). A deterministic base model for the DS problem was developed by Schulz (Ref. 9), so in the following we will focus on the master level. We developed a family of MS base models which comprises several model instances for the process and for the profit. It uses a time representation which is based on three fundamental considerations: 1. The problem is formulated on a finite moving horizon of reasonable length. By shifting the horizon, some of the former recourse decisions Yco become here and now decisions x. This auto-recourse, i.e. the property, that the same model is used throughout, gives rise to a uniform model structure over the entire horizon. 2. According to its horizon, the MS model reflects uncertainties with long-term effects, i.e. uncertain demands and capacities, which are both exogenous in nature. Since the probability of the occurrence of an exogenous event within a certain period of time depends on the period length, only the consideration of fixed time periods allows for the definition of uncertainty scenarios co with fixed probabilities no0. 3. An appropriate time representation is a multi-period grid with fixed time intervals, and the period lengths have to be chosen such that the probability of a disturbance is significant. Since the need for re-optimisations is determined by the same criterion, the iteration period is synchronous with the time grid. A reasonable choice is a horizon of 14 periods of 2 days each. The first 2 intervals are defined as the 1st stage, they serve as a guideline for the DS level. 4.2. Illustrative instances To give an impression of the models we present the key ideas by some illustrative constraints and refer to Refs. 8, 10 for more details. Scheduling decisions to be made on the master level are 1. the rough timing of start ups/shut downs of the finishing lines, 2. the rough timing of polymerisations and 3. the assignment of recipes. Given I fixed time periods i, the degrees of freedom are represented by the variables Zip ~ {0,1} and N% ~ IN, which represent the operation mode of the finishing
line p in i and the number of polymerisation starts according to recipe rp ~ {1... Rp } in i,
re-
592 spectively. The relevant constraints are the capacity of the polymerisation stage and of the finishing lines. It turned out that the interaction between the periods is of major importance. Considering the constraint for minimal throughput of the finishing lines, the formulation for decoupled periods would read as follows (C- mixer levels, F - feed rates, m i n - minimum, m a x - maximum, 0 - initial state):
Rp Z Nirp >-vp c m i n +zipF; m rp=l
(3)
Vi, p.
The technique to model the couplings is to constrain sums of periods (the non-linearity can exactly be linearized):
Zi" ZRp Ni'rp >-zi"pg(i"+l)pCpin - [z(i-1)pzipcpIi Cf p~ i=l i'=irp=l
max else} + ~i'=' iZ"i , p F p m .
Vi, i " , p l i < i " .
(4)
The use of Eq. (3) instead of Eq. (4) leads to more shut down-procedures if a finishing line is driven at its lower capacity limit and to significantly higher costs. An essential target of profit oriented scheduling is to maximize the sales subject to demand and supply constraints. W i t h Mif p E IR+ denoting the sales of productfp in i, Bif p E IR+ the demand and 9fprp c IR+ the yield offp according to a certain recipe rp, Eq. (5) defines the demand and the supply constraints, respectively: i
i
i
i
Z Mi'fp <-i~l Bi'fp A Z gi'fp <-E E PfprpNi'rp Vi, fp, p. i'=l "= i'=l i'=l rp
(5)
A disadvantage of this formulation is the missing distinction between timely and late sales. To control the lateness, an index d represents delay intervals; the constraints then read as follows: i
j
i
ZMd(i+d_l)fp <_Bifp A Z~mclffp <_ZZpfprpNjrp d j=ld=l j=l rp
vi, fp,p.
(6)
5. SOLUTION A L G O R I T H M The 2-SSIP according to Eq. (1) exhibits the characteristic block-angular matrix structure shown in Fig. 2, left. It may be transformed equivalently by introducing additional vectors Xo, for any scenario co as well as equality constraints x 1 . . . . = xt~ represented by H. The 2-SSIP is solved by a decomposition algorithm (Ref. 11) which exploits the characteristic matrix structure. It is based on a Lagrangian relaxation of the H-constraints, which are reinforced by a branch and bound-algorithm. At any node of the tree s scenario-problems are solved by CPLEX, a feasible solution is guessed by heuristics and a bound is generated by solving a Lagrange dual problem with NOA. Since the various base models exhibit a similar numerical performance extensive numerical studies were performed on the stochastic model instance characterised by Eq. (4) and (5).
593
9
9.
r
o
".
o
..
H Fig. 2" Matrix structure The uncertainty was represented by if2 = 1,000 randomly generated rhs-scenarios for the demand and the capacity, such that a 2-SSIP with 140,022 integer variables, 448,000 continuous variables and 736,009 constraints results. It was solved on a SUN Ultra Enterprise 450 with a 300 MHz processor, and the CPU-time was limited to 4 hours, which is "short" compared to the periods. The algorithm generated solutions with optimality gaps of 5.9 %, and with an additional problem specific preprocessing they were reduced to 4.2 %. The inclusions of the excess probability leads to reductions of risk by 20 % while the expectation value is only changed by 2 % and the numerical performance remains essentially unaffected (Ref. 10). 6. CONCLUSIONS The proposed scheduling methodology allows for a realistic representation of future reoptimisations and evolutions with various probabilities within a closed-loop structure. The scenario representation of uncertainty ensures the practical applicability since open-loop models can be extended in principle and empirical probability distributions can be utilized. Furthermore the efficiency of the approach could be improved in two ways: Firstly, the two-stage program may be extended to a multi-stage program, which takes multiple reoptimisations into account. And secondly, the algorithm implementation may be parallelised on both the stochastic decomposition and the scenario level. REFERENCES
[1]
G.V. Reklaitis, A.K. Sunol, D.W.T. Rippin and 13. Hortacsu (eds.), Batch Processing Systems Engineering 143, Springer, Berlin, 1996, 660-705. [2] S. Pierucci (ed.), ESCAPE-10, Elsevier, Amsterdam, 2000, 79-84. [3] S.J. Honkomp, L. Mockus and G.V. Reklaitis,. Comp. Chem. Engg. 21 (1997) 1055. [4] M.G. Ierapetritou, E.N. Pistikopoulos and C.A. Floudas,. Comp. Chem. Engg. 20 (1995) 1499. [5] S.B. Petkov and C.D. Maranas, Ind. Eng. Chem. Res. 36 (1997) 4864. [6] E. Sanmarti, A. Espunia and L. Puigjaner, Comp. Chem. Engg. 21 (1997) 1157. [7] J.R. Birge and F. Louveaux, Introduction to Stochastic Programming, Springer, New York, 1997. [8] J.v. Schijndel and J. Grievink (eds.), ESCAPE-12, Elsevier, Amsterdam, 2002, 775-780. [9] C. Schulz, Modelling and Optimisation of a Multiproduct Batch Plant, Dr.-Ing. Dissertation, Dortmund University, Shaker, Aachen, 2002 (in German). [10] S. Engell, A. M/arkert, G. Sand, C. Schultz and R. Schulz, Online scheduling of multiproduct batch plants under uncertainty. In: M. GrOtschel, S.O. Krumke and J. Rambau (eds.), Online Optimization of Large Scale Systems, Springer, Berlin, 2001,649-676. [11] C.C. Caroe and R. Schultz, Oper. Res. Lett. 24 (1999) 37.
Process Systems Engineering 2003 B. Chen and A.W. Westerberg (editors) 9 2003 Published by Elsevier Science B.V.
594
DSR Algorithm for Construction of Attainable Region Structure Seodigeng T.*, Hausberger B.*, Hildebrandt D. *, Glasser D. *, Kauchali S. * 9School of Process and Materials Engineering, University of the Witwatersrand Private Bag 3, Wits, 2050, Johannesburg, South Africa
Abstract
A new Differential Side-stream Reactor (DSR) Algorithm is proposed for Attainable Region (AR) construction. For fundamental processes of reaction and mixing, the candidate AR can be constructed using constant intermediate mixing policy DSR, which fully describes the fundamental processes. Feinberg (2000) derived the mixing policy that governs the existence of a DSR on the AR boundary; he called that DSR a critical DSR. Here we show that the mixing policy can, in the limit, be approximated by constant values.
Keywords
Attainable Regions, DSR Algorithm
1. INTRODUCTION The construction of the Attainable Region (AR) for reactor network synthesis is important as a main subtask for chemical process synthesis. For a given feed and underlying reaction kinetics, AR analysis can be used to find optimal reactor types, flow configuration and key design parameters which will optimise the objective function. The method considers permitted fundamental processes and/or combinations of fundamental processes to find the set of all possible outcomes, termed the Attainable Region (AR). The outer limit of the achievable outcome is represented by the boundary of the AR and again, it is on the boundary where a given objective can be optimised. Any point on the boundary can be interpreted as a network of processing units that can be used to attain it. The boundary of the AR is therefore of special interest as to what types of operating unit structures can exist on it. Recently, researchers focused on the automated construction of AR boundary. The Isostate Algorithm (Rooney et al, 2000) proposed generating ARs from 2D planes in orthogonal subspaces which can be fused into higher dimensional ARs. The planes are constructed from the Plug Flow Reactor (PFR) trajectories and loci of Continuous Stirred Tank Reactor (CSTR) running from all process feeds. The necessary conditions for the AR boundary are checked then rectified wherever broken. The planes are combined into the subspace and convexified. This procedure is done iteratively and does not guarantee
595 convergence. Kauchali et al (2001) formulated a Linear Programming model using a large network of completely connected CSTRs. They discretized the composition space into an arbitrarily large grid of points from where the necessary conditions can be checked. The proposed DSR algorithm is a contribution to the search of automated techniques in the generation of Attainable Region boundaries. The algorithm offers quicker results and is numerically less intense compared to its predecessors. The technique can be easily extended to higher dimensions and can also include other fundamental processes like heat and mass transfer.
2. BACKGROUND A DSR is a plug flow reactor (PFR) with differential mixing along its length. For a system that considers only reaction and mixing as fundamental processes, the general process vector may be written as:
p(c,u) = r(C)'~t'arV(C,Cm)
[1]
For the isothermal case, temperature is set constant and in the case of the adiabatic system it is determined from the energy balance. For such systems, the control variables are the mixing point, Cmand the mixing policy a. Hence U=(Cm,a) [21 The mixing point Cm, should be attainable, thus it is a point that has already been achieved or will be achieved later by a fundamental process. For a DSR, reaction and mixing are allowed to occur simultaneously and the resultant process vector is described by:
dc= r(c)+a(cXcm -c) dr the initial conditions are c = co and x = 0
[3]
3. THE DSR A L G O R I T H M TECHNIQUE The constant mixing policy DSR algorithm uses positive numeric values of a for mixing. The methodology of the algorithm can be broken down into four stages as listed below; 1. The initialisation stage that generates starting points. i. Generate PFR trajectories and CSTR loci from all system feed points. ii. Convexify the resulting structure to eliminate all points that lie interior. iii. From the PFR trajectories, select a grid of points and use them as feed points to generate CSTR loci. iv. Convexify the resulting structure to eliminate all points that lie interior. v. Select a grid of CSTR points and use them as PFR feed points to generate PFR trajectories. vi. Select points that are to be used as mixing points.
596 2. The Growth stage. i. Select a grid of extreme points on the candidate region including all mixing points. ii. From each of the extreme points start a number of constant alpha DSRs mixing at the selected points. iii. Convexify the structure to eliminate all interior points. 3. Refining stage: refines the DSR points on the boundary of the region. i. From the DSR points that are on the boundary of the candidate region grow refined constant alpha DSR ii. Convexify the structure. iii. Iterate (i) and (ii) until the stopping criteria (Currently the number of iterations is used as the stopping criteria) 4. The Polish stage. i. Generate PFRs from extreme points that are either DSR or CSTR points, resulting in a complete candidate AR. 4. M O T I V A T I N G E X A M P L E A well studied Van de Vusse kinetics example is presented as a study case. The isothermal second order mass action kinetic scheme with corresponding rate constants are as given below:
A 2 A
Ik,-111 Ilk2-051 _
k4 _
_
_
_.,,
D
The example was solved by implementing the algorithm in MATLAB and the results are presented.
597 5. RESULTS
Figure 1:
3D AB Attainable Region for the Van de Vusse Kinetics (Runtime = 2 minutes)
Figure 1 shows how the DSR algorithm approximates the candidate AR which matches the results found via traditional techniques outlined by McGregor (1998) and the results from Rooney (2000). Furthermore the critical DRS (solid) can be seen delineated by an envelope of constant alpha DSRs, shown in figure 2.
598 Alpha vs ComponentA 120
/
./.;Y
100
x /z
,7 =. 60
Y J
J
f 0 0.00
0.10
0.20
0.30
0.40
0.50
0.60
ComponentA L[ :t
Figure2"
1 Iteration = 20 seconds -
9- 5 rdn - tK - 10 min ~ A l p h a
Policy ~ A l p h a
Policy [
Concentration of Component A versus Alpha (a)
It is interesting to observe that in figure 2, the approximate DSR points form a smooth trajectory with no scattered points. By allowing more runtime the optimal policy can be approximated more accurately by these trajectories.
6. DISCUSSION For the presented Van de Vusse example, the algorithm generates the approximate candidate region which is 95% volume of the region generated from solving the optimal DSR in approximately 20 seconds on a Pentium IV 2.5GHz PC. This result is obtained after the growth stage. After executing the refining stage once, the volume increases to 97%. The first iteration of the refining stage results in less than 2% volume growth, with percentage volume decreasing further for subsequent iterations. To accurately delineate the optimal policy, more iterations are needed and termination criteria for this procedure is presently being formulated. The approximate envelopes of constant a DSRs intercept the mixing policy DSR at the critical CSTR, a result which can be used as a quick method to compute the critical CSTR.
599 7. CONCLUDING REMARKS The proposed algorithm can quickly approximate the candidate AR boundary and show reactor structures that occur on it. Regions where DSRs occur can be quickly identified with approximate mixing policies. The fact that the algorithm is not dimension limited enables easy study of higher dimension mixing policies. The current study of this research focuses on studying more 3D examples and extending to 4D examples for systems of reaction and mixing. REFERNCES Feinberg, M., (2000), Optimal reactor design from a geometric viewpoint II. Critical sidestream reactors., Chemical Engineering Science, 55, 2455 - 3565. Feinberg, M. & Hildebrandt, D., 1997, Optimal Reactor Design from a Geometric Viewpoint: 1 Universal Properties of the Attainable Region, Chem. Eng. Sci., 52, (10), pp 1637-1665. Rooney, W. C., Hausberger, B. P., Biegler, L. T., Glasser, D., (2000), Convex attainable region projections for reactor network synthesis, Computers & chemical engineering. 24, no. 2-7, 225 - 229 Kauchali, S.,Rooney, W. C., Biegler, L. T., Glasser, D., Hildebrandt, D., (2002), Linear programming formulations for attainable region analysis, Chemical engineering science. 57, no. 11, 2015 McGregor, C. (1998). Choosing the optimal system structure using the attainable region approach for systems involving reaction and separation, Ph.D thesis, Republic of South Africa: University of the Witwatersrand.
Process Systems Engineering 2003, B. Chen and A.W. Westerberg (editors), © 2003 Published by Elsevier Science B.V.
Green process systems engineering: challenges and perspectives
Lei SHI a and Youqi YANG b,*
a Department of Environmental Sciences and Engineering, Tsinghua University, Beijing, 100084, China
b China National Chemical Information Center, Beijing, 100029, China
Abstract:
Major challenges on process industries from environment protection and
sustainable development contribute to the emergence and development of green process systems engineering (PSE). This paper attempts to survey the efforts of Green PSE in responding to these challenges over the last two decades, including what the term means, what has been done, and what is likely to be done.
Keywords: Green; process systems engineering; process analysis; process optimization; process integration; education
1. INTRODUCTION Process industries in the 21 st century are facing serious challenges brought by sustainable development, particularly by environment protection. These challenges are not only to developed countries, even more serious to China--the largest developing country. They could be summarized as follows [ 1]: Sustainable development requires the aims of enterprises shifting from maximizing economic benefit to multi-objectives: natural resources saving, products and processes environment benign and economic beneficial. This is especially true to China. For instance, China produces SO2 1.4 million t/a and CO2 0.8 billion t/a, which is ranking first and second respectively in the world. The energy consumption of unit value of output in Chinese process industries is 4-5 times higher than in Germany or Japan. Besides, China has a shortage of water; water resource per capita of China is only one fourth of the average of whole world. However, Chinese refineries consume water per tone of crude oil average about 2 m3/t which is 4 times higher than international standard (0.5 m3/t). International competition forces enterprises paying more attention to make their products and processes greener. Increasingly restrictive laws and regulations on health, safety and environment (HSE) make process companies facing risks for their unsustainable business
" Corresponding author, [email protected]
601 practices. For instance "green barrier" for textile set up by some governments caused 14 million USD loss in China's textile export business in 1999, because the dyestuff is not green. Reducing cost by decreasing mass intensity and/or energy intensity can promote process companies' competitive advantages. To be low cost process manufacturer is the aim of commodity chemicals producer in 21 Century. The more adept a process company is at adopting technologies that simultaneously reduce mass and energy intensity, the more it will realize higher profits. Finally, every company is paying more attention about the image in local and business community. More and more process companies have the plans to meet the ISO 14000 and HSE qualifications. The challenges in process industries lead to significant changes in academic fields. At the turn of 1990s, green chemistry [2], green chemical engineering [3,4], green engineering [5], green design and green manufacturing [6] emerged in a flush way. All these newly emerged disciplines have made many efforts to incorporate environmental issues and sustainability into their traditional fields. Meanwhile, the whole engineering scope faces a challenge to be transformed and conformed: the technological system should and can be matched and harmonized with their natural counterparts if they are designed and operated elaborately and rationally [7]. Such a social, industrial and academic background contributes to the emergence and development of green process systems engineering (PSE). 2. THE SCOPE AND DEFINITION OF GREEN PSE PSE deals with the mass-, energy- and information flows of process systems and studies optimal business decision-making about the resource allocation, planning, management, design and operation control of process systems. As a discipline, PSE has passed about half century life. Sets of PSE methodology, approach and supporting tool came into being, such as simulation and analysis, process synthesis, mathematic programming, AI/expert systems, advanced process control, etc. as summarized by Grossmann and Westerberg [8]. Unlike traditional PSE focusing on man-made systems and treating environment requirements as constraints, Green PSE stresses the interaction between process systems and natural system and aims to establish harmonious social-economical-technological combined systems by incorporating man-made systems into nature ecosystems. For process businesses, Green PSE is concerned with the improvement of business decision-making process for the creation and operation of process systems with minimized environment impact and maximized economical benefits in order to reach sustainable development purpose. Since the scope of PSE is broadened recently, we need to investigate the impact of process system on environment from the viewpoint of supply chain and from the whole life cycle process. Therefore, the impacts from a supply chain of process system could be included into four types as follows: 1) Energy, fresh water and other nature resources consumption; 2) The impacts of products in usage and discarding; 3) Pollution from the manufacture process; and 4) Environment pollution from other stages of the supply chain
602 (such as transportation, warehousing, distribution, etc). Green PSE not only inherits those traditional PSE methodologies, approaches and supporting tools, but also absorbs those from neighboring related disciplines such as green chemistry and engineering, industrial ecology, environment science and engineering, ecological engineering, as shown in Fig.1. Since Green PSE is a discipline crossing the borders of several disciplines, there are different contributions to the development of Green PSE from different groups. The recent status is summarized as Table 1 shown. In brief, Green PSE is in its initial stage as a new discipline branch. If traditional process systems engineering treats human-made system as an object separated from nature environment and the environment impacts are treated just as boundary constraints, then Green PSE wants to put the process systems into its environment and particularly stresses the interaction between human-made system and nature environment trying to find the laws and patterns of harmonious development of both. Table 1 Some Contributors to Green PSE Term Green Engineering Programs
BackgroundL EPA sponsored pr0iect in USA ............
PPIC(Pollution Prevention Inform. Clearing House) WBCSD (World Business Council for Sustainable Development) Green and Sustainable Chemistry Network
EPA sponsored Inform. Center under P2 Dept. Intern. Org. supported by 160 companies from 30 countries Semi-govern. Org. of Japanese chemical industry located at Chem. Innovation Inst.
http://www.gscn.net
SUSTECH Program
Sponsored by European Chem. Industry Council Non-beneficial Org. to promote coordination between gov., enterprises and acad. inst. Sponsored by Amer. Inst. of Chem. Eng. supported by 16 intern. Companies, inst. & IT companies A cross-discipline Center sponsored in Carnegie-Mellon Univ. in 1986
http://www.cefic.org/ sustech
Located in Univ. of North Carolina consisted of 5 university and 3 national institutes International academic Org. at Yale Univ.
http ://www.nsfstc.unc .edu
Green Chemistry Inst. Of Japan
Center for Waste Technologies CWRT
Reduction
Green Design Initiative NSF Science and Tech. Center for Envir. Responsible Solvents & Processes ISIE (Intern. Society for Industrial Ecology) Institution of Industrial Ecology in China
Academic O r g . under Economy Society of China
Ecological
Information Source http://www.epa.gov/o pptintr http://www.epa.gov/o pptintr http://wbcsd.ch
http://www.aiche.org/ CWRT http://www.ce.cmu.ed u/Green Design/
http://www.Yale.edu/i s4ie
603
GreenCh
~
~ -
and
Industrial Ecology
Engineering
Fig. 1 The relationship between green PSE and other disciplines 3. SIMULATION AND ANALYSIS OF GREEN PSE
3.1 Simulation and analysis of micro-systems At the micro- and nano- system level, this is mainly the area of Green Chemistry. Among 12 contents Green Chemistry, two of them concerning Green PSE as follows: (A). Product design. To design and produce environmentally benign products is important for many large petrochemical companies. For example, DuPont Company asked each business department to check and evaluate its products according to economic benefit-environment impact coordinate axis [9]. The products of low benefit but big environmental impact should be replaced or prevented. In order to design environmentally benign products, molecular simulation tools are needed. These tools enable engineer to structure "ideal molecular" and to test this molecular structure's properties (such as toxicity, bio-activity) on computer platform. Now, there are several commercialized simulation tools available, such as SYBYL of Tripos (QSAR research for small molecular and proteins); Cerrius 2 of Molecular Simulation (research for petrochemicals, surfactants, medicine, catalysts, etc.); Insight II of Biosym (design of big bio-molecular). (B). Synthesis of reaction path. This is a synthesis problem starting from an a priori target product and initial raw materials and identifying the possible reaction pathways to meet "green requirements". The methodologies of reaction path synthesis may be divided into two categories: based on providing information and based on logical reasoning. The example of former one is CPAS (Cleaner Process Advisory System) project sponsored by AIChE, National Center of Manufacturing Science (NCMS) and CenCITT [10,11]; the latter's examples are MEIM (Methodology for Environmental Impact Minimization) application to screening the reaction paths [12,13,14,15] and "Inverse synthesis" starting from specified product to find the possible intermediates to produce this product until to find the reasonable raw material. The representatives of inverse synthesis are Retrosynthesis from MIT's McRae, Syngen (synthesis generator) of Hendrickson at Brandeis University and Environment Acceptable Reaction (EAR) of E1-Harwagi at Auburn University [ 10].
604
3.2 Simulation and analysis of meso-systems
This is a more traditional and relatively mature area. The works in this area could be summarized into the following three directions: pinch analysis [16,17,18,19], thermodynamic analysis [20], and flowsheeting. Some process simulators with environmental impact calculation are listed in Table 2. The most mature method so far seems to be the WAR (WAste Reduction) method developed at NRMRL under EPA of USA.

Table 2
Some process simulators with environmental impact calculation functions
Software | Developer | Brief description
ESP (Environ. Simulation Program) | OLI Systems Inc. | Properties of more than 2000 chemicals embedded, enabling simulation of waste treatment processes
ChemCAD III | Chemstations Inc. | Licensed with EPA to implement the WAR algorithm, incorporated into its flowsheeting simulator
EnviroPro Designer | Intelligen Inc. | About 100 unit operation models available, including many waste treatment units
BatchDesign-Kit | Dept. of Chem. Eng., MIT | Creates process flowsheets according to the ZAP (zero avoidable pollution) concept; a batch process simulator
ECSS-CHSTAR (Engineering Chemistry Simulation System) | Qingdao Institute of Chem. Technology, PR China | A flowsheeting simulator with an added environmental impact calculation function, in a project supported by the China National Science Foundation
EnviroCAD | New Jersey Inst. of Technology | A tool incorporating simulation, assessment and expert intelligence for process screening
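The flowsheet-level indices computed by tools such as the WAR algorithm ultimately amount to weighting the component mass flows leaving the process by per-category impact factors. The sketch below shows that calculation in Python with made-up factors; it illustrates the idea only and does not reproduce EPA's WAR implementation or its impact database.

```python
# Illustrative potential-environmental-impact (PEI) style calculation.
# Impact factors (per kg) are invented numbers, not WAR database values.
impact_factors = {           # component -> {impact category: specific impact}
    "toluene": {"HTPI": 0.6, "PCOP": 1.2},
    "SO2":     {"HTPI": 1.1, "AP": 1.0},
    "CO2":     {"GWP": 1.0},
}
waste_flows = {"toluene": 120.0, "SO2": 40.0, "CO2": 900.0}   # kg/h leaving as waste

def pei_rate(flows, factors):
    """Sum of (mass flow x impact factor) over components and impact categories."""
    total, per_category = 0.0, {}
    for comp, flow in flows.items():
        for cat, phi in factors.get(comp, {}).items():
            per_category[cat] = per_category.get(cat, 0.0) + flow * phi
            total += flow * phi
    return total, per_category

total, by_cat = pei_rate(waste_flows, impact_factors)
print(f"total PEI rate: {total:.1f} impact/h", by_cat)
```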
3.3 Simulation and analysis of macro-systems
Learning from the laws of metabolism and mass recycling in nature, ecological industry tries to organize and allocate resources, industrial production, consumption and services systematically, so as to establish a new pattern of industry that meets the requirements of sustainable development. Following this idea, many Eco-Industrial Parks (EIPs) have been established in Denmark, the USA, Germany, Canada, China and elsewhere, and more EIPs are expected. So far there is no commonly accepted methodology to design and/or evaluate an EIP [21,22]. Thermodynamic analysis is probably a relatively accessible way to approach this task. Some pioneering work has been done by Jorgensen [23], who suggested a unified way to calculate the exergy of living biological systems and of inorganic/detritus (dead organic) matter, and by Bakshi [24], who suggested using emergy as the basic measure.
4. ASSESSMENT AND OPTIMIZATION OF GREEN PROCESS SYSTEMS
4.1 Assessment of environmental performance
To screen or improve design alternatives at the process level, environmental performance evaluation tools should deal with two distinct issues: 1) how to assess potential environmental impacts at a variety of process stages; and 2) how to reconcile environmental impacts with other decision criteria [5]. Many researchers have worked on the conceptual framework,
methods and implementations, and have developed a great number of environmental performance indicators in various formats: absolute, normalized, relative, indexed, aggregated or weighted [25,26,27,28,29]. In general, these indicators can be categorized into three types: productive efficiency indicators, monetary aggregate indicators and environmental impact indicators. Productive efficiency indicators depend on the amounts of material and/or energy flows obtained from process balances. Monetary aggregate indicators can be derived from data at the material and energy flow level if flow-induced costs and their allocation are available. Environmental impact indicators incorporate, in an aggregated way, the effects of material and energy flows on climate, the biosphere or human beings [30]. Each category has its advantages and drawbacks (Table 3). However, developing widely accepted indicators remains a challenging task.
4.2 Optimization
Two types of optimization models should be stressed in green PSE: rigorous models and LCA-based (or related) models. Rigorous models are usually incorporated into process simulation packages and are very useful for dealing with materials or species that are present in small amounts but have significant environmental impacts. LCA-based (or related) models have attracted increasing attention for process selection, design and optimization, because LCA enables the quantification of environmental interventions and the evaluation of improvement options throughout the life cycle of processes [31]. Coupled with multiobjective optimization, LCA can furthermore provide a robust framework and a potentially powerful decision-making tool for identifying more sustainable solutions in the process industries. Progress in multiobjective optimization approaches, especially evolutionary algorithms, will hasten the adoption of both rigorous models and LCA-based or related models [32,33]. Table 4 illustrates the state of the art of the implementation of multiobjective optimization in green PSE.

Table 3. Environmental performance measures in green PSE
Productive efficiency indicators. Performance measures: energy consumption/efficiency; materials consumption/efficiency; emissions; amount of wastes. Comments: standardized, flexible, robust and "objective", but the results are highly sensitive to the number of factors and units considered.
Monetary aggregate indicators. Performance measures: value lost or net value added [5]. Comments: easily merged into economic objectives, but some environment-related costs or benefits are difficult to calculate.
Environmental impact indicators. Performance measures: mass impact + toxicity + persistence + environmental mobility [27]. Comments: consider the different impacts of materials/chemical species and can make use of LCA modules or databases, but require classification into environmental media (soil, water and air) or into impact categories (global warming, acidification, ozone depletion, etc.).
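As a simple illustration of the multiobjective screening step referred to above, the sketch below filters a set of design alternatives down to its non-dominated (Pareto) subset when cost and an aggregated environmental impact are both minimized; the design names and objective values are hypothetical and do not correspond to any of the studies cited in Table 4.

```python
# Minimal sketch of Pareto screening for two minimization objectives:
# annualized cost and an aggregated environmental impact indicator.
# All data below are hypothetical placeholders.

def dominates(a, b):
    """True if design a is at least as good as b in every objective
    and strictly better in at least one (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Return the non-dominated subset of {name: (cost, impact)}."""
    front = {}
    for name, obj in designs.items():
        if not any(dominates(other, obj) for other in designs.values() if other != obj):
            front[name] = obj
    return front

if __name__ == "__main__":
    candidates = {
        "design A": (12.0, 8.5),   # (cost, environmental impact), hypothetical
        "design B": (10.5, 9.0),
        "design C": (13.0, 9.5),   # dominated by A and D
        "design D": (11.0, 7.8),
    }
    print(pareto_front(candidates))
```

A weighted sum or any other preference rule then only needs to be applied to the surviving alternatives.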
Table 4. Multiobjective optimization in green PSE (models, environmental performance measures, multiobjective optimization approach, references)
Local models:
productive efficiency indicators, mathematical programming: [34], [35], [36], [37], [38], [39], [40], [41], [42]; evolutionary algorithms: [43]
monetary aggregate indicators, mathematical programming: [44]
environmental impact indicators, mathematical programming: [45], [46], [47], [48], [49], [50]; evolutionary algorithms: [51], [52], [53]
LCA-based models:
productive efficiency indicators, mathematical programming: [54]
environmental impact indicators, mathematical programming: [13], [55], [56], [57]
5. INTEGRATION OF GREEN PROCESS SYSTEMS
With the development of PSE along chemical supply chains (see Fig. 2), process integration has extended its applications from an exclusive devotion to the study of energy efficiency to the efficient use of raw materials, emissions reduction and process operations [58,59]. More recently, process integration has begun to move beyond the plant boundary and to incorporate process design and operation into supply chain management and product stewardship. Meanwhile, the methodology has shifted from using either a thermodynamic approach or one based on mathematical programming to making more effective use of both philosophies simultaneously [60]. A wide range of methods and tools is now available to enable individual processes to be well integrated within themselves and with the other processes and utility systems on a site, such as pinch analysis (including heat pinch, water pinch, hydrogen pinch and others), exergy analysis, and mathematical programming techniques. However, these methodologies and tools are not sufficient to integrate processes into the surrounding industrial, economic and ecological systems. Process integration faces two types of challenges: 1) to systematically combine current process integration techniques and existing environmental tools at a variety of decision levels, such as green chemistry, green engineering, cleaner production [61] and industrial ecology [62]; and 2) to find the missing fundamentals and connecting techniques from the viewpoint of sustainable development (see Fig. 3).
Fig. 2 The development of PSE along the chemical supply chain (adapted from Grossmann & Westerberg [8])
Fig. 3. Process integration in green PSE and related disciplines
A key project study on intelligent approaches to chemical process integration that simultaneously optimize economic and environmental objectives, sponsored by the China Natural Science Foundation, has taken a stride in this direction by carrying out the following six studies: 1) integration of mass-energy-coupled reaction processes [63,64,65,66]; 2) heat-exchanger network synthesis for mass-energy-coupled purification and separation processes; 3) multi-level, multi-objective optimization of large-scale systems [67,68]; 4) fuzzy knowledge representation and fuzzy reasoning in process integration [69]; 5) multi-objective optimization based on uncertainty models [70]; and 6) environmental performance indicators for chemical processes [20]. More recently, this project group has extended its research to industrial ecology. Preliminary work has produced promising results, providing a framework and methods to assess the use of materials and energy, the sources of loss, loss mechanisms and techniques for amelioration in both technological systems and natural ecosystems.
6. EDUCATION IN GREEN PSE
Significant progress has been made in the past ten years in integrating pollution prevention into the pedagogical materials of PSE [71]. Some new courses have been developed for education and training in green PSE at the undergraduate and graduate levels, such as "Process integration and industrial pollution prevention" at the University of Virginia [72] and "Environmentally conscious chemical process design" at the University of Notre Dame [73]. These courses try to present students with an integrated solution to environmental problems in the process industries by providing strategies, techniques and tools for greening the traditional fields of PSE and by introducing new elements such as organization and management, policy and regulations, and economics. A generalized scheme of green PSE pedagogical materials usually includes the following four levels: 1) orientation courses; 2) disciplinary courses; 3) specialist environmental courses; and 4) integrated projects. Along with the evolution of educational content, the teaching methodology has also become more systematic and comprehensive. Education in PSE no longer focuses only on discussing how the traditional fields, such as process design and control, are influenced by environmental considerations (a bottom-up approach), but pays more attention to discussing how to organize PSE components to achieve sustainability (a top-down approach). To help students develop such top-down thinking and problem-solving skills, the "systems approach" that Perkins [74] stresses for education in PSE is all the more needed. The adoption of information technology and computer-based educational methods will accelerate this process [75,76].
7. CONCLUSIONS
Improvements in production efficiency or capital efficiency will not automatically lead to sustainable development of the process industries. From the viewpoint of sustainable
development, the development of the process industry should also consider environmental value and social ethics, which requires the scope of PSE to extend to hybrid systems made of both process systems and natural ecosystems. Responding to this challenge of the new century, green PSE is emerging as a new branch of the discipline that crosses the boundaries of many traditional disciplines. Traditional methodologies and tools need to be renewed and/or reinvented to meet the requirements of green PSE. Simulation and analysis of multi-scale systems, assessment of process environmental performance, multi-objective optimization methods and process integration for the environment are the key issues deserving more attention. Much effort has been devoted to these aspects in the past ten years, but more innovative outcomes are expected. Moreover, green PSE is expected to be offered as a course in more and more universities in the near future.
REFERENCES
[1] Y.Q. Yang, S.W. Cheng, China Annual Process Systems Engineering Symposium, Beihai, China, 2001.
[2] P.T. Anastas, T.C. Williamson, Green Chemistry: Frontiers in Benign Chemical Synthesis and Processes, Oxford University Press, Oxford, 1998.
[3] E.Z. Ming, J. Fu, Chem. Industry and Eng. Progr. (China), 18(2001) 5.
[4] Y. Zhang, Chinese J. of Process Engineering, 1(2001) 10.
[5] D.T. Allen, D.R. Shonnard, AIChE J., 47(2001) 1906.
[6] G.F. Liu, Z.F. Liu, G. Li, Green Design and Green Manufacturing (in Chinese), Mechanical Industry Publisher, Beijing, 2000.
[7] T. Graedel and B.R. Allenby, Industrial Ecology, Prentice Hall, New Jersey, 1995.
[8] I.E. Grossmann, A.W. Westerberg, AIChE J., 46(2000) 1700.
[9] C. Mouche, Chem. Processing, 61(2000) 32.
[10] A. Stanley, Chem. Eng., 7(1995) 32.
[11] S. Lakshminarayanan, H. Fujii, B. Grosman, E. Dassau, D.R. Lewin, Comput. Chem. Engng., 24(2000) 671.
[12] E.N. Pistikopoulos, S.K. Stefanis, Comput. Chem. Engng., 22(1998) 717.
[13] S.K. Stefanis, et al., Comput. Chem. Engng., 19(1995) S39.
[14] S.K. Stefanis, A. Buxton, A.G. Livingston, E.N. Pistikopoulos, Comput. Chem. Engng., 20(1996) S1419.
[15] A. Buxton, A.G. Livingston, E.N. Pistikopoulos, Comput. Chem. Engng., 21(1997) S959.
[16] M.M. El-Halwagi, V. Manousiouthakis, AIChE J., 35(1989) 1233.
[17] Y.P. Wang, R. Smith, Chem. Eng. Sci., 49(1994) 981.
[18] O. Delaby and R. Smith, Trans. IChemE, 73 Part B(1995) 21.
[19] M. Spear, Proc. Engineering, 11(2000) 24.
[20] Y. Wang, X. Feng, Comput. Chem. Engng., 24(2000) 1243.
[21] T. Wang, J.Z. Shen, Y.R. Li, S.Y. Hu, Proceedings of China Process Systems Engineering, (2001) 143.
[22] X.P. Zheng, S.Y. Hu, J.Z. Shen, Y.R. Li, Computers and Applied Chemistry (China), 18(2001) 43.
[23] S.E. Jorgensen, A Systems Approach to the Environmental Analysis of Pollution Minimization, Lewis Publishers, Boca Raton, London, 2001.
[24] B.R. Bakshi, Comput. Chem. Engng., 24(2000) 1767.
[25] Y.Q. Yang and L. Shi, Comput. Chem. Engng., 24(2000) 1409.
[26] P. Sharratt, Comput. Chem. Engng., 23(1999) 1469.
[27] J.A. Cano-Ruiz, G.J. McRae, Annu. Rev. of Energy and Environment, 23(1998) 499.
[28] A. Azapagic, S. Perdan, Trans IChemE, 78, Part B(2000) 243.
[29] G. Koller, U. Fischer, K. Hungerbuhler, Trans IChemE, 79, Part B(2001) 157.
[30] E.J. Jung, J.S. Kim, S.K. Rhee, J. of Cleaner Prod., 9(2001) 551.
[31] A. Azapagic, Chem. Eng. J., 73(1999) 1.
[32] V. Bhaskar, S.K. Gupta, A.K. Ray, Reviews in Chemical Engineering, 16(2000) 1.
[33] X.P. Jia, F.Y. Han, Computers and Applied Chemistry (China), 18(2001) 113.
[34] T. Umeda, T. Kuriyama, et al., Comput. Chem. Engng., 4(1980) 157.
[35] A. Sophos, E. Rotstein, G. Stephanopoulos, Chem. Eng. Sci., 35(1980) 2415.
[36] M. Sokic, S. Zdravkovic, Z. Trifunovic, Canadian J. Chem. Eng., 68(1990) 119.
[37] A.R. Ciric, S.G. Huchette, Ind. Eng. Chem. Res., 32(1993) 2636.
[38] J. Schmuhl, et al., Comput. Chem. Engng., 20(1996) S327.
[39] C. Chang, J. Hwang, Chem. Eng. Sci., 51(1996) 3951.
[40] A. Alidi, Appl. Math. Modelling, 20(1996) 925.
[41] H.H. Li, S.Q. Zheng, F.Y. Han, Computers and Applied Chemistry, 18(2001) 57.
[42] Y.I. Lim, P. Floquet, X. Joulia, Ind. Eng. Chem. Res., 40(2001) 648.
[43] Z.L. Yao, X.G. Yuan, Comput. Chem. Engng., 24(2000) 1437.
[44] A.R. Ciric, T. Jia, Comput. Chem. Engng., 18(1994) 481.
[45] I.E. Grossmann, R. Drabbant, R.K. Jain, Chem. Eng. Comm., 17(1982) 151.
[46] S. Fathi-Afshar, J. Yang, Chem. Eng. Sci., 40(1985) 781.
[47] A. Pertsinidis, I.E. Grossmann, G.J. McRae, Comput. Chem. Engng., 22(1998) S205.
[48] E. Heinzle, et al., Ind. Eng. Chem. Res., 37(1998) 3395.
[49] Y.I. Lim, P. Floquet, X. Joulia, S.D. Kim, Ind. Eng. Chem. Res., 38(1999) 4729.
[50] M.A. Steffens, E.S. Fraga, I.D.L. Bogle, Comput. Chem. Engng., 23(1999) 1455.
[51] M.M. Dantus, K.A. High, Comput. Chem. Engng., 23(1999) 1493.
[52] L. Shi, Study on Process Integration Involving Environmental Impact (Ph.D. Thesis), Dalian University of Technology, 1999.
[53] Y. Gao, L. Shi, P.J. Yao, Chinese J. of Chem. Eng., 9(2001) 267.
[54] J.J. Marano, S. Rogers, Environmental Progress, 18(1999) 267.
[55] G.E. Kniel, K. Delmarco, J.G. Petrie, Environmental Progress, 15(1996) 221.
[56] A. Azapagic, R. Clift, Comput. Chem. Engng., 23(1999) 1509.
[57] B. Alexander, G. Barton, J. Petrie, J. Romagnoli, Comput. Chem. Engng., 24(2000) 1195.
[58] N. Hallale, Chem. Eng. Progr., 97(2001) 30.
[59] R.F. Dunn, G.E. Bush, J. Cleaner Prod., 9(2001) 1.
[60] R. Smith, Applied Thermal Engineering, 20(2000) 1337.
[61] R.V. Berkel, UNEP's 6th International High-Level Seminar on CP, Montreal, Canada, 2000.
[62] T. Graedel and B.R. Allenby, Industrial Ecology, Prentice Hall, New Jersey, 1995.
[63] M.H. Li, S.Y. Hu, Y.R. Li, J.Z. Shen, Comput. Chem. Engng., 24(2000) 1215.
[64] W. Zhao, C.G. Zhou, F.Y. Han, C.Y. Li, J. Chem. Eng. (China), 52(2001) 41.
[65] W. Zhao, C.G. Zhou, F.Y. Han, C.Y. Li, J. Chem. Eng. (China), 52(2001) 46.
[66] K.L. Hua, Y.R. Li, S.Y. Hu, J.Z. Shen, Comput. Chem. Engng., 24(2000) 217.
[67] S.Q. Zheng, C.H. Chi, J.C. Yue, F.Y. Fang, Proc. of China Process Systems Engineering, (2001) 102.
[68] S.Q. Zheng, H.H. Li, F.Y. Fang, Proceedings of China Process Systems Engineering, (2001) 106.
[69] C. Chen, J.Z. Shen, Y.R. Li, S.Y. Hu, J. Chem. Eng. of Japan, 34(2001) 1147.
[70] K.F. Hou, Y.R. Li, J.Z. Shen, S.Y. Hu, Hydrocarbon Processing, 6(2001).
[71] M.P.C. Weijnen, P.M. Herder, Comput. Chem. Engng., 24(2000) 1467.
[72] G. Carta, M.D. LeVan, et al., Chem. Eng. Education, 1997, 242.
[73] J.F. Brennecke, M.A. Stadtherr, Comput. Chem. Engng., 26(2002) 307.
[74] J. Perkins, Comput. Chem. Engng., 26(2002) 283.
[75] D. Shin, E.S. Yoon, K.Y. Lee, E.S. Lee, Comput. Chem. Engng., 26(2002) 319.
[76] C. Vezzoli, J. of Cleaner Prod., 11(2003) 1.
Process Systems Engineering 2003, B. Chen and A.W. Westerberg (editors), © 2003 Published by Elsevier Science B.V.
Logistic Optimization for Site Location and Route Selection under Capacity Constraints Using Hybrid Tabu Search
Yoshiaki Shimizu a and Takeshi Wada a
aDepartment of Production Systems Engineering, Toyohashi University of Technology, Toyohashi 441-8580, Japan
Abstract The recent globalization of markets greatly raises the importance of logistic decisions for just-in-time and agile manufacturing. From this point of view, in this study we formulate a site location and route selection problem as a p-hub problem with capacity constraints. It is a non-linear integer program that simultaneously decides the location of hubs and the routes from plants to customers via hub centers. To solve the problem practically as well as efficiently, we have developed a novel meta-heuristic method termed hybrid Tabu search and implemented it in a hierarchical manner. Through numerical experiments in which it outperformed two popular commercial software packages, we confirmed the effectiveness of the proposed method even for real-life applications.
Keywords logistic optimization, supply chain, capacitated p-Hub problem, Tabu search, Lagrange relaxation
1. INTRODUCTION
Recent innovations in electronic communication as well as advanced transportation technologies are accelerating the globalization of markets. This raises the importance of logistic decisions for just-in-time and agile manufacturing much more than before, since their effectiveness is pivotal to the efficiency of the business process. As an essential part of such understanding, industry has been paying keen interest to supply-chain management (SCM) [1] and studying it from various aspects [2-4]. Taking a logistic problem in SCM, in this paper we formulate it as a hub location and route selection problem that attempts to minimize the transport cost of covering distributed demands over the area of interest. The hub location problem has been an important issue in the design of network systems, since hub systems are popular in various fields [5,6]. Ultimately it reduces to a kind of combinatorial optimization problem for which it is difficult to obtain the exact optimum for real-life instances. After a general formulation, we propose a practical and novel method for obtaining sub-optimal solutions, and provide some results of numerical experiments for its validation.
2. PROBLEM FORMULATION
Viewing plants, terminals or suppliers, and customers as servers, hubs, and clients respectively, we have formulated the logistic optimization problem as a p-hub problem. In its development,
we suppose that the network is composed of n clients, q servers and p hubs*; a hub is placed on a client; only q (< p) hubs are directly connectable with the servers (they are called s-hubs hereinafter); each client is connectable either with a hub or an s-hub; and each hub is connectable with an s-hub (see Fig. 1). The demand of each client and the transport cost per unit demand between each pair of nodes are given a priori. Moreover, capacity constraints are added on hubs and s-hubs to cope with the practical situation associated with physical material flows. The problem is then formulated below as a non-linear integer program that simultaneously decides the location of the hubs and the routes from the servers to the clients via the s-hubs and/or hubs so that the total transport cost is minimized.
(p.1)
\[
\min \;\; \sum_{i=1}^{n} d_i \sum_{j=1}^{n} x_{ij}\, c_{ij}
\;+\; \alpha \sum_{j=1}^{n} \sum_{k=1}^{n} y_{jk}\, c_{jk}
\;+\; \beta \sum_{k=1}^{n} \sum_{l=1}^{q} z_{kl}\, c_{kl}
\]
subject to
\[
\sum_{j=1}^{n} x_{jj} = p \qquad (1)
\]
\[
\sum_{j=1}^{n} x_{ij} = 1 \quad (i = 1,\ldots,n) \qquad (2)
\]
\[
(n-p+1)\,x_{jj} - \sum_{i=1}^{n} x_{ij} \ge 0 \quad (j = 1,\ldots,n) \qquad (3)
\]
\[
\sum_{k=1}^{n} y_{kk} = q \qquad (4)
\]
\[
(p-q+1)\,y_{kk} - \sum_{j=1}^{n} y_{jk} \ge 0 \quad (k = 1,\ldots,n) \qquad (5)
\]
\[
\sum_{k=1}^{n} y_{jk} = x_{jj} \quad (j = 1,\ldots,n) \qquad (6)
\]
\[
\sum_{k=1}^{n} z_{kl} = 1 \quad (l = 1,\ldots,q) \qquad (7)
\]
\[
\sum_{l=1}^{q} z_{kl} = y_{kk} \quad (k = 1,\ldots,n) \qquad (8)
\]
\[
\sum_{i=1}^{n} \sum_{j=1}^{n} x_{ij}\, y_{jk}\, d_i \le cp_k \quad (k = 1,\ldots,n) \qquad (9)
\]
\[
x_{jj}\,(1-y_{jj}) \sum_{i=1}^{n} x_{ij}\, d_i \le cp_j \quad (j = 1,\ldots,n) \qquad (10)
\]
\[
x_{ij},\; y_{jk},\; z_{kl} \in \{0,1\}
\]
where the binary variables x_ij, y_jk and z_kl take the value 1 if node i is connected to hub node j, hub node j to s-hub node k, and s-hub node k to server node l, respectively, and 0 otherwise. Here c_ij denotes the transport cost between nodes i and j, and d_i the demand of client node i. Moreover, α and β are discount rates for mass transportation on the hub-hub and s-hub-server links, respectively. Each constraint means the following: Eq.(1) requires the number of hub locations to be p; each client is connectable only to a hub (Eq.(2)); clients are not connectable with each other (Eq.(3)); the number of s-hubs must be q (Eq.(4));
*They are altogether called nodes hereinafter.
hubs are not connectable with each other (Eq.(5)); each hub is connectable only to an s-hub (Eq.(6)); each server is connectable only to an s-hub (Eq.(7)); only an s-hub is directly connectable to a server (Eq.(8)); demand cannot exceed the capacity of an s-hub (Eq.(9)); and demand cannot exceed the capacity of a hub (Eq.(10)).

Fig. 1. Client-server network considered here.
Fig. 2. Schematic outline of the proposed method.

3. SOLUTION METHOD - HYBRID TABU SEARCH
Since the above combinatorial problem is NP-hard, developing a practical solution method is more desirable than a rigid exact one. From this viewpoint, we try to apply Tabu search (TS), which is promising for obtaining approximate solutions effectively. It is not efficient, however, to apply this method straightforwardly to the present problem, because the search space is too vast for a meta-strategy that relies just on local search with probabilistic perturbation. We therefore develop a novel method termed hybrid Tabu search (hybrid TS) to solve the problem formulated above. The method divides the solution procedure into two levels so that appropriate methods can be applied to the resulting sub-problems. The upper level solves a hub location problem based on TS, and the lower level solves a route selection problem under the prescribed hub location by the Dijkstra method, with the aid of Lagrange relaxation to handle the capacity constraints practically (see Fig. 2). Moreover, by exploiting specific properties of those sub-problems, we improved their respective algorithms. Each algorithm is explained in more detail below.
3.1. Decision on the initial hub location
In a meta-heuristic algorithm like TS, where the search starts and how the probabilistic perturbations are given are especially important for the performance of the algorithm. In this respect, we provide an effective method for estimating an initial hub location as follows: suppose an imaginary root of the graph connected to the servers, and obtain the minimum spanning tree (MST) of the network. Then evaluate the tying number of each node, i.e. the number of arcs linked to it, and decide the initial hub location in order of decreasing tying number until the prescribed number p is reached.
3.2. Lower level algorithm
Once the hub location is fixed, (p.1) without the capacity constraints on hubs reduces to the
shortest path problem¹ from the root to every node. Moreover, in the present case, noticing that a connection route is restricted to the order server, s-hub, hub, client, we can apply the Dijkstra method more efficiently with the following slight revision.
[Revised Dijkstra's Algorithm] Let r, s ∈ S, and v ∈ V be vertices denoting the root, a server and a client respectively, let H be the hub node set and Hs the s-hub node set (⊆ H). Let w[i,j] denote the length (transport cost) between nodes i and j, and L(v) the minimum path length from r to v among the routes connected using only the nodes in V − T.
Step 0: T := V, L(r) = 0
Step 1: L(∀s ∈ S) = 0, T := T − S
Step 2: L(∀v ∈ Hs) = w[s,v], T := T − Hs
Step 3: L(∀v ∈ H − Hs) = Min {L(u) + w[u,v]} with respect to u ∈ Hs, T := T − (H − Hs)
Step 4: L(∀v ∈ T) = Min {L(u) + w[u,v]} with respect to u ∈ H
Step 5: Stop
In the next step, to handle the capacitated problem in the same framework, i.e. as a shortest path problem, we turn to Lagrange relaxation. Since at this level the hub location is given, or the variables y_jk and z_kl in Eqs.(9) and (10) are already set, the idea can conveniently be realized as a mimic of an auction system on the transport costs [7]. That is, if a certain hub (s-hub) violates its capacity constraint, we regard this as having happened because the transport costs connecting to that node are too cheap. If we raise those costs, some connections may move to other, cheaper routes in the next call. Thus, by adjusting the cost in proportion to the amount of capacity violation, i.e. c_ij := c_ij + μ(Σ_k d_k − cp_i), all constraints will eventually be satisfied. Here μ denotes a coefficient related to the Lagrange multiplier. If this adjusting procedure does not succeed within a certain number of iterations, we conclude that no feasible solution exists for the present hub location, and return the worst score to the upper level decision process.
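For concreteness, the auction-like cost adjustment can be sketched as follows; the route-assignment step is reduced to a simple cheapest-hub rule, and the node data, capacities and the coefficient mu are hypothetical, so this only illustrates the adjustment loop, not the full lower-level algorithm.

```python
# Sketch of the capacity-handling idea: whenever a hub receives more demand
# than its capacity cp, the transport costs into that hub are raised in
# proportion to the violation, and the (uncapacitated) assignment is
# re-solved. mu plays the role of a Lagrange-multiplier-like step size.
# All numbers are hypothetical.

def assign_clients(cost, demand):
    """Placeholder for the shortest-path assignment: each client i is
    routed to its currently cheapest hub j."""
    return {i: min(cost[i], key=cost[i].get) for i in cost}

def repair_capacities(cost, demand, cp, mu=0.05, max_iter=50):
    for _ in range(max_iter):
        routes = assign_clients(cost, demand)
        load = {j: 0.0 for j in cp}
        for i, j in routes.items():
            load[j] += demand[i]
        violated = {j for j in cp if load[j] > cp[j]}
        if not violated:
            return routes                      # feasible assignment found
        for j in violated:                     # raise costs into overloaded hubs
            for i in cost:
                cost[i][j] += mu * (load[j] - cp[j])
    return None                                # report infeasibility upward

if __name__ == "__main__":
    cost = {1: {"A": 2.0, "B": 3.0}, 2: {"A": 2.5, "B": 2.6}, 3: {"A": 1.0, "B": 4.0}}
    demand = {1: 10.0, 2: 8.0, 3: 6.0}
    cp = {"A": 15.0, "B": 20.0}
    print(repair_capacities(cost, demand, cp))
```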
3.3. Upper level algorithm
Whenever control comes back to the upper level from the lower one, we have either a new feasible solution or an infeasible one. In either case, since we can evaluate the quality of the hub location, we can search for another (better, if possible) solution based on the result. To handle this hub location problem we adopt TS, and refine its algorithm to improve efficiency. Incorporating an ant colony method and a multi-start routine into TS is the essence of the idea. The overall algorithm at this level is summarized as follows.
Step 0: Set the amount of pheromone f_i (i = 1,...,n) to zero for every node, and decide the initial hub location by the MST-based procedure already mentioned (k = 1).
Step 1: Select a leaving node from H (the out-hub) based on a probability related to the pheromone amount*1 (when k = 1, select randomly).
¹ Given a connected graph G=(V,E), a weight w(i,j): E → R+ and a fixed vertex t in V, find a shortest path from t to each vertex v in V.
Step 2: Evaluate the objective function for every candidate*2 within the niche*3 of the present solution. The best (feasible) solution within the niche becomes the new solution (the in-hub). In the case where no feasible solution is derived, we choose the solution with the smallest total violation of the capacity constraints. Then modify the pheromone*4 of the best hub nodes as f := f + F × (Best cost − Initial cost), where F denotes a constant parameter.
Step 3: Exchange the locations of the in-hub and the out-hub, and mark the out-hub in the Tabu list. k := k + 1; if k < K, go back to Step 1.
Step 4: Reset the Tabu list, and re-construct an initial location in a part of the space possibly still unexplored. This is done based on the amount of pheromone as follows: taking the transport cost c_ij as c_ij + f_i + f_j, obtain the MST again and decide the initial location just as before.
The algorithm stops after a given number of iterations.
In the end, the best solution obtained so far becomes the final one.
Table 1 Property of problem Tag Rate of active constraints [%] EAs;Y .... 0~5 HARD 1 5"~20 HARD2 20~30 HARD3 30-'--
Table 2 Comparison with commercial package software No. of
Objective value (*Stop by 24hr computation :PC Duron 700 [MHz])
Client This work OPL studio .1 LINGO .2 8 33 33 *52 10 48 48 N/A 12 66 66 N/A 14 90 *289 N/A *qLOG corp. Ver. 3.5, *ZLINDO corp. Ver.3
617
Fig.3. Comparison in computation load (numbers of node) compared with the commercial software, and the problems of up to 80 nodes could be solved within admissible computation time. We also examined how much the auction mechanism was effective to achieve the constraint satisfaction, and sophistication of each algorithm to improve efficiency of the algorithm. For this purpose, we solved various problems classified in Table 1. As supposed apparently, more adjustments became necessary in the auction as the problem became tightly constrained or difficult to solve (from EASY to HARD3). Similarly, numerical results showed that according to the tightness of constraints, the sophistication of TS in terms of ant colony algorithm and multi-start routine had considerable effect on the performance. 5. CONCLUSION Noticing the importance of supply chain management, in this paper, we formulate a logistic optimization as the capacitated p-hub problem, and develop a practical solution method termed hybrid Tabu search. It divides the solution procedure into two levels so that we can apply the appropriate methods to the resulting sub-problems respectively. Through the numerical experiments that outperformed two popular commercial software, we confirmed the effectiveness of the proposed method. Eventually, the method is promising even for dealing with real-life applications. REFERENCES [1] I. A. Karimi, R. Srinivasan, and P. L. Han, Chemical Engineering Progress, 98 (2002) 3238. [2] R.Garcia-Flores, X. Z. Wang and G.E. Goltz, Computers and Chemical Engineering, 24 (2000) 1135-1141. [3] A. Gupta, C. D. Maranas, and C. M. McDonald, Computers and Chemical Engineering, 24 (2000) 2613-2621. [4] Z. Zhou, S. Cheng and B. Hua, Computers and Chemical Engineering, 24 (2000) 11511158. [5] Y. Shimizu, J. Chem. Engng. Japan, 32 (1999) 51-58. [6] J. Ebery, M. Krishnamoorth, A. Ernst and N. Boland, European Journal of Operational Research, 120 (2000) 614-631. [7] K. Yoneda, Proc. Scheduling Symposium (2000) 17-25. (in Japanese)
618
Process Systems Engineering 2003 B. Chen and A.W. Westerberg (editors) 9 2003 Published by Elsevier Science B.V.
Distributed Multi-Agents and Cooperative Problem Solving for On-Line Fault Diagnosis Dongil Shin a, Gibaek Lee b and En Sup Yoon c
aDepartment of Chemical Engineering, Myongji University, Yongin, Kyunggido 449-728, Korea bDepartment of Industrial & Engineering Chemistry, ChungJu National University, Chungju, Chungbuk 380-702, Korea CDepartment of Chemical Engineering, Seoul National University, Seoul 151-744, Korea Abstract: Though many good methodologies for process diagnosis and abnormal situation
management have been developed for the last two decades, there is no single panacea that always shows better performance over all kinds of diagnostic problems. In this paper, a framework of message-passing, cooperative, intelligent diagnostic agents is presented for cooperative problem solving of on-line fault diagnosis. The diagnostic agents in charge of each process functional perform local diagnoses in parallel; exchange related information with other diagnostic agents (possible to include (mobile) business agents); and cooperatively solve the global diagnostic problem of the whole process plant or business units just like human experts would do. For their better sharing and exchanging of process knowledge and information, we also suggest a way of remodeling processes and protocols, taking into account semantic abstracts of process information and data. The benefits of the suggested multi-agents-based approach are demonstrated by the implementations for solving the diagnostic problems of various chemical processes. Keywords: Fault diagnosis, Decisions support, Distributed artificial intelligence, Cooperative reasoning, Intelligent Agents, Knowledge sharing, Constraint satisfaction, Chemical process industries. 1. INTRODUCTION In the chemical and related industries, there always has been a push to produce higher quality products, to reduce product rejection rates, and to satisfy increasingly stringent safety and environmental regulations [10]. The company's environment is under increasing level of turbulences, characterized by a restricted possibility to plan upcoming events and short cycled, erratic changes in supplier and consumer markets. Thus, under the term "reconfigurability," new production paradigms are being emerged. Process operations that were at one time considered acceptable are no longer adequate. While process controllers can compensate for many types of disturbances occurring in the process, there are changes, which cannot be handled adequately by the controllers. These changes are called faults. To ensure that the process operations satisfy the performance specifications, the faults in the process need to be detected, diagnoses, and removed. As industrial systems enlarge, the total amount of energy
619 and material being handled increases, making early and correct fault detection and diagnosis imperative both from the viewpoint of plant safety as well as reduced manufacturing costs [5]. A dynamic structure designed according to reconfigurable concepts means that, in case of a reorganization of the respective production unit, the information and communicational relations especially have to be redesigned as well. The on-line diagnosis of operating status of process plants or business units is one of the most important components in obtaining the reconfigurability successfully. The Multi-Agent System (MAS) technology has been emerging to solve problems in complicated and distributed systems such as Internet and human bodies. Agent technology appeal is due to the benefits that agent-based organization and coordination allow in designing cooperative problem-solving systems [8]. MAS is one area of the distributed artificial intelligence (DAI) field and is known to be very appropriate to the problem solving in distributed systems. It has modularity, speed and reliability, which are the benefits of distributed intelligence, due to the enhancement of the distributed computing ability. Moreover, MAS has advantages of operation in knowledge level, easy maintenance and reusability [2]. Agent-based computing has been hailed as the next significant breakthrough in software development, with the potential to affect many aspects of computer science, from artificial intelligence to the technologies and models of distributed computation. Agent-based systems are capable of autonomous action in open, dynamically-changing environments. Agents are currently being applied in domains as diverse as business information systems, computer games and interactive cinema, information retrieval and filtering, user interface design and industrial process control. Though the concept of agents has various definitions among the researchers, it can be defined narrowly, in the domain of process operations support system we are mainly concerning, as all the software and hardware that may be used to complete the operation related tasks on behalf of human operators. Because the communication between agents are possible regardless of the various types of hardware, especially in this era of emerging widely accepted communication protocols, MAS has a great possibility in the application to the fault diagnosis of chemical processes, which have lots of heterogeneous process units required to be solved with specific knowledge. Multi-agent systems have been known to provide distributed and collaborative problem solving environment [2,6]. In the process systems engineering, there have been some efforts to apply agents in modeling and design of processes in the concept of concurrent engineering [ 1]. However, few efforts have been made in the application of agents as software components and agenthood in the process fault diagnosis domain. In this paper, the development of topologically distributed multi-agent systems for chemical process fault diagnosis is presented. This system uses the diagnostic agents in each process unit and makes them communicate with each other to solve the global fault diagnosis problem in a distributed, collaborative fashion.
(•
~ .....~/~..
DA:DiagnosticAgent SA:SchedulingAgent
Fig. 1 Multi-agent diagnostic system.
620 2. COOPERATIVE
FAULT
DIAGNOSIS
FOR
DISTRIBUTED
CONSTRAINT
REASONING Agent-based systems offer a powerful tool framework to handle situations, which demand a complex combination of multiple heterogeneous knowledge and problem solvers, but we can only benefit from distributed computing if this is supported by concrete and useful schemes of cooperation [8]. The concept of distributed fault diagnosis using topology-based multi-agents in chemical processes is shown in Fig. 1. Diagnostic agents (DA) corresponding to existing process units are placed in accordance to the topology of the process units and have fault detection module, knowledge bases and inference engine which performs reasoning as well as local diagnosis. The fault detection module has the information on possible fault types, measured and unmeasured variables of individual process unit and can determine if faults exist. Knowledge base determines the relationship between variables based on the functionbehavior model [4,9]. To perform the cooperative fault diagnosis based on process topology, by interpreting process units and their functions as DAs, and their functions and status variables of the units as the contents of the DAs, an approach different from Oh [9] is required. Basically, every equipment in a chemical process carries out tasks related to mass, energy and component. Mass, energy, component, etc. become objects of functions of process units; these objects are defined as keyword. The status of the keyword, keyword status, describes the unit's behavior depending on the function of equipment. For example, the abnormal high mass flow in a pipe may be described as MASS_STATUS HIGH. All process units have such keywords, and each keyword-related behavior of units can be interpreted together with behaviors of other units via keyword. Keyword status such as MAYBE_HIGH, MAYBE_ NORMAL or MAYBE_LOW describes the status estimated from that of nearby equipment when the corresponding variable of the equipment is not measured. UNKNOWN is the status that is not measured and cannot be estimated from nearby equipment. In case of keyword COMPONENT, which represents component composition, it can be extended as COMPONENT1, COMPONENT2, etc. 3. DIAGNOSTIC AGENTS Key issues related to agent-based cooperative systems, such as representation, ontology management, agent structure, system architecture, communications, system dynamics, overall system control, conflict resolution, legacy problems and external interfaces, have been discussed in [12]. Most of these issues are also applicable in designing and implementing the proposed agent-based fault diagnostic system.
Equipment_l:
MASS(Ft) ENERGY(Tt) COMPONENT
V (CA,)
Equipment_2: [----~ ( ' - - ' ~ MASS(L) ENERGY0"2) COMPONENT(CA9
Equipment_3: MASS0:3) ENERGY(T3) COMPONENT (CA3)
Fig. 2 A simple process for the explanation of diagnostic agenthood.
621
3.1 Agent Activities The diagnosis by a DA is carried out by exchanging status information and local reasoning with neighboring DAs as needed. Therefore, a DA sends queries for what it wants to know, performs reasoning on its status based on the answers to queries, and notifies the operator/system and neighboring units of detected faults by itself. The Process Specification Language (PSL) recently proposed by STEP and the National Institute of Standards of Technology (NIST) is one of those initiatives to ensure complete and correct exchange of process information among all established manufacturing applications [ 11 ]. Process topology based model and activities and role of DA are explained using a simple process, shown in Fig. 2. This system consists of three process units where input is fed into a tank unit through a pipe unit and output comes out through another pipe unit. DA infers causal relationship through communication with neighboring DAs and fundamental selfreasoning. The detecting module of DA is always active, and DAs perform diagnosis by sending and receiving messages when events leading to faults occur. DAs of Equipment 1, 2, and 3 are named DA1, DA2, and DA3, respectively. Let us assume a leak occurs at Equipment2, a tank. This event causes a symptom of low flow of Equipment_3 (F3) and low level of Equipment_2 (L). The symptom is expressed in the language of function-behavior model as follows: keyword status low of keyword MASS (MASS[low]) is detected in DA2, and MASS[low] is detected in DA3. Triggered by events leading to faults, DA3 asks a query to DA2 in order to verify if this symptom is propagated or intermittent. DA2 replies the answer to the query of DA3, triggered by the event of receiving a query message, and DA3 performs a local diagnosis from receiving reply in the input device. In this case, it can be concluded that the fault was propagated to the unit itself based on the fact that MASS[low] was received and the status of itself is MASS[low].
3.2 Reasoning Reasoning is started when a fault is detected and answers of queries to DAs are received. If an answer telling that the fault detected in input stream DA (cause-DA) is received, the DA concludes that the fault propagated from the input stream and sends a 'tell' message reminder to the cause-DA of being faulty in cause-DA. Message 'reply' is a response to a query, and the reasoning starts when a DA receives a 'reply' message from other DAs.
Fig. 3 A prototype implementation of MADISON.
622 3.3 Certainty Factor (CF)
Distributed fault diagnosis shows good performance in portability, expandability, and speed. However, conflicting diagnostic results may exist and should be resolved because local fault diagnosis and communication among agents are inherent in this system [3]. In this research, possibility of fault occurrence is quantified by introducing certainty factor (CF) to resolve the conflicting diagnostic results. For the weighting factor used in calculating CF, Gaussian distribution is assumed in the time domain from the start of the diagnosis to the current time. This approach is suitable considering the characteristics of fault diagnosis as the latest diagnostic results take priority over the past results and even old results are not ignored completely. An improved scheme of enhanced knowledge sharing and conflict resolution among heterogeneous DAs is being designed using the adaptive agency concept. 4. I M P L E M E N T A T I O N A N D C A S E S T U D Y
The testbed process used in the case study is a classic example for fault diagnosis, and has been used in many researches, including the work of Kramer and Palowitch [7]. The mathematical modeling of Sorsa and Koivo [13] was used for this study. There are three feedback control loops in the process, and PI controllers are used. They control the level of reactor, recycle flowrate of product, and reactor temperature, respectively. The level control of reactor is direct and no keyword change occurs; recycle flowrate control is reverse and no keyword change occurs; and temperature control of reactor is direct and keyword change occurs. Therefore, different function-behavior models are required for each controller. We implemented a prototype of the proposed system as a fault diagnosis system development tool for chemical processes, MADISON (Multi-Agent Diagnostic System ONline), on G2. Fig. 3 shows a multi-agent fault diagnosis system constructed by applying MADISON for the CSTR process. As process units are DAs, process topology of the real process is preserved in the diagnosis system. Fault diagnosis is carried out through local diagnosis unit-wise, and the interface is very user-friendly and easy to understand. Case Study: Heat Exchanger Fouling Because of fouling, the overall heat transfer coefficient of the heat exchanger is decreased. In early stage of the fault, the temperature of recycled product flow increased, so the temperature of reactor and the flow rate of cooling water, Fw, increased. Temperature of reactor could be controlled, and Tr went to the normal range. As the system settled down on a new steady state, the remaining symptom was the increasing Fw. As a result, PIPE-DA-11 suggested the high need of coolant, and state of HX-DA-5 was out stream energy-high. This suggests that the fault was occurred in the heat exchanger. The final results of diagnosis based on the local diagnosis and CF of each DA are summarized in Table 1.
Table 1 Diagnostic results for heat exchanger fouling. ID of DAs PIPE-DA-11 HX-DA-5
Keyword_status MASS_HIGH HOT STREAM OUT ENERGY HIGH
Diagnosis High_need_coolant Need reasoning
CF 0.9
PIPE-DA-6
ENERGY LOW
0.1
CSTR-DA-2
ENERGY_HIGH
Physically impossible High_reactant
-
0.1
623 5. CONCLUSIONS Due to its benefits of allowing designing cooperative problem-solving systems, the multiagent technology offers a powerful alternative for diagnostic problem solving in a lot of engineering applications, especially for reconfigurable systems like holonic manufacturing systems. In this paper, we suggested a multimodel-based, distributed multi-agent diagnostic system for chemical processes where diagnostic agents of each unit communicate by exchanging messages in parallel and try to cooperatively solve the global fault diagnosis problem by collaborative constraint satisfaction. The suggested approach was demonstrated by implementing a prototype system for various test cases, and its framework is being extended as a general framework of coordinating the cooperative problem solving in a group of various decision-supporting systems being used distributely in process operations. ACKNOWLEDGEMENTS
This work was supported in part by the Korea-Brazil Cooperative Research and Development Program of the Ministry of Commerce, Industry and Energy. REFERENCES
[1] R. Batres, S. P. Asprey and Y. Naka, "A KQML multi-agent environment for concurrent process engineering," Comp. Chem. Eng., 23S, 653-656, 1999. [2] J. M. Bradshaw (Ed.), Software Agents, AAAI Press/The MIT Press, San Francisco, CA, USA, 1997. [3] T. S. Chang The process fault diagnostic system based on multi-agents and functionbehavior modeling, Ph.D. Thesis, Seoul National University, Seoul, Korea, 2000. [4] S. Y. Eo, T. S. Chang, D. Shin and E. S. Yoon, "Cooperative problem solving in diagnostic agents for chemical processes", Comp. Chem. Eng., 24, 729-734, 2000. [5] D. M. Himmelblau, Fault detection and diagnosis in chemical and petrochemical processes, Elsevier, Amsterdam, The Netherlands, 1978. [6] M. N. Huhns and M. P. Singh, Readings in agents, Morgan Kaufmann Publishers, Cambridge, MA, USA, 1998. [7] M. A. Kramer and B. L. Palowitch, "A rule-based approach to fault diagnosis using the signed directed graph," AIChE J., 33(7), 1067-1078, 1987. [8] V. Loia and A. Gisolfi, "A distributed approach for multiple model diagnosis of physical systems," Information Sciences, 99, 247-288, 1997. [9] Y. S. Oh, A study of chemical process fault diagnosis based upon the function-behavior modeling, Ph.D. Thesis, Seoul National University, Seoul, Korea, 1998. [10]E. L. Russell, L. H. Chiang and R. D. Braatz, Data-driven techniques for fault detection and diagnosis in chemical processes, Springer, London, UK, 2000. [ll]C. Schlenoff, "Second process specification language (PSL) roundtable conference report," J. Res. Natl. Inst. Stand. Technol., 104(5), 495-502, 1999. [12]W. Shen and D. H. Norrie, "Agent-Based Systems for Intelligent Manufacturing: A Stateof-the-Art Survey," Knowledge and Information Systems, an International Journal, 1(2), 129-156, 1999. [13IT. Sorsa and H. N. Koivo, "Neural Networks in Process Fault Diagnosis," IEEE Trans. on Sys. Man and Cyber., 21, 815-825, 1991.
Process SystemsEngineering2003 B. Chen and A.W. Westerberg(editors) 9 2003 Published by ElsevierScienceB.V.
624
Development of Process Design Methodology Based on Life Cycle Assessment Himkazu Sugiyama and Masahiko Hirao Department of Chemical System Engineering, The University of Tokyo, 7-3-1, Hongo, Btmkyo-ku, Tokyo 113-8656, Japan
Abstract In order to improve the environmental performance of chemical industry, process engineering needs to incorporate environmental issues in a broader scope. In this work, a design procedure which incorporates LCA into the design phase of chemical processes was developed, and newly required activities, tools and information which are prerequisites to fulfill these activities were identified. A case study on the chemical recycling process of PET bottles was used to demonstrate the methodology. Keywords environmentally benign process design, Life Cycle Assessment, chemical recycling process, PET bottle 1.
INTRODUCTION
In order to incorporate environmental issues into chemical process design comprehensively, the scope should evolve as in Fig.1. Several methodologies have been proposed to evaluate environmental impacts from the Cradle-to-Gate view which includes production of raw materials, utility supply and waste treatment in addition to the core process [1-3]. However, the Cradle-to-Grave view, which introduces the product life cycle should also be included into the process design activities, because use and disposal phases of a product have considerable environmental impacts.
Fig. 1 Evolution of scope to design chemical process
625 Application of Life Cycle Assessment (LCA), which corresponds to the Cradle-to-Grave view in Fig. 1, in the design phase can be effective for the extension of our view [4]. Design procedures, which consider the result of LCA as environmental objective concurrently with economic objective, will make chemical industry environmentally benign by avoiding the end-of-pipe technologies and the end-of-life waste treatment technologies. However, such framework does not exist. The goal of this work is to show how to incorporate LCA and its related tools into process design, and to develop a design procedure where decisions can be made from the Cradle-to-Grave viewlxfint. 2.
INCORPORATION OF LCA INTO PROCESS DESIGN PROCEDURE
2.1.
Description of proposed design procedure In Fig.2, the proposed design procedure is shown using IDEF0 function modeling method. This is based on the conventional methodology which consists of five steps of activities [1]. For the initial input: desired function of process and constraints, design problem is flamed in the activity A l: Configure design problem. As an output of the activity A2: Generate alternatives and A3: Analyze process, we obtain process input/output for several alternatives fulfilling the design problem. LCA is performed concurrently with economic metric in the activity A4: Evaluate process, where process input/output for each design alternative is evaluated. Thus, standard LCA procedure [5]: Goal and scope definition, inventory analysis, impact assessment, and interpretation are included in A4. Promising design which is to be optimized in the following activity A5: Optimize process is found by Pareto analysis. Alter the optimization of design parameter, optimal design from both economic and environmental viewpoints is obtained as an overall output. 2.2.
Detailed description of the activity A4: Evaluate process
2.2.1. Perform life cycle assessment (A42) In the activity A42, LCA is perfonned for each process alternative. The rigorous LCA can be perfonned here for the first time, because the life cycle to be considered is dependent on the process input/output which is the output of the activity A3. After defining the product life cycle, the part of the life cycle, where changes occur after installation of designed process, is modeled by collecting data from LCA database. In the succeeding impact assessment step, impact category should be selected properly according to the environmental criteria to be considered. As an output of this activity, changes of environmental impacts after installing the process are obtained. Together with the result of economic evaluation, multi-objective evaluation plot is produced as an output of the activity A43. 2.2.2. Find thepromising design (A44) In the activity A44, the promising design is found by Pareto analysis for the multi-objective evaluation result. By weighting both economic and environmental objective, one design on the Pareto surface is selected which is to be op'ttmizxxlfiather in the following activity A5: Optimize proexoss.
626
Fig.2 IDEF0 representation of proposed design procedure with incorporation of LCA
Fig.3 Detailed description for the activity A4: Evaluate process
If technical options arise during the process design that were not considered before, the need to regenerate alternatives arises. In this case, the designer should go back to activity A2 to reconsider alternatives such as unit operations, heat integration, utility supply and design parameters. Another iteration loop is set to perform LCA again when the result is affected by many assumptions and uncertainties. Sensitivity analysis helps to prioritize the parameters whose uncertainty should be checked or analyzed.
3. CASE STUDY
To demonstrate the methodology described above, a case study has been performed on the design of a chemical recycling system for used PET bottles, which depolymerizes PET to monomers such as ethylene glycol (EG), terephthalic acid (TA) or its derivatives. In this case, the selection of the depolymerization reaction determines the product and consequently affects the life cycle of the PET bottle. Therefore, decisions in the design phase should be based on the Cradle-to-Grave view.
3.1. Process analysis
Glycolysis, in which PET is depolymerized into bis-2-hydroxyethyl terephthalate (BHET) by EG, was adopted as the degradation reaction from among several proposed reactions [6]. The terephthalic derivative obtained depends on the succeeding modification reactions. The first route to recycle used PET is to obtain BHET by purification of the crude BHET resulting directly from the glycolysis reaction. The second route is to reproduce dimethyl terephthalate (DMT) by methanolysis. The third route is to obtain purified terephthalic acid (PTA) from DMT by hydrolysis. While PET resin for polyester fibers can be reproduced from every terephthalic derivative described above, resin for beverage bottles can be produced via BHET and PTA.
Fig.4 Life cycle of PET bottle considering three processes in the recycling step
Therefore, the corresponding life cycle can be modeled as shown in Fig. 4. The three options were modeled using a process simulator. The glycolysis reaction was modeled using kinetic data obtained from the literature [7]. As utility supplies, liquefied natural gas (LNG), heavy oil and powdered coal were considered, and the process input/output was obtained.
3.2. Process evaluation and optimization by LCA
3.2.1. Evaluation by LCA
In each recycling route, flaked and washed PET after consumption is supplied at 10,000 t/yr, which is the functional unit of this system. The base case is the system without a recycling route, i.e. all used PET bottles are incinerated. The CO2 emission in the life cycle of the PET bottle with route i, Ω_i(CO2), is calculated by Eq. (1). The change of CO2 emissions from the base case is calculated using Eq. (2):
\[
\Omega_i(\mathrm{CO_2}) = Out(\mathrm{CO_2}) + \sum_{s} In(s)\,\varphi(\mathrm{CO_2},s) - \sum_{s} Out(s)\,\varphi(\mathrm{CO_2},s) \qquad (1)
\]
\[
\Delta\Omega_{i,\mathrm{BaseCase}}(\mathrm{CO_2}) = \Omega_i(\mathrm{CO_2}) - \Omega_{\mathrm{BaseCase}}(\mathrm{CO_2}) \qquad (2)
\]
where Out(CO2) is the direct CO2 emission in the life cycle, In(s) and Out(s) are the input and output of substance s to/from the boundary, and φ(CO2, s) is the CO2 emission factor for substance s.
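Equations (1) and (2) translate directly into a small balance calculation; the inventory amounts and emission factors used below are hypothetical placeholders, not the data of the case study.

```python
# Direct implementation of Eqs. (1) and (2): life-cycle CO2 of one recycling
# route and its change relative to the incineration base case. All inventory
# data and emission factors are hypothetical placeholders.

def life_cycle_co2(direct_co2, inputs, outputs, emission_factor):
    """Eq. (1): Omega = Out(CO2) + sum_s In(s)*phi(CO2,s) - sum_s Out(s)*phi(CO2,s)."""
    return (direct_co2
            + sum(amount * emission_factor[s] for s, amount in inputs.items())
            - sum(amount * emission_factor[s] for s, amount in outputs.items()))

phi = {"electricity": 0.5, "BHET": 2.1, "heavy oil": 3.2}   # t-CO2 per unit, hypothetical

omega_route = life_cycle_co2(
    direct_co2=4000.0,
    inputs={"electricity": 3000.0, "heavy oil": 500.0},
    outputs={"BHET": 9000.0},            # recycled monomer substitutes primary BHET
    emission_factor=phi,
)
omega_base = life_cycle_co2(direct_co2=23000.0, inputs={},
                            outputs={"electricity": 5000.0}, emission_factor=phi)

delta = omega_route - omega_base         # Eq. (2): negative means a CO2 reduction
print(delta)
```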
3.2.2. Interpretation of the LCA result Figure 5 shows the changes of CO2 emissions from the base case for each recycling route. While the CO2 emissions increase within the recycling processes, they decrease in the processes which produce primary monomers, due to the substitution by recycled monomers. Substitution of BHET reduces direct CO2 emissions from the esterification process that produces primary BHET. In the incineration process, CO2 emissions decrease because the incineration of PET is stopped, while the electricity generated from PET incineration is also reduced. This electricity is assumed to be compensated by commercial electricity production, so CO2 emissions increase in conventional electricity production. The summation of these increases and decreases represents the reduction of CO2 after installation.
Fig.5 LCA result for three recycling processes
Fig.6 Multi-objective evaluation of three recycling processes (net present value [billion Yen])
3.2.3. Multi-objective evaluation and selection of the promising design Concurrently with the LCA, the net present value (NPV) of each process was calculated. Considering the CO2 reduction after installation of the processes and the NPV, a multi-objective evaluation was performed as shown in Fig. 6. The BHET route appears superior to the other recycling routes, and the BHET routes with LNG or heavy oil as the utility supply are the Pareto-optimal alternatives. Depending on the weighting of the objective functions, one of them is selected as the promising design to be optimized further.
4. CONCLUSION
A design procedure which incorporates LCA into process design was developed. The newly required activities, and the tools and information which are prerequisites to fulfill these activities, were identified. In the case study on the chemical recycling process for PET bottles, decisions such as selecting a promising process were made possible based on LCA, concurrently with the economic metric.
Acknowledgement
The authors appreciate fruitful discussions with Prof. Dr. K. Hungerbühler, ETH Zurich. This research was financially supported by the Alliance for Global Sustainability (AGS).
REFERENCES
[1] J.A. Cano Ruiz, G.J. McRae, Annu. Rev. Energy Environ., 23 (1998) 499.
[2] V. Hoffmann, Ind. Eng. Chem. Res., 40 (2001) 4513.
[3] A. Azapagic, R. Clift, Comp. Chem. Eng., 23 (1999) 1509.
[4] D.T. Allen, D.R. Shonnard, AIChE J., 47 (2001) 1906.
[5] ISO 14040 (1997).
[6] D. Paszun, T. Spychaj, Ind. Eng. Chem. Res., 36 (1997) 1373.
[7] J.R. Campanelli et al., J. Appl. Polym. Sci., 54 (1994) 1731.
A Simulation Based Optimization Framework to Analyze and Investigate Complex Supply Chains
Xiaotao Wan, Seza Orçun, Joseph F. Pekny, G. V. Reklaitis
School of Chemical Engineering, Purdue University
Abstract
A simulation based optimization framework is presented to analyze complex supply chains under uncertainties. It includes three high-level modules: a deterministic optimization module, a simulation module and a stochastic optimization module. These modules coordinate to address the challenging difficulties faced by supply chain management. The framework is applied to optimize the safety stocks of a three-stage divergent supply chain, and the effect of supply chain network configurations on safety stocks is also examined.
Keywords supply chain optimization, simulation based optimization
1 INTRODUCTION
A supply chain is a network of suppliers, manufacturers, wholesalers and retailers through which raw materials are transformed into final products to meet market demands. Managing a supply chain is a challenging task, and many efforts have been devoted to this area. There are mainly three categories of models in the literature. The analytic approach comprises multi-echelon inventory models [1-4] and economic theory based models [5]. These models generally neglect the dynamics of the system, and the process details are aggressively simplified to avoid intractability. The second category [6, 7] has its roots in traditional operations research (OR), where mathematical programming techniques such as LP and MILP are generally used to allocate resources efficiently. The majority of mathematical programming based models assume centralized control and cannot consider general uncertainties; even simple uncertainties pose tremendous difficulty when integer variables are present. Simulation is the central theme of the last category [8]. The simulation based approach recognizes the crucial role of uncertainties in supply chain management but, as often quoted, the lack of optimization capability is the major shortcoming of simulation models, so an external optimization module may need to be introduced [9]. In this paper, a simulation based optimization framework, which combines the merits of the last two categories of models, is proposed in Section 2. This framework is used to optimize the safety stocks of a three-stage supply chain in Section 3. Conclusions are drawn in the final section.
2 SIMULATION BASED OPTIMIZATION FRAMEWORK
Observing the pros and cons of the different models, we propose a simulation based optimization framework to investigate and analyze supply chains. As shown in Fig. 1, three separate modules are identified: a deterministic optimization module, a simulation module and a stochastic optimization module. The basic philosophy behind this framework is decomposition and cooperation: different modules concentrate on their specific domains, and they combine to solve complex supply chain problems under uncertainties which are insolvable through a monolithic approach. The deterministic optimization module (DOM) ignores all the random elements in the supply chain and solves deterministic mathematical programming problems as in the traditional OR field. The module can be as simple as an LP model or as complex as a hierarchical planning and scheduling system. It fulfills the task of providing efficient resource allocation across different products and across time, which simulation models are not good at. Generally, different entities in the simulation have their own domain-specific DOM; the simulation provides system states, i.e. parameters, to the DOM, and the solutions of the DOM (e.g. a sequence of activities) are directly fed into the simulation module. The simulation module handles uncertainties while respecting the guidelines provided by the DOM. It acts as an execution system for planning and scheduling, responding to uncertainties online through local dispatch rules or by invoking the DOM, and resolving conflicts through information exchange and message passing. The focus of the coupling between simulation and deterministic optimization is not schedule evaluation [10] or reactive scheduling [11]; rather, it is to provide valuable information, such as performance measures, to the stochastic optimization module by emulating the system dynamics through many timelines. The stochastic optimization module (SOM) utilizes the information from the simulation to search the decision space systematically, trying to improve the performance of the whole supply chain. Compared to the DOM, this module generally makes higher-level decisions; the input for the SOM comes directly from the simulation as well as indirectly from the DOM.
Fig. 1. Simulation based optimization framework
The output of the SOM determines the overall structure (e.g. the number of warehouses) or long-term policies (e.g. the safety stock policy) of the simulation. The simulation based optimization framework presented above is quite general and flexible, and can be tailored to different supply chain problems. The complexity of the framework is justified by its potential to reflect real-life activities with high fidelity, which may lead to significantly different results than simpler approaches, as shown by Subramanian et al. [12] in pipeline management. The next section gives a detailed example where safety stocks are optimized for a centrally owned supply chain.
3 CASE STUDY
Safety stock is an important factor affecting supply chain performance. High safety stocks may incur extra cost, and low safety stocks may cause late or missed demand, which impairs customer satisfaction. In this section, the simulation based optimization framework is adapted to optimize the safety stocks of a supply chain, which demonstrates the framework's ability to analyze complex supply chains.
3.1 Supply chain network
Fig. 2 shows the three-stage divergent supply chain owned by a single company. One product made at the production site is shipped to packaging site 1 and packaging site 2, where it is transformed into two final products, A and B, which are then shipped to warehouses to meet customer demands. The supplier to the production site is omitted by assuming that unlimited raw material is available to the production site. The complexity of this supply chain partly comes from the batch production processes at the production site and the packaging sites. Seven steps compose the production site process; ten equipment units are involved and some of the steps share the same equipment, so setup and changeover need to be considered; the cycle time is long and it takes months to go through all the steps. The packaging sites have three production steps with parallel equipment in the second step; products A and B compete for equipment
Fig. 2. Supply chain network for case study (production site, two packaging sites, seven warehouses)
and setup times are significant. Uncertainties also contribute to the complexity of this supply chain. Besides the non-stationary random demands at each warehouse, equipment breakdowns and processing time uncertainty greatly influence the short-term and long-term operations. It is quite clear that pure mathematical programming and pure simulation cannot satisfactorily model such a complex supply chain.
3.2 Modeling with the simulation based optimization framework
We have built a simulation based optimization model to optimize the safety stocks for the supply chain described above. The implementation under the proposed framework is summarized in Fig. 3. The three major modules function as follows. Deterministic optimization module: two schedulers make detailed schedules for the production site and the packaging sites by solving MILP models. Simulation module: a discrete event simulator accepts the results from the schedulers and evaluates the supply chain performance under uncertainties for given safety stocks. Stochastic optimization module: a real-coded genetic algorithm receives the simulation results and updates the safety stocks through genetic operators.
3.3 Sample results
Fig. 4 shows the safety stocks in a sample run over a one-year horizon. Three uncertainties are considered: processing time variation for each step at the production site, random breakdowns and repair times for the parallel equipment at the packaging sites, and random demands for products A and B at the warehouses. The results indicate that safety stocks are not held at the warehouses only, but along the whole chain. Fig. 6 compares the safety stocks for two network configurations, where network configuration 1 corresponds to Fig. 2 and network configuration 2 to Fig. 5. Network configuration 2 removes all the warehouses, and the packaging sites directly satisfy the demands, which are simply the sums of the demands of the warehouses. In the comparison, all other conditions are the same except the network structure. Note that for network configuration 1 in Fig. 6, the packaging site safety stocks include those at the downstream warehouses. It is no surprise that a shorter supply chain reduces safety stocks; as is well known, the reduction comes from risk pooling.
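A minimal sketch of the module coordination is given below, assuming a toy cost model: a simple random-search stand-in for the stochastic optimization module proposes safety-stock levels, and a stubbed simulation module returns an average cost over several random demand timelines. It is not the MILP schedulers, discrete event simulator or real-coded genetic algorithm used in the actual case study.

```python
import random

def simulate(safety_stocks, n_timelines=20):
    """Stub for the simulation module: average holding plus shortage cost over
    several random demand timelines (illustrative cost model only)."""
    holding, shortage = 1.0, 10.0          # hypothetical cost coefficients
    total = 0.0
    for _ in range(n_timelines):
        for stock in safety_stocks:
            demand = random.gauss(50, 15)  # hypothetical demand draw
            total += holding * stock + shortage * max(demand - stock, 0.0)
    return total / n_timelines

def stochastic_optimizer(n_sites=3, iterations=200):
    """Stub for the stochastic optimization module: naive random search over
    safety-stock vectors, keeping the cheapest candidate seen so far."""
    best, best_cost = None, float("inf")
    for _ in range(iterations):
        candidate = [random.uniform(0, 120) for _ in range(n_sites)]
        cost = simulate(candidate)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost

if __name__ == "__main__":
    stocks, cost = stochastic_optimizer()
    print("safety stocks:", [round(s, 1) for s in stocks], "cost:", round(cost, 1))
```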
Fig. 3. Implementation of simulation based optimization for case study (discrete event simulator, schedulers and genetic algorithm)
Fig. 4. Safety stocks for supply chain
Fig. 5. Modified supply chain network for case study
Fig. 6. Safety stocks for different network configurations
4 CONCLUSION
A simulation based optimization framework is presented in which three high-level modules cooperate with each other. The framework allocates resources using mathematical programming, handles uncertainties with simulation, and improves supply chain performance through stochastic optimization. A case study on a supply chain safety stock optimization problem demonstrates the framework's ability to investigate and analyze complex supply chains under uncertainties.
REFERENCES
1 Clark, A.J., Scarf, H., Optimal policies for a multi-echelon inventory problem, Management Science, 6, 475 (1960).
2 Hochstaedter, D., An approximation of the cost function for multi-echelon inventory models, Management Science, 16, 716 (1970).
3 Rosling, K., Optimal policies for assembly systems under random demands, Operations Research, 37, 565 (1989).
4 Ignall, E., Veinott Jr., A.F., Optimality of myopic inventory policies for several substitute products, Management Science, 15(5), 284 (1969).
5 Tayur, S., Ganeshan, R., Magazine, M., Quantitative models for supply chain management, Kluwer Academic Publishers (1998).
6 Thomas, D.J., Griffin, P.M., Coordinated supply chain management, European Journal of Operational Research, 94, 1 (1996).
7 Jayaraman, V., Pirkul, H., Planning and coordination of production and distribution facilities for multiple commodities, European Journal of Operational Research, 133, 194 (2001).
8 Holweg, M., Bicheno, J., Supply chain simulation - a tool for education, enhancement and endeavor, Int. J. Production Economics, 78, 163 (2002).
9 Schunk, D., Plott, B., Using simulation to analyze supply chains, Proceedings of the 2002 Winter Simulation Conference.
10 Honkomp, S.J., Mockus, L., Reklaitis, G.V., A framework for schedule evaluation with processing uncertainty, Computers Chem. Engng., 23, 595 (1999).
11 Sabuncuoglu, I., Bayiz, M., Analysis of reactive scheduling problems in a job shop environment, European Journal of Operational Research, 126, 567 (2000).
12 Subramanian, D., Pekny, J.F., Reklaitis, G.V., A simulation-optimization framework for addressing combinatorial and stochastic aspects of an R&D pipeline management problem, Computers and Chemical Engineering, 24, 1005-1011 (2000).
Advanced information systems for process control and supply chain integration
X. Z. Wang a, R. Garcia-Flores a, B. Hua b, M. L. Lu c
a Department of Chemical Engineering and Keyworth Institute of Manufacturing and Information Systems, The University of Leeds, Leeds LS2 9JT, UK
b The Key Lab of Enhanced Heat Transfer and Energy Conservation, Ministry of Education of China, South China University of Technology, Guangzhou 510640, China
c Technology Inc., 10 Canal Park, Cambridge, MA 02141, USA
Abstract In this work we present a framework for a Distributed chemical supply Chain Management (DCM) system for supporting the integration and management of chain components, in a way analogous to how distributed control systems (DCS) are used for the control and monitoring of unit operations in chemical plants. DCM is an Internet based platform, or Grid, that can be reconfigured to a specific chemical supply chain structure. It is also an open system, so that tools and databases can be integrated into it in a plug-and-play way, data can be retrieved freely whenever and wherever they are required, and people can work concurrently and collaboratively to make compromise business decisions. At one end of the system are DCS and SCADA systems. Two key technological issues are discussed for developing such a system: (1) a framework that allows communication, negotiation and synchronisation management over the Internet, and (2) the standardisation of information structure, neutralisation of information format, and consistency in information interpretation.
Keywords chemical supply chains, information modelling, information systems, concurrent engineering, software agents
1. INTRODUCTION
Over the last ten years, the chemical and process industries have experienced changes which are both profound and irreversible. In the petroleum and commodity chemicals sector, mergers and acquisitions on a global scale have occurred at a fast pace and profit margins have shrunk. In the specialty chemicals sector, the length of the product life-cycle has been decreasing continuously. The demand for higher quality products with a shorter time to market is escalating, while the potential cost of production is rising. These trends imply that, in order to improve competitiveness and secure business, there is a need not only to optimise the design and operation of the process plant, which has traditionally been the focus of interest of chemical engineers, but also to fully integrate and improve the efficiency of all other processes involved in the product and plant life cycles, including product discovery,
development and manufacture, as well as the whole supply network of raw material suppliers, warehouses, plants, retailers and customers. Chemical supply chains are defined as the discovery, design, manufacture and distribution of chemical products [1], and can be regarded as a broader definition of the concept of chemical process plants (Fig. 1). Consequently, Distributed supply Chain Management (DCM) systems are needed for the integration and management of chain components, in a way analogous to how distributed control systems (DCS) are used for the control of the unit operations of a chemical plant.
Fig. 1 Chemical plants vs. chemical supply chains
2. AN INTEGRATED FRAMEWORK FOR SUPPLY CHAIN INTEGRATION
2.1. The framework
Fig. 2 shows a conceptual framework for chemical supply chain management and integration. A DCM system is an Internet based platform, or Grid, that can be configured to a specific supply chain structure to support the co-ordination and management of all the activities involved in chemical supply chains, including product discovery, development and evaluation, business processes such as making commitments on delivery time and cost and the allocation of manufacturing resources, as well as the scheduling and control of processes. Data can be retrieved and shared whenever and wherever they are needed, and layered synchronisation management controls the time factors of data communication. Users and systems can work concurrently and collaboratively to make decisions. DCM is also an open system, so that various databases and tools can be integrated into the system in a plug-and-play manner. Such an Internet platform, aimed at the full integration of product discovery, business processes and manufacturing, can be seen as the next generation of distributed control systems (Table 1). A comparison of a DCM platform for supply chain integration and DCS for process control is given in Table 2.
Fig.2 The Internet based framework for chemical supply chain integration
Table 1 The milestones in control system evolution a
Year             Milestone
1934             Direct-connected pneumatic controls dominate market
1938             Transmitter-type pneumatic control systems emerge, making centralised control rooms possible
1958             First computer monitoring in electric utility
1959             First supervisory computer in refinery
1960             First solid-state electric controllers on market
1963             First direct digital control (DDC) system installed
1970             First programmable logic controllers (PLCs) on market
1970             Sales of electronic controllers surpass pneumatic
1975             First distributed control system (DCS) on market (Honeywell TDC 2000)
Next generation  Distributed chemical supply chain management systems (DCM)
a The milestones between 1938 and 1975 were summarised by Lukas [2].
2.2. Technological challenges
To develop the proposed DCM environment, the key technological challenges are:
Information exchange and sharing facilities. Using the DCM platform, chain components can exchange and share information electronically, seamlessly, transparently and dynamically over the Internet. Access to data should be allowed whenever and wherever the information is required, without the need to worry about the data structure and format. This also means that tools that are plugged into the DCM can exchange data with other tools. Information
sharing and exchange can be conducted at varying levels of detail and at different time intervals. Information modelling. To achieve the data sharing requirements, several technological issues need to be resolved. The shared data structures have to be the same; the data format has to be neutral so that all tools can access the data; and the interpretation of the information must be identical. We propose to use STEP (the STandard for the Exchange of Product model data) and XML (eXtensible Markup Language) as the data modelling and exchange languages.
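As an illustration of the neutral-format idea, the sketch below assembles a small XML message with Python's standard library; the element names and values are hypothetical and do not follow any particular STEP application protocol.

```python
import xml.etree.ElementTree as ET

# Hedged sketch: a neutral XML representation of a supply chain message.
# Tag names and content are hypothetical, not a STEP/ISO schema.
order = ET.Element("purchaseOrder", attrib={"id": "PO-001"})
ET.SubElement(order, "product").text = "product-A"
ET.SubElement(order, "quantity", attrib={"unit": "t"}).text = "250"
ET.SubElement(order, "deliveryDate").text = "2003-10-01"

print(ET.tostring(order, encoding="unicode"))
```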
Table 2 DCM for supply chain integration and DCS for chemical process control
Functions
- DCM for chemical supply chain management: support supply chain management through integration of distributed chain components, information sharing and tools integration; improve and optimise chain efficiency; improve the competitiveness of supply chains; minimise the operating cost of the chain; minimise the time from discovery to delivery; various tools are integrated into the system in a plug-and-play way; access to information wherever and whenever it is required.
- DCS for chemical plant control: support plant-wide control through coordination of distributed local control units and data sharing; improve and optimise plant control; improved product quality, safety and environmental protection; minimise the operating cost of the plant; various advanced control algorithms can be embedded into the system and configured for various applications; provide more information for monitoring purposes.
Information sharing in the system
- DCM: information exchange and sharing via Intranet and Internet; information can be in various forms including data, knowledge, tables, graphics, transactions and negotiations; some information needs to be shared in real time while some allows time delays.
- DCS: data sharing between local control units via the data highway; data are mainly in numerical format; data often need to be shared in real time (apart from historical records).
Enabling techniques
- DCM: Intranet and Internet; information modelling languages such as STEP and information exchange and sharing techniques such as XML; firewall techniques; Grid and agent techniques.
- DCS: data highways; multi-processors; high-speed processors; computer memory; programming languages; advanced control algorithms.
Relationship
- DCM: DCM takes DCS as a constituent component that links to the plant.
- DCS: DCS is a necessary component of DCM.
Configuration
- DCM: the DCM can be configured for a specific supply chain structure.
- DCS: the DCS system can be configured for a specific chemical process.
An open and distributed environment over the Internet. The environment connects all players in the supply chain: databases, documents, machines, sensors and people, and it facilitates interoperability. New tools for the storage, management, manipulation and operation of data should be allowed to be integrated into the system easily. If new tools support the standard STEP and XML data models for input and output, then they can be integrated into DCM in a plug-and-play way. If the new tools do not support STEP and XML, then the DCM will provide a wrapper that provides the front-end so that the tools can exchange data with other tools. Interfaces to DCS and other real-time control systems. DCS is considered an integral part of DCM and therefore should be allowed to be integrated into DCM. In other words, at one end DCM is connected to on-line sensors and controllers.
Support for concurrent engineering activities of chain components. Concurrent engineering attempts to provide a comprehensive and integrated approach by taking full advantage of group work, based on free and complete exchange of, and access to, the information required by everyone involved in the project team as it is required [5,6]. Concurrent process engineering also addresses the need for all facets of activities, from market analysis, product discovery and conceptual plant design through detailed design and control to the operation and disposal of products, to be considered throughout the design life-cycle. Configuration and reconfiguration capability. DCM, like DCS for process control, can be configured for a specific supply chain and can be re-configured as the chain structure changes. In developing the DCM platform for supply chain management, we regard the information flows and the information modelling as the two most critical technological challenges.
3. AN AGENT-BASED SYSTEM FOR SUPPORT OF INFORMATION FLOWS
As in DCS systems for process control, where the data sharing facility is the key element, in a DCM system the management, exchange, sharing, synchronisation and flow of information between all connected entities is also one of the most important issues. We have studied the use of agents to support the information flows [3,4], and use STEP (the STandard for the Exchange of Product model data) and XML (eXtensible Markup Language) as content-level tools for facilitating the modelling and exchange of data within the multi-agent framework.
3.1 The agent-based system
An agent is a self-contained problem solving entity that possesses certain properties, most importantly communication for the purpose of co-operation and negotiation, learning so that performance can be improved over time during use, and autonomy, implying that agents can act pro-actively on their environment to provide a service rather than waiting passively for commands. These properties clearly make agents different from traditional standalone software systems. Agents have their origin in distributed artificial intelligence and have proved to be a useful technique in designing distributed and co-operative systems in many industrial and business sectors, including telecommunications, air traffic control, traffic and transportation management, entertainment and medical care. We proposed to use agents to design systems that support information flows within a chemical supply chain system [3,4], and developed a prototype system using a Java based agent system builder, JATLite (Java Agent Template Lite). The prototype system for supporting information flows allows communication between agents using the standard Knowledge Query and Manipulation Language (KQML). KQML is both a message format and a message-handling protocol to support run-time knowledge sharing among agents. It can be used as a language for an application to share knowledge in support of co-operative problem solving. It uses performatives that define the permissible operations that agents may attempt on each other's knowledge bases.
3.2 Information modelling and exchange
At the level of message contents, we use STEP and XML for modelling and exchanging data. There are three critical technical issues that have to be addressed in order to allow the components of a supply chain to exchange information effectively in a DCM system. First, the data need to be exchanged electronically, preferably over the Internet for distributed collaborative systems. Second, the chain components should share the same data structure and have the same interpretation. Third, data should be exchanged in neutral formats.
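To illustrate how the performative-based agent messages described in Section 3.1 might be structured, the following sketch formats a KQML-style ask-one message as a plain string; the agent names, ontology and content expression are hypothetical and the snippet is independent of the JATLite API.

```python
# Hedged sketch of a KQML-style message between two hypothetical agents.
# Field values are illustrative; actual JATLite/KQML usage will differ in detail.
def kqml_message(performative, sender, receiver, content, ontology="supply-chain"):
    return (f"({performative} :sender {sender} :receiver {receiver} "
            f":language KIF :ontology {ontology} :content \"{content}\")")

msg = kqml_message("ask-one", "warehouse-agent", "plant-agent",
                   "(available-quantity product-A ?qty)")
print(msg)
```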
STEP satisfies these requirements, although it is not as powerful as XML in exchanging data over the Internet and it lacks the impetus that the latter has gained in the research community. Both EXPRESS and XML have been studied as the "inner languages" of KQML for the exchange of message contents [3,4]. It was concluded that STEP and XML are viable options for conveying messages inside the KQML shell of agents. We modelled the supply chain data at two levels, i.e. the productive level and the administrative level (Fig. 3).
Fig. 3 Data model structure
4. IMPLEMENTATION
The conceptual framework has been partially implemented in a prototype which was used in developing an agent-based demonstration system for distributed supply chain simulation and concurrent engineering decision support [4, 8] in a supply chain that involves multi-purpose manufacturing plants. The prototype is also being applied to developing a decision support system for a real supply chain in the Chinese medicine manufacturing industry in Southern China [7].
REFERENCES
[1] I.E. Grossmann and A.W. Westerberg, AIChE Journal, 46 (2000) 1700.
[2] M.P. Lukas, Distributed control systems, Van Nostrand Reinhold Company, New York (1986).
[3] R. Garcia-Flores, X.Z. Wang and G.E. Goltz, Comput. Chem. Eng., 24 (2000) 1135.
[4] R. Garcia-Flores and X.Z. Wang, OR Spectrum, 24 (2002) 343.
[5] C. McGreavy, X.Z. Wang, M.L. Lu and Y. Naka, Concurrent Engineering: Res. Appl., 3 (1995) 281.
[6] R. Batres, M.L. Lu and X.Z. Wang, PSE2003 (2003).
[7] Q. Li, B. Hua and X.Z. Wang, ESCAPE13 (2003).
[8] R. Garcia-Flores, A multi-agent system for chemical supply chain simulation and management support, Ph.D. thesis, The University of Leeds, UK.
Cluster Analysis and Visualisation Enhanced Genetic Algorithm
K. Wang 1, A. Salhi 2, E. S. Fraga 1*
1 Centre for Process Systems Engineering, Department of Chemical Engineering, University College London, London, U.K.
2 Department of Mathematics, University of Essex, Colchester, U.K.
Abstract Process optimisation is a difficult task due to the non-linear, non-convex and often discontinuous nature of the mathematical models used. Although significant advances in deterministic methods have been made, stochastic procedures, especially Genetic Algorithms (GA), provide an attractive technology for solving these optimisation problems. However, a GA is not naturally suited to highly constrained problems. To overcome this limitation, we propose an enhanced GA incorporating a new cluster analysis method based on data visualisation that provides knowledge about the feasible region, such as its size and location. At the same time, the genetic operator of mutation is redefined based on the gained feasible region knowledge to make the GA more suitable for constrained problems. A case study demonstrates that the combination of visualisation, cluster analysis and GA results in an effective process optimal design tool with high solution quality and consistency.
Keywords: Visualisation, Cluster Analysis, Non-linear Optimisation, Genetic Algorithm, Artificial Neural Network
1. INTRODUCTION
One step in process design is the optimisation of a given flowsheet structure subject to given criteria, which are typically non-linear and non-convex. The result is a highly constrained problem that can be difficult to solve. Although significant advances in deterministic methods have been made, stochastic procedures, especially Genetic Algorithms (GA) (Goldberg, 1989), provide an attractive technology for solving such optimisation problems (Wang et al., 2000), although a GA is not naturally suited to highly constrained problems. In previous work (Wang et al., 2002), the linear feasible knowledge obtained by the Scan Circle Algorithm (SCA) was used to identify regions of interest in the domain of the constraints. Case studies demonstrated that a GA is enhanced for highly constrained problems by such feasible knowledge. In this paper, a new cluster analysis method for multi-dimensional data is proposed, based on visualisation, for feasible region identification. The analysis is used to define the mutation and crossover
* Author to whom all correspondence should be addressed, [email protected]
operators to improve the effectiveness of a GA when tackling highly constrained problems. Some methods of cluster analysis (Gordon, 1981) work in the original n-dimensional (n-D) space to generate the cluster knowledge directly. However, for large-scale problems they place high demands on computing resources, and their results depend on the initial parameter values. Moreover, the traditional tree-like presentation of the cluster structure cannot directly present the cluster knowledge in a form that eases understanding of the problem and its use in subsequent optimisation. Other methods work in a reduced dimensional (e.g. 2-D) space to generate the cluster knowledge; for example, Kohonen's Self-Organising Map (SOM) method (Kohonen, 1995) is widely used. Such methods are readily explainable, simple and, perhaps most importantly, easy to visualise. However, even these approaches depend on several factors besides the topographical composition of the data set. Also, cluster information obtained in 2-D is difficult to map back to n-D. In this paper, a new cluster analysis is proposed, based on visualisation, to address the problems mentioned above. First, a Cluster Constrained Mapping (CCM) is proposed for dimension-reduction mapping from the original n-D space to 2-D, conserving the cluster information in the reduced dimensional space. Then the agglomerative algorithm of Ward (1963), which works in the 2-D space, is called upon for cluster analysis; its parameters are provided directly by visualisation. Finally, an Artificial Neural Network (ANN) is used to map the cluster analysis from 2-D back into n-D for use by a GA for optimisation. A case study on the optimisation of an oil stabilisation process demonstrates that the GA benefits from an understanding of the feasible region gained through the new cluster analysis method and finds better solutions more consistently. The combination of visualisation, data analysis and GA results in an effective process optimal design tool with high solution quality and consistency.
2. CLUSTER ANALYSIS OF MULTIDIMENSIONAL DATA THROUGH VISUALISATION
The new cluster analysis is based on visualisation and has four steps: (1) Given an n-D constrained optimisation problem, 10^n points are generated randomly and N feasible points are selected to represent the n-dimensional data set. (2) A Cluster Constrained Mapping (CCM), conserving the distance between each pair of data points, is proposed for dimension-reduction mapping from n-D to 2-D for visualisation. A conventional single-hidden-layer feed-forward Artificial Neural Network (ANN) with back propagation is used. (3) In the 2-D mapping, cluster analysis is performed, supported by visualisation. Each cluster found is represented by a circle with centre Z_{2-D} and radius r_{2-D}. (4) The clusters found in 2-D are mapped back into n-D with centre Z_{n-D} and radius r_{n-D}. The same ANN is used for this reverse mapping. Steps (2), (3) and (4) are described in more detail below.
2.1. Mapping CCM by ANN
The motivation for CCM is that the topology information can be automatically preserved during the dimension reduction process for each and every pair of points by aiming to keep the ratio of the distance between the pair before and after the transformation the same. The formula for the objective is given in the following equation:
E = \frac{1}{N(N-1)} \sum_{p=1}^{N} \sum_{p'=1,\,p' \neq p}^{N} \left( \frac{\lVert O_p - O_{p'} \rVert}{2} - \frac{\lVert X_p - X_{p'} \rVert}{n} \right)^2    (1)
where ‖·‖ denotes the Euclidean distance between two pattern points, O is the output vector (point) in 2-D, X is the given vector (point) in n-D, and p and p' are the indices of points. The feed-forward ANN has three layers: input, hidden and output. In its back propagation process, the expression for updating the weights is obtained by taking the derivatives of the error E with respect to the weights.
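A small sketch of this objective is given below, assuming the dimension-normalised form of Eq. (1) reconstructed above; the normalisation constants and the random test data are assumptions made for illustration only.

```python
import numpy as np

def ccm_stress(O, X):
    """Hedged sketch of the CCM objective: penalise the difference between
    dimension-normalised pairwise distances in 2-D (O, shape N x 2) and in the
    original n-D space (X, shape N x n). The exact normalisation is an assumption."""
    N, n = X.shape
    total = 0.0
    for p in range(N):
        for q in range(N):
            if p == q:
                continue
            d2 = np.linalg.norm(O[p] - O[q]) / 2.0
            dn = np.linalg.norm(X[p] - X[q]) / n
            total += (d2 - dn) ** 2
    return total / (N * (N - 1))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.random((30, 5))          # hypothetical feasible points in 5-D
    O = rng.random((30, 2))          # hypothetical 2-D images (e.g. ANN outputs)
    print(ccm_stress(O, X))
```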
2.2. Knowledge Discovery by Cluster Analysis Supported by Visualisation
The Sum of Squares method (Gordon, 1980) is used for the classification of objects which can be represented as points in Euclidean space. The aim is to partition the set of N feasible points into G groups so as to minimise the total within-group sum of squares about the G centroids. In cluster g, let N_g denote the number of points in the set O_g and O_{gpk} (g = 1, ..., G; p = 1, ..., N_g; k = 1, 2) the kth co-ordinate of the pth point O_{gp}; its centroid Z_g has co-ordinates

z_{gk} = \frac{1}{N_g} \sum_{p=1}^{N_g} O_{gpk}, \quad k = 1, 2.    (2)
If the within-group sum of squares of the gth group is

S_g = \sum_{p=1}^{N_g} \sum_{k=1}^{2} (O_{gpk} - z_{gk})^2,    (3)

then the aim is to find a partition which minimises

S = \sum_{g=1}^{G} S_g.    (4)
A cluster g is defined by the set of points O_{gp} which satisfy

\sum_{k=1}^{2} (O_k - z_{gk})^2 \le r_{2\text{-}D,g}^2,    (5)

where the radius of the circle is

r_{2\text{-}D,g} = \max_p \lVert O_{gp} - Z_g \rVert, \quad p = 1, 2, \ldots, N_g.    (6)
Because such cluster knowledge is represented by a circle in 2-D, it is called circle-cluster-knowledge. Note that such an analysis can be done directly by an agglomerative algorithm (Wishart, 1969). However, Fortier and Solomon (1966) presented a recursive relation for P(N,G), the number of different non-trivial partitions of N objects into G groups:
P(N, G) = \left[ G^N - \sum_{i=1}^{G-1} \frac{G!}{(G-i)!}\, P(N, i) \right] / G!    (7)
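The growth of P(N,G) can be checked with the short memoised sketch below, which implements the recursion of Eq. (7) as reconstructed above and reproduces the order of magnitude quoted in the following sentence.

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def partitions(N, G):
    """Number of non-trivial partitions of N objects into G groups, per Eq. (7)."""
    if G == 1:
        return 1
    total = G ** N
    for i in range(1, G):
        total -= factorial(G) // factorial(G - i) * partitions(N, i)
    return total // factorial(G)

print(partitions(19, 8))   # about 1.7e12, matching the growth noted in the text
```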
It is clear that the number increases very rapidly with N and G; for example, P(19,8) > 1.7 x 10^12. Koontz et al. (1975) described dynamic programming and branch-and-bound algorithms which reduce the number of clusters that have to be examined in
order to locate an optimal one, but even with such savings a global search makes too heavy a demand on computing facilities. In this work, the algorithm is improved by visualisation based on 2-D: (1) the user defines, interactively, the optimal number of clusters by inspection; (2) the user also adjusts the radius of each cluster directly. Thus, the two parameters are guided by interaction with the user through visualisation. The cluster knowledge identified in the reduced 2-D space is then converted back into the original higher-dimensional space, as shown below:

\sum_{k=1}^{2} (O_k - z_{2\text{-}D,gk})^2 \le r_{2\text{-}D,g}^2, \quad g = 1, 2, \ldots, G

\sum_{l=1}^{n} (x_l - z_{n\text{-}D,gl})^2 \le r_{n\text{-}D,g}^2, \quad g = 1, 2, \ldots, G    (8)

r_{n\text{-}D,g} = \frac{1}{N_g(N_g - 1)} \sum_{p=1}^{N_g} \sum_{p'=1,\,p' \neq p}^{N_g} \frac{\lVert X_p - X_{p'} \rVert}{n}
where x_l (l = 1, ..., n) denotes the lth co-ordinate of the vector X. To generate Z_{n-D} from Z_{2-D}, the same ANN is used; however, the input and the output of the net are the opposite of those in the former net. The objective function for training is

E = \sum_{p=1}^{N} \sum_{i=1}^{n} (x'_{pi} - x_{pi})^2    (9)

where x'_{i} (i = 1, ..., n) denotes the ith co-ordinate of X as estimated by the ANN.
3. GA OPTIMISATION WITH THE CIRCLE-CLUSTER-KNOWLEDGE
Applying a GA to a new problem requires the definition of the crossover and mutation genetic operators. In the new mutation operator, an individual is selected. It is mapped to 2-D to identify the cluster g to which it belongs out of all the clusters identified by the cluster analysis procedure. Cluster g is the cluster which includes the selected individual, or the nearest one if the selected individual is not within the feasible regions. A Gaussian mutation (Goldberg, 1989) is then carried out using the knowledge of the feasible regions for the randomly selected solution, by emulating a coin toss for each variable in the genome. For example, in cluster g, the variable x_i of the selected individual is mutated within the dynamic feasible region

\left[\; z_{n\text{-}D,g,i} - \sqrt{ r_{n\text{-}D,g}^2 - \sum_{j=1,\,j \neq i}^{n} (x_j - z_{n\text{-}D,g,j})^2 } \;,\;\; z_{n\text{-}D,g,i} + \sqrt{ r_{n\text{-}D,g}^2 - \sum_{j=1,\,j \neq i}^{n} (x_j - z_{n\text{-}D,g,j})^2 } \;\right]
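The sketch below illustrates one way this cluster-constrained Gaussian mutation could be realised, assuming the interval given above; the mutation step size, the clipping rule and the fallback for points outside the cluster slice are assumptions, not the authors' exact operator.

```python
import math
import random

def mutate_within_cluster(x, centre, radius, i, sigma=0.1):
    """Hedged sketch of the cluster-constrained Gaussian mutation: variable x[i]
    is resampled inside the slice of the n-D sphere (centre, radius) obtained by
    fixing the other coordinates; sigma and the clipping rule are assumptions."""
    slack = radius ** 2 - sum((x[j] - centre[j]) ** 2 for j in range(len(x)) if j != i)
    if slack <= 0.0:                      # point lies outside the cluster slice
        return centre[i]                  # simple fallback: pull back to the centre
    half_width = math.sqrt(slack)
    lower, upper = centre[i] - half_width, centre[i] + half_width
    trial = random.gauss(x[i], sigma)     # Gaussian mutation around the current value
    return min(max(trial, lower), upper)  # keep the mutant inside the feasible slice

x = [0.4, 0.6, 0.5, 0.3, 0.7]             # hypothetical individual (n = 5)
z = [0.5, 0.5, 0.5, 0.5, 0.5]             # hypothetical cluster centre in n-D
print(mutate_within_cluster(x, z, radius=0.6, i=2))
```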
The new crossover is applied to the selected parents as usual (Goldberg, 1989). If the resulting children do not belong to the identified clusters, the nearest clusters are selected and mutation is applied to the children as above. For the results presented below, the mutation rate is 0.1 and the crossover rate is 0.9. An overlapping population is used: newly generated offspring are added to the population and subsequently the worst individuals are destroyed. New offspring may or may not make it into the population, depending on whether they are better than the worst in the
population. The termination function monitors the best individuals in each generation. If the best solution's value remains constant for a pre-specified number of generations (10 in the results presented below), the GA is terminated.
4. CASE STUDY AND CONCLUSIONS
An Oil Stabilisation Process (OSP) (McCarthy et al., 1998) is used to demonstrate the effectiveness of the new clustering procedure. Fig. 1 shows the flowsheet structure, in which 1, 2, 3 and 4 represent flash vessels, M a mixer, and V a pressure valve. The feed consists of 12 hydrocarbons. There are n = 5 continuous optimisation variables: the flash temperature for the feed to each flash, x1, x2, x3, x4, with normalised initial search region [0,1], and the target pressure for the valve, x'5 in [1,30] atm, represented as x5 below in a normalised context (i.e. x5 in [0,1]). The optimisation problem is to maximise annual profit. The main constraints are a vapour pressure specification on the oil product and a limit on the amount of heavy components in the gas product.
Fig. 1. OSP flowsheet
The objective function value of infeasible points is given an infinite cost, 10^20; the value of feasible points is typically negative, indicating a profit. In evaluating the objective function, infeasible points arise primarily for three reasons: (1) the process is not valid: the temperatures for the flash vessels may be outside the ranges allowed by fundamental physical properties, or the utilities available are not sufficient for the request; in this case the flowsheet is given an arbitrary annualised cost of 10^20 (i.e. infinity); (2) the process is valid as far as the individual processing units are concerned, but the gas product stream does not meet the imposed specification; in this case the process is also given an infinite cost of 10^20; (3) both the processing units and the gas product stream are fine, but the oil product stream does not achieve the desired vapour pressure; in this case the process can be costed fully, but the product stream is given no value, so the process cost will be positive. For a feasible process (i.e. all of the constraints are satisfied), the cost will typically be negative, as said earlier, assuming enough oil product is generated. Jacaranda (Fraga et al., 2000) is used both for the generation of the 10^n initial data points within the initial search region, of which 1% are typically feasible, and for the evaluation of the objective function in the GA. The initial feasible points are used to obtain the circle knowledge about the feasible regions.

Table 1 Solutions obtained for the OSP problem
                                    (A)     (B)     (C)
Best solution (10^6)                6.50    -360    -363
Average solution (10^6)             6.53    -354    -361
Standard deviation (10^6)           0.03    3       2
Worst solution (10^6)               6.60    -349    -357
Generation in which best was found  10      22      26

To evaluate the effectiveness of using the knowledge gained about the search space, we now consider applying the targeted GA, as described in Section 3. Three cases are considered: (A) a GA with no feasible region knowledge; (B) a
GA with linear clustering knowledge about the feasible regions (Wang et al., 2002); and (C) a GA using the circle knowledge about the feasible regions. The population size is set to 100. Ten runs are performed for each case, and the distribution of results, as well as the generation in which the converged population was first obtained, are shown in Table 1. Table 1 shows that a greater number of generations is required for convergence in Cases (B) and (C). Moreover, considering the additional computational effort needed for knowledge discovery, it is concluded that (B) and (C) require more computation. However, in case (A) the GA is not able to find a feasible solution (no profit possible). In cases (B) and (C), feasible solutions are found and the behaviour is consistent in terms of the values obtained. Even if the GA parameters are adjusted (e.g. we increase the population size for (A) and decrease it for (B) and (C)), the relative behaviour remains consistent with that presented in Table 1. Cases (B) and (C) consistently find feasible solutions. We consider solution quality and consistency to be more important than efficiency; in this context, the combination of visualisation, knowledge (linear and circle) discovery and a GA results in a more effective process optimal design tool. Comparing (B) and (C), the results of the linear and circle knowledge based approaches are similar. However, the visualisation of the linear knowledge (Wang et al., 2002) is based on the Parallel Coordinate System (PCS), whereas the new method uses the 2-D Cartesian Coordinate System (CCS). The latter is easier for the user to understand intuitively and enables direct interaction with the user. The procedure actively involves the user in determining the number of clusters to define and the circle radius for each cluster, thus reducing the computational effort. Furthermore, the direct involvement of the user helps the user gain an understanding of the optimisation problem, especially the feasible space. The result is a method with readily explainable results. The combination of automation with direct visual interaction can lead to improved results, as recently proposed by Wegner (1997), where it was shown that the combination of interaction with algorithms can solve a larger class of problems than algorithms alone.
Acknowledgements Funding provided by the EPSRC and valuable input from BP Amoco are gratefully acknowledged.
REFERENCES
Fraga E.S., M.A. Steffens, I.D.L. Bogle and A.K. Hind, Foundations of Computer-Aided Process Design, AIChE Symposium Series, 96: 446-449, 2000.
Fortier J.J. and H. Solomon, Multivariate Analysis, Academic Press, New York, 493-506, 1966.
Goldberg D.E., Genetic Algorithms in Search, Optimisation and Machine Learning, Addison Wesley, Reading, 1989.
Gordon A.D., Classification, Chapman and Hall, New York, 1981.
Kohonen T., Self-Organising Maps, Springer, Berlin, Heidelberg, 1995.
Koontz W.L.G., P.M. Narendra and K. Fukunaga, IEEE Trans. Comput., C-24: 908-915, 1975.
McCarthy E.C., E.S. Fraga and J.W. Ponton, Computers Chem. Engng., 22(Suppl.): 877-884, 1998.
Wang K., A. Salhi and E.S. Fraga, European Symposium on Computer Aided Process Engineering-12 (ESCAPE12), 1003-1008, The Hague, May 2002.
Wang K., T. Lohl, M. Stobbe and S. Engell, Computers and Chem. Engng., 24: 393-400, 2000.
Ward J.H., J. Am. Statist. Assoc., 58: 236-244, 1963.
Wegner P., Comm. ACM, 40(5): 80-91, 1997.
Wishart D., Biometrics, 25: 165-170, 1969.
Solving Batch Production Scheduling Using a Genetic Algorithm**
Lian-Ying Wu a, Yang-Dong Hu*, Dong-Mei Xu, Beng Hua b
a College of Chemistry and Chemical Engineering, Ocean University of China, Qingdao 266003, P. R. China.
b Institute of Chemical Engineering, South China University of Technology, Guangzhou 510640, P. R. China.
Abstract The optimal scheduling of multi-product batch processes is studied. The relationship between production scale and production cost is analyzed. A new mathematical model is proposed which takes the maximum profit as the objective function and which can be solved by the modified genetic algorithm (GA) with mixed coding (sequence coding and decimal coding) developed by us. PMX crossover and reverse mutation are used for the sequence coding, while arithmetic crossover and heteropic mutation are used for the decimal coding. An example is solved to demonstrate the effectiveness of the method.
Keywords production scheduling; batch process; combinatorial optimization; genetic algorithm
1. INTRODUCTION
Batch processes are widely used in the chemical process industry and are of increasing industrial importance due to a great emphasis on low-volume, high-value-added chemicals and the need for flexibility in a market-driven environment. Production scheduling aims at allocating the available manufacturing resources to the required manufacturing tasks and identifying the sequence and timing parameter values needed to accomplish these tasks. Effective production scheduling in industry has the potential to achieve high economic returns. Scheduling has been attracting increasing attention in recent years because competition among products has become more serious. Up to now, two main approaches to production scheduling have been addressed: non-stochastic algorithms and stochastic algorithms. Some scholars (X. Lin and C.A. Floudas [1]; Minseok Kim et al. [2]; Hong-ming Ku and Iftekhar Karimi [3]; I.B. Tjoa et al. [4]; M.G. Ierapetritou and C.A. Floudas [5]) addressed production scheduling with MINLP or NLP approaches. In recent years, stochastic search algorithms have attracted increasing
* Corresponding author: Yang-Dong Hu. Tel: +86-532-2032141. E-mail: [email protected]
** Supported by the National Fundamental Research Development Program of China (No. 2000026308)
attention for their simplicity and suitability for large-scale problems. Kefeng Wang et al. [6] and Jae Hak Jung [7] studied the scheduling of a multi-product batch plant using genetic algorithms. Y. Murakami et al. [8] and Jun-Hyung Ryu et al. [9] analyzed the production scheduling of batch chemical processes using simulated annealing. Two aspects, modeling and developing an effective algorithm, are equally important for production scheduling. In this paper, a novel mathematical formulation is presented for the scheduling of batch plants, and a modified genetic algorithm featuring mixed coding (sequence coding and decimal coding) is developed, which can be used to solve the above problem effectively.
2. PRODUCTION SCHEDULING MODEL
Although a number of diverse mathematical formulations have been presented for solving the scheduling problem, most of them target the shortest total operation time for producing all products, i.e. the so-called makespan. However, the shortest makespan cannot always ensure the maximum economic benefit for the enterprise. To overcome this deficiency, we formulate the scheduling model with the maximum profit as the objective:

F = \max\left( \beta / C_{NM} - TF \right)    (1)

For N products produced on M units, the completion times are calculated as follows.
For j = 1:

C_{i1} = C_{(i-1)1} + \Delta t_{(i-1)i} + a_{i0} + s_{(i-1)i1} + t_{i1} + a_{i1}, \quad i = 1, 2, \ldots, N    (2)

C_{i1} \ge C_{(i-1)1} + a_{i0} + t_{i1} + \Delta t_{(i-1)i} + a_{i1} + s_{(i-1)i1}, \quad i = 1, 2, \ldots, N    (3)

For j = 2, 3, \ldots, M:

C_{ij} = \max\left[ C_{i(j-1)},\; C_{(i-1)j} + s_{(i-1)ij} + a_{i(j-1)} \right] + t_{ij} + a_{ij}, \quad i = 1, \ldots, N;\; j = 2, \ldots, M    (4)

C_{ij} \ge C_{i(j-1)} + t_{ij} + a_{ij}, \quad i = 1, \ldots, N;\; j = 2, \ldots, M    (5)

C_{ij} \ge C_{(i-1)j} + s_{(i-1)ij} + t_{ij} + a_{ij}, \quad i = 1, \ldots, N;\; j = 2, \ldots, M    (6)

C_{NM} = \max\left[ C_{N(M-1)},\; C_{(N-1)M} + s_{(N-1)NM} + a_{N(M-1)} \right] + t_{NM} + a_{NM}    (7)

TF = \sum_{i=1}^{N} \sum_{j=1}^{M} \alpha_{i(j-1)j}\, \delta_{i(j-1)j} + K    (8)

\delta_{i(j-1)j} = \begin{cases} 0, & C_{ij} < C_{i(j-1)} \\ C_{ij} - C_{i(j-1)}, & C_{ij} \ge C_{i(j-1)} \end{cases}    (9)
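A simplified sketch of the completion-time recursion is given below; it folds Eqs. (2)-(7) into a single max-based update, and the handling of the first product and the first unit, as well as all the small data values, are illustrative assumptions rather than the exact model.

```python
# Hedged sketch of the completion-time recursion of Eqs. (2)-(7) for a fixed
# production sequence; setup, transfer and interval times are folded in as in
# Eq. (4), and all data below are small hypothetical examples, not Table 1.

def completion_times(t, a, s, dt):
    """t[i][j]: processing time, a[i][j]: transfer time out of unit j,
    s[i][j]: setup before product i on unit j, dt[i]: interval on unit 1."""
    N, M = len(t), len(t[0])
    C = [[0.0] * M for _ in range(N)]
    for i in range(N):
        for j in range(M):
            prev_unit = C[i][j - 1] if j > 0 else (C[i - 1][0] + dt[i] if i > 0 else 0.0)
            prev_prod = C[i - 1][j] + s[i][j] if i > 0 else 0.0
            C[i][j] = max(prev_unit, prev_prod) + t[i][j] + a[i][j]
    return C

t = [[3, 2, 4], [2, 5, 1]]      # hypothetical processing times (2 products, 3 units)
a = [[1, 1, 0], [1, 1, 0]]      # hypothetical transfer times
s = [[0, 0, 0], [1, 1, 1]]      # hypothetical setup times
dt = [0, 2]                     # hypothetical first-unit interval times
print(completion_times(t, a, s, dt)[-1][-1])   # total completion time C_NM
```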
3. MODIFIED GENETIC ALGORITHM (MGA)
A genetic algorithm is a stochastic search algorithm based on the mechanisms of natural selection and heredity. It can explore different regions of the search space simultaneously and hence is less prone to terminating in local minima. A difficulty in solving the above combinatorial optimization problem using a genetic algorithm lies in the coding of the variables. For the above scheduling problem, there are two kinds of variables in the model, which are inconvenient to code simultaneously using binary coding. Accordingly, a modified genetic algorithm featuring mixed coding, which uses two different crossover and mutation operators for the two kinds of variables, is presented as follows.
3.1. Coding
The chromosome based on mixed coding can be described as {(p1, p2, ..., pN), (Δt1, Δt2, ..., ΔtN)}, where the sequence coding (p1, p2, ..., pN), the first part of the chromosome, represents the production sequence, and the decimal coding (Δt1, Δt2, ..., ΔtN), the second part of the chromosome, represents the time interval in the first unit from the end of one product's processing to the beginning of the next one, which constrains the subsequent processing and affects the overall process.
3.2. Crossover
Since the coding of the above scheduling problem is very similar to that of the multi-machine layout problem studied by Cheng R. and M. Gen [10], which adopts PMX crossover and arithmetic crossover, we also adopt the same crossover methods, described as follows.
A. PMX crossover
1) Randomly select two individuals and determine the crossover points:
parent 1: 1 2 | 3 4 5 6 | 7 8 9
parent 2: 5 4 | 6 9 2 1 | 7 8 3
2) Cross the sub-blocks between the crossover points:
original offspring 1: 1 2 | 6 9 2 1 | 7 8 9
original offspring 2: 5 4 | 3 4 5 6 | 7 8 3
3) Determine the mapping relationship:
1 <-> 6 <-> 3,  2 <-> 5,  9 <-> 4
4) Legalize the offspring using the mapping relationship:
offspring 1: 3 5 | 6 9 2 1 | 7 8 4
offspring 2: 2 9 | 3 4 5 6 | 7 8 1
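The worked example above can be reproduced with the following sketch of PMX crossover; the cut points are passed in explicitly here and the repair loop implements the mapping-chain legalisation step, but the helper names are illustrative.

```python
import random

def pmx(parent1, parent2, cut1=None, cut2=None):
    """Hedged sketch of partially matched crossover (PMX) on permutation codings;
    the repair step follows the mapping illustrated in the worked example above."""
    n = len(parent1)
    if cut1 is None or cut2 is None:
        cut1, cut2 = sorted(random.sample(range(n + 1), 2))

    def make_child(receiver, donor):
        child = list(receiver)
        child[cut1:cut2] = donor[cut1:cut2]                  # copy the donor sub-block
        mapping = {donor[k]: receiver[k] for k in range(cut1, cut2)}
        for k in list(range(0, cut1)) + list(range(cut2, n)):
            gene = receiver[k]
            while gene in child[cut1:cut2]:                  # legalise via the mapping chain
                gene = mapping[gene]
            child[k] = gene
        return child

    return make_child(parent1, parent2), make_child(parent2, parent1)

p1 = [1, 2, 3, 4, 5, 6, 7, 8, 9]
p2 = [5, 4, 6, 9, 2, 1, 7, 8, 3]
print(pmx(p1, p2, cut1=2, cut2=6))   # reproduces the offspring of the worked example
```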
B. Arithmetic crossover
For two arbitrary individuals V1 and V2 (the two parents), the offspring are found as follows:

offspring 1: V_1' = \lambda V_1 + (1 - \lambda) V_2    (10)

offspring 2: V_2' = \lambda V_2 + (1 - \lambda) V_1    (11)
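A corresponding sketch of the arithmetic crossover of Eqs. (10) and (11) on the interval-time genes is given below; the interval-time vectors are hypothetical examples, and lambda is drawn uniformly in [0, 1] when not supplied.

```python
import random

def arithmetic_crossover(v1, v2, lam=None):
    """Hedged sketch of Eqs. (10)-(11) on the decimal (interval-time) part of the
    chromosome; lam is drawn uniformly in [0, 1] if not supplied."""
    if lam is None:
        lam = random.random()
    child1 = [lam * a + (1.0 - lam) * b for a, b in zip(v1, v2)]
    child2 = [lam * b + (1.0 - lam) * a for a, b in zip(v1, v2)]
    return child1, child2

dt_a = [0.0, 4.2, 1.0, 0.0, 2.0]   # hypothetical interval-time genes
dt_b = [0.0, 1.6, 1.3, 3.0, 4.0]
print(arithmetic_crossover(dt_a, dt_b, lam=0.4))
```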
The arithmetic crossover maintains the diversity of the offspring and avoids premature convergence to a local optimum. The PMX crossover and the arithmetic crossover are applied to the different parts of the chromosome simultaneously in the evolution process. It is found that there is only weak coupling between the sequence coding and the decimal coding; consequently, good convergence can be ensured using the above crossovers, which eliminate infeasible offspring in the process of evolution.
3.3. Mutation
A similar situation also appears in the mutation process. Here the reverse mutation and the heteropic mutation [11] are used for the sequence coding and the decimal coding respectively. This mutation not only ensures the legality of the offspring but also greatly improves the efficiency of mutation.
Table 1 Processing times of the products and storage costs per unit time, shown as t_ij(α_ij)
Unit \ Product   1        2        3        4        5        6        7        8        9
1                23(0.2)  21(0.2)  15(0.3)  18(0.5)  12(0.2)  21(0.4)  12(0.1)  18(0.6)  20(0.3)
2                19(0.3)  13(0.6)  9(0.9)   8(0.1)   7(0.4)   6(0.3)   5(0.2)   5(0.1)   9(0.2)
3                10(0.6)  21(0.3)  12(0.1)  8(0.3)   14(0.2)  8(0.4)   17(0.2)  9(0.4)   16(0.3)
4                16(0.4)  14(0.5)  21(0.2)  12(0.1)  19(0.3)  10(0.4)  12(0.2)  10(0.1)  19(0.5)
5                18(0.1)  18(0.2)  10(0.4)  9(0.2)   7(0.5)   16(0.1)  6(0.3)   5(0.2)   8(0.6)
6                12(0.2)  21(0.2)  17(0.7)  23(0.3)  21(0.5)  18(0.4)  15(0.5)  11(0.5)  23(0.2)
7                24(0.4)  12(0.3)  24(0.5)  20(0.1)  16(0.2)  18(0.1)  6(0.7)   19(0.6)  12(0.3)
8                9(0.2)   7(0.1)   17(0.2)  20(0.3)  20(0.4)  6(0.4)   18(0.4)  21(0.3)  23(0.1)
After mutation, the selection operator selects only the best individuals from among the parents and the offspring, so that the best individuals survive in each generation and the optimal solution is approached step by step. According to the schema theorem, the schema order is lowered and the schema length is diminished by the above method of crossover and mutation; thus the schemata are not easily broken in the process of evolution, and stability and convergence can be ensured. It should be noted that a two-layer iteration to solve the problem, in which the outer iteration handles the product order and the inner iteration handles the interval times, is difficult to converge and gives irrational results, as was observed in the solution process. Therefore, simultaneous optimization of the two kinds of variables is adopted, and the quality of convergence is improved noticeably.
4. EXAMPLE
To investigate the performance of the approach described above, a large problem with nine products and eight units is studied. The processing time of each job is listed in Table 1, and the results are shown in Table 2. The results show that the optimal schedule changes with the product price and with the unit-time storage cost of the intermediate products. Apparently, different optimal sequences correspond to different time intervals. The shorter the total completion time is, the higher the total storage cost due to the prolonged holding of intermediate products, and vice versa. When the price of the product increases, the maximum profit can be achieved by raising productivity; in this case the production cycle is shortened and, although the total production cost rises due to the increase of the storage cost, which reduces the profit per product, the gross profit rises as a consequence of the high yield. On the other hand, when the price of the product decreases, the maximum profit can be obtained by diminishing the yield; in this case the production cycle is prolonged, i.e. the storage period is cut down, which reduces the storage cost, and accordingly the maximum profit is attained through cost cutting.

Table 2 Optimal results
Case   Optimal scheduling                                              C_NM    TF
1      {(3,5,2,6,4,9,1,8,7), (0, 4.2, 1, 0, 2, 3.2, 0, 6.6, 7.9)}     276.2   32.1
2      {(3,8,4,1,9,2,6,5,7), (0, 1.6, 1.3, 3, 4, 0, 1.4, 15.6, 6)}    264.9   40.5
3      {(3,5,2,6,4,1,9,8,7), (0, 4.4, 0, 0, 1, 2.7, 4.6, 6.5, 7.9)}   279     31.0
Notes: case 1: β = 10000; case 2: β = 20000; case 3: β = 10000, α_34 = 1.0

5. CONCLUSIONS
The production scheduling of batch processes with the maximum profit as the objective function has
been studied using the modified genetic algorithm. The mixed coding method has been put
forward, and a novel crossover and mutation strategy is adopted in the search process. It ensures that the search for the optimal solution stays in the feasible region, which results in steady convergence. It is found that the production sequence and the completion time are influenced by the production scale and the operating cost; the maximum profit is always a trade-off between the production scale and the operating cost.
NOTATION
          random number between 0 and 1
CNM       the total completion time
TF        the cost of production
β         average price of all products
Cij       completion time of the ith product in the sequence in batch unit j, when the product is finished transferring out of unit j and filling into unit j+1
tij       processing time of product i in batch unit j
aij       transfer time of product i out of batch unit j to batch unit j+1
S(i-1)ij  setup time required for product i-1 after product i in batch unit j
ai(j-1)j  storage cost per unit time of intermediate product i held between unit j-1 and unit j
θi(j-1)j  storage time of intermediate product i between unit j-1 and unit j
REFERENCES
[1] X. Lin and C.A. Floudas, Computers and Chemical Engineering, 25 (2001) 665-674.
[2] Minseok Kim et al., Ind. Eng. Chem. Res., 35 (1996) 4058-4066.
[3] Hong-Ming Ku and Iftekhar Karimi, Ind. Eng. Chem. Res., 29 (1990) 580-590.
[4] I.B. Tjoa et al., Computers Chem. Engng., 21 (1997) S1073-S1077.
[5] M.G. Ierapetritou and C.A. Floudas, Ind. Eng. Chem. Res., 37 (1998) 4341-4359.
[6] Kefen Wang et al., Computers and Chemical Engineering, 24 (2000) 393-400.
[7] Jae Hak Jung et al., Computers Chem. Engng., 22 (1998) 1725-1730.
[8] Y. Murakami et al., Computers Chem. Engng., 21 (1997) S1087-S1092.
[9] Jun-Hyung Ryu et al., Ind. Eng. Chem. Res., 30 (2001) 228-233.
[10] R. Cheng and M. Gen, Genetic algorithm for multi-row machine layout problem, Engineering Design and Automation, 1995.
[11] Run-Wei Cheng et al., Genetic Algorithm and Engineering Design, Science Publishing Company, 2000.
Synthesis of eco-industrial system considering environmental value using adaptive simulated annealing genetic algorithms
XUE Dongfeng*, LI Yourun, SHEN Jingzhu and HU Shanying
Center for Industrial Ecology, Department of Chemical Engineering, Tsinghua University, Beijing, 100084, P. R. China
Abstract: This paper elaborates the establishment of the model of eco-industrial system including selection of members, environmental value and construction of eco-industrial chains. The model is formulated into a large-scale mixed integer non-linear programming (MINLP) problem. The hybrid adaptive simulated annealing genetic algorithm (ASAGA) is proposed to solve the problem of synthesis of eco-industrial system. A case study shows that the model is applicable and the ASAGA algorithm is effective.
Keywords: eco-industrial system, model, environmental value, MINLP, adaptive simulated annealing genetic algorithm
1. Introduction
With the rapid growth of population and economy, problems such as environmental deterioration, natural resource depletion and related economic issues are worsening on a global scale, and they menace the survival of humankind and the development of society. As a result, the development of "cyclical" production systems that increasingly reuse and recycle all materials has attracted great attention. Similar to biological ecosystems, in which one organism's waste is a food resource for another, industrial systems can be developed in which the by-products or wastes of one entity are turned into useful inputs for another. This recognition has led to the concept of industrial ecology (IE) and to the development of eco-industrial systems. IE is a novel approach to achieving sustainable development. It aims to optimize the consumption of natural resources and energy and to minimize the generation of waste, and it addresses the means by which humans can maintain a desirable carrying capacity given continued economic, cultural and technological evolution. In the field of IE, the methodology of ecology is used to study industrial production: the economy is viewed as a closed system, similar to a natural ecosystem, in which the waste generated by one entity is, or can be, a nutrient for another. The key feature of an eco-industrial system is the construction of industrial chains, which express the relationships among the enterprises of an industrial ecosystem. Significant advances have been made since Frosch and Gallopoulos introduced the concept of IE (1992). Lowe et al. (1995) provided an overview of the field of industrial ecology. Connelly and Koshland (2001a, b) used a thermodynamic interpretation of ecosystem evolution to strengthen the analogy between biological and industrial ecosystems. Anderson adopted IE as a framework for energy analysis, using fish as a case study.
* Corresponding author. Fax: 0086-10-62770304. E-mail: [email protected]. Supported by the National Science Foundation of China (No. 29836410).
Koenig and Cantlon (1998) presented a defining mathematical theory of IE. Chen et al. (2002) developed a mixed integer non-linear programming (MINLP) model of an eco-industrial park. Many practical IE experiences have been reported (Potts Carr, 1998; Côté and Cohen-Rosenthal, 1998; Lambert and Boons, 2002). Most of these contributions, however, focus on the concept or on applications. This paper details a model of an eco-industrial system that considers environmental value, using the superstructure approach. The model is formulated as a large-scale MINLP problem. Traditional methods for solving MINLP problems often fail to achieve satisfactory results; because of the non-convex mathematical form and the complicated superstructure, an effective solution method is needed. Genetic algorithms (GA) and simulated annealing (SA) have many advantages in dealing with MINLP problems, but their intrinsic disadvantages limit their further application to combinatorial optimization: premature termination, long computing time and weak hill-climbing efficiency in GA, and the neglect of historic information and slow exploitation in SA. This paper studies the eco-industrial system considering environmental value with the superstructure approach, and a modified adaptive simulated annealing genetic algorithm (ASAGA) is proposed to solve the resulting MINLP problem. Furthermore, a case study illustrates the applicability of the model and the effectiveness of the algorithm.
2. Eco-industrial system
The essence of IE is to provide an optimal eco-industrial chain network of enterprises that utilizes resources and energy as completely as possible within the industrial system, instead of only treating the terminal waste streams. The synthesis problem is stated as follows: given a series of enterprises generating pollutants in the form of terminal gaseous and liquid wastes, identify a maximal-benefit strategy of pollutant reduction that recycles and reuses the wastes within the eco-industrial network, so as to utilize the resources and energy and to reduce the pollutant load to the largest extent. An eco-industrial system consists of various eco-industrial chains formed by several industrial enterprises to minimize pollutant emissions. The idea is to use the wastes generated by one resident company or service unit as an input or raw material for another. Forming the eco-industrial chains is therefore central to the study of eco-industrial systems. The construction of an eco-industrial system involves the selection of members within the system, the reuse or recycling of wastes, and the introduction of environmental value.
2.1 Selection of members
There are different members in an eco-industrial system, analogous to the producers, consumers and decomposers of a natural ecosystem. The selection of members is fundamental to the eco-industrial system. The core enterprises are usually determined by the locally dominant resources; the other members are chosen from several candidates through a proper evaluation, so that they form co-generative relations and exchange resources and wastes. The basic steps of member selection are as follows (a small sketch of steps (3) and (4) is given after this list):
(1) The relationship degree is obtained from the relation matrix of the candidates.
(2) The index system of selection is established, including the relationship degree of products, the consumption of resources and energy, and the economic profit.
(3) The judgment matrix method, based on expert scoring, is used to determine the weights of the various indexes.
(4) An evaluation system for the members of the eco-industrial system is set up with fuzzy mathematics to deal with uncertainty and randomness.
When the data of the candidates are input, the members of the eco-industrial system are chosen using the above fuzzy evaluation approach.
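The paper does not give formulas for steps (3) and (4). The sketch below assumes the usual column-normalisation of a pairwise judgment matrix for the weights and uses a plain weighted score in place of the fuzzy evaluation; the matrices and scores are invented numbers for illustration only.

```java
// Illustrative sketch of steps (3)-(4): weights are derived from an expert
// judgment (pairwise comparison) matrix, and each candidate member is then
// ranked by a weighted score of its normalised index values.
public class MemberSelectionSketch {

    // Approximate priority weights of a judgment matrix (column-normalised row means).
    static double[] weightsFromJudgmentMatrix(double[][] a) {
        int n = a.length;
        double[] w = new double[n];
        for (int j = 0; j < n; j++) {
            double colSum = 0;
            for (int i = 0; i < n; i++) colSum += a[i][j];
            for (int i = 0; i < n; i++) w[i] += a[i][j] / colSum / n;
        }
        return w;
    }

    // Weighted evaluation of one candidate's (already normalised) index values.
    static double score(double[] indexValues, double[] weights) {
        double s = 0;
        for (int i = 0; i < indexValues.length; i++) s += weights[i] * indexValues[i];
        return s;
    }

    public static void main(String[] args) {
        // Indexes: product relationship degree, resource/energy consumption, economic profit.
        double[][] judgment = { {1, 3, 2}, {1.0 / 3, 1, 0.5}, {0.5, 2, 1} };
        double[] w = weightsFromJudgmentMatrix(judgment);
        double[] candidateA = {0.8, 0.6, 0.7};
        double[] candidateB = {0.5, 0.9, 0.6};
        System.out.printf("weights %.3f %.3f %.3f%n", w[0], w[1], w[2]);
        System.out.printf("A=%.3f  B=%.3f%n", score(candidateA, w), score(candidateB, w));
    }
}
```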
2.2 Environmental value
IE pursues the total, or real, benefit, which includes economic, environmental and social benefits. Therefore, environmental value is introduced into the model. Environmental value is a dynamic concept: the environment provides the necessary material and energy foundation, as well as mental satisfaction, for the survival and development of humankind (Li et al., 1999). It comprises substantial resource value and abstract ecological value, and it is environmental pollution that causes the loss of environmental value. In general, the total loss caused by environmental pollution consists of contributions from four aspects: losses of production, capital, human health and environmental quality. The last item is usually ignored in traditional cost accounting.
Total loss of environmental pollution = production loss + capital loss + human health loss + environmental quality loss   (1)
The total loss of environmental pollution can be quantified as a commercial value. Each item is analyzed in detail below.
(1) The production loss caused by environmental pollution. Generally, the production loss means a reduction in the output of some products, a decline in their quality, or an increase in product cost.
(2) The capital loss caused by environmental pollution. The capital loss refers to the shortened service life or the increased upkeep cost of fixed assets such as machines, buildings, ductwork or equipment.
(3) The health loss caused by environmental pollution. The health loss is the largest loss caused by environmental pollution; it means an increased incidence of disease and a higher death rate. The human capital approach is usually used to estimate it.
(4) The environmental quality loss caused by environmental pollution. This loss can be substituted by the cost of restoring the original environmental quality or by the loss of amenity services, and it is estimated by the treatment cost of restoring the polluted environment to its original state.
The sum of the total loss of environmental pollution and the treatment cost of pollution is the total expense of environmental pollution. As pollutants are reused and recycled, a corresponding environmental benefit is obtained. In this paper, environmental value is introduced as part of the objective function in order to estimate the real benefit of the system.
2.3 Model of eco-industrial chain
The model of the eco-industrial chain aims at determining the connections of resource, energy and information flows among the various members. It includes the route, amount and composition of each flow and the selection of the resource-processing or waste-treatment technology. Some of the parameters, such as the route, the technology and the composition, are treated as 0-1 variables, and the
others are continuous variables. Logic expressions are introduced to express the connection constraints. Together with the other constraints and conditions, the problem of constructing the eco-industrial chain is formulated as a large-scale MINLP model.
3. Adaptive simulated annealing genetic algorithms
Because of the non-convex mathematical form and the complicated superstructure, traditional methods for solving the MINLP problem often fail to achieve satisfactory results, and an effective method is therefore needed. As stochastic optimization methods, GA and SA have many advantages, but their intrinsic deficiencies limit their further application. In order to combine the advantages of GA and SA, the hybrid ASAGA is proposed.
3.1 Essential ideas of ASAGA
The essential ideas of ASAGA are as follows:
(1) SA has a two-loop structure in which alternative configurations are generated in the inner loop and the temperature is decreased in the outer loop. GA, generally accepted to be good at global optimization, is embedded into SA to strengthen the inner optimization.
(2) In order to improve the performance of GA, a strategy of adaptive crossover and mutation probabilities is introduced, which balances the diversity of the population against the selection pressure and thus improves the proposed hybrid algorithm.
(3) In order to maintain the ratio of acceptance to rejection of new solutions during the random selection of simulated annealing, a strategy of adaptive step adjustment is introduced. It makes use of historic information and improves the efficiency of the hybrid algorithm.
(4) In order to deal with the MINLP problem, the integer variables are obtained by discretizing continuous variables. The MINLP problem is thus transformed into a non-linear programming problem, which greatly improves the efficiency of the algorithm and the quality of the solution.
3.2 Basic procedure of ASAGA
The basic steps of ASAGA are as follows:
Step 1. Initialize the parameters, including the initial temperature, the stopping criterion, the annealing cooling coefficient, the step-adjustment period, the temperature-adjustment period, the number of GA generations in the inner loop, and the other parameters.
Step 2. In the inner loop, neighborhood solutions are generated N times along the coordinate directions at the current temperature, applying the Metropolis principle and the adaptive step adjustment. A certain number of solutions are then selected as the initial population, and the GA is run for a fixed number of generations with adaptive mutation and crossover rates to search for the global optimum.
Step 3. In the outer loop, the annealing temperature is adjusted according to the adaptive annealing cooling rate.
Step 4. The algorithm terminates if the stopping criterion is satisfied; otherwise it returns to Step 2.
Tested on various types of test functions, ASAGA proved more effective and stable in constrained optimization than other methods such as the GBD approach and a modified GA, and it has been applied successfully to several typical MINLP problems in chemical processes (Xue, 2001).
In order to apply ASAGA to the IE problem, a suitable coding is defined. The coding of the individuals of the population should be able to represent the search space and all of the feasible solutions. The independent variables of the system are identified and coded with real numbers. To express a superstructure containing all potential solutions, a real number in [0, 1] is employed, instead of a binary variable, to indicate whether a unit is selected; if the corresponding flow rate equals zero, that unit is not selected. The objective function is to maximize the total profit of the eco-industrial system.
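A minimal sketch of how the pieces of Section 3 fit together is given below. It is not the authors' implementation: the cooling schedule, the pool size, the adaptive-rate thresholds and the toy profit function are assumptions of the sketch.

```java
import java.util.Arrays;
import java.util.Random;
import java.util.function.ToDoubleFunction;

// Sketch of the ASAGA structure: an outer annealing loop, an inner loop of
// coordinate-wise moves accepted by the Metropolis rule with adaptive step
// adjustment, and a short GA pass with adaptive crossover/mutation rates on
// the solutions collected in the inner loop. Genes are real numbers in [0,1];
// a gene decides unit selection as in the coding described above.
public class AsagaSketch {
    static final Random RNG = new Random(7);

    static double[] asaga(ToDoubleFunction<double[]> profit, double[] start) {
        double[] best = start.clone();
        double step = 0.2;
        for (double temp = 100.0; temp > 0.01; temp *= 0.9) {        // outer loop: cooling
            double[] current = best.clone();
            double[][] pool = new double[20][];
            int accepted = 0;
            for (int k = 0; k < 20; k++) {                           // inner loop: SA moves
                double[] cand = current.clone();
                int i = RNG.nextInt(cand.length);
                cand[i] = Math.min(1, Math.max(0, cand[i] + step * (RNG.nextDouble() - 0.5)));
                double delta = profit.applyAsDouble(cand) - profit.applyAsDouble(current);
                if (delta > 0 || RNG.nextDouble() < Math.exp(delta / temp)) {
                    current = cand;                                   // Metropolis acceptance
                    accepted++;
                }
                pool[k] = current.clone();
                if (profit.applyAsDouble(current) > profit.applyAsDouble(best)) best = current.clone();
            }
            step *= accepted > 10 ? 1.1 : 0.9;                        // adaptive step adjustment
            best = gaPass(profit, pool, best);                        // embedded GA generation
        }
        return best;
    }

    // One GA generation with adaptive rates: above-average pairs are disturbed less.
    static double[] gaPass(ToDoubleFunction<double[]> profit, double[][] pop, double[] best) {
        double avg = Arrays.stream(pop).mapToDouble(profit).average().orElse(0);
        for (int k = 0; k + 1 < pop.length; k += 2) {
            boolean good = Math.max(profit.applyAsDouble(pop[k]), profit.applyAsDouble(pop[k + 1])) > avg;
            double pc = good ? 0.5 : 0.9, pm = good ? 0.02 : 0.1;     // adaptive crossover/mutation rates
            for (int i = 0; i < pop[k].length; i++) {
                if (RNG.nextDouble() < pc) {                          // arithmetic crossover
                    double a = RNG.nextDouble(), g1 = pop[k][i], g2 = pop[k + 1][i];
                    pop[k][i] = a * g1 + (1 - a) * g2;
                    pop[k + 1][i] = a * g2 + (1 - a) * g1;
                }
                if (RNG.nextDouble() < pm) pop[k][i] = RNG.nextDouble();
            }
        }
        for (double[] ind : pop)
            if (profit.applyAsDouble(ind) > profit.applyAsDouble(best)) best = ind.clone();
        return best;
    }

    public static void main(String[] args) {
        // Toy profit: a unit contributes only if its gene (relative flow) is above zero.
        ToDoubleFunction<double[]> profit =
            x -> Arrays.stream(x).map(v -> v > 0.05 ? 10 * v - 3 - 2 * v * v : 0).sum();
        System.out.println(Arrays.toString(asaga(profit, new double[]{0.5, 0.5, 0.5})));
    }
}
```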
5. Case study
Synthesis of an eco-industrial system by ASAGA has been programmed for a case study of a coal-based eco-industrial system in China. The system contains coal excavation, coal washing, a coal power plant, coking, coal gasification, a smelting plant, a foundry and methanol production. Because coal is used widely by industry and residents, SO2, CO2, gangue and fly ash are the main pollutants, causing waste of resources and deterioration of the ecological environment. For convenience of computation, the total input of coal resource is fixed. The MINLP model is established by the superstructure approach; the problem has 7 binary variables, 24 continuous variables and 248 constraints, and it is nonlinear and non-convex. The solution of the IE problem is obtained by the proposed algorithm. The resulting industrial system takes coal as the raw material, with electric power, methanol, DME and coke as leading products and cement and water purificant as supplementary products. In this system, material exchange, information communication and management cooperation among all the coal-processing flows are realized. An integrated gasification combined-cycle (IGCC) system is adopted in the coal power plant. Aluminum in the gangue from the coal washing plant is used, together with SO2 from the coking tail gas, to produce Al2(SO4)3, and fly ash from the power plant is used to produce brick. Fly ash, SO2 and wastewater are thus settled or improved while the coal resource is fully utilized. The total profit considering environmental value is 30% higher than the simple sum of the profits of the individual plants.
6. Conclusion
The model of an eco-industrial system considering environmental value is developed in this paper, and the hybrid adaptive simulated annealing genetic algorithm (ASAGA) is proposed to solve the formulated MINLP problem. The case study illustrates that the established model is applicable and that ASAGA is effective.
REFERENCES
[1] Chen D.J., Li Y.R., Shen J.Z. and Hu S.Y. (2002). A MINLP model of eco-industrial parks, The Chinese Journal of Process Engineering, 2(1): 75-80.
[2] Côté R.P. and Cohen-Rosenthal E. (1998). Designing eco-industrial parks: a synthesis of some experiences, Journal of Cleaner Production, 6: 181-188.
[3] Connelly L. and Koshland C.P. (2001). Exergy and industrial ecology. Part 1: An exergy-based definition of consumption and a thermodynamic interpretation of ecosystem evolution, Exergy Int. J., 1(3): 146-165.
[4] Connelly L. and Koshland C.P. (2001). Exergy and industrial ecology. Part 2: A non-dimensional analysis of means to reduce resource depletion, Exergy Int. J., 1(4): 234-255.
[5] Frosch R. and Gallopoulos N. (1992). Towards an industrial ecology. In: Bradshaw A.D. et al. (eds.), The Treatment and Handling of Wastes, Chapman and Hall, London, pp. 269-292.
[6] Koenig H.E. and Cantlon J.E. (1998). Quantitative industrial ecology, IEEE Transactions on Systems, Man, and Cybernetics, Part C, 28(1): 16-28.
[7] Lambert A.J.D. and Boons F.A. (2002). Eco-industrial parks: stimulating sustainable development in mixed industrial parks, Technovation, 22: 471-484.
[8] Li J.C., Jiang W.L., Jin L.S. and Ren Y. (1999). Theory of Ecological Value, Chongqing University Press, Chongqing.
[9] Lowe E.A., Warren J.L. and Moran S.R. (1995). Discovering Industrial Ecology - An Executive Briefing and Sourcebook, Battelle Press, Columbus, Ohio.
[10] Potts Carr A.J. (1998). Choctaw Eco-Industrial Park: an ecological approach to industrial land-use planning and design, Landscape and Urban Planning, 42: 239-257.
[11] Xue D.F. (2001). Study on mass integration for waste minimization, Ph.D. dissertation, Dalian University of Technology, Dalian.
On an Object-Oriented Modeling of Supply Chain and its Operational Strategy
Hisaaki Yamaba and Shigeyuki Tomita
Miyazaki University, 1-1, Gakuen Kibanadai Nishi, Miyazaki, Japan
Abstract
In recent years, researchers have devoted a great deal of attention to supply chain management (SCM), and various production models have been proposed. In such chains, many decisions have to be made to achieve the production objectives under a variety of uncertainties. To operate a supply chain efficiently, a rational and sound operational strategy for realizing cooperative operation among its members is indispensable. A simulation-based approach is one of the most effective ways to design a production model, together with its strategy, under uncertain conditions. In this research, an attempt was made both to construct such an environment by means of an object-oriented approach and, using it, to explore the possibility of a new production model between a maker and its suppliers in which both "build-to-order" and "build-to-stock" are used to suit the customer's needs. The validity of the model and of the strategy proposed here was examined through computational experiments using a simulator developed with this approach.
Keywords Supply Chain Management, Object-Oriented analysis/design, simulation
1. Introduction
In recent years, researchers have devoted a great deal of attention to supply chain management (SCM), and various production models have been proposed. In such chains, many decisions have to be made to achieve the production objectives under a variety of uncertainties, such as the arrival of urgent jobs and delays in the arrival of materials. However, decision-making in each department is currently carried out independently of the other departments, and this lowers the performance of the whole system. Thus, in order to operate a supply chain efficiently, a rational and sound operational strategy for realizing cooperative operation among its members is indispensable. A simulation-based approach is one of the most effective ways to evaluate the performance of such strategies under various configurations of supply chains. However, a support environment that generates simulators is indispensable in order to deal with the modification of a
simulator, which is needed to meet changes in the applied strategy candidates. In this research, an attempt was made both to construct such an environment by means of an object-oriented approach and, using it, to explore the possibility of "BTOS": a new production model between a maker and its suppliers in which both "build-to-order" and "build-to-stock" are used to suit customers' needs.
2. Basic Concept of BTOS
The idea of "BTOS" is a hybrid of "BTO" (build-to-order) and "BTS" (build-to-stock). Its basic concept is derived from configure-to-order (CTO), which offers high customer satisfaction by customizing products according to customers' requirements. The BTO manufacturing process was introduced in order to reduce dead stock. For example, some PC vendors assemble PCs from parts and ask other companies, called "(parts) suppliers", to produce the parts. These assembly dealers have achieved zero product stock, because they keep enough parts in stock by ordering parts in advance according to a forecast of customer demand. As long as the forecast is correct, they have an appropriate amount of parts and can start production as soon as they accept orders from customers. The cost of the parts stock, however, is still required under this model; in this work we call it the "BTO-BTS" model. We call the other manufacturing process the "BTO-BTO" model: the assembly dealer orders parts every time it receives an order. This model does not require the cost of a parts stock, but it makes lead times longer. Which model should be adopted depends on the characteristics of the target market. In general, if quick delivery matters more than low price, the "BTO-BTS" model may be more suitable; if low price matters more than quick delivery, the "BTO-BTO" model may be better. The "BTOS" model introduced here allows customers to indicate which they prefer, "low price" or "quick delivery". When quick delivery is required, the order is fulfilled from the parts stock; when lower price is selected, the needed parts are ordered after the requirement is given.
3. Behavior of the BTOS production system
It is assumed that the production system has one and only one assembly dealer, that each kind of part is supplied by a single parts supplier, and that each supplier has enough materials to fulfil the required orders. The assembly dealer and the parts suppliers each consist of several sectors: an order receiving sector, a planning sector, a stock sector, a production sector, etc. (Fig. 1). A customer tells the assembly dealer the desired delivery date together with the specification of the order, i.e. the performance of the PC and/or the peripherals needed. The dealer first replies with an estimated price and delivery date. Since each company has limited productivity, the replied delivery date may not meet the request. If a customer is not satisfied with an estimate, he can modify the date or the specification of his order again and again.
Fig. 1. A draft of the "BTOS" production system (assembly dealer and parts supplier, each with an order receiving sector, a planning sector, a warehouse and a production sector, exchanging orders, schedules, estimated prices and delivery dates, parts and products).
An assembly dealer makes a production plan every week and orders the parts needed in the plan from the suppliers; each order specifies the kind, the amount and the delivery date of the parts. Each supplier receives the orders from the assembly dealer, produces the parts and delivers them on their delivery dates. It is assumed that a stock of parts or products incurs a cost proportional to its amount, so just-in-time production is pursued. An example of a timetable of the production system under this model is shown in Fig. 2. The buffering span of orders is set to one week. Whether the parts stock is used or a parts order is placed is determined by the delivery date of each order: for an order received in the i-th span,
• if the delivery date falls within the (i+1)-th span, the parts stock is used;
• if the delivery date falls in the (i+2)-th span or later, the parts are ordered from the suppliers.
The former are called "quick delivery" orders and the latter "lower price" orders in this work. Thus, quick-delivery orders given in the (i-1)-th span and lower-price orders given in the (i-2)-th span are scheduled to be produced in the i-th span. The assembly dealer makes a purchase plan for parts according to the production plan. Parts for lower-price orders are ordered after the requirements are received, whereas parts for quick-delivery orders have to be secured in advance, so the orders expected in the next span have to be forecast. In the end, the dealer orders the parts for the lower-price orders together with the parts needed to replenish the estimated consumption for quick-delivery orders, according to the forecast of customers' requirements.
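The span rule above can be written down directly. The sketch below is illustrative only; the treatment of boundary spans and the class and method names are assumptions, not the paper's implementation.

```java
// Sketch of the BTOS order rule: an order received in span i is served from
// the parts stock ("quick delivery") when its requested delivery date falls
// within span i+1, and triggers a parts order to the suppliers ("lower
// price") when it falls in span i+2 or later.
public class OrderClassifier {

    enum Mode { QUICK_DELIVERY, LOWER_PRICE }

    static Mode classify(int receivedSpan, int requestedDeliverySpan) {
        if (requestedDeliverySpan <= receivedSpan + 1) {
            return Mode.QUICK_DELIVERY;   // produce from the existing parts stock
        }
        return Mode.LOWER_PRICE;          // order the needed parts from the suppliers
    }

    // Quick-delivery orders of span i-1 and lower-price orders of span i-2 are
    // both produced in span i, so an order received in span r is produced in
    // span r+1 or r+2 depending on its mode.
    static int productionSpan(int receivedSpan, Mode mode) {
        return mode == Mode.QUICK_DELIVERY ? receivedSpan + 1 : receivedSpan + 2;
    }

    public static void main(String[] args) {
        Mode m = classify(10, 11);
        System.out.println(m + " -> produced in span " + productionSpan(10, m));
        m = classify(10, 13);
        System.out.println(m + " -> produced in span " + productionSpan(10, m));
    }
}
```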
Fig. 2. A timetable of the operation of the example production system.
4. A Simulator Generating Environment
A simulation model is used rather than an analytical approach because of the demand uncertainty of the environment. However, constructing a simulator for every chain structure and every operational strategy is a very hard task, so a computer-aided environment that generates such simulators automatically is indispensable. One purpose of this work is to provide generic components from which various simulators can be composed. Such components should be designed so as to separate the interactions among domain objects from the activities of each object; it is desirable for a component to be independent of the other components so that changes in the domain model can be accommodated. The components of simulators with various configurations are designed by means of the object-oriented approach and are classified into two groups: "Models" and "Strategies". Each "Model" represents a component of the production system, such as a supplier or a maker, and each function of the suppliers and makers is realized by an object of a corresponding class: "Models" are composed of a Scheduler class, several Manager classes, and so on. This makes it possible to build various types of supply-chain components easily from those objects. The interactions among "Models", on the other hand, are encapsulated in "Strategies". The methods of these classes are described according to the object-oriented analysis of the target system, and further classes, such as an Order class and a Plan class, are introduced to represent the various data used in SCM.
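A minimal sketch of this decomposition is shown below. The paper only names the class groups, so every concrete class, field and method here is an assumption of the sketch; the Strategy runs as a Thread subclass, as stated in the next paragraph.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the two component groups: a "Model" holds the internal behaviour
// of one supply-chain member (dealer, supplier, ...), while a "Strategy"
// encapsulates the interaction between Models and runs as its own thread so
// that several strategies can operate in parallel.
class Order {                       // data object exchanged between Models
    final String item; final int amount; final int dueSpan;
    Order(String item, int amount, int dueSpan) { this.item = item; this.amount = amount; this.dueSpan = dueSpan; }
}

abstract class Model {              // super class of suppliers, makers, ...
    final List<Order> received = new ArrayList<>();
    synchronized void accept(Order o) { received.add(o); }
    abstract void planProduction(int span);   // each function is realized by a method of the Model
}

class AssemblyDealer extends Model {
    @Override void planProduction(int span) {
        System.out.println("span " + span + ": plan assembly for " + received.size() + " orders");
    }
}

class PartsSupplier extends Model {
    @Override void planProduction(int span) {
        System.out.println("span " + span + ": produce parts for " + received.size() + " orders");
    }
}

// A Strategy is a Thread subclass so that different operational strategies
// can be simulated simultaneously against the same set of Models.
class WeeklyOrderingStrategy extends Thread {
    private final AssemblyDealer dealer; private final PartsSupplier supplier; private final int spans;
    WeeklyOrderingStrategy(AssemblyDealer d, PartsSupplier s, int spans) { this.dealer = d; this.supplier = s; this.spans = spans; }
    @Override public void run() {
        for (int span = 1; span <= spans; span++) {
            supplier.accept(new Order("cpu", 5, span + 1));    // dealer orders parts one span ahead
            dealer.planProduction(span);
            supplier.planProduction(span);
        }
    }
}

public class BtosSimulatorSketch {
    public static void main(String[] args) throws InterruptedException {
        WeeklyOrderingStrategy strategy =
            new WeeklyOrderingStrategy(new AssemblyDealer(), new PartsSupplier(), 3);
        strategy.start();
        strategy.join();
    }
}
```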
Fig. 3 and Fig. 4. Results of the experiments: the required amounts of parts stock under BTOS and under BTO-BTS.
Concretely, the components of the simulators are designed and implemented in the Java language. These Java classes are used as super classes of the concrete classes implemented for each domain, and the "Strategies" are implemented as subclasses of class Thread in order to realize simultaneous parallel operation.
5. Experiment
Simulation experiments were carried out using simulators composed of these classes. A PC vendor and a parts supplier were assumed. Customers can choose a motherboard from 3 types, a CPU from 3 types, a hard disk from 4 types and a memory from 3 types, and the price of each part is different.
Table 1: The number of orders whose delivery dates were delayed.
Type       Number of Orders
BTO-BTO    1388
BTOS       1116
BTO-BTS    911
It is assumed that the orders for each parts set arrive according to an exponential distribution. Therefore, more orders than the dealer can handle may be given, and some delivery dates have to be delayed because of the productivity limitation of the company and the lack of parts stock. The purposes of the experiments are as follows:
• to confirm that a production system adopting the BTOS process can be operated;
• to compare BTOS and BTO-BTS with respect to the amount of parts stock the assembly dealer should keep;
• to compare BTOS and BTO-BTS with respect to how many orders have their delivery dates delayed.
First, it was confirmed that several configurations of supply chains could easily be built and ran normally under the BTOS process. Next, the performance of BTOS and BTO-BTS was compared through 50-week simulation experiments. The required amounts of parts stock are shown in Figs. 3 and 4, and the numbers of orders whose delivery dates were delayed are shown in Table 1. These results show that a production system operated by the BTOS process can certainly reduce the amount of parts stock compared with a system operated by the BTO-BTS process; however, the BTOS system caused more production delay in return for this advantage.
6. Conclusion
An attempt was made to implement generic components from which simulators for supply chains can be composed, in order to realize a generic computational environment for such simulators by means of the object-oriented approach. The objects were classified into two groups: "Models" and "Strategies". The components were designed and implemented in the Java language, together with a simulator-generating system that composes simulators from these components. Operational strategies to be adopted at certain decision-making points were examined using these simulators, and a rational operational strategy was proposed. Finally, the validity of the model and of the proposed strategy was examined through computational experiments using a simulator developed with this approach.
Variable decomposition based global-optimization algorithm for process synthesis
Jian Zhang a, Bingzhen Chen a, Shanying Hu a, Xiaorong He a
aDepartment of Chemical Engineering, Tsinghua University, Beijing 100084, China
Abstract
The key task of process synthesis is to solve a mixed-integer non-linear programming (MINLP) model. As the number of integer variables increases, one faces a large combinatorial problem; at the same time, because of the nonlinearities, the MINLP problem tends to converge to a local optimum. To overcome these difficulties, a variable decomposition based global-optimization algorithm is presented, in which the integer variables and the continuous variables are treated sequentially. The algorithm has three steps. The first step classifies the logical variables into independent and dependent variables and, meanwhile, classifies the logical constraints into deductive constraints and restrictive ones; a logical deductive graph is generated to obtain the feasible logical variable sets. The second step constructs a non-linear continuous (NLP) sub-problem for each feasible logical variable set and solves it with a general global optimization algorithm. The third step obtains the global optimum solution. A process synthesis example is presented to compare the algorithm with the standard Branch-and-Bound algorithm.
Keywords:
global optimization algorithm, logic-based MINLP model, process synthesis
1. INTRODUCTION
Process synthesis is an important part of process systems engineering, and the mixed-integer non-linear programming (MINLP) model is widely used in it. In an MINLP model, continuous variables describe the technical parameters of the chemical process, such as flows, temperatures and compositions, while 0-1 variables (which take only the values 0 or 1) or logical variables describe, for instance, alternative candidates and the topological relations among the equipment. There are two types of MINLP model: the algebraic MINLP model and the logic-based MINLP model [1]. The disadvantage of the algebraic MINLP model is that the topological relations expressed by 0-1 variables are not direct and are hard to model, which limits its industrial use. In a logic-based MINLP model the topological relations are expressed clearly by logical relations; heuristic knowledge can also be integrated into the model to eliminate candidate flow sheets that conflict with technical practice, which accelerates the solution process. The novel global optimization algorithm proposed in this paper is therefore based on the logic-based MINLP model. Global optimization MINLP algorithms include the Branch-and-Simplify method,
the hybrid Branch-and-Bound and Outer Approximation method, the αBB algorithm [2] and the symbolic reformulation algorithm [3]. In each iteration of the branching strategy, only one of the integer variables is fixed to 0 or 1 and the rest are relaxed into continuous variables, so the number of iterations is exponential in the number of logical variables. Furthermore, the relaxation of the integer variables increases the number of continuous variables and the scale of the model, and it may turn some linear constraints into nonlinear ones, which makes the relaxed model hard to solve. In this paper a variable decomposition based global-optimization algorithm is presented. The algorithm is described in the second section, its specific steps are listed in the third section, a reaction process synthesis example is presented in the fourth section to test and compare the algorithm, and the fifth section gives conclusions and discussion.
2. ALGORITHM DESCRIPTIONS
The key of the variable decomposition algorithm is to treat the logical variables and the continuous variables of the MINLP model sequentially. A set of logical variable values that satisfies all the logical constraints is called a feasible logical variable set, and locating the feasible logical variable sets is an exact reasoning problem. The logical variable values of each feasible solution of the original MINLP model certainly lie in the feasible logical variable region. For each feasible logical variable set a non-linear continuous (NLP) sub-problem is constructed, and the global optimum of the original MINLP model is obtained from solving these sub-problems. In a process synthesis problem, although the number of integer variable combinations may be very large, the number of feasible sets is small. For example, in a reaction process synthesis problem [4] there are 13 logical variables, so the number of combinations is 2^13 = 8192, but the number of feasible logical variable sets is only 32, i.e. 0.39% of the combinations. For an ethylene refrigerant synthesis problem [5] there are 25 logical variables and 2^25 = 33,554,432 combinations, but only 265 feasible sets, i.e. 0.00079% of the combinations. Moreover, when heuristic knowledge is integrated, the flow sheets that do not satisfy technical experience are eliminated and the number of feasible logical variable sets is further reduced.
3. ALGORITHM
The algorithm consists of three steps: obtaining the feasible logical variable sets; generating the continuous-variable NLP sub-problems corresponding to the feasible logical variable sets and solving them with an NLP global optimization algorithm; and obtaining the global optimum solution. In a logic-based process synthesis model, the constraints include equipment constraints, connection constraints and the logical relations among the equipment [6].
3.1. Locating the feasible logical variable sets
In process synthesis some types of logical constraints are often used, and these
constraints deduce logical variables directly:
1) f(y2, y3, ..., yn) → y1: the value of y1 is partly restricted by f(y2, y3, ..., yn): if f(y2, y3, ..., yn) = 1 then y1 = 1; if f(y2, y3, ..., yn) = 0 then y1 may take the value 1 or 0.
2) f(y2, y3, ..., yn) ↔ y1: the value of y1 is fully determined by f(y2, y3, ..., yn).
3) y1 ∨ y2 ∨ y3 ∨ ... ∨ yn: this constraint can be transferred to ¬(y2 ∨ y3 ∨ ... ∨ yn) → y1, which is of the first type.
4) y1 ⊕ y2 ⊕ y3 ⊕ ... ⊕ yn: variable y1 can be expressed by ¬(y2 ∨ y3 ∨ ... ∨ yn) ↔ y1, which is of the second type, and the original constraint changes to y2 ⊕ y3 ⊕ ... ⊕ yn ⊕ ¬(y2 ∨ y3 ∨ ... ∨ yn).
5) y2 ⊕ y3 ⊕ ... ⊕ yn ⊕ ¬(y2 ∨ y3 ∨ ... ∨ yn): variable y2 can be expressed by (y3 ∨ ... ∨ yn) → ¬y2, which is of the first type, and the original constraint changes to y3 ⊕ y4 ⊕ ... ⊕ yn ⊕ ¬(y3 ∨ ... ∨ yn). The new constraint has the same form as the original one, so further dependent variables can be derived from it.
All the deductive constraints form an oriented loop-free graph, called the deductive graph. Each node of the deductive graph denotes a logical variable; an arc entering a node denotes the deductive constraint that deduces that logical variable, and an arc leaving a node denotes a constraint that is used to deduce other logical variables. Not every logical constraint can be transferred into a deductive constraint: in some logical constraints all member variables are already dependent variables, or the deduction would create loops in the deductive graph. Such constraints restrict the existing logical variable sets and are called restrictive constraints. A restrictive constraint does not deduce a logical variable, but it can be used to check whether the deduced logical variables satisfy it, and thus to reduce the number of logical variable sets. When all logical constraints have been transferred into deductive or restrictive constraints, some residual logical variables remain that have not been made dependent. These are called independent variables and are used as the starting nodes of the deduction; "independent" means that, when the feasible logical variable sets are generated, the values of these variables are chosen freely.
After the classification of the logical variables and logical constraints is completed, the feasible logical variable sets are generated as follows. The nodes of the independent variables, as starting nodes, have no entering arcs, and their values are chosen freely. These nodes and their outgoing arcs are then hidden, and new starting nodes appear in the graph. The value of a node is determined through the arcs entering it; when the values of the nodes are determined, these nodes and their outgoing arcs are hidden as well. When a restrictive constraint becomes checkable, the existing variable sets are restricted by it and the infeasible variable sets are eliminated. These steps are repeated until all the nodes have been treated, and the logical variable sets that finally remain are the feasible logical variable sets.
It should be noted that different orderings of the logical constraints and logical variables generate different deductive graphs, but a different deductive graph only affects the intermediate results and does not affect the final feasible logical variable sets; the feasible variable sets depend only on the logical constraints of the model.
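A compact sketch of this generation procedure is given below. The representation of deductive and restrictive constraints and the tiny three-variable example are assumptions made for illustration; they are not the paper's data structures or model.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Sketch of generating the feasible logical variable sets: independent
// variables are enumerated freely, deductive constraints then fix (or partly
// restrict) the dependent variables in topological order, and restrictive
// constraints prune the surviving sets.
public class FeasibleSetGenerator {

    // A deductive constraint f(y...) -> yTarget (partial) or f(y...) <-> yTarget (full).
    static class Deduction {
        final int target; final Predicate<boolean[]> f; final boolean full;
        Deduction(int target, Predicate<boolean[]> f, boolean full) { this.target = target; this.f = f; this.full = full; }
    }

    static List<boolean[]> generate(int nVars, int[] independent,
                                    List<Deduction> deductions, List<Predicate<boolean[]>> restrictive) {
        List<boolean[]> sets = new ArrayList<>();
        sets.add(new boolean[nVars]);
        for (int v : independent) sets = branch(sets, v);          // free choice of independents
        for (Deduction d : deductions) {
            List<boolean[]> next = new ArrayList<>();
            for (boolean[] s : sets) {
                if (d.f.test(s)) { s[d.target] = true; next.add(s); }        // forced to 1
                else if (d.full) { s[d.target] = false; next.add(s); }       // <->: forced to 0
                else { next.addAll(branch(List.of(s), d.target)); }          // ->: still free
            }
            sets = next;
        }
        // prune with the restrictive constraints (checked at the end in this sketch)
        sets.removeIf(s -> { for (Predicate<boolean[]> r : restrictive) if (!r.test(s)) return true; return false; });
        return sets;
    }

    private static List<boolean[]> branch(List<boolean[]> sets, int var) {
        List<boolean[]> out = new ArrayList<>();
        for (boolean[] s : sets) {
            boolean[] s0 = s.clone(); s0[var] = false; out.add(s0);
            boolean[] s1 = s.clone(); s1[var] = true;  out.add(s1);
        }
        return out;
    }

    public static void main(String[] args) {
        // Toy model: y0 independent; (not y0) <-> y1; (y0 and y1) -> y2;
        // restrictive constraint: not (y1 and y2).
        List<Deduction> ded = List.of(
            new Deduction(1, s -> !s[0], true),
            new Deduction(2, s -> s[0] && s[1], false));
        List<Predicate<boolean[]>> res = List.of(s -> !(s[1] && s[2]));
        for (boolean[] s : generate(3, new int[]{0}, ded, res))
            System.out.println(java.util.Arrays.toString(s));
    }
}
```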
3.2. Generating the continuous-variable NLP sub-problems
For each feasible logical variable set a corresponding continuous-variable nonlinear sub-problem can be generated. The sub-problem is used to optimize the technical parameters once the flow sheet is determined. Its constraints include the equipment constraints and the connection constraints, and its objective function comes from the original objective function. The symbolic reformulation global optimization algorithm [3] is applied to solve each NLP sub-problem, so the global optimum of every sub-problem is obtained.
3.3. Obtaining the global optimum solution
The global optimum solution is obtained from the sub-problems as follows: 1) if a sub-problem has no solution, it is discarded; 2) if the lower bound of a sub-problem is higher than the current solution, the sub-problem is discarded; 3) if the optimal solution of a sub-problem is lower than the current one, the current solution is updated. After these steps have been repeated for every feasible logical variable set, the final solution is the global optimum of the original MINLP model.
4. PROCESS SYNTHESIS EXAMPLE
Consider the process synthesis example [4] shown in Fig. 2. The equations of the equipment units y1-y8 and the logical constraints are listed below.
Equipment equations:
y1: x3 = ln(x2 + 1)
y2: x5 = 1.2 ln(x4 + 1)
y3: x8 = 1.5x9 + x10
y4: x12 = 0.8x13, 3x14 ≤ x12 ≤ 6x14
y5: x15 = 2x16
y6: x20 = 1.5 ln(x19 + 1)
y7: x22 = ln(x21 + 1)
y8: x18 = ln(x10 + x17 + 1), 0.4x17 ≤ x10 ≤ 0.8x17
Logical constraints:
1) 1 → y9
2) y1 ⊕ y2
3) y3 ∨ y4 ∨ y5 ∨ y13
4) ¬y4 ∨ ¬y3
5) ¬y4 ∨ ¬y5
6) y6 ∨ y7 ↔ y4
7) ¬(y6 ∨ y7) ⊕ y6 ⊕ y7
8) y6 ∨ y7 ↔ y4 ∨ y11
9) ¬(y3 ∨ y5) → ¬y10
10) y3 ∨ y5 ∨ y10 ↔ y8
11) y8 ↔ y12
12) y12 ∨ y11
Fig. 1. Deductive graph
The objective function is:
min 5y1 - 10x3 + 8y2 - 15x5 + 6y3 + 40x9 + 10y4 + 15x14 + 6y5 + 80x16 + 7y6 + 25x19 - 60x20 + 4y7 + 35x21 - 80x22 + 5y8 + 15x10 + x1 + 45x25 - 65x18 + 122
In this example there are 25 continuous variables, 13 logical variables and 12 logical constraints. Applying the algorithm gives 2 independent logical variables, 11 dependent variables, 11 deductive constraints and 1 restrictive constraint. The logical deductive graph is shown in Fig. 1: the double-circle nodes denote the independent variables, the single-circle nodes denote the dependent variables, the solid connections denote the deductive constraints and the dashed connections denote the restrictive constraints. A depth-first algorithm is used to obtain the feasible logical variable sets from the deductive graph. The numbers of logical variable sets after each deduction step are listed in Table 1. During the deductive process the number of logical variable sets increases only slowly, and the calculation cost does not increase exponentially with the number of logical variables.
Fig. 2. Superstructure of a process synthesis example.
Table 1 Inference steps
Constraint          Inferred Var.       Var. sets
(1)                 y9                  1
Independent var.    y1                  2
(2)                 y2                  2
Independent var.    y6                  4
(7)                 y7                  6
(6)                 y4                  6
(4)                 y3                  8
(5)                 y5                  12
(9)                 y10                 18
(10)                y8                  18
(11)                y12                 18
(12)                y11                 30
(8)                 Restrictive cons.   16
(3)                 y13                 32
Table 2 Optimal solution
0-1 variables:
y1=0, y2=1, y3=0, y4=1, y5=0, y6=1, y7=0, y8=0, y9=1, y10=0, y11=1, y12=0, y13=1
Continuous variables:
x1=17.0, x2=3.4684, x3=1.8667, x4=1.6018, x5=2.3333, x6=1.8060, x7=0.3111, x8=1.4948
There are a total of 2^13 = 8192 combinations of these 13 logical variables, and the number of feasible integer variable sets is 32, which is 0.39% of that total. From these logical variable sets, 32 continuous-variable nonlinear sub-problems are generated. By solving these sub-problems the global optimum solution is found, which is 66.616; the corresponding values of the continuous and logical variables are shown in Table 2. For comparison, this example was also solved with the Branch-and-Bound algorithm using Lingo 5, which found a local optimal solution of 68.480 after 154 iterations. It should be noted that this algorithm needs fewer iterations than the Branch-and-Bound algorithm. In the B-B algorithm the integer variables are relaxed into continuous variables, and each sub-problem has 47 constraints and 21 continuous variables; in this algorithm each continuous-variable NLP sub-problem has only 6 constraints on average, and some sub-problems are linear programs, so the sub-problems are solved rapidly. Furthermore, the global optimal solution is not guaranteed by the B-B algorithm, whereas in this algorithm the global optimal solution can be found.
5. CONCLUSIONS AND DISCUSSIONS
Current global optimization algorithms for MINLP problems are mostly improvements of the Branch-and-Bound algorithm, in which the integer variables are relaxed into continuous variables; this makes the model more complex and hard to solve. In the present algorithm the logical variables and the continuous variables are treated sequentially: the logical variables denote the flow-sheet structure and the continuous variables denote the technical parameters of a determined flow sheet. In theory the number of candidate flow sheets increases exponentially with the number of logical variables, and with many logical variables the candidate flow sheets become more than can be handled. But the number of feasible flow sheets is far smaller than the number of candidates, and the feasible flow sheets are obtained through the logical deductive graph. For each feasible flow sheet the unselected devices and flows are deleted from the model, so the scale of the corresponding sub-problem is much reduced and it is easy to solve. Because all the feasible logical variable sets are obtained and a global optimization algorithm is used for each NLP sub-problem, obtaining the global optimal solution of the MINLP problem is guaranteed.
REFERENCES
[1] M. Turkay and I.E. Grossmann, Computers Chem. Engng., 20 (1996) 959.
[2] C.A. Floudas, Deterministic Global Optimization: Theory, Methods and Applications, Kluwer Academic Publishers, Dordrecht, 1999.
[3] E.M.B. Smith and C.C. Pantelides, Computers Chem. Engng., 23 (1999) 457.
[4] M.A. Duran and I.E. Grossmann, Mathematical Programming, 36 (1986) 307.
[5] J. Zhang, B.Z. Chen and S.Y. Hu, Symposium on PSE in China, (2001) 349.
[6] J. Zhang, B.Z. Chen and S.Y. Hu, Chinese J. on Chem. Engng., 53 (2002) 178.
The Approach of Multi-factor Life Cycle Assessment and Product Structure Optimization 1
Xiaoping Zheng, Shanying Hu, Yourun Li, Jingzhu Shen
Department of Chemical Engineering, Tsinghua University, Beijing 100084, China
Abstract: Currently, the approach of Life Cycle Assessment (LCA) is widely used for the environmental management of products and processes. However, when an enterprise or an industrial park makes product decisions, economic and social performance should also be considered besides the environmental impact, so an approach that supports economic, environmental and social LCA is needed. To meet this need, an LCA index system covering the economic, environmental and social performance of a product is established in this paper. Using the composite index method, overall economic, environmental and social indexes of a product can be calculated; when multiple products are to be compared, these indexes serve as the criterion. The assessment index system is also combined with process-system optimization technology to optimize an organic silicone product structure. The approach gives a compositive assessment of products and provides an enterprise or industrial park with a proper product design that guarantees the best overall economic, environmental and social performance.
Keywords: life cycle assessment, compositive index, product structure, optimization, organic silicone
1. INTRODUCTION
As defined by ISO, LCA analyzes and assesses the inputs, outputs and potential impacts of a product over its whole process. Compared with other environmental impact assessment tools, LCA extends the system boundaries and allows a comprehensive evaluation, so the LCA result for a product reflects the product's whole environmental performance and is very useful for product selection. However, in product decision making an enterprise should consider not only the environmental impact but also the product's economic and social performance, since economic and social profit is its most important target. Because the LCA method covers the whole process of a product from material extraction to final disposal, it can also be used to evaluate a product's economic and social performance fully and thus to support product decision making. In this paper, based on the LCA method, a product decision-making model is proposed that considers a product's environmental, economic and social performance, and an index system is built to evaluate all of these performances quantitatively. The model can be used to compare different products and to support product decision making.
2. ESTABLISHMENT OF THE ASSESSMENT MODEL
The procedure of establishing the assessment model consists of three phases: definition of the LCA stages, definition of the indexes, and composition of the indexes.
1 This work is financially supported by the National Science Foundation of China (NSF29836140) and the National "863" project of China (2001AA413320).
2.1 Definition of LCA stages
In order to assess a product's economic and social performance as well as its environmental impact, a transportation and sale stage is added to the traditional LCA stages (primary resource extraction and processing, production, use, and recycling). Five stages of the product life cycle are therefore considered in the model, as Fig. 1 shows.
Fig. 1 Life Cycle Stages
2.2 Definition of indexes
For each stage there are three types of indexes: environmental, economic and social. Since the economic activities differ from stage to stage, different sets of economic indexes are defined for each stage; the environmental and social considerations, however, are the same for all stages, so the stages share one set of environmental indexes and one set of social indexes.
2.2.1 Economic indexes
(1) Primary resource extraction & processing stage
Table 1 shows the economic indexes defined for the primary resource extraction and processing stage.
Table 1 Economic Indexes of the Primary Resource Extraction and Processing Stage
Index name | Description | Index type
Price Index | Minimum possible price divided by the actual price. A lower index value means a higher resource cost. | Positive
Supply Time Index | Take 1,000,000 years as the upper limit of the available time of a nonregenerative resource; the index value is the ratio of the resource's available time to 1,000,000. The index value of regenerative resources is 1. | Positive
Supply & Demand Relationship Index | The ratio of the resource supply capacity to the resource demand capacity. | Positive
Quality Index | The ratio of the actual resource price to the maximum price of that resource; it represents the requirement for resource quality. | Converse
(2) Production stage
The economic indexes defined for the production stage are shown in Table 2.
Table 2 Economic Indexes of the Production Stage
Index name | Description | Index type
Capital Assets Index | The amount of capital investment per unit of product. | Converse
Current Assets Index | The amount of current investment per unit of product. | Converse
Cost Price Index | The ratio of cost per unit of product to its price. | Positive
Payback Time Index | The time needed to recover all the investment. | Converse
(3) Transportation & sale stage
Two indexes are defined for this stage, shown in Table 3.
Table 3 Economic Indexes of the Transportation and Sale Stage
Index name | Description | Index type
Transportation Cost Index | The transportation cost per unit of product. | Converse
Sale Cost Index | The cost per unit of product during the sale process. | Converse
(4) Use stage
Two indexes are defined for the product use stage, as shown in Table 4.
Table 4 Economic Indexes of the Use Stage
Index name | Description | Index type
Use Cost Index | The cost per unit of product during its use process. | Converse
After Service Index | The cost of after-sales service per unit of product. | Converse
(5) Recycle stage
One index is defined for this stage, the Recycle Cost Index, expressed as the ratio of the recycling cost to the cost of material of the same quality produced from primary resources.
2.2.2 Environmental indexes
There are three categories of environmental indexes: resource consumption, energy consumption and pollutant emission. All these indexes are of the converse type.
(1) Resource consumption index (RI)
• Individual index
For one specific resource, the index can be calculated with the method proposed by the Holland Environmental Sciences Center as follows:
RIPi = mi × ci / Ri    (1)
where mi is the amount of the resource consumed per unit of product, ci is the annual production capacity of the resource and Ri is the national proven reserve of the resource. The values of Ri and ci can be obtained from the national or enterprise statistical departments.
• Compositive index
When one product needs more than one kind of resource, the compositive index is calculated with the following formula:
RI = Σ(i=1..n) wi × RIPi    (2)
where wi is the dimensionless weight of resource i, n is the number of resources (n ≥ 2) and RIPi is the individual resource index.
(2) Energy consumption index (EI)
The method of calculating the energy consumption index is the same as for the resource consumption index.
(3) Pollutant emission index (WI & AI)
Here the water pollution index (WI) and the air pollution index (AI) are taken as examples.
• Individual index
PEIPi = Ci / Si    (3)
where PEIPi is the individual index, whose unit is cubic meters per unit of product, Ci is the weight of the pollutant generated in producing one unit of product, and Si is the governmental limit value for this kind of pollutant, whose unit is the weight of pollutant per cubic meter of air or water.
• Compositive index
For water pollution,
WI = Σ(i=1..n) wi × PEIPi    (4)
where WI is the water pollution index (cubic meters of polluted water per unit of product) and wi is the weight of each kind of pollutant.
For air pollution,
AI = Σ(i=1..n) wi × PEIPi    (5)
where AI is the air pollution index (cubic meters of polluted air per unit of product) and wi is the weight of each kind of pollutant.
2.2.3 Social indexes
Five indexes are defined for the social category, listed in Table 5.
Table 5 Social Indexes
Index name | Description | Index type
Employment Volume Index | The employment volume that a unit of industrial scale can offer. | Positive
Safety Index | 1 means absolute safety; the value of this index is estimated empirically. | Converse
Health Index | Estimated by the cost caused by health problems per unit of industrial scale. | Converse
Technology Index | The required technology level of the product; all the products involved in the assessment are compared and each is given a mark. | Converse
Policy Index | A two-value index: 1 means illegal, 0 means legal. | Converse
3. METHOD OF CALCULATING COMPOSITIVE INDEXES Generally, there are three steps for calculating compositive indexes, which are index conversion, dimension eliminating and index synthesis.
3.1 Index conversion
As shown in the previous tables, there are two kinds of index: positive and converse. Before calculating a compositive index, all the indexes should be of the same kind, normally positive. The reciprocal of a converse index is a positive index.
3.2 Dimension eliminating
All the indexes involved in calculating a compositive index must have the same dimension. This requirement is difficult to meet in this model, so the dimensions of all the indexes must be eliminated. Here the method of linear threshold values is applied:

Ind = (Id - min(possible)) / (max(possible) - min(possible))    (6)
where Id is the value of the index with dimension, min(possible) is the minimum possible value of this index, max(possible) is its maximum possible value and Ind is the value of the index without dimension. After dimension eliminating, all the indexes become relative indexes.

3.3 Index synthesis
In this model there are three categories of index: environmental, economic and social. Indexes of the same category are synthesized by multiplication, while indexes of different categories are
synthesized by addition.
(1) Synthesize the economic/environmental/social indexes of every life cycle stage
For example, to synthesize the economic indexes of the primary resource extraction and processing stage, the addition method is used to obtain the compositive economic index of this life cycle stage:

P1_ECOI = W1 \times PI + W2 \times STI + W3 \times SDI + W4 \times QI    (7)
where P1_ECOI is the compositive economic index of this stage; PI, STI, SDI and QI are the Price, Supply Time, Supply & Demand Relationship and Quality Indexes of Table 1; and W1, W2, W3 and W4 are weights. The sum of all the weights equals one.
(2) Synthesize the economic/environmental/social indexes through all the life cycle stages
To synthesize the economic indexes of the five life cycle stages, the multiplication method is used:

ECOI = \prod_{i=1}^{5} (Pi_ECOI)^{w_i}    (8)

where ECOI is the compositive economic index of the product life cycle, Pi_ECOI is the compositive economic index of stage i and w_i is the weight of each life cycle stage.
(3) Synthesize the economic, environmental and social indexes
There are three indexes from step (2): the economic index (ECOI), the environmental index (ENVI) and the social index (SOCIOI). To synthesize these three indexes, the addition method is applied:

SUMI = W1 \times ECOI + W2 \times ENVI + W3 \times SOCIOI    (9)
where SUMI is the compositive index of the product and W1, W2 and W3 are the weights of the three indexes.

4. APPLICATION OF THE MODEL
Here the model is applied to optimize the organic silicone product structure. All the organic silicone products can be classified into four main categories: silicone oil, silicone rubber, silicone resin and silane coupling agents. The original product structure includes four product chains, each representing one class of silicone products. This product structure includes fifteen products. Their assessment index values are calculated according to the method proposed above and are shown in Table 6.

Table 6 The values of the indexes of the organic silicone products
Product | Economic | Environmental | Social
Dimethyldichlorosilane | 0.0646 | 0.0225 | 0.0223
Octamethylcyclotetrasiloxane | 0.0817 | 0.0603 | 0.0296
Dimethicone | 0.0772 | 0.0296 | 0.0289
Trimethylchlorosilane | 0.0763 | 0.0225 | 0.0240
Hexamethyldisiloxane | 0.0773 | 0.0282 | 0.0222
Methyldichlorosilane | 0.0629 | 0.0212 | 0.0193
Methylvinyldichlorosilane | 0.0526 | 0.0457 | 0.0182
Methylvinyl Silicone Rubber | 0.0762 | 0.0348 | 0.0204
Conductive Silicone Rubber | 0.0848 | 0.0547 | 0.0208
Methyltrichlorosilane | 0.0633 | 0.0317 | 0.0188
Methanesiliconic Acid | 0.0743 | 0.0416 | 0.0166
Water Repellent Sodium Methylsilicate | 0.0687 | 0.0666 | 0.0219
Trichlorosilane | 0.0623 | 0.0537 | 0.0188
Vinyl Trichlorosilane | 0.0649 | 0.0486 | 0.0179
Silane Coupling Agent Peroxide | 0.0834 | 0.0498 | 0.0198
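As a rough illustration of the synthesis step of Eq. (9), the short Python sketch below combines the three category indexes of a few products from Table 6 into a single overall index and ranks the products. The weights are hypothetical (equal) and chosen only for demonstration; the weights actually used in the study are not reported here.

```python
# Hedged sketch: weighted synthesis of category indexes in the spirit of Eq. (9).
# The weights below are hypothetical, not the ones used in the original study.
indexes = {
    # product: (economic, environmental, social) compositive indexes from Table 6
    "Dimethyldichlorosilane":         (0.0646, 0.0225, 0.0223),
    "Octamethylcyclotetrasiloxane":   (0.0817, 0.0603, 0.0296),
    "Conductive Silicone Rubber":     (0.0848, 0.0547, 0.0208),
    "Silane Coupling Agent Peroxide": (0.0834, 0.0498, 0.0198),
}
weights = (1.0 / 3, 1.0 / 3, 1.0 / 3)   # W1, W2, W3 with W1 + W2 + W3 = 1

def overall_index(category_values, w):
    """SUMI = W1*ECOI + W2*ENVI + W3*SOCIOI (Eq. (9))."""
    return sum(wi * vi for wi, vi in zip(w, category_values))

ranked = sorted(indexes, key=lambda p: overall_index(indexes[p], weights), reverse=True)
for product in ranked:
    print(f"{product}: SUMI = {overall_index(indexes[product], weights):.4f}")
```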
Along the product chain, the previous product can be used to produce the next one, and the proportion of the former put into production is to be optimized. The objective can be either one of the whole product structure's economic, environmental and social performances considered independently or its overall performance. It is supposed that the initial quantity of each organic silicone monomer (dimethyldichlorosilane, trimethylchlorosilane, methyldichlorosilane, methyltrichlorosilane and trichlorosilane) is 1 and that the quantities of all other products are zero. If the objective is the economic factor, the optimized product structure includes the following four products: hexamethyldisiloxane, conductive silicone rubber, methyltrichlorosilane, and silane coupling agent peroxide. If the objective is the environmental factor, the optimized product structure includes the following five products: hexamethyldisiloxane, methyldichlorosilane, conductive silicone rubber, water repellent sodium methylsilicate, and trichlorosilane. If the objective is the social factor, the optimized product structure includes the following five products: octamethylcyclotetrasiloxane, dimethicone, methyldichlorosilane, methyltrichlorosilane, and silane coupling agent peroxide.

5. CONCLUSIONS
In this paper a new LCA model is proposed and an index system is built under this model to evaluate the environmental, economic and social performance of products. Applying this model together with process optimization technology, a product structure can be optimized to meet a specified profit target.

REFERENCES
[1] Qihong Shun, Nianqing Wan, Yuhua Fan, Summary of foreign LCA research, Environment Management, Volume 12 (2000.12), 24-25.
[2] Adisa Azapagic, Life cycle assessment and its application to process selection, design and optimization, Chemical Engineering Journal, 73 (1999), 1-21.
[3] Jun Li, Zhuang Chen, Research on the assessment system for product life cycle, Chongqing Environmental Sciences, Volume 21, No. 6 (1999.12), 10-12.
[4] ISO/DIS 14040, Environmental Management - Life Cycle Assessment - Part 1: Principles and Framework, 1997.
[5] J. Fava, R. Dennison, B. Jones, M.A. Curran, B. Vigon, S. Selke, J. Barnum (Eds.), A Technical Framework for Life-Cycle Assessment, SETAC and SETAC Foundation for Environmental Education, Washington, DC, 1991.
Coordination of Multi-Factory Production Planning in the Process Industry Based on Internal-Prices
Zhou Wei, Jiang Yong-heng and Jin Yi-hui
Dept. of Automation, Tsinghua University, Beijing, China, 100084

Abstract
A large enterprise may consist of many factories with decision-making authority. The traditional centralized planning method is usually inefficient for this kind of enterprise. A distributed coordination algorithm based on internal-prices is presented in this paper. Within this algorithm, only the factory production planning model needs to be set up, and it can be solved with local information. Through updating the internal-prices in the coordination center, a near optimal solution of the multi-factory production planning problem is achieved. Numerical testing results show that the internal-prices converge.

Keywords: production planning, internal-price, coordination algorithm

1. INTRODUCTION
The problem discussed in this paper is a practical problem observed in a large process enterprise in China with a coordination center and many factories located in distinct geographical locations. In this enterprise, due to differences in technology and equipment, intermediates need to be transported among the factories to satisfy the demands of production. Every factory has decision-making authority to some extent and makes its production plan using its local knowledge. The coordination center does not know the local information of every factory in detail, and its responsibility is to coordinate the amounts of flow among the factories. The multi-factory planning problem is a large-scale combinatorial problem and is NP-hard, and it has been researched widely [1,2]. However, almost all of the methods reported up to now are centralized. For these methods, the coordination center must obtain all the local information of each factory, which is impossible for the problem discussed here. So, it is necessary to develop an effective distributed method that does not access the local information of each factory. In this paper, a distributed coordination algorithm based on internal-prices is presented. In this approach, the interactions among the factories are modeled by a set of inter-factory material balance constraints as presented in Section 2. Motivated by the real pricing mechanism in the market, these constraints are relaxed by using a set of internal-prices. These internal-prices are Lagrange multipliers associated with the inter-factory constraints and represent marginal costs per unit product for the violation of such constraints. So, the overall problem is decomposed into factory-level subproblems, where every factory can optimize its plan using only local knowledge and the internal-prices. Coordination is then achieved by an iterative price updating process as presented in Section 3. With the internal-prices updated, a near optimal plan for the whole enterprise can be achieved. Finally, a conclusion is given in Section 4.
2. PROBLEM FORMULATION
The enterprise discussed here is a large process enterprise with a coordination center and many factories located in distinct geographical locations. As the enterprise is enormous, it is impossible and unnecessary for the coordination center to obtain all the local information of each factory. So, the whole enterprise plan cannot be achieved by common centralized optimization methods. To solve this problem, a coordination method based on internal-prices is presented. By giving an internal-price to every intermediate transported among the factories, an independent model of each factory can be set up and solved using only local information according to the internal-prices. Then, the optimal solutions of all the factory plans are submitted to the coordination center. After being updated, the internal-prices are given to the factories again. Through some iterations, a near optimal enterprise plan can be attained.
2.1 Formulation of Individual Factory Problems
The enterprise considered is as follows: the enterprise consists of a fixed set of process factories located in distinct geographical locations; the products of a factory are provided to customers as well as supplied to the other factories as intermediates internally; and a time horizon divided into a number of time periods is given, with known demands at the end of each time period. Every factory has a limited production capacity and a limitless stock holding capability. The stock level cannot be lower than the safety stock level. There is a fixed production cost and a variable production cost for a product. Variable costs are proportional to the production amount. Transportation costs are proportional to the transported amount of the intermediates, and the transportation time is ignored. The goal in every factory is to determine the production plan over the full horizon so as to minimize production, inventory and transportation costs. The transportation cost of intermediates is included in the cost of the factory sending them. Our model benefits from the work of McDonald and Karimi [3]. The model of any factory n in the enterprise can be presented as follows:
(R_n)   Min L_n = \sum_{i \in I \setminus I^{RM}} \sum_{t} \nu_{i,n} X_{i,n,t} + \sum_{i \in I^{RM}} \sum_{t} p_{i,n,t} C_{i,n,t} + \sum_{i \in I \setminus I^{RM}} \sum_{t} h_{i,n} I_{i,n,t}
                + \sum_{i \in I \setminus I^{RM}} \sum_{t} po_{i,n} \delta_{i,n,t} + \sum_{i \in I^{IP}} \sum_{m} \sum_{t} f_{i,n,m} PP_{i,n,m,t}    (1)

s.t.
I_{i,n,t} = I_{i,n,t-1} + X_{i,n,t} + \sum_{m} P_{i,m,n,t} - \sum_{m} PP_{i,n,m,t} - d_{i,n,t}    \forall i \in I \setminus I^{RM}    (2)
I_{i,n,t} - I^{L}_{i,n,t} \ge 0    \forall i \in I \setminus I^{RM}    (3)
X_{i,n,t} \le M \delta_{i,n,t}    \forall i \in I \setminus I^{RM}    (4)
C_{i,n,t} = \sum_{i': \beta_{i,i',n} \ne 0} \beta_{i,i',n} X_{i',n,t}    \forall i \in I \setminus I^{FP}    (5)
C_{i,n,t} = \sum_{m'} P_{i,m',n,t}    \forall i \in I^{IP}    (6)
X_{i,n,t} R_{i,n,t} \le H_{i,n,t}    \forall i \in I \setminus I^{RM}    (7)
X_{i,n,t}, C_{i,n,t}, I_{i,n,t}, P_{i,n,m,t}, PP_{i,n,m,t} \ge 0;   \delta_{i,n,t} \in \{0,1\}    (8)
Constraints (2) describe the material balance in factory n. Constraints (3) indicate that the inventory level cannot be lower than the safety target. Constraints (4) ensure that the fixed production cost is incurred whenever production takes place. Constraints (5) represent the consumption of raw or intermediate materials using the bills of material. Constraints (6) force all material shipped to factory n to be consumed there in the same time period. Constraints (7) enforce the production capacity limitations of factory n. \delta_{i,n,t} is a binary variable, which is 1 if factory n produces product i in time period t, and 0 otherwise.
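To make the structure of (R_n) concrete, the following Python sketch (using the open-source PuLP library, which is not the authors' tool) sets up a drastically simplified single-factory, single-product version of the model with hypothetical data. Only production, inventory and fixed-cost decisions are kept, with constraints in the spirit of (2)-(4) and (7); the inter-factory flow variables and internal-price terms are omitted for brevity.

```python
# Hedged sketch of a toy single-product factory planning model in the spirit of (R_n).
# All data are hypothetical; the real model has many products, factories and flows.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, PULP_CBC_CMD, value

periods = [1, 2, 3]
demand = {1: 30.0, 2: 50.0, 3: 40.0}      # d_{i,n,t}
safety = {1: 10.0, 2: 10.0, 3: 10.0}      # I^L_{i,n,t}
var_cost, fixed_cost, hold_cost = 4.0, 25.0, 0.5   # nu, po, h
capacity, big_m, init_inv = 60.0, 1e4, 20.0

prob = LpProblem("factory_n_toy_plan", LpMinimize)
x = {t: LpVariable(f"X_{t}", lowBound=0) for t in periods}            # production
inv = {t: LpVariable(f"I_{t}", lowBound=0) for t in periods}          # inventory
delta = {t: LpVariable(f"delta_{t}", cat=LpBinary) for t in periods}  # production on/off

# Objective in the spirit of Eq. (1): variable + fixed + inventory costs.
prob += lpSum(var_cost * x[t] + fixed_cost * delta[t] + hold_cost * inv[t] for t in periods)

prev = init_inv
for t in periods:
    prob += inv[t] == prev + x[t] - demand[t]      # material balance, cf. (2)
    prob += inv[t] >= safety[t]                    # safety stock, cf. (3)
    prob += x[t] <= big_m * delta[t]               # fixed-cost trigger, cf. (4)
    prob += x[t] <= capacity                       # capacity, cf. (7)
    prev = inv[t]

prob.solve(PULP_CBC_CMD(msg=False))
for t in periods:
    print(t, value(x[t]), value(inv[t]))
```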
2.2 Formulation of the Overall Problem
The goal of the enterprise is to determine the amounts of intermediates transported among the factories so as to minimize the overall cost, i.e. the sum of the individual objectives:

(P)   Min L = \sum_{n} L_n    (9)

s.t.
P_{i,n,m,t} = PP_{i,n,m,t}    \forall i \in I^{IP}    (10)
The inter-factory constraints (10) indicate that the amount of intermediate i promised by factory n should be equal to the amount required by factory m.

3. SOLUTION METHODOLOGY
For the whole enterprise model (P), constraints (10) are relaxed by the internal-prices, i.e. the Lagrange multipliers \lambda_{i,n,m,t}:

J = \sum_{n} L_n + \sum_{i} \sum_{n} \sum_{m} \sum_{t} \lambda_{i,n,m,t} (P_{i,n,m,t} - PP_{i,n,m,t})    (11)
Collecting all the terms related to factory n in (11), the modified model for factory n is described as follows:

(R'_n)   Min L'_n = L_n + \sum_{i} \sum_{m'} \sum_{t} \lambda_{i,m',n,t} P_{i,m',n,t} - \sum_{i} \sum_{m} \sum_{t} \lambda_{i,n,m,t} PP_{i,n,m,t}    (12)

s.t. (2), (3), (4), (5), (6), (7), (8)

It can be seen that all the variables in (R'_n) are local, so (R'_n) can be solved with local information. The internal-prices can be updated by:
\lambda^{k+1} = \lambda^{k} + a^{k} g(\lambda^{k})    (13)

Here, k is the iteration index, and g(\lambda) is the subgradient attached to \lambda, which specifies the violation degree of constraints (10):

g(\lambda^{k}_{i,n,m,t}) = P_{i,n,m,t} - PP_{i,n,m,t}    (14)
a^k is the step size at the kth iteration, which is given by:

a^{k} = \xi \, (J^{*} - J^{k}) / (g(\lambda^{k})^{T} g(\lambda^{k})),   0 < \xi < 2    (15)

where J^{*} is an estimate of the overall optimal objective value and J^{k} is the value of J = \sum_{n} J_n at the kth iteration. It has been proved that this method converges at the rate of a geometric progression [4]. The step sizing mechanism used here adopts the version of Fisher [5]: decrease the step size by half whenever J^{k} fails to decrease within some fixed number of iterations. When J cannot be decreased after a fixed number of iterations, the coordination algorithm stops. However, P_{i,n,m,t} is usually not equal to PP_{i,n,m,t} at that moment. We can take the average of P_{i,n,m,t} and PP_{i,n,m,t} as the final amount of intermediate i transported from factory n to factory m. If this amount is beyond the production capacity of the factory, the factory's maximal capacity is used. The final value of PP_{i,n,m,t} is therefore taken as:

PP_{i,n,m,t} = \min\{ (1/2)(P_{i,n,m,t} + PP_{i,n,m,t}), \; H_{i,n,t}/R_{i,n,t} \}    (16)
Thus, the coordination algorithm for the whole enterprise can be represented as follows:
Step 1: The coordination center sets the initial internal-prices \lambda_{i,n,m,t}, which can be zero.
Step 2: The coordination center sends \lambda_{i,n,m,t} to the corresponding factory n.
Step 3: Each factory n solves (R'_n) using its local information and the internal-prices, and submits the optimal solution J_n, P_{i,m',n,t} and PP_{i,n,m,t} to the coordination center.
Step 4: After all (R'_n) have been solved, the coordination center judges whether the algorithm should terminate. If not, it updates the internal-prices by (13) and returns to Step 2.
Step 5: Compute the final flows PP_{i,n,m,t} by Eq. (16), and stop.
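A minimal numerical sketch of this price-coordination loop is given below for a toy instance with one intermediate, one time period and two factories (one supplier, one consumer). The closed-form subproblem "solvers" and the simple diminishing step size are deliberate simplifications of (R'_n) and Eq. (15), and all data are hypothetical.

```python
# Hedged sketch: internal-price coordination by subgradient updates (cf. Eqs. (13)-(16)).
# Toy instance: factory 1 supplies one intermediate to factory 2 in a single period.
c_prod, capacity = 4.0, 60.0      # supplier's unit production cost and capacity
v_use, demand = 10.0, 50.0        # consumer's unit value for the intermediate and its need

def supplier_subproblem(lam):
    # Factory 1 promises PP to minimise (c_prod - lam) * PP over 0 <= PP <= capacity.
    return capacity if lam > c_prod else 0.0

def consumer_subproblem(lam):
    # Factory 2 requests P to minimise (lam - v_use) * P over 0 <= P <= demand.
    return demand if lam < v_use else 0.0

lam, step0 = 0.0, 2.0
for k in range(1, 201):
    pp = supplier_subproblem(lam)          # promised flow PP
    p = consumer_subproblem(lam)           # requested flow P
    g = p - pp                             # subgradient = violation of constraint (10)
    lam = max(0.0, lam + (step0 / k) * g)  # Eq. (13) with a simple diminishing step

# Final flow in the spirit of Eq. (16): average of request and promise, capped by capacity.
pp, p = supplier_subproblem(lam), consumer_subproblem(lam)
final_flow = min(0.5 * (p + pp), capacity)
print(f"internal price ~ {lam:.2f}, final flow = {final_flow:.1f}")
```

In this toy case the internal price settles near the supplier's marginal production cost, which is the economic interpretation of the Lagrange multipliers used in the paper.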
The coordination procedure is summarized in Fig. 1, in which only the procedure in factory n is shown; the procedures in the other factories are the same.

4. CONCLUSION
The production planning problem of a large enterprise whose factories' local information cannot be collected centrally can be solved by the proposed coordination algorithm. An internal-price is introduced for every intermediate transported among the factories. By using the internal-prices, independent factory planning models are set up, which can be solved with local knowledge. Through updating the internal-prices, a near optimal solution of the multi-factory production planning problem is achieved. A practical enterprise with six factories has been studied, and the numerical results show that the internal-prices converge.
Fig. 1. Coordination Procedure for Achieving the Whole Enterprise Planning

NOMENCLATURE
Sets
I^{RM}: set of raw materials
I^{IP}: set of intermediates
I^{FP}: set of finished products
I = I^{RM} \cup I^{IP} \cup I^{FP}
N: set of factories in the enterprise
T: set of periods in the planning horizon

Variables
X_{i,n,t}: production amount of i \in I \setminus I^{RM} at factory n in time period t
C_{i,n,t}: consumption of raw material or intermediate i \in I \setminus I^{FP} at factory n in time period t
I_{i,n,t}: inventory level of i \in I \setminus I^{RM} at the end of time period t at factory n
P_{i,n,m,t}: flow of intermediate i \in I^{IP} from factory n to factory m requested by factory m in time period t
PP_{i,n,m,t}: flow of intermediate i \in I^{IP} from factory n to factory m promised by factory n in time period t
\delta_{i,n,t}: binary variable, \delta_{i,n,t} = 1 if X_{i,n,t} > 0 and \delta_{i,n,t} = 0 otherwise

Parameters
\lambda_{i,n,m,t}: price of intermediate i \in I^{IP} from factory m for factory n in time period t, which can be obtained from the coordination center directly
\nu_{i,n}: variable cost of producing a unit of product i \in I \setminus I^{RM} at factory n
p_{i,n,t}: price of raw material i \in I^{RM} at factory n in time period t
po_{i,n}: fixed cost of producing i \in I \setminus I^{RM}
h_{i,n}: inventory cost of holding a unit of product i at factory n for the duration of one time period
f_{i,n,m}: variable cost of transporting a unit of intermediate i \in I^{IP} from factory n to factory m
I^{L}_{i,n,t}: safety stock target for product i at factory n in time period t
d_{i,n,t}: demand for product i at factory n in time period t
\beta_{i,i',n}: yield-adjusted amount of raw material or intermediate i \in I \setminus I^{FP} that must be consumed to produce a unit of i' at factory n
R_{i,n,t}: production time per unit of product i \in I \setminus I^{RM} at factory n in time period t
H_{i,n,t}: available capacity of factory n in time period t for product i
M: a large number
REFERENCES
1. A. Drexl, A. Kimms. European Journal of Operational Research, 99 (1997): 221.
2. A. Gupta, C.D. Maranas. Ind. Eng. Chem. Res., 38 (1999): 1937.
3. C.M. McDonald, I.A. Karimi. Ind. Eng. Chem. Res., 36 (1997): 2691.
4. P.B. Luh, M. Ni, H.X. Chen and L.S. Thakur. IEEE Transactions on Robotics and Automation (accepted).
5. M.L. Fisher. Mgmt. Sci., 27 (1981): 1.
A Hierarchical Architecture of Modelling on Integrated Supply Chain Optimization Systems of Continuous Process Industries
Zhangyu ZHOU, Siwei CHENG and Ben HUA
Key Lab for Heat Transfer Enhancement and Process Energy Conservation of the State Education Ministry, South China University of Technology, Guangzhou, 510640, P.R. China
Abstract
Towards the integration of supply chain and investment decision making, this paper proposes a hierarchical architecture of modelling for integrated supply chain optimization systems, which consists of three levels: an integrated long range planning and decision making level, a sustainable supply chain planning and scheduling level, and a unit process simulation and optimization level. At the long range planning level, a goal programming (GP) model is utilized to debottleneck the supply chain. If some expansion is needed, investment evaluation is executed at this level based on the analytic hierarchy process. The MIGP approach is utilized to make an optimum decision on which action or coupled actions should be selected. At the second level, supply chain mid-term planning based on a GP model and short-term scheduling based on MIGP or MINLP models are executed at various time horizons from annual to daily. All models on these two levels are simplified with the parameters offered by the bottom level. At the bottom level, in order to get quick solutions, ANN models combined with GA are utilized to simulate and optimize the unit operation processes under the constraints offered by the higher levels. Rigorous models are utilized to train the ANN models. The application of this hierarchical architecture to a refinery plant is given.
Key words: supply chain, hierarchical architecture, modelling

1. INTRODUCTION
The planning models available in the process systems engineering literature can be categorized into three distinct groups based on the time frames they address [1-3]. Long-range planning or capacity expansion models are strategic planning models that aim to identify the optimal timing, location and extent of additional investments in processing networks over a relatively long time horizon. Short-term scheduling models or operational planning models constitute the other extreme of the spectrum of planning models; they are characterized by very short time periods over which the various manufacturing tasks have to be fully sequenced. The third class of models incorporates some features of both the long range and the short-term models and is characterized by dealing with multiple production facilities. However, in an integrated supply chain planning and scheduling system, all these models should be incorporated. How to construct such a complex structure and find a feasible modeling methodology are the new challenges.
Although many rigorous unit operation models and software packages have been developed, simplified unit operation models are still used in supply chain planning and scheduling in order to get a quick solution. Using the existing unit operation models and software effectively is a sound way to improve the consistency of planning results with production practice [4]. However, it is hard to utilize rigorous unit process models in the planning and scheduling models because of the complexity of the unit processes. In this work, a hierarchical structure of modeling for an integrated supply chain optimization system is proposed, in which supply chain long range planning and investment decision making, sustainable supply chain mid-term planning and short-term scheduling are integrated. The modeling methodology for all levels is presented under this structure. In order to improve the applicability of this integrated planning and scheduling system, unit process simulation and optimization is also incorporated in the system. A case study is given.
Figure 1: Hierarchical architecture of modeling on supply chain optimization system

2. HIERARCHICAL ARCHITECTURE OF MODELING ON INTEGRATED SUPPLY CHAIN OPTIMIZATION SYSTEM
According to the hierarchical structure of process systems proposed by Hua [6,7], the global optimization of a process system includes three hierarchical levels: an integrated design and investment decision making level, an integrated process operation and marketing level, and an integrated control and management level. The hierarchical structure of modeling on an integrated supply chain optimization system of the process industry is similar, and can be divided into three levels: an integrated supply chain long range planning and investment decision making level, a sustainable supply chain mid-term planning and short-term scheduling level, and a unit process simulation and optimization level. The hierarchical structure of modeling on the integrated supply chain management system is shown in Figure 1. At the first level, i.e. the integrated supply chain long range planning and investment decision making level, the integrated long range planning and investment decision making system proposed by Zhou [8,9] is utilized to debottleneck the supply chain. At the second level, i.e. the sustainable supply chain mid-term planning and short-term scheduling level, supply chain planning and scheduling are executed at three time horizons: annual, monthly and daily. Under the guidance of the long range planning, a discrete time annual sustainable supply chain planning model should be addressed first. Then, multi-period supply chain optimization models are required to decompose the annual planning results into monthly and daily schedules in turn. All unit process models at this level are simplified with the parameters offered by the bottom level. At the bottom level, unit operation processes are simulated and optimized under the constraints offered by the higher levels. Using the results of unit process simulation and optimization, the simplified unit process models at the higher levels can be modified for better consistency with practice. Because of information uncertainty, the planning and scheduling systems must be multi-period, rolling systems. Re-planning and re-scheduling are necessary when market demands change or when under-achievement/over-achievement of the last time period occurs.

3. MODELING ON INTEGRATED SUPPLY CHAIN OPTIMIZATION SYSTEM
3.1 Integrated Supply Chain Long Range Planning and Investment Decision Making
According to the integrated supply chain long range planning and investment decision making system, a hierarchical structure of the investment evaluation and decision making system should be set up first under three strategic objectives: enterprise benefits, social benefits and customer benefits. Secondly, the relative weights of the various qualitative and quantitative elements in the hierarchical structure are evaluated with the Analytical Hierarchy Process (AHP). Thirdly, a multi-objective multi-period mixed integer nonlinear programming model under the hierarchical structure of the investment evaluation and decision making system is formulated to streamline the operations and suggest design modifications that will improve the efficiency and sustainability of the supply chain of the process industry. It can be utilized to anticipate the impact of market changes and of introducing new unit processes; thus supply chain bottlenecks can be identified and debottlenecking projects can be suggested. Finally, Goal Programming (GP) is utilized to solve the multi-objective multi-period mixed integer nonlinear programming model, and the optimized investment project or coupled projects are selected according to the relative weights of the objectives.
The economic objective of long range planning and investment decision making is maximizing the increase of fixed capital over the long time horizon through the optimum arrangement of investments. Distinguished from the operation optimization model of the supply chain, capital investment, loan interest, facilities' depreciation charges and maintenance, in addition to operation costs, are important components of the total cost of the supply chain. In order to improve the efficiency of the supply chain, process operability should also be considered when debottlenecking the supply chain. Since operability is a qualitative variable, it should be quantified. Investment constraints should also be satisfied in addition to operation constraints. These issues were discussed in detail in the literature [8,9].

3.2 Sustainable Supply Chain Mid-term Planning and Short-term Scheduling
According to the integrated supply chain optimization strategy, three models should be presented at this level: a discrete time linear programming model for sustainable supply chain annual planning, and multi-period linear programming models for supply chain planning decomposition and supply chain scheduling, respectively. All unit process models can be simplified in these models with linear yield rates and energy consumption. Fig. 2 shows the hierarchical structure of the integrated supply chain planning and scheduling system.
Figure 2: Integrated supply chain planning and scheduling system

Sustainability involves the multiple objectives of social, economic, resource and environmental sustainability, some of which are conflicting. For social sustainability, products should ensure that the needs of the population are met. The economic objective of supply chain planning and scheduling is to maximize the increase of currency around the supply chain through an optimum operation strategy. For resource sustainability, the goal is to minimize non-renewable resource consumption. For environmental sustainability, resource use should be efficient and should not create permanent environmental damage. Therefore, sustainable supply chain planning is a multi-objective optimization problem that can be solved by GP combined with the AHP. The annual planning model and the multi-period planning decomposition model can be formulated based on the sustainable supply chain optimization model proposed by Zhou et al. [10]. Several kinds of operation constraints must be satisfied, including balance constraints, capacity constraints, cost constraints, market constraints, maintenance constraints, quality constraints, technical constraints, policy constraints, risk assurance and non-negativity constraints. All objectives and constraints were addressed in detail in the literature [4] and [10].
The goal of supply chain scheduling is to determine the operation tactics that carry out the planning results. Binary variables should be introduced here to define the status of facilities, so the scheduling problem can be represented as a multi-period mixed integer linear programming model. The objective of the scheduling model is maximizing the economic benefits of the supply chain, and the other objectives in the planning model can be set as summation constraints. Inventory cost and transportation cost play important roles in the operation cost of the supply chain.

3.3 Unit Process Simulation and Optimization
The transport model, inventory management model, and purchase and sales optimization models can be represented as mixed integer nonlinear or linear programming models, which can be easily integrated with the planning and scheduling models. However, processing unit processes are very complicated in the process industries and various modeling approaches have been utilized, so it is hard to integrate rigorous unit process models with the planning and scheduling models. The artificial neural network (ANN), which offers quick solutions and can be easily incorporated with other approaches, can be utilized to model the unit processes. In order to improve the accuracy of an ANN model, it can be trained with various rigorous unit process models or software and tested against industrial practice. In order to optimize the operation parameters, a genetic algorithm (GA) is utilized to seek optimum solutions to the ANN models. In order to integrate the unit process models with the planning and scheduling models, all variables should be coordinated. The input variables of a unit process model should contain the output variables of the planning and scheduling models, whereas the output variables of the unit process models contain the optimum parameters of the simplified unit process models in the planning and scheduling models.
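A minimal sketch of the simulation-optimization idea of Section 3.3 is given below: a small feed-forward network acts as a stand-in for an ANN surrogate of a unit process (here it is fitted to a synthetic "rigorous model" function rather than to real plant data), and a simple genetic algorithm searches its input space for the operating point with the best predicted output. The network size, data, crude fitting procedure and GA settings are all hypothetical and serve only to illustrate the coupling.

```python
# Hedged sketch: ANN surrogate of a unit process + GA search over its inputs (cf. Section 3.3).
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Synthetic "rigorous model": stands in for a detailed unit process simulator.
rigorous = lambda x: np.exp(-((x[..., 0] - 0.6) ** 2 + (x[..., 1] - 0.3) ** 2) / 0.1)

# Fit a tiny one-hidden-layer surrogate by random weight search (crude, illustration only).
X = rng.uniform(0, 1, (200, 2)); y = rigorous(X)
def predict(w, x):
    w1, b1, w2, b2 = w[:10].reshape(2, 5), w[10:15], w[15:20], w[20]
    return sigmoid(sigmoid(x @ w1 + b1) @ w2 + b2)
best_w, best_err = None, np.inf
for _ in range(3000):
    w = rng.normal(0, 2, 21)
    err = np.mean((predict(w, X) - y) ** 2)
    if err < best_err:
        best_w, best_err = w, err

# Simple GA over the surrogate's inputs: selection, arithmetic crossover, mutation.
pop = rng.uniform(0, 1, (40, 2))
for gen in range(60):
    fitness = predict(best_w, pop)
    parents = pop[np.argsort(fitness)[-20:]]            # keep the best half
    idx = rng.integers(0, 20, (20, 2))
    alpha = rng.uniform(0, 1, (20, 1))
    children = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]
    children += rng.normal(0, 0.05, children.shape)     # mutation
    pop = np.clip(np.vstack([parents, children]), 0, 1)
best = pop[np.argmax(predict(best_w, pop))]
print("GA optimum of the surrogate:", best)
```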
4. APPLICATIONS
This architecture has been applied to the supply chain optimization of a refinery plant. At the long range planning and decision making level, the emphasis is on debottlenecking the transport, inventory and processing stages of the supply chain. At the sustainable supply chain planning and scheduling level, all stages of the supply chain are considered. The debottlenecking suggestions and the sustainable supply chain planning and scheduling results obtained were approved by the decision maker.

5. CONCLUSIONS
A hierarchical architecture of modelling on integrated supply chain optimization systems is proposed in this paper, which consists of three levels: an integrated long range planning and decision making level, a sustainable supply chain planning and scheduling level, and a unit process simulation and optimization level. The integrated supply chain long range planning and investment decision making system proposed by Zhou et al. [8,9] is utilized to deal with supply chain long range planning and investment decision making issues simultaneously and to debottleneck the supply chain. The discrete time sustainable supply chain optimization model is reformulated into multi-period supply chain mid-term planning and short-term scheduling models based on various time horizons from annual to daily. Artificial neural networks combined with a genetic algorithm are utilized to simulate and optimize the unit operation processes to improve the applicability of the planning and scheduling results. The application of this hierarchical architecture to a refinery plant illustrates that it is feasible.

ACKNOWLEDGEMENTS
Financial support from the National Natural Science Foundation of China (No. 79931000) and the Major State Basic Research Development Program (G20000263) is gratefully acknowledged.

REFERENCES
[1] Shobrys, D.E. and White, D.C., (2000). Computers & Chem. Engng., 24(2/7), 163-173.
[2] Das, B.P., Richard, J.G., Shah, N., et al., (2000). Computers & Chem. Engng., 24(2/7), 1625-1631.
[3] Gupta, A., Maranas, C.D., (1999). Ind. & Eng. Chem. Res., 38(5), 1937-1947.
[4] Zhou, Z., Cheng, S. and Hua, B., (2000). Petrochem. Tech., 29(11), 858-862.
[5] Fransoo, J.C., (1998). Computers in Industry, 36(2), 163-164.
[6] Hua, B., Zhou, Z., Yang, S., et al., (1999). Manufacturing Automation, 21(s), 299-304.
[7] Hua, B., Zhou, Z. and Cheng, S., (2001). Chinese J. of Chem. Eng., 9(4), 395-401.
[8] Zhou, Z., Cheng, S., Hua, B., et al., (2001). Proceedings of The 6th World Congress of Chemical Engineering, Melbourne, Australia, September 23-27.
[9] Zhou, Z., Cheng, S., Hua, B., et al., (2001). Chinese J. of Chem. Eng., 9(4), 402-406.
[10] Zhou, Z., Cheng, S. and Hua, B., (2000). Computers & Chem. Engng., 24(2/7), 1151-1158.
Computer-aided molecular design with BP-ANN and global optimization algorithm
Xiang ZHOU, Xiaorong HE, Bingzhen CHEN and Tong QIU
Department of Chemical Engineering, Tsinghua University, Beijing 100084, China
Abstract
A new method of computer-aided molecular design (CAMD), in which a back-propagation artificial neural network (BP-ANN) is employed to establish the quantitative structure-activity relationship (QSAR) model and the αBB algorithm is applied to search for the greatest activity and the corresponding structural parameters, is presented in this paper. In order to achieve precise QSAR models with BP-ANN, a self-clustering algorithm and a genetic algorithm are introduced into the modeling process. The convex lower bounding function for the BP-ANN model is constructed to perform the αBB algorithm. The efficiency of the proposed method is evaluated with an illustrative example, and the limitations of the method are discussed.
Keywords: CAMD, QSAR, BP-ANN, αBB algorithm

1. INTRODUCTION
The CAMD approach has been proved to be an effective alternative to the traditional synthesize-and-test method. The procedure of CAMD consists of the solution of two problems: the forward problem is to establish a QSAR model so that the activity can be predicted given the molecular structure; the backward problem is to identify the appropriate structural parameters given the desired activity, which is an optimization problem [1]. An accurate QSAR model is the basis of CAMD. However, for many complex molecules, the correlations between the structural parameters and the activity are very difficult to formulate. Recently, BP-ANN was introduced to construct QSAR models for this kind of molecule because of its unique computing architecture and powerful mapping ability [2,3]. On the other hand, a precise model is not easily achieved by BP-ANN because of some unfavorable problems. Furthermore, no attention has been given to solving the backward problem based on a BP-ANN model. In this paper, a novel method for CAMD using BP-ANN and the αBB algorithm is described (Fig. 1). Two improvements are made to the modeling process of BP-ANN in order to ensure the precision of the QSAR model. The αBB algorithm, which is a global optimization algorithm [4], is applied to solve the backward problem so that the global optimal structural parameters providing the greatest activity can be obtained. An illustrative example concerning the design of dopamine β-hydroxylase (DβH) inhibitors is carried out to examine the method, and conclusions are drawn with a discussion of the merits and limitations of the proposed method.
Fig. 1. The flowchart of the proposed method: collect the data samples; partition the samples with the self-clustering algorithm and train the network with the GDR-EGA algorithm until the precision demand is met (the forward problem); then construct the lower bounding function and search for the global optimal structural parameters with the αBB algorithm (the backward problem).

2. SOLVING THE FORWARD PROBLEM WITH BP-ANN MODELING
A BP-ANN model is a kind of black-box model which establishes a correlation between the inputs and outputs of the network. In the application to QSAR modeling, the structural parameters and the activity of a molecule are treated as the inputs and the output of the network respectively. Thus the QSAR model is as follows: y = f(X), where X is a vector of structural parameters and y is the activity of the molecule. There are two main obstacles in the modeling process of BP-ANN that may result in a failure to obtain a precise model: the correlations in the samples cannot be learned in a general way by the network when the partition into training and testing samples is improper, and false local minima trap the training of the network. In this paper, two improvements are made to address these two obstacles respectively.

2.1. Classification of the samples with a self-clustering algorithm
Before the samples are partitioned into training and testing samples, an overall examination of the distribution of all of the samples is necessary. A Euclidean distance based self-clustering algorithm is used to classify the samples. Each sample is assigned to the correct cluster according to the Euclidean distance between the sample and the kernels of the different clusters:
\Delta(S, K) = \sqrt{ \sum_{i=1}^{d} (s_i - k_i)^2 }

where S = (s_1, s_2, \ldots, s_d)^T represents a sample, and K = \sum_{t=1}^{n} S_t / n is the kernel of a cluster
with n samples. If \Delta(S, K_m) = \min_i \{ \Delta(S, K_i) \}, then S is admitted into the cluster whose kernel is K_m. The procedure of the self-clustering algorithm is as follows:
Step 1: each sample is treated as an initial cluster.
Step 2: isolated samples are attributed to the existing clusters.
Step 3: small clusters are merged into the large ones.
The samples in each final cluster are partitioned for training and testing in the same proportion so that the mapping relations in all of the samples can be comprehensively learned by the network.

2.2. GDR-EGA algorithm for training
In many cases, the training algorithm of BP-ANN is the generalized delta rule (GDR) [5], which is a gradient search algorithm. GDR provides a high convergence speed until the search gets stuck at a local minimum point. An extended genetic algorithm (EGA) including real number coding, competitive reproduction, multipoint crossover and multipoint mutation is employed to overcome false local minima. During the training, a false local minimum can be detected when the
training error E is large and the gradient M is close to zero at the same time, and is overcome when E is reduced and M is enlarged. In order to increase the possibility of escaping from a false local minimum, the fitness function of EGA is composed so that the solutions with relatively small E and large M are highly evaluated:
fitness(W) = |M(W)| / E(W)    (1)
where W denotes a set of weights and thresholds. After escaping from a false local minimum, the training continues with GDR until the precision demand of the network is achieved or another false local minimum is encountered.
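The following Python sketch illustrates how the fitness of Eq. (1) could be used to rank perturbed candidates of a stalled weight set: the error E is the mean squared training error, the gradient magnitude M is approximated numerically, and the candidate with the largest |M|/E is preferred as a restart point. The tiny network, the synthetic data and the simple perturbation scheme are hypothetical and stand in for the full EGA with crossover and mutation.

```python
# Hedged sketch of the EGA fitness of Eq. (1): prefer candidates with small error E
# and large gradient magnitude M, i.e. points likely to escape a false local minimum.
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Hypothetical training set: 5 structural parameters -> normalized activity.
X = rng.uniform(0, 1, (30, 5))
y = sigmoid(X @ rng.normal(0, 1, 5))

def forward(w, x):
    # Three-layer network: 5 inputs, 4 hidden sigmoid nodes, 1 sigmoid output.
    w1, b1, w2, b2 = w[:20].reshape(5, 4), w[20:24], w[24:28], w[28]
    return sigmoid(sigmoid(x @ w1 + b1) @ w2 + b2)

def error(w):                       # E(W): mean squared error
    return np.mean((forward(w, X) - y) ** 2)

def grad_norm(w, eps=1e-5):         # M(W): numerical gradient magnitude
    g = np.array([(error(w + eps * e) - error(w - eps * e)) / (2 * eps)
                  for e in np.eye(w.size)])
    return np.linalg.norm(g)

fitness = lambda w: grad_norm(w) / max(error(w), 1e-12)   # Eq. (1)

stalled = rng.normal(0, 1, 29)      # weight set where GDR is assumed to have stalled
candidates = [stalled + rng.normal(0, 0.3, 29) for _ in range(50)]  # perturbed candidates
best = max(candidates, key=fitness)
print(f"stalled: E={error(stalled):.4f}  best candidate: E={error(best):.4f}, "
      f"fitness={fitness(best):.2f}")
```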
3. SOLVING THE BACKWARD PROBLEM WITH THE αBB ALGORITHM
As mentioned above, the backward problem can be regarded as an optimization problem:

max y = f(X)
s.t. g(X) \le 0
     x_i^L \le x_i \le x_i^U,   X \in R^n

In this paper, the objective function, i.e. the QSAR model, is formed by a three-layer BP-ANN. The transfer function in the network is the sigmoidal function:
S(x) = 1 / (1 + e^{-x})    (2)
According to the mapping mechanism of BP-ANN [5], the objective function is twice differentiable, which means it is possible to search for the global optimum with the αBB algorithm [4]. A convex lower bounding function of the objective function is required to perform the αBB algorithm. In many cases, the lower bounding function can be defined as follows:
L(X) = f(X) + \alpha \sum_{i=1}^{n} (x_i^L - x_i)(x_i^U - x_i)    (3)
where \alpha is the key parameter that guarantees the convexity of L(X). An interval Hessian matrix of f(X) is the prerequisite for the calculation of \alpha. The interval Hessian matrix of a three-layer BP-ANN is as follows:
[H] = ([h_{ij}]), where

h_{ij} = (1 - 2y)(y - y^2) \left[ \sum_{k=1}^{nhid} wih_{ik} who_{k1} (hout_k - hout_k^2) \right] \left[ \sum_{k=1}^{nhid} wih_{jk} who_{k1} (hout_k - hout_k^2) \right]
       + (y - y^2) \sum_{k=1}^{nhid} \left[ wih_{ik} wih_{jk} who_{k1} (1 - 2hout_k)(hout_k - hout_k^2) \right]    (4)
In Eq. (4), nhid is the number of hidden nodes, wih denotes the connection weights between the input nodes and the hidden nodes, who denotes the weights between the hidden nodes and the output node, and hout_k represents the output value of hidden node k. As we can see, Eq. (4) can be reformulated as follows:
h_{ij} = P \times Q_i \times Q_j + R \times T_{ij}    (5)

where
P = (1 - 2y)(y - y^2)
Q_i = \sum_{k=1}^{nhid} wih_{ik} who_{k1} (hout_k - hout_k^2)
R = y - y^2
T_{ij} = \sum_{k=1}^{nhid} wih_{ik} wih_{jk} who_{k1} (1 - 2hout_k)(hout_k - hout_k^2)
In this paper, P, Q, R and T are regarded as independent intermediate variables, so that their lower and upper bounds can be determined separately. The lower and upper bounds of h_{ij} are then calculated accordingly. Finally, an accurate \alpha is obtained based on Gerschgorin's theorem for interval matrices [4].
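The sketch below illustrates the role of α in Eq. (3) for a small, randomly weighted three-layer sigmoid network. Here α is only estimated from sampled finite-difference Hessians, not from the rigorous interval-Hessian/Gerschgorin bound used in the paper, and the convexity of L is then checked numerically over the box. The network size, bounds and sample counts are all hypothetical.

```python
# Hedged sketch of the alphaBB convex underestimator of Eq. (3) for a small sigmoid network.
# Alpha is estimated by sampling Hessian eigenvalues (not the rigorous Gerschgorin bound).
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
n, nhid = 2, 3
wih, b1 = rng.normal(0, 1, (n, nhid)), rng.normal(0, 1, nhid)
who, b2 = rng.normal(0, 1, nhid), rng.normal(0, 1)
xL, xU = np.zeros(n), np.ones(n)

def f(x):                                   # three-layer BP-ANN output y = f(X)
    return sigmoid(sigmoid(x @ wih + b1) @ who + b2)

def hessian(x, eps=1e-4):                   # finite-difference Hessian of f at x
    h = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = eps * np.eye(n)[i], eps * np.eye(n)[j]
            h[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * eps ** 2)
    return h

# alpha >= max(0, -0.5 * min eigenvalue of the Hessian over the box) guarantees convexity
# of L; here the minimum eigenvalue is only sampled, so the bound is approximate.
samples = rng.uniform(xL, xU, (500, n))
lam_min = min(np.linalg.eigvalsh(hessian(x)).min() for x in samples)
alpha = max(0.0, -0.5 * lam_min) * 1.2      # small safety factor on the sampled estimate

def L(x):                                   # Eq. (3): convex underestimator of f on [xL, xU]
    return f(x) + alpha * np.sum((xL - x) * (xU - x))

# Convexity check: the Hessian of L equals H_f + 2*alpha*I, so its smallest eigenvalue
# should be (approximately) nonnegative over the box.
check = rng.uniform(xL, xU, (300, n))
worst = min(np.linalg.eigvalsh(hessian(x) + 2 * alpha * np.eye(n)).min() for x in check)
print("alpha =", round(alpha, 4), " smallest eigenvalue of Hessian(L) over samples =", round(worst, 6))
```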
4. ILLUSTRATIVE EXAMPLE
In this section, a design example is described to examine the CAMD method proposed in this paper. Much research has been carried out on the QSAR modeling of DβH inhibitors (Fig. 2) based on 47 existing sample molecules, and three models were compared by So and Karplus [6]: a molecular shape analysis (MSA) model with 5 parameters, a receptor surface model (RSM) with 2 parameters and a genetic neural network (GNN) model. The precision of the different models was measured by a cross-validated parameter as follows:

cross-validated q^2 = 1 - \frac{ \sum_{i=1}^{47} (y_{i,observed} - y_{i,predicted})^2 }{ \sum_{i=1}^{47} (y_{i,observed} - \bar{y}_{observed})^2 }    (6)
For the maximum possible precision, q^2 takes a value of 1. In this paper, the QSAR model was established with a three-layer BP-ANN. The 5 structural parameters used in the MSA model are selected as the inputs of the network to predict the activity: V_0, a measure denoting the common overlap volume against the most active compound; Q_{3,4,5}, the sum of the partial atomic charges on atoms 3, 4 and 5; Q_6, the partial atomic charge on atom 6; \pi_0, the molecular lipophilicity; and \pi_4, the water/octanol fragment constant of the 4-substituent. Before training, the 47 samples are classified into 4 clusters by use of the Euclidean distance based self-clustering algorithm. The precision of the four models is compared in Table 1. It is easy to see that the QSAR model established with BP-ANN has the highest precision. Based on the BP-ANN model, the global optimal structural parameters were obtained through the αBB algorithm (Table 2). As a reference, the structural parameters of the molecule with the maximum activity among the 47 sample molecules are also included in Table 2. As we can see, the global optimum differs considerably from the reference in \pi_0 and \pi_4, while the values of V_0, Q_{3,4,5} and Q_6 are very close together. The indication is that a new molecule with greater activity can be obtained if modifications are made to the reference molecular structure so that the molecular lipophilicity and the water/octanol fragment constant of the 4-substituent are increased.
Fig. 2. Structural formula of DβH inhibitors

Table 1. Precision of the four models
Model | q^2
MSA model | 0.76
RSM | 0.79
GNN model | 0.77
BP-ANN model | 0.88
Table 2. Bounds of the structural parameters, the global optimum and the reference parameters
Parameter | Lower bound | Upper bound | Global optimum | Reference
y | 0 | 10 | 9.63 | 7.13
V_0 | 0 | 1 | 1.0 | 1.0
Q_{3,4,5} | -1 | 1 | 0.42 | 0.5
Q_6 | -1 | 1 | -0.07 | -0.06
\pi_0 | 0 | 40 | 26.51 | 7.09
\pi_4 | -1 | 1 | -0.34 | -0.67
5. DISCUSSION AND CONCLUSIONS
A new method utilizing BP-ANN modeling and the αBB algorithm for CAMD has been demonstrated in this paper. As mentioned in the Introduction, the structure-activity correlations of many molecules are implicit, which is a great drawback for the design of this kind of molecule. In this paper, BP-ANN is employed to construct the QSAR model for these molecules. High precision of the model is ensured with the aid of a self-clustering algorithm and an extended genetic algorithm. Reasonable global optimal structural parameters are calculated by use of the αBB algorithm to serve as guidance for modifying the molecular structure correctly. On the other hand, the proposed method suffers from some limitations. Because of the sigmoidal transfer function, the output values of the network are limited to [0,1], so the values of the activity have to be normalized into [0,1] in modeling and optimization. Therefore the precision of the QSAR model is influenced by the upper bound of the activity, which is difficult to predetermine, and an accurate global optimum may even be missed when the upper bound is too small. Therefore, further research needs to be carried out on different transfer functions for the network model so that the normalization can be avoided.

REFERENCES
1. Venkatasubramanian, V., Chan, K., Caruthers, J.M., Computer-aided molecular design using genetic algorithms, Comput. Chem. Engng., 18(9), 833 (1994).
2. Aoyama, T., Suzuki, Y., Ichikawa, H., Neural networks applied to structure-activity relationships, J. Med. Chem., 33(3), 905 (1990).
3. Tetko, I.V., Luik, A.I., Poda, G.I., Applications of neural networks in structure-activity relationships of a small number of molecules, J. Med. Chem., 36, 811 (1993).
4. Floudas, C.A., Deterministic Global Optimization, Kluwer Academic Publishers, Netherlands (2000).
5. Yao, X.L., Study of application of artificial neural network in optimization of petrochemical process, Ph.D. Thesis, Tsinghua University, Beijing (1992). (in Chinese)
6. So, S.S., Karplus, M., Three-dimensional quantitative structure-activity relationships from molecular similarity matrices and genetic neural networks. 2. Applications, J. Med. Chem., 40, 4360 (1997).