Lecture Notes in Computer Science Edited by G. Goos, J. Hartmanis, and J. van Leeuwen
2480
Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Tokyo
Yanbo Han Stefan Tai Dietmar Wikarski (Eds.)
Engineering and Deployment of Cooperative Information Systems First International Conference, EDCIS 2002 Beijing, China, September 17-20, 2002 Proceedings
Series Editors
Gerhard Goos, Karlsruhe University, Germany
Juris Hartmanis, Cornell University, NY, USA
Jan van Leeuwen, Utrecht University, The Netherlands

Volume Editors
Yanbo Han
Institute of Computing Technology
P.O. Box 2704, 100080 Beijing, China
E-mail: [email protected]
Stefan Tai
IBM T.J. Watson Research Center
P.O. Box 704, Yorktown Heights, NY 10598, USA
E-mail: [email protected]
Dietmar Wikarski
FH Brandenburg, University of Applied Sciences
Magdeburger Str. 50, 14770 Brandenburg an der Havel, Germany
E-mail: [email protected]

Cataloging-in-Publication Data applied for

Die Deutsche Bibliothek - CIP-Einheitsaufnahme
Engineering and deployment of cooperative information systems : first international conference ; proceedings / EDCIS 2002, Beijing, China, September 17-20, 2002. Yanbo Han ... (ed.). - Berlin ; Heidelberg ; New York ; Hong Kong ; London ; Milan ; Paris ; Tokyo : Springer, 2002
(Lecture notes in computer science ; Vol. 2480)
ISBN 3-540-44222-7

CR Subject Classification (1998): H.2, H.4, H.5, C.2.4, I.2, H.3, J.1
ISSN 0302-9743
ISBN 3-540-44222-7 Springer-Verlag Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law.

Springer-Verlag Berlin Heidelberg New York, a member of BertelsmannSpringer Science+Business Media GmbH
http://www.springer.de

© Springer-Verlag Berlin Heidelberg 2002
Printed in Germany

Typesetting: Camera-ready by author, data conversion by Markus Richter, Heidelberg
Printed on acid-free paper
SPIN: 10871411 06/3142 543210
Preface

Today, technologies for the engineering and deployment of cooperative information systems have become increasingly critical in the construction of practically all types of large-scale distributed systems. Stimulating forums with different focuses are thus still needed for researchers and professionals from academia and industry to exchange ideas and experience and to establish working relationships. The idea of organizing an academic event in China focusing on current topics in the field was born during the IFIP World Computer Congress 2000, held in Beijing, China. And here are the proceedings of EDCIS 2002!

This volume comprises the technical research papers accepted for presentation at EDCIS 2002. Of the initial 159 paper submissions involving nearly 500 authors from 14 countries across all continents, 45 papers were carefully selected. Every paper was reviewed by at least three members of the program committee and judged according to its technical merit and soundness, originality, significance, presentation quality, and relevance to the conference. The accepted papers cover various subjects such as workflow technology, coordination technology, advanced transactions, groupware systems, the semantic web, ontologies, mobile agents, enterprise modeling, and enterprise application integration.

We would like to express our deepest appreciation to the authors of the submitted papers, to the members of the program committee, and to all the external reviewers for their hard and fine work in reviewing the submissions. We would also like to thank the following institutions for their sponsorship: China Computer Federation, EASST, IEEE, Institute of Computing Technology of the Chinese Academy of Sciences, Fraunhofer Institute for Software and Systems Technologies, Brandenburg University of Applied Sciences, Tsinghua University, and IBM Research.
Finally, we would like to thank the following individuals for their efforts towards making EDCIS 2002 a successful conference: Wolfgang Deiters, Jens-Helge Dahmen, Katharina Menzel, Edgardo Moreira, Donglai Li, and Jing Lu. July 2002
Yanbo Han, Stefan Tai, and Dietmar Wikarski
General Co-chairs
R. Lu          Academy of Mathematics, China
H. Weber       Technical University Berlin, Germany
M. Shi         Tsinghua University, China
Executive Co-chairs (Program and Organization)
Y. Han         Institute of Computing Technology, China
S. Tai         IBM T.J. Watson Research Center, USA
D. Wikarski    University of Applied Sciences Brandenburg, Germany
Members of the Program Committee
W.M.P. van der Aalst    Eindhoven University of Technology, The Netherlands
Y. Bi                   University of Scranton, USA
C. Cao                  Institute of Computing Technology, China
X. Cheng                Institute of Computing Technology, China
T. Cheung               City University of Hong Kong, Hong Kong
P. Dadam                University of Ulm, Germany
W. Deiters              Fraunhofer Institute for Software and Systems Engineering (ISST), Germany
J. Eder                 University of Klagenfurt, Austria
C. (Skip) Ellis         University of Colorado, USA
J. Fan                  Institute of Computing Technology, China
Y. Fan                  Tsinghua University, China
G. Faustmann            Berufsakademie Berlin, Germany
A. Fuggetta             Politecnico di Milano, Italy
P. Grefen               University of Twente, The Netherlands
J. Gu                   East China Normal University, China
T. Herrmann             University of Dortmund, Germany
S. Jablonski            Friedrich-Alexander University of Erlangen-Nürnberg, Germany
A. Jacobsen             University of Toronto, Canada
M. Li                   Institute of Software, China
J. Li                   Institute of Computing Technology, China
F. Lindert              Fraunhofer Institute for Software and Systems Engineering (ISST), Germany
Conference Organization
L. Liu         Beijing University of Aeronautics and Astronautics, China
F. Pacull      Xerox Research Centre Europe, France
R. Reichwald   Technical University of Munich, Germany
R. Reinema     Fraunhofer Institute for Secure Telecooperation (SIT), Germany
I. Rouvellou   IBM T.J. Watson Research Center, USA
K. Sandkuhl    Fraunhofer Institute for Software and Systems Engineering (ISST), Germany
G. Schwabe     University of Koblenz, Germany
A. Strohmeier  Swiss Federal Institute of Technology in Lausanne, Switzerland
Z. Tari        RMIT University, Australia
R. Unland      University of Essen, Germany
X. Ye          University of Inner Mongolia, China
G. Yu          Northeastern University, China
Additional Reviewers J. Andreoli S. Angelov D. Arregui B. Bachmendo D. Buchs C. Bussler H. Chen B. Chidlovskii M. Diefenbruch B. Dongbo P. van Eck R. Eshuis Q. Feng M. Fokkinga Y. Gao W. Gruber V. Gruhn F. Gu S. Haseloff D. Hiemstra M. Hoffmann J. Hu D. Hurzeler C. Ihl D. N. Jansen T. Kamphusmann M. M. Kandé M. van Keulen A. Kienle J. Kienzle M. Lehmann K. Loser Q.Z. Lu C. Meiler T. A. Mikalsen M. Reichert B. Schloss J. Si K. Sikkel C. M. Stotko J. Tan W. Tian
G. Vollmer B. Wang F.J. Wang H. Wang L. Wang S. Weber J. Willamowski C. Zhang J. Zhang Z. Zhao Y. Zheng L. Zhou X. Zhou Q. Zeng
Table of Contents

Workflows I
A Data Warehouse for Workflow Logs .......... 1
   J. Eder, G. Olivotto, W. Gruber
Workflow and Knowledge Management: Approaching an Integration .......... 16
   J. Lai, Y. Fan
Linear Temporal Inference of Workflow Management Systems Based on Timed Petri Net Models .......... 30
   Y. Qu, C. Lin, J. Wang

Workflows II
Discovering Workflow Performance Models from Timed Logs .......... 45
   W.M.P. van der Aalst, B.F. van Dongen
Performance Equivalent Analysis of Workflow Systems Based on Stochastic Petri Net Models .......... 64
   C. Lin, Y. Qu, F. Ren, D.C. Marinescu
An Agent Enhanced Framework to Support Pre-dispatching of Tasks in Workflow Management Systems .......... 80
   J. Liu, S. Zhang, J. Cao, J. Hu

Ontologies
TEMPPLET: A New Method for Domain-Specific Ontology Design .......... 90
   Y. Dong, M. Li
An Environment for Multi-domain Ontology Development and Knowledge Acquisition .......... 104
   J. Si, C. Cao, H. Wang, F. Gu, Q. Feng, C. Zhang, Q. Zeng, W. Tian, Y. Zheng
Applying Information Retrieval Technology to Incremental Knowledge Management .......... 117
   Z. Yang, Y. Liu, S. Li

Semantic Web
Visualizing a Dynamic Knowledge Map Using a Semantic Web Technology .......... 130
   H.-G. Kim, C. Fillies, B. Smith, D. Wikarski
Indexing and Retrieval of XML-Encoded Structured Documents in Dynamic Environment .......... 141
   S. Kim, J. Lee, H. Lim
Knowledge Management: System Architectures, Main Functions, and Implementing Techniques .......... 155
   J. Ma, M. Hemmje

Enterprise Application Integration
A Dynamic Matching and Binding Mechanism for Business Service Integration .......... 168
   F. Wang, Z. Zhao, Y. Han
A Uniform Model for Authorization and Access Control in Enterprise Information Platform .......... 180
   D. Li, S. Hu, S. Bai
Constraints-Preserving Mapping Algorithm from XML-Schema to Relational Schema .......... 193
   H. Sun, S. Zhang, J. Zhou, J. Wang

Mobile Agents
Study on SOAP-Based Mobile Agent Techniques .......... 208
   D. Wang, G. Yu, B. Song, D. Shen, G. Wang
Securing Agent Based Architectures .......... 220
   M. Maxim, A. Venugopal
Service and Network Management Middleware for Cooperative Information Systems through Policies and Mobile Agents .......... 232
   K. Yang, A. Galis, T. Mota, X. Guo, C. Todd

Enterprise Modeling
Research on Enterprise Modeling of Agile Manufacturing .......... 247
   H. Xu, L. Zhang, B. Zhou
HCM – A Model Describing Cooperation of Virtual Enterprises .......... 257
   Y. Zhang, M. Shi
A Description for Service Supporting Cooperations .......... 267
   M. Wiedeler
Distributed Systems Analysis
Analysis of an Election Problem for CSCW in Asynchronous Distributed Systems .......... 280
   S.-H. Park
A Linear-Order Based Access Method for Efficient Network Computations .......... 289
   S.-H. Woo, S.-B. Yang
A Petri-Net Model for Session Services .......... 303
   J. Shen, Y. Yang, J. Luo

Software Engineering
Software Processes for Electronic Commerce Portal Systems .......... 315
   V. Gruhn, L. Schoepe
Architecture Support for System-of-Systems Evolution .......... 332
   J. Han, P. Chen
Expressing Graphical User’s Input for Test Specifications .......... 347
   J. Chen

Architectures
An Intelligent Decision Support System in Construction Management by Data Warehousing Technique .......... 360
   Y. Cao, K. Chau, M. Anson, J. Zhang
Distributed Heterogeneous Inspecting System and Its Implementation .......... 370
   L. Huang, Z. Wu
Architecture for Distributed Embedded Systems Based on Workflow and Distributed Resource Management .......... 381
   Y. Lin, X. Zhou, X. Shi
Knowledge Management and the Control of Duplication .......... 396
   D. Cook, L. Mellor, G. Frost, R. Creutzburg

Transactions
An Execution and Transaction Model for Active, Rule-Based Component Integration Middleware .......... 403
   Y. Jin, S. Urban, A. Sundermier, S. Dietrich
Implementation of CovaTM .......... 418
   J. Jiang, M. Shi
Resource-Based Scripting to Stitch Distributed Components .......... 429
   J.-M. Andreoli, D. Arregui, F. Pacull, J. Willamowski

Coordination I
Multi-agent Coordination Mechanism in Distributed Environment .......... 444
   M. Xu, Y. Zhuang
Coordination among Multi-agents Using Process Calculus and ECA Rule .......... 456
   Y. Wei, S. Zhang, J. Cao
An RBAC-Based Policy Enforcement Coordination Model in Internet Environment .......... 466
   Y. Zhang, J. You

Coordination II
A CORBA-Based Negotiation Strategy in E-Commerce .......... 478
   Y. Zhao, G. Wang
Negotiation Framework for the Next Generation Mobile Middleware Service Environment .......... 487
   M. Sihvonen, J. Holappa

Groupware I
An Internet-Based Conference System for Real-Time Distributed Design Evaluation .......... 499
   K. Dai, Y. Wang, X. Xu
Intention Preservation by Multi-versioning in Distributed Real-Time Group Editors .......... 510
   L. Xue, M. Orgun, K. Zhang
Supporting Group Awareness in Web-Based Learning Environments .......... 525
   B. Hu, A. Kuhlenkamp, R. Reinema

Groupware II
Raison d’Etre Object: A Cyber-Hearth That Catalyzes Face-to-Face Informal Communication .......... 537
   T. Matsubara, K. Sugiyama, K. Nishimoto
The Neem Platform: An Extensible Framework for the Development of Perceptual Collaborative Applications .......... 547
   P. Barthelmess, C.A. Ellis

Author Index .......... 563
A Data Warehouse for Workflow Logs

Johann Eder, Georg E. Olivotto, and Wolfgang Gruber

Department of Informatics-Systems, University of Klagenfurt
A-9020 Klagenfurt, Austria
{eder,gruber}@isys.uni-klu.ac.at, [email protected]
Abstract. Workflow logs provide a very valuable source of information about the actual execution of business processes in organizations. We propose to use data warehouse technology to exploit this information resource for organizational development, monitoring, and process improvement. We introduce a general data warehouse design for workflow warehouses and discuss the results of an industrial case study showing the validity of this approach.
1 Introduction
Workflow management systems (WFMSs) improve business processes by automating tasks, getting the right information to the right place for a specific job function, and integrating information in the enterprise [8, 15, 1, 21, 2]. Workflow management systems support the execution of business processes: they require the definition of processes, automate the enactment of process steps and their execution guided by business rules and execution logic, and finally they document the execution of all steps of a business process. In particular, workflow logs [14, 17] contain the processing information for all instances of activities of workflow instances. Typically, they record when which actor performed which task. Workflow logs thus contain very valuable information about the actual execution of business processes (as opposed to merely specified or desired descriptions of business processes). They can be a very valuable resource for business process improvement, reorganization, and business process re-engineering. Workflow logs can also provide information for process controlling and process monitoring, and they are sources of information for process specifications and of scheduling information like due dates, durations, or branching probabilities for conditional constructs (or-splits). Hunting for the treasures in workflow logs requires appropriate tools. Here we propose to use data warehouse technology and OLAP (online analytical processing) tools for aggregating, analyzing, and presenting information derived from workflow logs. For further and deeper analysis, a data warehouse can also be used as a base for data mining and knowledge discovery techniques. Data Warehouses are structured collections of data supporting controlling, decision making, and revision [22, 10, 12]. Data Warehouses build the basis for analyzing data by means of OLAP tools, which provide sophisticated features for

Y. Han, S. Tai, and D. Wikarski (Eds.): EDCIS 2002, LNCS 2480, pp. 1-15, 2002. © Springer-Verlag Berlin Heidelberg 2002
aggregating, analyzing, and comparing data, and for discovering irregularities. Data Warehouses differ from traditional databases in the following aspects: they are designed and tuned for answering complex queries rather than for a high throughput of a mix of updating transactions, and they typically have a longer memory, i.e. they do not only contain the current values (snapshot data). The most popular architecture for data warehouses is the multidimensional data cube, where transaction data (called cells, fact data, or measures) are described in terms of master data hierarchically organized into dimensions. Transaction data, here the data about executing a certain instance of a certain workflow activity, are aggregated (consolidated) along these dimensions. The OLAP operations (drill down, roll up, slicing, dicing, etc.) allow analysts to rapidly derive reports at different levels of abstraction and to view the data from different perspectives. Predefined reports deliver sophisticated management ratios, and ad hoc OLAP queries make it possible to analyze the causes of deviations in these figures. There is little work published so far on data warehouses for workflow logs. We are only aware of [4], where a data warehouse for workflow data for the HP process manager is described. Its design goals were the simple installation and application of the workflow data warehouse, which helps to detect critical process situations or quality degradations under different circumstances, e.g. different log sizes or different data loading and aggregation requirements. Our approach is more general, since it is based on a general workflow meta model and on explicitly collected queries which should be answered. Additionally, we were able to prove the concept with a prototype for a reasonably large workflow log of an installation of the workflow management system @enterprise.
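To make the roll-up operation mentioned above concrete, the following sketch aggregates fact data along dimension hierarchies of a tiny cube. All field names and numbers are illustrative; they are not taken from any actual WFMS log.

```python
from collections import defaultdict

# Fact table of a minimal cube: (workflow, activity, month, duration in minutes).
facts = [
    ("order", "check", "2000-01", 30),
    ("order", "check", "2000-01", 50),
    ("order", "ship",  "2000-02", 120),
    ("claim", "check", "2000-01", 40),
]

def roll_up(facts, group_by):
    """Aggregate the measure along the dimensions given by index (roll-up)."""
    totals = defaultdict(int)
    for *dims, measure in facts:
        key = tuple(dims[i] for i in group_by)
        totals[key] += measure
    return dict(totals)

# Roll up from (workflow, activity, month) to the workflow level:
print(roll_up(facts, [0]))  # {('order',): 200, ('claim',): 40}
# Aggregate along the time dimension instead:
print(roll_up(facts, [2]))  # {('2000-01',): 120, ('2000-02',): 120}
```

Drilling down is the inverse step: re-grouping by more dimension indices, e.g. `roll_up(facts, [0, 1])`, recovers the finer-grained totals per workflow and activity.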
The rest of the paper is organized as follows: In section 2 we present our workflow meta model, which serves as a basis for understanding the structure of the workflow log and for the information supply. Section 3 presents the information demand side for the design of the workflow data warehouse in the form of a set of interesting queries. In section 4 we present the architecture of our workflow data warehouse. In section 5 a case study is presented and discussed in which our architecture was used on real data from the workflow logs of a large workflow installation. Finally, in section 6 we draw some conclusions.
2 Workflow Meta Model
A workflow is a collection of activities, agents, and dependencies between activities. Activities correspond to individual steps in a business process, agents (software systems or humans) are responsible for the enactment of activities, and dependencies determine the execution sequence of activities and the data flow between them. The meta model shown in Fig. 1 [6, 18] describes the static and the dynamic schema aspects as well as the organizational aspects. The workflow model on which we base our development of a warehouse architecture is able to capture workflows in different representation techniques: block-structured as well as unstructured workflows, and text-based, programming-language-style representations as well as graph-based representations.

Fig. 1. Workflow meta model (UML class diagram, not reproduced here, relating the specification level (workflows, activities, occurrences, and spec transitions), the model level (model elements), the instance level (workflow and activity instances), and the organizational level (users, roles, user roles, and departments))

The meta model consists of the following parts: The specification level contains the description of workflow types and activity types together with their composition structure via occurrences (see below). The model level contains the expanded workflow specifications, such that all activity appearances may have their individual characterizations (like due dates, agent, etc.). The instance level contains information about the execution characteristics of activity and workflow instances, and finally the organizational level represents the agents and the organizational structure of the company. We did not take the data dimension into account, since there the differences between workflow systems are too big to allow a more general treatment. In the following we briefly describe the notions of workflows, activities, occurrences, and model elements used in the meta model [6]. A workflow consists of activities, which are either workflows, external workflows, elementary activities, or complex activities. Complex activities consist of other activities, represented as occurrences in the composition of the complex activity. The hierarchical relationship between activities, e.g. in which parent activities an activity appears, is also declared. Within a complex activity a particular activity may appear several times, and each of those appearances can be unambiguously identified by the concept of occurrences. An occurrence is associated with exactly one activity and represents the place where an activity is used in the specification of a complex activity. Each occurrence, therefore, has different predecessors and successors, which is expressed by the association class SpecTransition. There are two reasons for distinguishing an activity from its (multiple) occurrences.
The first is the possibility of activity reuse, which means that an activity is defined once and can be used in several workflow definitions. The second reason is the simplification of maintenance: a subprocess has to be changed only once, and it is changed for all workflows in which it appears. This allows new workflows to be easily composed from predefined activities. Such a composition is also called a workflow specification. For the purposes of a workflow-log data warehouse the identification of multiple appearances of the same activity is very important, as it allows execution data of the same activity to be aggregated across different positions of the workflow and even across different workflows. When a complex activity is used several times within a workflow, we also have to distinguish between the different appearances of occurrences; the resulting elements are called model elements. Similar to the specification level, a workflow model has to be aware of its model elements and of the hierarchical and transition relationships between those model elements. On the instance level, the classes Workflow_I and Activity_I are used to represent the instances of workflows and their activities during runtime. Analogous to the workflow model, a workflow instance consists of activity instances, and for those, the predecessor and successor activity instances (if existing) are specified. Furthermore, a workflow instance always belongs to exactly one workflow (specification). The Participant (agent or processing entity) is responsible for invoking workflows and for the execution of activity instances. In our meta model, the participant can be modelled more precisely with the help of users and roles. The assignment of users to activities does not always take place directly. Ideally it is done through a role concept, which allows a more generic user assignment [16], but in the meta model presented in this paper a direct user assignment is also possible. Generally, users, roles, or users in roles can participate in different workflows for different departments.
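At the instance level, every executed activity instance leaves a record in the workflow log. As a minimal sketch of the kind of record such a log holds (field names and values are illustrative, not the schema of any particular WFMS):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LogEntry:
    """One processing record: which actor performed which activity instance, and when."""
    workflow: str       # workflow (process) type
    instance: int       # workflow instance id
    activity: str       # activity name
    actor: str          # participant who performed the step
    started: datetime
    finished: datetime

    def duration_minutes(self) -> float:
        return (self.finished - self.started).total_seconds() / 60.0

# A hypothetical entry: user "jsmith" checked documents in instance 17.
entry = LogEntry("credit_approval", 17, "check_documents", "jsmith",
                 datetime(2000, 3, 1, 9, 0), datetime(2000, 3, 1, 9, 45))
print(entry.duration_minutes())  # 45.0
```

Collections of such records are the transaction data that the warehouse described in the following sections aggregates along its dimensions.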
3 Requirements for Warehouse Architecture
The design of a data warehouse is determined by the available information in the workflow management system and by the information needs of the decision makers and analysts. Therefore, each workflow warehouse will need adaptation to the peculiarities of a certain enterprise information system. Here we introduce a prototypical data warehouse architecture for workflow histories, based on prevailing standards of workflow systems on the one hand and on typical information needs of process managers and analysts on the other. In the next section we will then show how this general data warehouse architecture can be adjusted to a particular application of a particular workflow management system. For the information supply we build on the workflow meta model presented in the previous section. For the information requirements we should be able to answer the following questions:
– who are the users of the system
– which data are required
– which queries should be answered
The users could be both the administrators (system administrators, workflow modellers, etc.) and the users of workflow systems. The relevant components of workflow systems are the workflows, their activities, the participants, and the servers used. The formulation of a set of queries is of particular importance for the design of the data warehouse structure and is given in the following [18]:
– Query 1: how often are particular workflows enacted (the information provided by this query could be used to identify core processes, which should be preferred in optimisation and priority assignment);
– Query 2: which activities within these workflows are enacted (with this query, preferred navigation paths through workflows can be identified);
– Query 3: the enactment of which workflows regularly leads to deadline misses;
– Query 4: the enactment of which activities leads to these deadline misses;
– Query 5: what is the amount of these deadline misses;
– Query 6: how often are the workflows which cause deadline misses enacted (with this information one can decide whether it is worth optimising the workflow);
– Query 7: how many different users participate in the enactment of particular workflows;
– Query 8: who are the users frequently participating in activities which lead to deadline misses;
– Query 9: who are the regularly overloaded users, and who regularly has free capacities;
– Query 10: which users participate in which workflows;
– Query 11: comparing measures of different versions of a process;
– Query 12: comparing a process type during several periods of time;
– Query 13: analysing the executions of an activity in different processes;
– Query 14: analysing the executions of an activity by different users;
– Query 15: monitoring the performance of users during several periods of time.
Another important component is time [18]. Various analyses of time aspects could provide valuable results. The detection of peak times could help to solve the problems detected by queries 3 and 4 through better time management, which could help to avoid expensive process optimisations. The detection of peaks could also provide a basis for dynamically distributing the load over different servers.
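Queries 3, 5, and 6 above reduce to a simple aggregation over log data. A sketch with hypothetical rows (workflow names, deadlines, and finish times are invented for illustration):

```python
from datetime import datetime, timedelta
from collections import defaultdict

# Hypothetical log rows: (workflow, instance, deadline, actual finish time).
rows = [
    ("loan", 1, datetime(2000, 5, 1), datetime(2000, 5, 3)),   # missed by 2 days
    ("loan", 2, datetime(2000, 5, 1), datetime(2000, 4, 30)),  # on time
    ("loan", 3, datetime(2000, 6, 1), datetime(2000, 6, 2)),   # missed by 1 day
    ("hire", 1, datetime(2000, 5, 1), datetime(2000, 5, 1)),   # on time
]

misses = defaultdict(list)
for wf, inst, deadline, finish in rows:
    if finish > deadline:                     # Query 3: which workflows miss deadlines
        misses[wf].append(finish - deadline)  # Query 5: by how much

for wf, deltas in misses.items():
    total = len([r for r in rows if r[0] == wf])  # Query 6: how often enacted
    avg_miss = sum(deltas, timedelta()) / len(deltas)
    print(wf, len(deltas), "of", total, "late; avg miss:", avg_miss)
    # -> loan 2 of 3 late; avg miss: 1 day, 12:00:00
```

In the warehouse these aggregations are not hand-coded but expressed as roll-ups over the workflow and time dimensions introduced in the next section.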
4 Warehouse Model
Based on the requirements of the previous section, we now introduce the design of a hypercube (Fig. 2) that allows us to answer all the queries formulated above. It consists of the following six dimensions:
– workflows
– participants
– organisation
– servers
– time
– measures
The dimensions of the workflows and the participants result directly from the meta model presented in Fig. 1. The occurrences, which are necessary for reuse purposes, are the level of granularity in the workflow dimension. Starting from here, there are different alternatives for building up the structure of the workflow dimension. One possibility is to separate the workflows and the activities into two different dimensions to cope with the problems arising from the reuse of particular activities in different workflows. The disadvantage of this approach is that there is no way to drill down to the occurrences starting from the workflows. Another alternative is to consolidate the occurrences to the relevant activities and to consolidate the activities to the corresponding workflows. The problem with this solution is that an activity can belong to different workflows, so its values would have to be prorated over different workflows. Our solution is to consolidate the occurrences, in a first step, to elements which represent a combination of the corresponding activities and workflows. These elements can then be consolidated either to the workflows or to the activities. This approach allows drilling down to the occurrences starting from the workflows and prevents an ambiguous assignment of activities to workflows. In the data warehouse, the benefit of considering reuse aspects is the possibility to compare the occurrences of an activity appearing in different workflows.

Fig. 2. Data Warehouse schema (not reproduced: the hypercube with its dimension hierarchies for workflows (occurrence, activity/workflow, activity, workflow), participants (participant, user/role, user, role), departments, servers (server, site), and time (day/month/quarter/year plus a separate day/calendar-week hierarchy), together with the measures starttime, runtime so far (updatetime - starttime), runtime rest (est. finishtime - updatetime), finishtime, estimated finishtime, estimated overtime (updatetime - est. finishtime), actual overtime (updatetime - finishtime), and average runtime)

For the representation of the participants and the departments there are different alternatives, too. The first separates the users and the roles into two different dimensions, whereby the departments are built by consolidating the users. This solution implies that every user can act for his department in any role. In practice this solution is insufficient, because it is not unusual that a user works for different departments. In our approach we separate the participants and the departments into two different dimensions. The lowest level of the participant dimension is represented by the participants themselves, which are consolidated to elements that are a combination of users and roles. In the next step, these elements are consolidated either to the users or to the roles. The reason for separating the users and the departments into two different dimensions is the fact that users can participate in different workflows for different departments. Thus it is not possible to represent a department as a consolidation of its members.
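Both the workflow dimension (occurrences, then activity/workflow pairs, then either workflows or activities) and the participant dimension (participants, then user/role pairs, then either users or roles) follow the same consolidation pattern. A sketch of it for the workflow dimension, with invented occurrence counts:

```python
from collections import defaultdict

# Each occurrence is one use of an activity at one place in one workflow:
# (occurrence id, activity, workflow, number of executions) -- illustrative data.
occurrences = [
    ("occ1", "approve", "order", 120),
    ("occ2", "approve", "order",  80),   # same activity reused twice in "order"
    ("occ3", "approve", "claim",  50),   # ... and once in "claim"
    ("occ4", "archive", "order", 200),
]

# Step 1: consolidate occurrences to (activity, workflow) pair elements.
pairs = defaultdict(int)
for _, act, wf, n in occurrences:
    pairs[(act, wf)] += n

# Step 2: consolidate the pair elements EITHER to workflows OR to activities.
# No value is prorated or double-counted on either path, and a drill-down
# back to the individual occurrences remains possible.
by_workflow = defaultdict(int)
by_activity = defaultdict(int)
for (act, wf), n in pairs.items():
    by_workflow[wf] += n
    by_activity[act] += n

print(dict(by_workflow))  # {'order': 400, 'claim': 50}
print(dict(by_activity))  # {'approve': 250, 'archive': 200}
```

The intermediate pair level is exactly what resolves the ambiguity discussed above: the reused activity "approve" rolls up cleanly to either parent.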
The introduction of a server dimension has technical reasons: it provides information about the load of the different servers, which could be used as a basis for load distribution. The structure of the server dimension is very simple and self-explanatory. In the time dimension we selected the calendar day, which consolidates to months, quarters and years, as chronon [13]. The consolidation to calendar weeks must be done in a separate hierarchy, because in different years a week could belong to different months. For the measure dimension we selected the following set of measures:
– starttime (used to determine points of time at which activities usually are started)
– runtime elapsed
– runtime rest till deadline
– finishtime
– estimated finishtime
– estimated overtime
– actual overtime
– average runtime
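The two time hierarchies can be illustrated with Python's `datetime` module. The example below is a sketch of why calendar weeks need a separate hierarchy: 1 January 2000 belongs to January 2000 in the calendar hierarchy, but to ISO week 52 of 1999 in the week hierarchy.

```python
from datetime import date

def calendar_hierarchy(day: date):
    """Roll a chronon (calendar day) up to month, quarter and year."""
    return {"month": day.month,
            "quarter": (day.month - 1) // 3 + 1,
            "year": day.year}

def week_hierarchy(day: date):
    """Separate hierarchy: ISO weeks may span month and year boundaries."""
    iso_year, iso_week, _ = day.isocalendar()
    return {"week": iso_week, "week_year": iso_year}

d = date(2000, 1, 1)
print(calendar_hierarchy(d))  # {'month': 1, 'quarter': 1, 'year': 2000}
print(week_hierarchy(d))      # {'week': 52, 'week_year': 1999}
```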
A Data Warehouse for Workflow Logs
9
Fig. 3. example of analysis 1
The throughput, the turnaround time and the resource consumption could be further examples of measures. The Data Warehouse schema contains information about the build time properties as well as the run time properties of the workflows. The build time properties are given in the dimension structures and the run time properties are represented by the data values. Additionally, we want to emphasize the different meanings of the term time as a dimension and as a measure. A time dimension is fundamental for a Data Warehouse and is already part of its definition [11]. It is used to represent data that belong to points of time other than the actual one. As a measure, we need time e.g. to get information about the execution duration of activities, to determine points of time at which many instances are usually created, or to detect deadline violations.
5 Case Study
The goal of the case study was the prototypical implementation of a Log-Data Warehouse to analyze the validity of the theoretically elaborated concepts. The source workflow system was @enterprise of Groiss Informatics GmbH, Klagenfurt (www.groiss.com), which has a large number of installations in German-speaking countries. @enterprise is partly based on the prototype workflow system Panta Rhei [7], which was developed at the University of Klagenfurt. For this case study we gratefully received real data from a large organization on the basis of anonymity. The data stem from the workflow log of the period from December 1999 to September 2000. The workflow implementation
Fig. 4. example 1: started activities graphical
contained, at the time of the case study, 22 different workflows, 325 different activities, 2,740 occurrences, ∼171,100 instances (∼5,300 workflow instances and ∼165,800 activity instances), ∼1,000 users and 75 roles. The following adaptations to our general workflow model were necessary: the workflow descriptions contain neither duration information nor due dates. Therefore, all measures requiring this information and all queries based on these data had to be abandoned. With the resulting multidimensional system, several of the queries formulated in the previous section can be answered easily. The adaptations of the Data Warehouse structure in the case study were made to adjust to the specifics of the particular workflow model of @enterprise and its specific log structure. Experimenting with the workflow data warehouse, we soon found that the results of the given queries quickly raised other queries and led to new information demands. Several of those queries could again be easily answered with our prototype system, e.g.:
– activity transitions, concerning the number of transitions and the time consumed by the transitions;
– determining the types of activity termination (cancelled, finished or compensated) in absolute and relative numbers;
– activity runtimes;
– determining how long activities are in the worklists of different users;
– monitoring the evolution of the number of instances of different activities;
– determining how many instances are created by different users;
Fig. 5. example 1: Activity durations graphical
In the first example we want to analyse the correlation between the number of instances created by agent 8590147888 and the times the activities spend in his work list. Figure 3 contains the source data. The first line of the table denotes that in December 1999 a total of 118 instances were created by agent 8590147888 and the sum of the times the instances spent in his work list was 76.28 days. The remaining lines can be interpreted analogously. The black line in Figure 4 represents the number of created instances for the particular months; the white line shows that the number of created instances is growing on average. Figure 5 shows how long the instances were in the work list of agent 8590147888. Again the black line shows the values for the particular months. The white line represents the falling trend. The conclusion of this example is that although the number of created instances is increasing, the processing times of the instances are decreasing. Two possible interpretations are that either the agent's experience is increasing continuously or the increasing number of instances raises the agent's efficiency. For the second example, shown in Figures 6 and 7, we use two computed measures: the average runtime of occurrences is computed by dividing the sum of the occurrence runtimes by the number of their instances; the performance measure relates the average runtime of an occurrence executed in a particular department to its overall average. Figure 6 shows that department 8590147029 performs badly in executing occurrence 8590170011. Interested in the performance of the other occurrences executed in this department, we can easily transform the table in Figure 6 into the one presented in Figure 7.
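The two computed measures of the second example can be sketched as follows. The numbers are invented for illustration and do not stem from the case study:

```python
def average_runtime(total_runtime_days, instance_count):
    """Sum of the occurrence runtimes divided by the number of instances."""
    return total_runtime_days / instance_count

def performance(dept_average, overall_average):
    """Ratio > 1.0 means the department is slower than the overall average."""
    return dept_average / overall_average

# Hypothetical figures: one department's instances vs. all instances.
dept_avg = average_runtime(90.0, 10)       # 9.0 days per instance
overall_avg = average_runtime(120.0, 20)   # 6.0 days per instance
print(performance(dept_avg, overall_avg))  # 1.5: below-average performance
```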
Fig. 6. example of analysis 2
6 Applications
The major application of the data warehouse for workflow logs is to support the business process improvement cycle. The analysis of the workflow execution data should provide insight for possible improvements in duration, throughput, adherence to deadlines, resource consumption, etc. The data warehouse also provides statistical data for business process simulation. If business process reengineering leads to new workflow definitions, the execution of these new workflows can be compared with the old versions. Furthermore, benchmarking of workflow execution is supported on a very detailed level. The data warehouse can also be used as a basis for data mining, e.g. for fraud detection or for finding typical execution patterns. It can also be used to derive key quality figures for measuring the performance and quality achievements of organizational units. If the log data is fed into the log-warehouse shortly after its creation, the warehouse also supports process monitoring in an excellent way. Here an operational data store as an intermediary between the workflow management system and the log warehouse could be very helpful. Such an architecture has been successfully applied for controlling and monitoring industrial production processes, e.g. in the semiconductor industry. Comparisons between planned execution and actual performance can constantly be monitored by predefined OLAP reports, and workflow controllers can benefit from an improved overview. Another important area is the application of the data warehouse approach for workflow logs to process mining. For this purpose, the log is used for discovering the business rules and the workflow models which lead to the observed execution patterns. Data mining in the context of workflow logs to discover information of various kinds about workflow instances is addressed e.g. in [3, 5, 9, 19, 20].
Different methods have emerged, targeting different kinds of data, such as workflow control structure, resource allocation, time consumption parameters and so on. The
Fig. 7. example of analysis 2 - detail
specialized methods take advantage of the nature of the specific data to which they are applied. With the results, reverse engineering of the process specification can be performed to improve the system as a whole. Compared to these approaches, we emphasize application areas where the workflow process model is known. Nevertheless, the proposed warehouse model can also store log data of ad-hoc workflows and may thus serve as a basis for the process mining techniques mentioned above. The focus of our work is to exploit the workflow log and build a data warehouse to obtain aggregated information, e.g. to detect critical process situations or quality degradations under different circumstances, rather than to re-engineer workflow specifications from the log. However, these process mining techniques can deliver important data for discovering typical execution scenarios, dependencies between decisions and probabilities of workflow instance types. Business process re-engineering and workflow improvement will benefit from a combination of the approaches.
7 Conclusions
How an organization really works is precisely documented in the logs of its workflow management system. These workflow histories, therefore, are a very valuable source of information for a multitude of applications, from workflow monitoring and controlling to business process re-engineering, from statistics generation to fraud detection. For exploiting these valuable data we propose data warehouse technology. Most workflow management systems already provide some functions for analyzing and browsing workflow logs. We see the main advantages of a data-warehouse-based system as follows:
– Sophisticated OLAP tools can be used for analyzing the data. These OLAP tools offer much higher functionality and are better optimized than typical monitoring interfaces of workflow systems.
– Workflow execution data is typically only part of an information system and decision support infrastructure. Using warehouse technology allows the seamless integration of these data.
– Frequently, larger corporations employ several workflow management systems for the execution of different business processes. The log-warehouse approach offers integrated and/or comparative analysis of the data produced by the different systems.
– Using an independent log-data warehouse also allows measuring the success of a change (replacement) of workflow management systems.
– For analysis and benchmarks, workflow execution data has to be augmented with other data from different information systems of the organization or from external sources. Again, warehouse technology opens this option for decision support.
– Last but not least, the separation of operational process support and analytical decision support is mostly a performance-improving architecture. Moreover, warehouses typically can hold data for longer periods of time and thus are better suited for trend analysis, etc.
We developed a very general architecture of a log-data warehouse based on a workflow metamodel and the typical information needs of process managers. In a large case study with data from an actual workflow installation we could show that the approach is viable. It was interesting to witness how the proposed model could be used to rapidly generate unexpected reports, various comparisons and trend analyses. Here the power of OLAP tools could be used very efficiently.
References
[1] Work Group 1. Glossary: A workflow management coalition specification. Workflow Management Coalition, Brussels, Belgium, V 1.1 Final, 1994.
[2] Work Group 1. Interface 1: Process definition interchange. Workflow Management Coalition, V 1.1 Final (WfMC-TC-1016-P), October 1999.
[3] Rakesh Agrawal, Dimitrios Gunopulos, and Frank Leymann. Mining process models from workflow logs. In Hans-Jörg Schek, Fèlix Saltor, Isidro Ramos, and Gustavo Alonso, editors, Advances in Database Technology - EDBT'98, 6th International Conference on Extending Database Technology, Valencia, Spain, March 23-27, 1998, Proceedings, volume 1377 of Lecture Notes in Computer Science, pages 469–483. Springer, 1998.
[4] Angela Bonifati, Fabio Casati, Umeshwar Dayal, and Ming-Chien Shan. Warehousing workflow data: Challenges and opportunities. In The VLDB Journal, pages 649–652, 2001.
[5] Jonathan E. Cook and Alexander L. Wolf. Discovering models of software processes from event-based data. ACM Transactions on Software Engineering and Methodology, 7(3):215–249, 1998.
[6] J. Eder and W. Gruber. A Meta Model for Structured Workflows Supporting Workflow Transformations. In Yannis Manolopoulos and Pavol Navrat, editors, Sixth East-European Conference on Advances in Databases and Information Systems, ADBIS 2002, Bratislava, Slovakia, September 8-11, 2002, Proceedings. Springer, 2002.
[7] Johann Eder, Herbert Groiss, and Walter Liebhart. The workflow management system Panta Rhei. In Asuman Dogac, Leonid Kalinichenko, M. Tamer Ozsu, and Amit Sheth, editors, Advances in Workflow Management Systems and Interoperability, 1997, pages 129–144.
[8] Dimitrios Georgakopoulos, Mark F. Hornick, and Amit P. Sheth. An overview of workflow management: From process modeling to workflow automation infrastructure. Distributed and Parallel Databases, 3(2):119–153, 1995.
[9] Joachim Herbst and Dimitris Karagiannis. Integrating machine learning and workflow management to support acquisition and adaption of workflow models. In DEXA Workshop, pages 745–752, 1998.
[10] B. Hüsemann, J. Lechtenbörger, and G. Vossen. Conceptual Data Warehouse Design. In Proc. of the International Workshop on Design and Management of Data Warehouses (DMDW 2000), Stockholm, 2000.
[11] W. Inmon. Building the Data Warehouse. John Wiley and Sons, New York, 1st edition, 1992.
[12] M. Jarke, M. Lenzerini, Y. Vassiliou, and P. Vassiliadis. Fundamentals of Data Warehouses. Springer-Verlag, 2000.
[13] C. Jensen. A consensus glossary of temporal database concepts. In O. Etzion, S. Jajodia, and S. Sripada, editors, Temporal Databases: Research and Practice, pages 367–405, 1998.
[14] Pinar Koksal, Sena Nural Arpinar, and Asuman Dogac. Workflow history management. SIGMOD Record, 27(1):67–75, 1998.
[15] P. Lawrence. Workflow Handbook. John Wiley and Sons, New York, 1997.
[16] W. Liebhart. Fehler- und Ausnahmebehandlung im Workflow Management. Dissertation, Klagenfurt, 1998.
[17] Peter Muth, Jeanine Weisenfels, Michael Gillmann, and Gerhard Weikum. Workflow history management in virtual enterprises using a light-weight workflow management system. In RIDE, pages 148–155, 1999.
[18] G. Olivotto. Ein Data Warehouse zur Analyse von Workflows. Master Thesis, University of Klagenfurt – Department of Informatics-Systems, 2002.
[19] A.J.M.M. Weijters and W.M.P. van der Aalst. Process mining: Discovering workflow models from event-based data. In B. Kröse, M. De Rijke, G. Schreiber, and M. van Someren, editors, Proceedings of the 13th Belgium-Netherlands Conference on Artificial Intelligence, pages 283–290, 2001.
[20] A.J.M.M. Weijters and W.M.P. van der Aalst. Rediscovering workflow models from event-based data. In V. Hoste and G. De Pauw, editors, Proceedings of the Eleventh Belgian-Dutch Conference on Machine Learning, pages 93–100, 2001.
[21] Workflow Management Coalition. The Workflow Reference Model, 1995.
[22] M. Wu and A. Buchmann. Research Issues in Data Warehousing. BTW'97, 1997.
Workflow and Knowledge Management: Approaching an Integration
Jin Lai and Yushun Fan
Department of Automation, Tsinghua University, Beijing 100084, China
[email protected] [email protected]
Abstract. As a famous saying goes, "Knowledge comes from practice and should return to practice": knowledge is closely related to the business process where it is used and created. In this regard, a WFMS (workflow management system), as a system for business process definition, execution and management, plays an important role in KM (knowledge management): it is a big knowledge consumer and an important knowledge provider. But current standard workflow technology gives very little consideration to knowledge. In this paper, after analyzing the relation between knowledge and WFMS, a new architecture for their integration is proposed. Considering the implementation of this architecture, one of its key points, the extended workflow model, is studied.
Keywords. Workflow, WFMS, knowledge management, knowledge model, KM.
1. Introduction
The two lasting contributions of business process reengineering (BPR) are an emphasis on business processes and the recognition of the importance of knowledge and its management to an organization. The former orientation finds its expression in the workflow management system (WFMS), which completely defines, manages and executes structured business processes (workflows), so as to make sure that the right tasks are executed at the right time by the right people using the right tools. Concerning the latter orientation, knowledge management (KM) has been proposed and studied in order to improve the organization's knowledge infrastructure and to bring the right knowledge to the right people in the right form at the right time [1]. Actually, business processes and knowledge are both indispensable elements of an organization, and moreover, these two orientations are closely correlated. From workflow's perspective, some complex business processes rely on intensive information exchange with the company's environment, such as design processes, decision making
Y. Han, S. Tai, and D. Wikarski (Eds.): EDCIS 2002, LNCS 2480, pp. 16-29, 2002 © Springer-Verlag Berlin Heidelberg 2002
Workflow and Knowledge Management: Approaching an Integration
17
and some office processes; and knowledge workers would like the workflow management system to automatically, and even proactively, offer the relevant knowledge in the right format and at the right time. Although WFMS are becoming more and more connected with other application systems, for example the workflow management systems used in EAI (Enterprise Application Integration) platforms (WebLogic 2.1 or Tibco, for instance), current standard WFMS approaches still give very little consideration to supplying knowledge for knowledge intensive tasks (KIT). From knowledge management's perspective, knowledge itself is defined as "the whole body of data and information that people bring to bear to practical use in action, in order to carry out tasks and create new information" [1]. In short, knowledge is information made actionable. In this regard, knowledge management (KM) has a process focus. Knowledge creation, sharing, use and evaluation are all carried out in the organization's daily business processes; on the other hand, knowledge is embedded in routines, processes and norms, so the WFMS can be and should be an important dynamic knowledge resource for KM. Although WFMS are already used in some knowledge management systems, such as Microsoft's Exchange 2000, they are mostly used as an aid for automating the processes needed for knowledge management itself, such as document examination and approval; and because the workflow model has no knowledge elements, the WFMS is not really integrated with knowledge. Thus, although the WFMS provides a good framework for real-time knowledge capture and dissemination, this ability is not fully recognized. In order to resolve the problems mentioned above, we propose a new architecture for an integration of WFMS and KM, in which the conventional workflow is extended with knowledge specifications to support context-aware knowledge supply and real-time knowledge collection, and two intelligent agents are proposed to help with knowledge supply and capture.
An important part of this architecture, the extended workflow model, will also be discussed. Our design is used for optimizing our workflow management system, CIMFLOW (http://www.simflow.net). This paper is organized as follows: in Section 2 we discuss the relation of workflow and knowledge from two aspects, a conceptual point of view and a system point of view; in Section 3, after analyzing the limitations of current WFMS and knowledge management systems, a new integration architecture is presented; in Section 4 the extended workflow model supporting this architecture is proposed; Section 5 lists some related work; and the final section gives a summary and proposals for future research.
18
Jin Lai and Yushun Fan
2. Workflow and Knowledge
As mentioned, workflow and knowledge are closely interrelated, but how? This section discusses this question from two aspects: in view of fundamental concepts, what is the relation between the knowledge model and the workflow model; and in view of system construction, what type of business process is associated with knowledge management, what role a WFMS plays in knowledge management, and what knowledge is related to a WFMS.
2.1 Knowledge Model and Workflow
What is knowledge? In CommonKADS, one of the leading methodologies supporting structured knowledge management and engineering, knowledge is defined as "the whole body of data and information that people bring to bear to practical use in action in order to carry out tasks and create new information". There may be other definitions, but most of them agree that: 1) knowledge attaches purpose and competence to information; 2) knowledge has the potential to generate action. According to these characteristics, a knowledge model was put forward by CommonKADS and is widely adopted. This model has three parts: domain knowledge, inference knowledge and task knowledge. Each part is called a knowledge category. The domain knowledge means the domain-specific static knowledge and information; its description is somewhat comparable to a "data model" or "object model", which may include three types of constructs: concepts, relations and rules. The inference knowledge describes the basic inference steps that can be made in the domain knowledge and applied by tasks; essentially, an inference is an atomic reasoning action. The task knowledge describes the goals a task pursues and how these goals can be realized through its decomposition into subtasks and (ultimately) inferences. Figure 1 gives an overview of the three knowledge categories, together with an example on the right side. We can see that task knowledge is an indispensable part of this model. It is the "task" that organizes the trivial domain knowledge to satisfy a goal, that brings it purpose and competence, and that offers an environment for the knowledge-to-action transformation.
Fig. 1 Overview of knowledge categories in the knowledge model.
On the other hand, "task" is also a key element in constructing a business process. In fact, a business process itself can be viewed as a top-level task that is broken down into subtasks and the corresponding control logic. Each of the tasks in a business process is assigned goals, participating agents (humans or computer software) and resources. Although a standard business process model, such as workflow, has no knowledge specifications, it offers a good framework for task knowledge construction; and if more knowledge elements are added, for example inferences and domain knowledge objects, the workflow itself can easily be mapped to a knowledge model and the WFMS, as a workflow automation system, can be an important knowledge resource.
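As an illustration only, the three knowledge categories could be modeled as simple data structures. The class and field names below are our assumptions for the sketch, not part of CommonKADS:

```python
from dataclasses import dataclass, field

@dataclass
class DomainConcept:
    """Domain knowledge: static concepts with their attributes."""
    name: str
    attributes: dict = field(default_factory=dict)

@dataclass
class Inference:
    """Inference knowledge: one atomic reasoning step."""
    name: str
    inputs: list
    output: str

@dataclass
class Task:
    """Task knowledge: a goal realized by decomposition into
    subtasks and (ultimately) inferences."""
    goal: str
    subtasks: list = field(default_factory=list)  # Task or Inference items

# A top-level task decomposed into two inferences (hypothetical example).
diagnose = Task(
    goal="diagnose fault",
    subtasks=[
        Inference("classify", inputs=["symptoms"], output="hypothesis"),
        Inference("verify", inputs=["hypothesis", "tests"], output="fault"),
    ],
)
print(diagnose.goal, len(diagnose.subtasks))  # diagnose fault 2
```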
2.2 Knowledge Management and WFMS
Knowledge management is a framework and a tool set for improving the organization's knowledge infrastructure, aimed at getting the right knowledge to the right people in the right form at the right time. The three key components in knowledge management are agents, business processes and knowledge assets, where an agent can be a human, an information system or any other entity capable of carrying out a task. The business processes associated with knowledge
management can be divided into two types: the enterprise's daily business processes that contain knowledge intensive tasks, and the business processes for managing knowledge itself, such as document examination and approval. In this paper we focus on the first type, the daily business process; "business process" in this paper denotes the enterprise's daily business process by default. All knowledge-management actions are defined in terms of these three objects: agents, business processes and knowledge assets. Their relation is shown in Figure 2.
Fig. 2. The relation of agents, business processes and knowledge assets
Knowledge assets are possessed by agents, and the agents participate in business processes, in which knowledge is used and updated. The workflow management system (WFMS) is the place where daily business processes are defined and executed; it not only stores the workflow models but also instantiates these models and records the actions that are taken and the events that happen. The WFMS plays two different roles here: one is a knowledge consumer, which requires some knowledge assets in order to fulfill a knowledge intensive task (KIT); the other is a knowledge provider, which, by recording the execution trail of business processes, provides a good framework for knowledge collection. So the WFMS should be an important part of the process of managing knowledge, especially knowledge capture and knowledge dissemination. The relation between WFMS and knowledge management is shown in the following figure.
Fig. 3. WFMS and knowledge management
The knowledge that is required and used in a WFMS may be knowledge of any type: domain knowledge, inference knowledge or task knowledge. This knowledge can be stored in databases, data warehouses, documents, multimedia files etc. The ontology of knowledge and intelligent knowledge retrieval are hot topics in knowledge management. The knowledge that can be offered by a WFMS comes down to five classes: 1) the workflow reference model, as a kind of domain knowledge; 2) task knowledge, which describes what knowledge is used to meet the goals of a task and how it is used; task knowledge is formed dynamically when the task is carried out; 3) newly created domain or inference knowledge, such as a new design document or a new competence description of an agent, together with its creation context; 4) real-time quality evaluation of knowledge assets; 5) some other audit information.
3. A New Integration Architecture
“Knowledge comes from practice and should return to practice.” The WFMS, as a system for task routing and allocation, is the first place where knowledge is created, shared and used, and is thus closely related to knowledge. From a knowledge consumer's perspective, it would be very helpful if the system actively offered relevant knowledge, such as
the materials of similar projects, related reports and papers, or persons who may be helpful for the task, when executing a knowledge intensive task. But currently most WFMS have very few knowledge considerations. On the other hand, although a WFMS can offer a lot of valuable knowledge (of the five types mentioned), this ability is neglected. In order to resolve the problem, we propose a new integration architecture that supports the following functions. 1) The WFMS will actively (i.e. without an explicit, detailed request by the employee) and intelligently recommend relevant knowledge considering the context of the tasks. The user is free to accept or dismiss the recommendations, or to select different material according to his personal knowledge. Whatever the choice, the system will keep track of the solutions and record the results and the context automatically for refining the knowledge recommendation at a later time. 2) When executing a business process, the WFMS will not only offer support for recording what knowledge is used in a task, but also encourage users to describe how the knowledge is used and to evaluate its quality, which can be done in coarse or fine granularity. In the coarse granular mode, a task is viewed as a black box with goals and input/output knowledge. Each item of knowledge has a property card that describes its basic nature, such as its location, form, role in the task, importance weight, quality evaluation etc. In the fine granular mode, the users can define the corresponding task knowledge, that is, break the task down into subtasks and inferences down to the domain knowledge resources, as shown in Figure 1. Again, each knowledge item has a property card. 3) All five types of knowledge that the WFMS offers (workflow model, newly formed task knowledge, newly created domain knowledge, knowledge evaluation etc.) will be exported.
After a series of processes, for example knowledge examination, approval and classification, they will be included in the knowledge repository and be reused. In this way, the knowledge is automatically linked to its creation and usage context and can be retrieved accordingly, should the need arise. The new architecture is shown in Figure 4.
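The "property card" of the coarse granular mode could, for instance, be represented as a simple record. All field names and values here are hypothetical, chosen to mirror the natures listed above:

```python
# A hypothetical property card attached to one knowledge item used in a task.
property_card = {
    "item": "design report v2",
    "location": "document repository",
    "form": "document",
    "role_in_task": "input",
    "importance_weight": 0.8,        # assumed scale from 0.0 to 1.0
    "quality_evaluation": "good",
}
print(property_card["item"], property_card["role_in_task"])
```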
Fig. 4 A new architecture for workflow and knowledge management integration
We describe this architecture in two phases: process definition time and process enactment time. Although the boundary between these two phases is not very clear (some elements that are defined in the first phase will be filled in or updated in the second phase), the two phases have different focuses.
Process Definition Time
The main action in this phase is to model the overall business process with a workflow tool. In order to enable the intended active knowledge function, the workflow model is extended with knowledge specifications: KIT (knowledge intensive task) descriptions and the corresponding context variables, as shown in Figure 4. KIT descriptions include the respective knowledge need and knowledge track. The knowledge need is described as generic queries or query schemata, together with the agent responsible for their processing. Because the knowledge need is context-aware, context variables, such as the task's name, the process's name, the applicant of the process (which may be a business partner) etc., are added. At runtime,
the agent processes the instantiated queries and thus delivers relevant knowledge that helps to execute the activity in question. The knowledge track mainly describes what knowledge is used to execute this task, how it is used and what its quality is. The model of the knowledge repository is also a very important part. The knowledge repository contains a variety of knowledge sources to be searched and retrieved for active task support. These sources are of different natures, resulting in different structures, access methods, and contents. To enable precise-content retrieval from heterogeneous sources, a representation scheme for uniform knowledge descriptions is needed. To this end, structure and metadata, information content and information context are modeled on the basis of a formal ontology [6]. The Index in Figure 8 is thus realized as a set of descriptions modeling the information sources and facilitating ontology-based access and retrieval. Ontology-based knowledge management is now a hot topic in this field.
Process Enactment Time
In the process enactment phase, the workflow models already defined are imported into the workflow execution engine, and whenever it comes to a knowledge-intensive task, the special properties attached to the KIT are interpreted and executed. This means that the engine will not only activate this task and allocate it to the appointed persons, as a standard workflow engine does, but also call an appropriate "knowledge supplying agent" (which is specified in the KIT description). The agent performs the actual retrieval of relevant information from the information sources. It relies on the knowledge schema to realize an ontology-based knowledge retrieval and utilizes the context information from the ongoing workflow – found in the instantiated KIT variables – in order to determine relevant information. Traversing the formal ontology according to specified search heuristics, the agent is able to extend and refine the given queries and to reason about the relevance of available items. Of course the users can decide whether to use or dismiss the proposed knowledge; what is more, they can query the knowledge repository manually or search for knowledge outside the repository according to their personal experience. The WFMS will then support them in completing the knowledge report about what knowledge is used, how it is used and what its quality is, in coarse or fine granularity. All this knowledge will be stored in the WFMS's two databases: the workflow model base and the workflow instance base. The "knowledge capturing agent" will routinely retrieve the knowledge from these two databases, organize it in a new form and store it in a knowledge buffer area.
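The enactment-time dispatch described above can be sketched as follows. The function and field names are assumptions for illustration, not the actual CIMFLOW interface: a standard task is simply allocated, while a KIT additionally triggers the knowledge supplying agent with the task's context.

```python
def activate(task, supply_agent):
    """Activate a task: allocate it, and for a KIT also call the
    knowledge supplying agent with the instantiated context."""
    actions = [f"allocated to {task['agent']}"]
    if task.get("kind") == "KIT":
        actions.append(supply_agent(task["context"]))
    return actions

def supply_agent(context):
    # Stand-in for ontology-based retrieval over the knowledge repository.
    return f"retrieved knowledge for {context['process']}"

plain = {"kind": "task", "agent": "clerk"}
kit = {"kind": "KIT", "agent": "designer",
       "context": {"process": "product design"}}

print(activate(plain, supply_agent))  # ['allocated to clerk']
print(activate(kit, supply_agent))
# ['allocated to designer', 'retrieved knowledge for product design']
```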
Workflow and Knowledge Management: Approaching an Integration
25
After knowledge processes such as examination, approval, and classification, it is imported into the knowledge repository for reuse. Considering the implementation of this architecture, there are four key points: 1) the extended workflow models with knowledge specifications such as the KIT description and the corresponding context variables; 2) the ontology of the knowledge repository; 3) the knowledge retrieval algorithm of the "knowledge supplying agent"; 4) the knowledge transformation rules of the "knowledge capturing agent". In this paper, we discuss the first key point: the extended workflow model.
4.
Extended Workflow Model
As stated, a standard business process model such as a workflow model has no knowledge considerations. To support the architecture in Figure 4, we extend the standard workflow model by adding a special task type: the knowledge-intensive task (KIT). A KIT inherits from a common task, so besides the conventional property items such as goals, executing agents, etc., some knowledge specifications are added to the KIT: the KIT description and context variables. The KIT description has two parts: one is the knowledge need, and the other is the knowledge track. Conceptually, this means extending the business process to cover not only what is to be done but also what is needed to do it and how to do it.
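A KIT as described could be sketched as a subclass of a plain task; this is a minimal illustration, and all class and field names below are our own, not from the paper:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Task:
    """A conventional workflow task with its usual property items."""
    name: str
    goal: str = ""
    executors: List[str] = field(default_factory=list)

@dataclass
class KnowledgeIntensiveTask(Task):
    """KIT = common task + KIT description (knowledge need and knowledge
    track) + context variables."""
    knowledge_need: Dict[str, str] = field(default_factory=dict)
    knowledge_track: List[dict] = field(default_factory=list)
    context_variables: Dict[str, str] = field(default_factory=dict)

kit = KnowledgeIntensiveTask(name="review-design",
                             knowledge_need={"topic": "${Topic1}"})
assert isinstance(kit, Task)   # a KIT inherits from a common task
```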
Knowledge Need
A knowledge need represents a demand for the knowledge necessary to support and accomplish a knowledge-intensive task. From the representational point of view it is important to note that the knowledge need can be represented as some kind of query – possibly a dynamic and sophisticated one. It has to state:
1) What knowledge is needed, i.e., which variables have to be filled?
2) How to obtain the knowledge, i.e., which sources to consider and which search heuristics to use?
3) How the concrete information need at runtime depends on the context of the actual application and the state of the business process?
According to these considerations, a knowledge need template is proposed in
26
Jin Lai and Yushun Fan
Table 1: Knowledge need template

Basic knowledge need properties
  Name                   Content                                    Fulfilled way                     Edited in execution time?
  Process instance's ID  (context variable)                         Automatically                     No
  Task name              (context variable)                         Automatically                     No
  Contact person         (context variable)                         Automatically                     No
  Personal interests     (context variable)                         Automatically                     No
  Topic1                 context variable or not                    Automatically or manually         Yes
  Topic2                 context variable or not                    Automatically or manually         Yes
  ...                    ...                                        ...                               ...
  Topicn                 context variable or not                    Automatically or manually         Yes
  Supplying agent                                                   Manually                          No
  Category                                                          Manually                          Yes

Advanced knowledge need properties
  Topic1's weight                                                   Manually (mean value by default)  Yes
  Topic2's weight                                                   Manually (mean value by default)  Yes
  ...                    ...                                        ...                               ...
  Topicn's weight                                                   Manually (mean value by default)  Yes
  Time                   context variable or not (default value)    Automatically or manually         Yes
Table 1 indicates that some property items can be filled by context variables, so the knowledge need can be dynamically formed for different business process instances.
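Filling the context-variable slots of such a template at run time can be sketched as simple placeholder substitution; the `${Var}` syntax and all names below are our own illustration, not the paper's notation:

```python
import re

def instantiate_need(template: dict, context: dict) -> dict:
    """Replace ${Var} placeholders in a knowledge-need template with values
    drawn from the running process instance's context variables. Unknown
    placeholders are left untouched."""
    def fill(value: str) -> str:
        return re.sub(r"\$\{(\w+)\}",
                      lambda m: str(context.get(m.group(1), m.group(0))),
                      value)
    return {key: fill(value) for key, value in template.items()}

template = {"process_instance": "${InstanceID}",
            "task": "${TaskName}",
            "topic": "turbine blade design"}
context = {"InstanceID": "PI-0042", "TaskName": "review-design"}
query = instantiate_need(template, context)
# query == {"process_instance": "PI-0042", "task": "review-design",
#           "topic": "turbine blade design"}
```

The resulting dictionary is what the knowledge supplying agent would receive as an instantiated query.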
Knowledge Track
As stated, knowledge tracks can have coarse or fine granularity. In the coarse mode, the task acts like a black box with information inputs and outputs.
Fig. 5 Knowledge specifications in coarse granularity

Each knowledge item has a property card that describes what this knowledge is and how it is used. The property card template is shown in Table 2.
Table 2: The knowledge property template
  Name                Content                                                      Fulfilled way
  Name                Knowledge name                                               Automatically or manually
  Possessed by        Agent                                                        Automatically or manually
  Used in             Task (context variable)                                      Automatically
  Role                Input or output?                                             Manually
  Form                Mind? Paper? Electronic? Action skill? Other?                Automatically or manually
  Usage               As a document template? Useful in content? Other comments.   Manually
  Quality evaluation  (for input knowledge)                                        Manually
  Importance weight   (for input knowledge)                                        Manually
In the fine-granularity mode, the process of the knowledge flow inside the task is described, which means breaking the task down into subtasks, and the subtasks into inferences, until reaching the domain knowledge, i.e., the raw knowledge resource. In either case, each knowledge input/output, excluding the template knowledge items, has a property card similar to that in Table 2. The process of the knowledge flow inside a task is actually task knowledge and can be viewed in Figure 1, especially the example on the right side.
5.
Related Work
The idea of extending workflow management systems toward knowledge has become more accepted in the last few years. Ref. [2] discusses the knowledge that is related to workflow, but an architecture or system for knowledge capture and dissemination is not addressed. Refs. [3] and [4] present an architecture for information management assistants that assesses the user's work context to start active information delivery; this idea is quite similar to our approach, but it does not address the possibility of retrieving knowledge from the WFMS either. Ref. [1] proposes CommonKADS, a knowledge management methodology that discusses the definition and modeling of knowledge. Its main idea – that knowledge should be closely related to tasks – inspired our work, but it is restricted to knowledge management and does not consider the WFMS's knowledge needs and its ability to actively capture knowledge in daily business processes. There is other related work, such as ontology-based knowledge management and XML-based document representations, discussed in Refs. [6], [10], [11], [12], and [13].
6.
Conclusion
Knowledge is information made actionable. Knowledge is therefore closely related to business processes, especially an enterprise's daily business processes. A WFMS, as a business process execution system, plays an important role in knowledge management: it is not only a big knowledge consumer but can also be an important knowledge provider. Unfortunately, standard workflow technology has very few knowledge considerations. In this paper, we propose a new architecture for the deep integration of WFMS and knowledge management, which extends the conventional workflow model with knowledge specifications such as the KIT description and the corresponding context variables, and puts forward two agents in charge of active knowledge supply and routine knowledge capture. With this architecture, a WFMS can intelligently supply relevant knowledge according to the context of a task, and actively record the knowledge usage and creation trail, which is imported into the knowledge repository for knowledge assessment and reuse. Considering the implementation of this architecture, there are four key points: the extended workflow model, the knowledge ontology schema, the knowledge supplying agent's query algorithm,
and the knowledge capturing agent's transformation rules. In this paper one of these key points, the extended workflow model, has been studied; the others will be our future research topics.
7.
References
[1] Guus Schreiber, Hans Akkermans, Anjo Anjewierden, Robert de Hoog, Nigel Shadbolt, Walter Van de Velde, and Bob Wielinga. Knowledge Engineering and Management: The CommonKADS Methodology. A Bradford Book, The MIT Press, Cambridge, Massachusetts, 2000.
[2] Alfs T. Berztiss. Knowledge and Workflow Systems. IEEE, 2000.
[3] A. Abecker, A. Bernardi, H. Maus, M. Sintek, and C. Wenzel. Information Supply for Business Processes: Coupling Workflow with Document Analysis and Information Retrieval. Knowledge-Based Systems, 13 (2000), 271-284.
[4] Steffen Staab and Hans-Peter Schnurr. Knowledge and Business Processes: Approaching an Integration. http://www.aifb.uni-karlsruhe.de/
[5] Beate List, Josef Schiefer, and Robert M. Bruckner. Measuring Knowledge with Workflow Management Systems. IEEE, 2001.
[6] V.R. Benjamins, D. Fensel, and A. Gomez-Perez. Knowledge Management through Ontologies. Proceedings of the Second International Conference on Practical Aspects of Knowledge Management (PAKM-98), Basel, Switzerland, 1998.
[7] G. Alonso, D. Agrawal, A. El Abbadi, and C. Mohan. Functionality and Limitations of Current Workflow Management Systems. 1997.
[8] P. Balasubramanian, Kumar Nochur, John C. Henderson, and M. Millie Kwan. Managing Process Knowledge for Decision Support. Decision Support Systems, 27 (1999), 145-162.
[9] Workflow Management Coalition. Workflow Client Application Programming Interface Specification. WFMC-TC-1009.
[10] Andreas Abecker, Gregoris Mentzas, Maria Legal, Spyridon Ntioudis, and Giorgos Papavassiliou. Business-Process Oriented Delivery of Knowledge through Domain Ontologies.
[11] Steffen Staab, Rudi Studer, Hans-Peter Schnurr, and York Sure. Knowledge Processes and Ontologies. IEEE, 2001.
[12] Linda Larson Kemp, Kenneth E. Nidiffer, Louis C. Rose, Robert Small, and Michael Stankosky. Knowledge Management: Insights from the Trenches. IEEE Software, 2001.
[13] Daniel E. O'Leary and Rudi Studer. Knowledge Management: An Interdisciplinary Approach. IEEE Intelligent Systems, 2001.
Linear Temporal Inference of Workflow Management Systems Based on Timed Petri Nets Models

Yang Qu, Chuang Lin, and Jiye Wang

When several tasks run concurrently, N of them have to reach the barrier before the next task is enabled. In our example any two out of the three tasks X, Y, and Z have to finish before V is enabled. (j) The deferred choice pattern is similar to the XOR split, but this time the choice is not made explicitly and the run-time environment decides which branch to take.
In the following we analyze several patterns and propose a set of linear reasoning algorithms to reduce each pattern to a canonical form. The original model has a single starting point A and a single end point B. Points A and B are connected by a subnet. After reduction the subnet is substituted by an equivalent transition T; see Figure 3.

Fig. 3. Reduction to a canonical form requires the replacement of a subnet with an equivalent transition
3.1 The Sequential Workflow Pattern
The following algorithm expresses the process of reduction in the case of a sequential workflow pattern:
Algorithm 1: T1 ∘ T2 → T[tl, tu] = T[t1l + t2l, t1u + t2u]
Justification: Figure 4 shows a TPN model of a sequential workflow pattern consisting of two tasks, X to be executed first and Y to be executed second. The subnet models the two tasks X and Y, denoted by transitions T1[t1l, t1u] and T2[t2l, t2u]. The equivalent
36
Yang Qu, Chuang Lin, and Jiye Wang
transition of the canonical form is T[tl, tu], with tl and tu unknown. Our goal is to relate tl and tu with t1l, t1u, t2l, and t2u.
Fig. 4. The TPN model of a sequential workflow pattern
Let s1, s2, and s be the enabling times and t1*, t2*, and t* the firing times of T1, T2, and T respectively. From the definition of a TPN it follows that:
t1l ≤ t1* ≤ t1u; t2l ≤ t2* ≤ t2u; and tl ≤ t* ≤ tu.
From the definition of the sequential pattern: s1 + t1* = s2. T is the equivalent transition, thus s = s1 and s + t* = s2 + t2*, i.e., t* = t1* + t2*.
From these algebraic inequalities we get: t1l + t2l ≤ t* = t1* + t2* ≤ t1u + t2u.
It follows that tl = t1l + t2l and tu = t1u + t2u.
3.2 The AND split/join Workflow Pattern
The following algorithm expresses the process of reduction in the case of AND split/join patterns:
Algorithm 2: T1 ∥ T2 → T[tl, tu] = T[max(t1l, t2l), max(t1u, t2u)]
Justification: The TPN model of the AND split/join pattern is presented in Figure 5. Transitions T1 and T2 are executed in parallel; they could fire at the same time, or in any order. This pattern involves two basic flow-control structures, AND-split and AND-join; the AND-split means that T1 and T2 can fire simultaneously; the function of the AND-join is to synchronize the two sub-flows and produce a new token in place B after T1 and T2 have fired. Both structures are represented by instantaneous transitions or by timed transitions. T1 and T2 make up a subnet. Let s1, s2, and s be the enabling times and t1*, t2*, and t* the firing times of T1, T2, and T respectively. T1 and T2 are enabled at the same time, that is, s1 = s2 = s.
According to the definition of the TPN: t1l ≤ t1* ≤ t1u, and t2l ≤ t2* ≤ t2u. This means that the end point of T1 lies in [s + t1l, s + t1u] and the end point of T2 lies in [s + t2l, s + t2u]; the earliest time both T1 and T2 can have completed is s + max(t1l, t2l), and the latest time both T1 and T2 can have completed is s + max(t1u, t2u). Thus max(t1l, t2l) ≤ t* ≤ max(t1u, t2u), and tl = max(t1l, t2l), tu = max(t1u, t2u).
Fig. 5. The TPN model of an AND split/join workflow pattern. The two parallel tasks X and Y are modeled as transitions T1[t1l, t1u] and T2[t2l, t2u]
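The reduction rules of Algorithms 1 and 2 amount to simple interval arithmetic; a minimal sketch, with function names of our own choosing:

```python
def seq_reduce(t1, t2):
    """Algorithm 1 (sequential pattern): delays add up along the chain.
    Intervals are (lower, upper) firing-time bounds."""
    (t1l, t1u), (t2l, t2u) = t1, t2
    return (t1l + t2l, t1u + t2u)

def and_reduce(t1, t2):
    """Algorithm 2 (AND split/join): the join synchronizes on the slower
    branch, so both bounds are maxima."""
    (t1l, t1u), (t2l, t2u) = t1, t2
    return (max(t1l, t2l), max(t1u, t2u))

# T1[20, 30] followed by T2[60, 60] reduces to T[80, 90]
assert seq_reduce((20, 30), (60, 60)) == (80, 90)
# Parallel branches [10, 30] and [20, 40] synchronize at [20, 40]
assert and_reduce((10, 30), (20, 40)) == (20, 40)
```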
3.3 The OR split/join Pattern; The Non-Deterministic Choice
The following algorithm expresses the process of reduction in the case of an OR split/join:
Algorithm 3: T1 ⊗ T2 → T[tl, tu] = T[min(t1l, t2l), min(t1u, t2u)]
Justification: The TPN model of the OR split/join pattern is presented in Figure 6. The transitions T1 and T2 are enabled at the same time, but they are in conflict: if T1 fires then T2 will not fire, and vice versa. The system chooses to fire one of several enabled transitions. Two basic flow-control structures are used in the model to choose one from two or more transitions: OR-split and OR-join. An OR-split structure is represented by a place with several output arcs. An OR-join structure is represented by a place with several input arcs. A token sets out from place A; after an instantaneous transition, T1 or T2 fires, and then, after another instantaneous transition, the token reaches place B.
Fig. 6. The TPN model of an OR split/join workflow pattern. Only one of the two parallel tasks X and Y, which are modeled as transitions T1[t1l, t1u] and T2[t2l, t2u], will ever be executed. The choice is non-deterministic
The transition T fires either according to T1 or according to T2 and must fire before both t1u and t2u. The earliest firing time of T is min(t1l, t2l), and the latest firing time of T is min(t1u, t2u). Thus tl = min(t1l, t2l) and tu = min(t1u, t2u). Note that if t1u < t2l, T2 will never fire.
3.4 The XOR split/join Pattern; The Deterministic Choice
The following algorithm expresses the process of reduction in the case of an XOR split/join:
Algorithm 4: T1 ⊕ T2 → T[tl, tu] = T[min(t1l, t2l), max(t1u, t2u)]
Justification: In this pattern an XOR routing task evaluates a condition c and routes the token accordingly: if c > 0, T1 fires; if c ≤ 0, T2 fires. Figure 7 (b) shows the equivalent model without an XOR routing task.
Fig. 7. The TPN model of an XOR split/join workflow pattern: (a) The XOR routing task evaluates the condition c: if c > 0, T1 fires; if c ≤ 0, T2 fires. (b) Two conditions are associated with the arcs from A
The equivalent transition T fires either according to T1 or according to T2. The firing interval of T is a combination of [t1l, t1u] and [t2l, t2u]. Consider two cases: (1) when [t1l, t1u] ∩ [t2l, t2u] ≠ ∅, the combination of T1 and T2 builds up one new continuous interval, with the earliest firing time min(t1l, t2l) and the latest firing time max(t1u, t2u); thus tl = min(t1l, t2l) and tu = max(t1u, t2u). (2) when [t1l, t1u] ∩ [t2l, t2u] = ∅, the firing interval of the equivalent transition consists of two disjoint intervals, [t1l, t1u] ∪ [t2l, t2u]; again tl = min(t1l, t2l) and tu = max(t1u, t2u).
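Algorithms 3 and 4 differ only in how the two branch intervals are combined; a minimal sketch, with function names of our own choosing:

```python
def or_reduce(t1, t2):
    """Algorithm 3 (non-deterministic OR): the branch that fires first
    wins, so both bounds are minima."""
    (t1l, t1u), (t2l, t2u) = t1, t2
    return (min(t1l, t2l), min(t1u, t2u))

def xor_reduce(t1, t2):
    """Algorithm 4 (deterministic XOR): either branch may be taken, so
    the equivalent firing window is the min/max envelope of both."""
    (t1l, t1u), (t2l, t2u) = t1, t2
    return (min(t1l, t2l), max(t1u, t2u))

assert or_reduce((5, 10), (20, 40)) == (5, 10)
assert xor_reduce((5, 10), (20, 40)) == (5, 40)
```

Note that when the two intervals are disjoint the true firing set of the XOR case is the union [t1l, t1u] ∪ [t2l, t2u]; the envelope only records its extreme bounds.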
3.5 The iteration Pattern; Deterministic Choice
Fig. 8. The TPN model of an iteration workflow pattern: (a) The original TPN. (b) The transformed TPN model
The following algorithm expresses the process of reduction in the case of an iteration pattern; see Figure 8 (a).
Algorithm 5: T1^k → T[tl, tu] = T[k·t1l, k·t1u]
In Figure 8 (a) we see that this flow-control structure has an XOR routing that tests the result of the task modeled by transition T1; based upon this test, T1 may be executed k times. Figure 8 (b) shows the equivalent model without an XOR routing task.
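Algorithm 5 simply scales the loop body's interval by the iteration count; a minimal sketch, with a function name of our own choosing:

```python
def iter_reduce(t1, k):
    """Algorithm 5 (iteration): the body T1 runs k times in sequence,
    so both interval bounds are multiplied by k."""
    t1l, t1u = t1
    return (k * t1l, k * t1u)

# A 60-minute search repeated twice takes exactly 120 minutes
assert iter_reduce((60, 60), 2) == (120, 120)
```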
4 Linear Reasoning of Workflow Models
We now provide a set of minimal requirements for the TPN model to ensure the correctness of the workflow: 1) A TPN model has a source place i corresponding to a start condition and a sink place o corresponding to an end condition. 2) Each task/condition is on a path from i to o. These two conditions are related to the structural properties of the Petri net and can be verified using static analysis; they are similar to the ones imposed on the WF-nets defined by van der Aalst [14]. For linear reasoning we impose two additional conditions: 3) For any case, the procedure will terminate eventually, and at the moment the procedure terminates there is a token in place o and all the other places are empty. 4) There should be no dead tasks, i.e., it should be possible to execute an arbitrary task by following the appropriate route through the workflow model.
Next we use an example from [15] to illustrate the method for establishing temporal relations in a WF-net.
Example: We model the activity of a travel agency and follow a client interested in booking a trip; see Figure 9. First, the client waits until a travel agent becomes available and then provides the travel information: the destination, the date, the desired departure time, the type of lodging, rental car information, and so on; then the client pays a service fee. This requires 20-30 minutes altogether. Then the travel agent searches for availabilities; after some 60 minutes the agent has a full itinerary and discusses it with the client. There are three possible outcomes of this discussion; the client:
- Accepts the itinerary; then the agent contacts the hotel and the airline (10-30 minutes) and confirms the bookings, which takes 20-40 minutes. If the customer requires insurance, additional time is required, 15-30 minutes. Some of these tasks can be executed in parallel; see Figure 9.
- Asks for changes; the agent spends another 60 minutes changing the reservations.
- Leaves without booking; the client spends an additional 5-10 minutes to have the service fee refunded.
We want to determine: (1) The time required when one change is requested but cannot be accommodated and the client leaves without booking the trip. (2) The shortest possible time a client needs to book a trip.
Fig. 9. The task structure corresponding to booking a trip at a travel agency
We use an equivalent TPN model to represent the process of booking a trip; in Figure 10, P1 is a source place, and P10 is a sink place. From P1 to P3 the structure is a combination of the sequential and the iteration patterns. Using Algorithms 1 and 5, the equivalent transition from P1 to P2, called T1', is obtained as follows:
Algorithm 5: T2^k → T[k·t1l, k·t1u] = T[60k, 60k]
Algorithm 1: T1 ∘ T[60k, 60k] → T[20 + 60k, 30 + 60k] = T1'
  (recall Algorithm 1: T1 ∘ T2 → T[tl, tu] = T[t1l + t2l, t1u + t2u])
The tasks inform and book can be executed in parallel; using Algorithm 2, we obtain the equivalent transition T2':
Algorithm 2: T5 ∥ T6 → T[max(10, 20), max(30, 40)] = T[20, 40] = T2'
The transitions T2' and T4 can be executed in parallel; using Algorithm 2, we obtain the equivalent transition corresponding to the preparation process, T3':
Algorithm 2: T4 ∥ T2' → T[max(15, 20), max(30, 40)] = T[20, 40] = T3'
The transition T3, corresponding to "leave without booking", and T3', corresponding to "preparation", form a deterministic choice; using Algorithm 4:
Algorithm 4: T3 ⊕ T3' → T = [5, 10] ∪ [20, 40]
The equivalent transition corresponding to the case when the client leaves without booking is modeled by transition T4':
Algorithm 1: T1' ∘ T[5, 10] → T[60k + 5, 60k + 10] = T4'
(1) When k = 2, the time taken by the equivalent transition T4' ranges from 2 hours and 5 minutes to 2 hours and 10 minutes. Let the equivalent transition from the moment the client reaches the agency until the trip is booked successfully be T5':
Algorithm 1: T1' ∘ T[20, 40] → T[60k + 20, 60k + 40] = T5'
(2) When k = 1, the shortest time a client needs for booking the trip is 80 minutes (60 + 20 = 80).
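The first reduction steps of this example can be replayed mechanically; a sketch using the interval arithmetic of Algorithms 1, 2, and 5 (the Python names are our own):

```python
def seq(a, b):  return (a[0] + b[0], a[1] + b[1])            # Algorithm 1
def par(a, b):  return (max(a[0], b[0]), max(a[1], b[1]))    # Algorithm 2
def loop(a, k): return (k * a[0], k * a[1])                  # Algorithm 5

# Task durations (minutes) from the travel-agency example
register, search = (20, 30), (60, 60)
inform, book, insurance = (10, 30), (20, 40), (15, 30)

k = 2                                  # the itinerary is discussed twice
t1p = seq(register, loop(search, k))   # T1' = [20 + 60k, 30 + 60k]
t2p = par(inform, book)                # T2' = [20, 40]
t3p = par(insurance, t2p)              # T3' = [20, 40]

assert t1p == (140, 150)
assert t2p == (20, 40) and t3p == (20, 40)
```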
Fig. 10. The TPN model for the process in figure 9, describing the booking of a trip
5 Conclusion
Traditional applications of workflow management to business processes and office automation typically deal with a few activities with a relatively long lifetime (hours, days, weeks, or months); the time scale reflects the fact that most of the activities are carried out by humans while the enactment engine is hosted by a computer.
On the other hand, modern applications of workflow management to process coordination on service, data, and computational grids deal with a larger number of short-lived activities with a lifespan of microseconds, milliseconds, seconds, or minutes, carried out by computers. While in the first case it is feasible to use ad-hoc methods to study the temporal properties of processes, this option becomes impractical in the second case; hence the need for automated methods to study the temporal behavior of workflow models. Moreover, the second type of application requires very efficient algorithms for workflow enactment and analysis because of the very short lifespan of the activities involved. In this paper we investigated the temporal properties of workflows modeled by WF-nets. A set of linear reasoning algorithms for commonly used workflow patterns was presented. Using these algorithms one can solve temporal reasoning problems for sound WF-nets in linear time. The approach discussed in this paper does not take into account the case of random intervals; an extension of the approach will incorporate these features into the methodology. The reasoning algorithms would then be extended to answer queries regarding random intervals of an event.
References
[1] Marinescu, D.C. Internet-Based Workflow Management: Towards a Semantic Web. Wiley, New York, 2002.
[2] Bracchi, G. and B. Pernici. The Design Requirements of Office Systems. ACM Trans. Office Automat. Syst., 2(2): 151-170, 1984.
[3] Schal, T. Workflow Management Systems for Process Organizations. Springer Verlag, 1996.
[4] Yao, Y. A Petri Net Model for Temporal Knowledge Representation and Reasoning. IEEE Trans. Systems, Man, and Cybernetics, 24(9): 1374-1382, 1994.
[5] Workflow Management Coalition. http://www.wfmc.com, 1998.
[6] W. van der Aalst. Three Good Reasons for Using a Petri Net based Workflow Management System. Proc. IPIC 96, 179-181, 1996.
[7] Janssens, G.K., J. Verelst, and B. Weyn. Techniques for Modeling Workflows and Their Support of Reuse. In Business Process Management, W. van der Aalst, J. Desel, and A. Oberweis, Eds., Lecture Notes in Computer Science, Vol. 1806, 1-15, 2000.
[8] M.D. Zisman. Representation, Specification and Automation of Office Procedures. PhD thesis, University of Pennsylvania, Wharton School of Business, 1977.
[9] Ellis, C.A. and G.J. Nutt. Modeling and Enactment of Workflow Systems. In Applications and Theory of Petri Nets, Lecture Notes in Computer Science, Vol. 691, Springer Verlag, 1-16, 1993.
[10] W. van der Aalst. Workflow Verification: Finding Control-Flow Errors Using Petri-Net-Based Techniques. In Business Process Management, W. van der Aalst, J. Desel, and A. Oberweis, Eds., Lecture Notes in Computer Science, Vol. 1806, 161-183, 2000.
[11] Adam, N.R., V. Atluri, and W.K. Huang. Modeling and Analysis of Workflows Using Petri Nets. Journal of Intelligent Information Systems, 10(2): 1-29, 1998.
[12] Yi Zhou and T. Murata. Fuzzy-Timing Petri Net Model for Distributed Multimedia Synchronization. IEEE International Conference on Systems, Man, and Cybernetics, Vol. 1, 244-249, 1998.
[13] W. van der Aalst, A.H. ter Hofstede, B. Kiepuszewski, and A.P. Barros. Workflow Patterns. Technical Report, Eindhoven University of Technology, 2000. http://www.tm.tue.nl/research/patterns/
[14] W. van der Aalst. The Application of Petri Nets to Workflow Management. Journal of Circuits, Systems, and Computers, 8(1): 21-66, 1998.
[15] W. van der Aalst. Verification of Workflow Task Structures. Information Systems, 25(1): 43-69, 2000.
Discovering Workflow Performance Models from Timed Logs W.M.P. van der Aalst and B.F. van Dongen Department of Technology Management, Eindhoven University of Technology P.O. Box 513, NL-5600 MB, Eindhoven, The Netherlands.
[email protected]
Abstract. Contemporary workflow management systems are driven by explicit process models, i.e., a completely specified workflow design is required in order to enact a given workflow process. Creating a workflow design is a complicated, time-consuming process, and typically there are discrepancies between the actual workflow processes and the processes as perceived by the management. Therefore, we have developed techniques for discovering workflow models. The starting point for such techniques are so-called "workflow logs" containing information about the workflow process as it is actually being executed. In this paper, we extend our existing mining technique α [4] to incorporate time. We assume that events in workflow logs bear timestamps. This information is used to attribute timing information, such as queue times, to the discovered workflow model. The approach is based on Petri nets, and timing information is attached to places. This paper also presents our workflow-mining tool EMiT. This tool translates the workflow logs of several commercial systems (e.g., Staffware) to an independent XML format. Based on this format the tool mines for causal relations and produces a graphical workflow model expressed in terms of Petri nets.
Keywords: Workflow mining, workflow management, data mining, Petri nets.
1
Introduction
During the last decade workflow management concepts and technology [2, 3, 11, 17–19] have been applied in many enterprise information systems. Workflow management systems such as Staffware, IBM MQSeries, COSA, etc. offer generic modeling and enactment capabilities for structured business processes. By making graphical process definitions, i.e., models describing the life-cycle of a typical case (workflow instance) in isolation, one can configure these systems to support business processes. Besides pure workflow management systems many other software systems have adopted workflow technology. Consider for example ERP (Enterprise Resource Planning) systems such as SAP, PeopleSoft, Baan and Oracle, CRM (Customer Relationship Management) software, etc. Despite its promise, many problems are encountered when applying workflow technology. One of the problems is that these systems require a workflow design, i.e., a designer has to construct a detailed model accurately describing the routing of work. Modeling a workflow is far from trivial: It requires deep knowledge of the Y. Han, S. Tai, and D. Wikarski (Eds.): EDCIS 2002, LNCS 2480, pp. 45-63, 2002. © Springer-Verlag Berlin Heidelberg 2002
46
W.M.P. van der Aalst and B.F. van Dongen
workflow language and lengthy discussions with the workers and management involved. Instead of starting with a workflow design, we start by gathering information about the workflow processes as they take place. We assume that it is possible to record events such that (i) each event refers to a task (i.e., a well-defined step in the workflow), (ii) each event refers to a case (i.e., a workflow instance), and (iii) events are totally ordered. Any information system using transactional systems such as ERP, CRM, or workflow management systems will offer this information in some form. Note that we do not assume the presence of a workflow management system. The only assumption we make, is that it is possible to collect workflow logs with event data. These workflow logs are used to construct a process specification which adequately models the behavior registered. We use the term process mining for the method of distilling a structured process description from a set of real executions.
case identifier | task identifier | timestamp (date : time)
case 1          | A               | 08-05-2002 : 08:15
case 2          | A               | 08-05-2002 : 08:24
case 3          | A               | 08-05-2002 : 09:30
case 2          | B               | 08-05-2002 : 10:24
case 5          | A               | 08-05-2002 : 10:24
case 4          | A               | 08-05-2002 : 10:25
case 3          | B               | 08-05-2002 : 10:26
case 1          | F               | 08-05-2002 : 11:45
case 4          | B               | 08-05-2002 : 11:46
case 2          | C               | 08-05-2002 : 12:23
case 2          | D               | 08-05-2002 : 15:14
case 5          | F               | 08-05-2002 : 15:17
case 3          | D               | 08-05-2002 : 15:19
case 1          | G               | 08-05-2002 : 16:26
case 4          | C               | 08-05-2002 : 16:29
case 5          | G               | 08-05-2002 : 16:43
case 3          | C               | 09-05-2002 : 08:22
case 4          | D               | 09-05-2002 : 08:45
case 3          | E               | 09-05-2002 : 09:10
case 4          | E               | 09-05-2002 : 10:05
case 2          | E               | 09-05-2002 : 10:12
case 2          | G               | 09-05-2002 : 10:46
case 3          | G               | 09-05-2002 : 11:23
case 4          | G               | 09-05-2002 : 11:25
Table 1. A workflow log.
To illustrate the principle of process mining, we consider the workflow log shown in Table 1. This log contains information about five cases (i.e., workflow instances). The log shows that for two cases (1 and 5) the tasks A, F and G
have been executed. For case 2 and case 4 the tasks A, B, C, D, E, and G have been executed. For case 3 the same tasks have been executed; however, C and D are swapped. Each case starts with the execution of A and ends with the execution of G. If task C is executed, then task D is also executed; however, for some cases task C is executed before task D, and for some the other way around. Based on the information shown in Table 1, and by making some assumptions about the completeness of the log (i.e., assuming that the cases are representative and a sufficiently large subset of possible behaviors is observed), we can deduce, for example, the process model shown in Figure 1. The model is represented in terms of a Petri net [23]. The Petri net starts with task A and finishes with task G. These tasks are represented by transitions. After executing A there is a choice between executing B and executing F. After executing B, tasks C and D are executed in parallel, followed by E. Finally, after executing either E or F, G can be executed. To execute C and D in parallel, task B corresponds to a so-called AND-split and task E corresponds to a so-called AND-join. Note that for this example we assume that two tasks are in parallel if they appear in any order. By distinguishing between start events and end events for tasks it is possible to explicitly detect parallelism. However, for the moment we assume atomic actions.
Fig. 1. A process model corresponding to the workflow log.
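The deductions above amount to collecting the direct-succession relation from the log; a simplified sketch (not the full α-algorithm), with the per-case traces transcribed from Table 1 by sorting each case's events by timestamp:

```python
from collections import defaultdict

# Per-case task sequences transcribed from the workflow log in Table 1
log = {
    "case 1": ["A", "F", "G"],
    "case 2": ["A", "B", "C", "D", "E", "G"],
    "case 3": ["A", "B", "D", "C", "E", "G"],
    "case 4": ["A", "B", "C", "D", "E", "G"],
    "case 5": ["A", "F", "G"],
}

succ = defaultdict(set)          # succ[a] contains b iff b directly follows a
for trace in log.values():
    for a, b in zip(trace, trace[1:]):
        succ[a].add(b)

# C and D appear in both orders, suggesting they run in parallel;
# A is always followed by either B or F, suggesting a choice after A.
assert "D" in succ["C"] and "C" in succ["D"]
assert succ["A"] == {"B", "F"}
```

A real mining algorithm additionally distinguishes true causality from interleaved parallelism, which is exactly what the α-algorithm mentioned below formalizes.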
The first two columns of Table 1 contain the minimal information we assume to be present to derive process models such as the one shown in Figure 1. In [4] we have provided an algorithm (named α) which can be used to discover a large class of process models from logs containing this minimal information. However, in many applications the workflow log contains a timestamp for each event, and this information can be used to extract information about the performance of the process, e.g., bottlenecks in the process. In this paper, we explore ways to mine timed workflow logs. In the log shown in Table 1 we can only see the completion time of tasks. In most logs we can also see when tasks are started. However, even using this minimal information we can calculate all kinds of performance measures. Figure 2 shows the minimal, maximal, and average time each case spends in a certain stage of the process. (The times indicated refer to the time a token spends in a place between production and consumption [23].) For example, the mean time between the completion of B and the completion of C is 573 minutes,
the minimum is 119 minutes, and the maximum is 1316 minutes.¹ Similar information is given for all other places. Note that because the log shown in Table 1 only captures completion times, we cannot calculate service times and resource usage. Moreover, we do not know the exact arrival time of each case. The log only shows when the first step in the process (task A) is completed. Therefore, we can only calculate the flow time starting from the completion of A. As indicated in Figure 2, the average flow time is 1101 minutes.
[Figure 2 annotates the process model of Figure 1 with per-place timing: mean 573, min 119, max 1316 (the place between the completion of B and the completion of C); mean 804, min 48, max 1309; mean 614, min 290, max 1259; mean 763, min 80, max 1138; mean 152, min 56, max 293; mean 123, min 34, max 281; and the flow time from A to G: mean 1101, min 379, max 1582.]
Fig. 2. Performance information (mean, min, and max) extracted from the workflow log is indicated in the process model.
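As a small check of the numbers in Figure 2, the mean, minimum, and maximum of the three sojourn-time observations for the place between the completion of B and the completion of C (the values listed in footnote 1) can be computed directly; this is an illustrative sketch, not part of the paper's tooling:

```python
# Sojourn-time observations (in minutes) for the place between the
# completion of B and the completion of C: cases 2, 3, and 4 of
# Table 1 (values taken from footnote 1).
observations = [119, 1316, 283]

print(round(sum(observations) / len(observations)))  # 573 (the mean)
print(min(observations), max(observations))          # 119 1316
```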
For this simple example, it is quite easy to construct a process model that is able to regenerate the workflow log and attribute timing information to places. For larger workflow models this is much more difficult. For example, if the model exhibits alternative and parallel routing, then the workflow log will typically not contain all possible combinations. Consider 10 tasks which can be executed in parallel. The total number of interleavings is 10! = 3628800. It is not realistic that each interleaving is present in the log. Moreover, certain paths through the process model may have a low probability and therefore remain undetected. Noisy data (i.e., logs containing exceptions) can further complicate matters. In this paper, we do not focus on issues such as noise. We assume that there is no noise and that the workflow log contains “sufficient” information. Under these ideal circumstances we investigate whether it is possible to discover the workflow process and extract timing information, i.e., for which class of workflow models is it possible to accurately construct the model and performance information by merely looking at their logs. This is not as simple as it seems. Consider for example the process model shown in Figure 1. The corresponding
¹ B and C are executed for three cases: 2, 3, and 4. The time between the completion of B and C is respectively 119 (case 2), 1316 (case 3), and 283 (case 4) minutes. Therefore, the average time attached to the corresponding place is (119+1316+283)/3 = 573 minutes.
Discovering Workflow Performance Models from Timed Logs
49
workflow log shown in Table 1 does not explicitly show any information about AND/XOR-splits and AND/XOR-joins. Nevertheless, this information is needed to accurately describe the process. These and other problems are addressed in this paper. For this purpose we use workflow nets (WF-nets) [1, 3]. WF-nets are a class of Petri nets specifically tailored towards workflow processes. The Petri net shown in Figures 1 and 2 is an example of a WF-net. The remainder of this paper is organized as follows. First, we introduce some preliminaries, i.e., Petri nets and WF-nets. In Section 3 we present an algorithm that discovers a large class of workflow processes. Section 4 extends this algorithm to also extract timing information. Section 5 presents the tool we have developed to mine timed workflow logs. The application of this tool to Staffware logs is demonstrated in Section 6. To conclude, we provide pointers to related work and give some final remarks.
2 Preliminaries
This section introduces the techniques used in the remainder of this paper. First, we introduce standard Petri-net notations, then we define a subclass of Place/Transition nets tailored towards workflow modeling and analysis (i.e., WF-nets [1, 3]).
2.1 Petri Nets
We use a variant of the classic Petri-net model, namely Place/Transition nets. For an elaborate introduction to Petri nets, the reader is referred to [10, 22, 23]. Definition 2.1. (P/T-nets) A Place/Transition net, or simply P/T-net, is a tuple (P, T, F) where: 1. P is a finite set of places, 2. T is a finite set of transitions such that P ∩ T = ∅, and 3. F ⊆ (P × T) ∪ (T × P) is a set of directed arcs, called the flow relation. A marked P/T-net is a pair (N, s), where N = (P, T, F) is a P/T-net and s ∈ B(P), i.e., s is a bag over P, denoting the marking of the net. The set of all marked P/T-nets is denoted N. A marking is a bag over the set of places P, i.e., it is a function from P to the natural numbers. B(P) denotes the set of all bags over P. We use square brackets for the enumeration of a bag, e.g., [a², b, c³] denotes the bag with two a-s, one b, and three c-s. The sum of two bags (X + Y), the difference (X − Y), the presence of an element in a bag (a ∈ X), and the notion of subbags (X ≤ Y) are defined in a straightforward way and they can handle a mixture of sets and bags. Let N = (P, T, F) be a P/T-net. Elements of P ∪ T are called nodes. A node x is an input node of another node y iff there is a directed arc from x to y (i.e., xF y). Node x is an output node of y iff yF x. For any x ∈ P ∪ T, •x = {y | yF x} and x• = {y | xF y}; a superscript N may be used to indicate the net and omitted if clear from the context.
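The notions of Definition 2.1 can be sketched in a few lines of Python; the net below is the one of Figure 1, but the place names p1–p6 are our own labels (the paper does not name its places):

```python
# A minimal sketch of Definition 2.1: a P/T-net as a triple (P, T, F),
# with preset (•x) and postset (x•) derived from the flow relation F.
P = {"i", "p1", "p2", "p3", "p4", "p5", "p6", "o"}
T = {"A", "B", "C", "D", "E", "F", "G"}
F = {("i", "A"), ("A", "p1"), ("p1", "B"), ("p1", "F"),
     ("B", "p2"), ("B", "p3"), ("p2", "C"), ("p3", "D"),
     ("C", "p4"), ("D", "p5"), ("p4", "E"), ("p5", "E"),
     ("E", "p6"), ("F", "p6"), ("p6", "G"), ("G", "o")}

def preset(x):   # •x = {y | y F x}
    return {y for (y, z) in F if z == x}

def postset(x):  # x• = {y | x F y}
    return {y for (z, y) in F if z == x}

print(preset("E"))   # the two input places of the AND-join E
print(postset("B"))  # the two output places of the AND-split B
```

The net has 8 places and 7 transitions, matching the description of Figure 1 in Section 2.1.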
50
W.M.P. van der Aalst and B.F. van Dongen
Figure 1 shows a P/T-net consisting of 8 places and 7 transitions. Transition A has one input place and one output place, transition B has one input place and two output places, and transition E has two input places and one output place. The black dot in the input place of A represents a token. This token denotes the initial marking. The dynamic behavior of such a marked P/T-net is defined by a firing rule. Definition 2.2. (Firing rule) Let (N = (P, T, F), s) be a marked P/T-net. Transition t ∈ T is enabled, denoted (N, s)[t⟩, iff •t ≤ s. The firing rule [ ⟩ ⊆ N × T × N is the smallest relation satisfying for any (N = (P, T, F), s) ∈ N and any t ∈ T: (N, s)[t⟩ ⇒ (N, s)[t⟩(N, s − •t + t•). In the marking shown in Figure 1 (i.e., one token in the source place), transition A is enabled and firing this transition removes the token from the input place and puts a token in the output place. In the resulting marking, two transitions are enabled: F and B. Although both are enabled, only one can fire. If B fires, one token is consumed and two tokens are produced. Definition 2.3. (Reachable markings) Let (N, s0) be a marked P/T-net in N. A marking s is reachable from the initial marking s0 iff there exists a sequence of enabled transitions whose firing leads from s0 to s. The set of reachable markings of (N, s0) is denoted [N, s0⟩. The marked P/T-net shown in Figure 1 has 8 reachable markings.
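A minimal sketch of the firing rule of Definition 2.2 over bag markings, using Python's Counter as the bag type; the place names p1–p6 for the net of Figure 1 are again our own labels:

```python
from collections import Counter

# Flow relation of the net of Figure 1 (place names are our own labels).
F = {("i","A"),("A","p1"),("p1","B"),("p1","F"),("B","p2"),("B","p3"),
     ("p2","C"),("p3","D"),("C","p4"),("D","p5"),("p4","E"),("p5","E"),
     ("E","p6"),("F","p6"),("p6","G"),("G","o")}

def preset(t):  return Counter(y for (y, z) in F if z == t)   # •t as a bag
def postset(t): return Counter(y for (z, y) in F if z == t)   # t• as a bag

def enabled(s, t):
    # t is enabled iff •t <= s (subbag test)
    return all(s[p] >= n for p, n in preset(t).items())

def fire(s, t):
    # s' = s - •t + t•  (Counter subtraction drops nonpositive counts)
    assert enabled(s, t)
    return s - preset(t) + postset(t)

s = Counter({"i": 1})          # initial marking: one token in the source place
s = fire(s, "A")               # consumes from i, produces in p1
print(enabled(s, "B"), enabled(s, "F"))  # True True -- choice between B and F
s = fire(s, "B")               # AND-split: one token consumed, two produced
print(sorted(s.elements()))    # ['p2', 'p3']
```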
2.2 Workflow Nets
Most workflow systems offer standard building blocks such as the AND-split, AND-join, OR-split, and OR-join [3, 11, 17, 18]. These are used to model sequential, conditional, parallel, and iterative routing (WFMC [11]). Clearly, a Petri net can be used to specify the routing of cases. Tasks are modeled by transitions and causal dependencies are modeled by places and arcs. In fact, a place corresponds to a condition which can be used as pre- and/or post-condition for tasks. An AND-split corresponds to a transition with two or more output places, and an AND-join corresponds to a transition with two or more input places. OR-splits/OR-joins correspond to places with multiple outgoing/ingoing arcs. Given the close relation between tasks and transitions we use these terms interchangeably. A Petri net which models the control-flow dimension of a workflow is called a WorkFlow net (WF-net). It should be noted that a WF-net specifies the dynamic behavior of a single case in isolation. Definition 2.4. (Workflow nets) Let N = (P, T, F) be a P/T-net and t̄ a fresh identifier not in P ∪ T. N is a workflow net (WF-net) iff: 1. object creation: P contains an input place i such that •i = ∅, 2. object completion: P contains an output place o such that o• = ∅, and 3. connectedness: N̄ = (P, T ∪ {t̄}, F ∪ {(o, t̄), (t̄, i)}) is strongly connected.
The P/T-net shown in Figure 1 is a WF-net. Note that although the net is not strongly connected, the short-circuited net with transition t̄ is strongly connected. Even if a net meets all the syntactical requirements stated in Definition 2.4, the corresponding process may exhibit errors such as deadlocks, tasks which can never become active, livelocks, garbage being left in the process after termination, etc. Therefore, we define the following correctness criterion. Definition 2.5. (Sound) Let N = (P, T, F) be a WF-net with input place i and output place o. N is sound iff: 1. safeness: (N, [i]) is safe, 2. proper completion: for any marking s ∈ [N, [i]⟩, o ∈ s implies s = [o], 3. option to complete: for any marking s ∈ [N, [i]⟩, [o] ∈ [N, s⟩, and 4. absence of dead tasks: (N, [i]) contains no dead transitions.
The set of all sound WF-nets is denoted W. The WF-net shown in Figure 1 is sound. Soundness can be verified using standard Petri-net-based analysis techniques. In fact, soundness corresponds to liveness and safeness of the corresponding short-circuited net [1, 3]. This way, efficient algorithms and tools can be applied. An example of a tool tailored towards the analysis of WF-nets is Woflan [27].
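The structural conditions of Definition 2.4 (as opposed to the behavioral soundness of Definition 2.5, which requires state-space analysis as performed by tools such as Woflan) can be checked mechanically. A sketch, using our own place names for the net of Figure 1; strong connectedness of the short-circuited net is tested with two reachability sweeps:

```python
# Structural WF-net check of Definition 2.4 (place names are our own).
P = {"i","p1","p2","p3","p4","p5","p6","o"}
T = {"A","B","C","D","E","F","G"}
F = {("i","A"),("A","p1"),("p1","B"),("p1","F"),("B","p2"),("B","p3"),
     ("p2","C"),("p3","D"),("C","p4"),("D","p5"),("p4","E"),("p5","E"),
     ("E","p6"),("F","p6"),("p6","G"),("G","o")}

def reachable(start, arcs):
    seen, stack = {start}, [start]
    while stack:
        x = stack.pop()
        for (a, b) in arcs:
            if a == x and b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

def is_wf_net(P, T, F, i="i", o="o"):
    no_input = not any(b == i for (a, b) in F)    # object creation: •i = empty
    no_output = not any(a == o for (a, b) in F)   # object completion: o• = empty
    # connectedness: short-circuit with a fresh transition t_bar from o to i
    F_bar = F | {(o, "t_bar"), ("t_bar", i)}
    nodes_bar = P | T | {"t_bar"}
    rev = {(b, a) for (a, b) in F_bar}
    strongly_connected = (reachable(i, F_bar) == nodes_bar and
                          reachable(i, rev) == nodes_bar)
    return no_input and no_output and strongly_connected

print(is_wf_net(P, T, F))  # True
```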
3 Mining Untimed Workflow Logs
After introducing some preliminaries we return to the topic of this paper: workflow mining. The goal of workflow mining is to find a workflow model (e.g., a WF-net) on the basis of a workflow log. In this section we first introduce the α mining algorithm which works on untimed logs. In the next section we focus on timed logs.
3.1 Definition of Workflow Logs and Log-Based Ordering Relations
Table 1 shows an example of a timed workflow log. In this section we only consider the first two columns. Note that the ordering of events within a case is relevant while the ordering of events amongst cases is of no importance. Therefore, we define a workflow log as follows. Definition 3.1. (Workflow trace, Workflow log) Let T be a set of tasks. σ ∈ T∗ is a workflow trace and W ∈ P(T∗) is a workflow log.² The workflow trace of case 1 in Table 1 is AFG. The workflow log corresponding to Table 1 is {ABCDEG, ABDCEG, AFG}. Note that in this paper we abstract from the identity of cases. Clearly the identity and the attributes of a case are relevant for workflow mining. However, for the algorithm presented in this section, we can abstract from this. For similar reasons, we abstract from the
² P(T∗) is the powerset of T∗, i.e., W ⊆ T∗.
frequency of workflow traces. In Table 1 workflow trace AFG appears twice (case 1 and case 5), workflow trace ABCDEG also appears twice (case 2 and case 4), and workflow trace ABDCEG (case 3) appears only once. These frequencies are not registered in the workflow log {ABCDEG, ABDCEG, AFG}. Note that when dealing with noise, frequencies are of the utmost importance. However, in this paper we do not deal with issues such as noise. Therefore, this abstraction is made to simplify notation. To find a workflow model on the basis of a workflow log, the log should be analyzed for causal relations, e.g., if a task is always followed by another task it is likely that there is a causal relation between both tasks. To analyze these relations we introduce the following notations. Definition 3.2. (Log-based ordering relations) Let W be a workflow log over T, i.e., W ∈ P(T∗). Let a, b ∈ T: – a >W b if and only if there is a trace σ = t1 t2 t3 . . . tn−1 and i ∈ {1, . . . , n−2} such that σ ∈ W and ti = a and ti+1 = b, – a →W b if and only if a >W b and b ≯W a, – a #W b if and only if a ≯W b and b ≯W a, and – a ∥W b if and only if a >W b and b >W a. Consider the workflow log W = {ABCDEG, ABDCEG, AFG} (i.e., the log shown in Table 1). Relation >W describes which tasks appeared in sequence (one directly following the other). Clearly, A >W B, A >W F, B >W C, B >W D, C >W E, C >W D, D >W C, D >W E, E >W G, and F >W G. Relation →W can be computed from >W and is referred to as the causal relation derived from workflow log W. A →W B, A →W F, B →W C, B →W D, C →W E, D →W E, E →W G, and F →W G. Note that C →W D does not hold because D >W C. Relation ∥W suggests potential parallelism. For log W tasks C and D seem to be in parallel, i.e., C ∥W D and D ∥W C. If two tasks can follow each other directly in any order, then all possible interleavings are present and therefore they are likely to be in parallel. Relation #W gives pairs of transitions that never follow each other directly. This means that there are no direct causal relations and parallelism is unlikely. Property 3.3. Let W be a workflow log over T. For any a, b ∈ T: a →W b or b →W a or a #W b or a ∥W b. Moreover, the relations →W, →W⁻¹, #W, and ∥W are mutually exclusive and partition T × T.³ This property can easily be verified. Note that →W = >W \ >W⁻¹, →W⁻¹ = >W⁻¹ \ >W, #W = (T × T) \ (>W ∪ >W⁻¹), and ∥W = >W ∩ >W⁻¹. Therefore, T × T = →W ∪ →W⁻¹ ∪ #W ∪ ∥W. If no confusion is possible, the subscript W is omitted. To simplify the use of logs and sequences we introduce some additional notations.
Definition 3.4. (∈, first, last) Let A be a set, a ∈ A, and σ = a1 a2 . . . an ∈ A∗ a sequence over A of length n. ∈, first, and last are defined as follows:
³ →W⁻¹ is the inverse of relation →W, i.e., →W⁻¹ = {(y, x) ∈ T × T | x →W y}.
1. a ∈ σ if and only if a ∈ {a1, a2, . . . , an}, 2. first(σ) = a1, and 3. last(σ) = an. To reason about the quality of a workflow mining algorithm we need to make assumptions about the completeness of a log. For a complex process, a handful of traces will not suffice to discover the exact behavior of the process. Relations →W, →W⁻¹, #W, and ∥W will be crucial information for any workflow-mining algorithm. Since these relations can be derived from >W, we assume the log to be complete with respect to this relation. Definition 3.5. (Complete workflow log) Let N = (P, T, F) be a sound WF-net, i.e., N ∈ W. W is a workflow log of N if and only if W ∈ P(T∗) and every trace σ ∈ W is a firing sequence of N starting in state [i], i.e., (N, [i])[σ⟩. W is a complete workflow log of N if and only if (1) for any workflow log W′ of N: >W′ ⊆ >W, and (2) for any t ∈ T there is a σ ∈ W such that t ∈ σ. A workflow log of a sound WF-net only contains behaviors that can be exhibited by the corresponding process. A workflow log is complete if all tasks that potentially directly follow each other in fact directly follow each other in some trace in the log. Note that transitions that connect the input place i of a WF-net to its output place o are “invisible” for >W. Therefore, the second requirement has been added. If there are no such transitions, this requirement can be dropped.
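The relations of Definition 3.2 can be computed directly from the example log of Table 1; a small illustrative sketch:

```python
from itertools import product

# The ordering relations of Definition 3.2 for W = {ABCDEG, ABDCEG, AFG}.
W = ["ABCDEG", "ABDCEG", "AFG"]
tasks = {t for sigma in W for t in sigma}

# a >W b: a is directly followed by b in some trace
succ = {(s[i], s[i + 1]) for s in W for i in range(len(s) - 1)}

causal    = {(a, b) for (a, b) in succ if (b, a) not in succ}   # a ->W b
parallel  = {(a, b) for (a, b) in succ if (b, a) in succ}       # a ||W b
unrelated = {(a, b) for (a, b) in product(tasks, tasks)         # a  #W b
             if (a, b) not in succ and (b, a) not in succ}

print(sorted(causal))
print(sorted(parallel))  # C and D appear in both orders: potential parallelism
```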
3.2 Workflow Mining Algorithm
We now present an algorithm for mining processes. The algorithm uses the fact that for many WF-nets two tasks are connected if and only if their causality can be detected by inspecting the log. Definition 3.6. (Mining algorithm α) Let W be a workflow log over T. α(W) is defined as follows. 1. TW = {t ∈ T | ∃σ∈W t ∈ σ}, 2. TI = {t ∈ T | ∃σ∈W t = first(σ)}, 3. TO = {t ∈ T | ∃σ∈W t = last(σ)}, 4. XW = {(A, B) | A ⊆ TW ∧ B ⊆ TW ∧ ∀a∈A ∀b∈B a →W b ∧ ∀a1,a2∈A a1 #W a2 ∧ ∀b1,b2∈B b1 #W b2}, 5. YW = {(A, B) ∈ XW | ∀(A′,B′)∈XW A ⊆ A′ ∧ B ⊆ B′ =⇒ (A, B) = (A′, B′)}, 6. PW = {p(A,B) | (A, B) ∈ YW} ∪ {iW, oW}, 7. FW = {(a, p(A,B)) | (A, B) ∈ YW ∧ a ∈ A} ∪ {(p(A,B), b) | (A, B) ∈ YW ∧ b ∈ B} ∪ {(iW, t) | t ∈ TI} ∪ {(t, oW) | t ∈ TO}, and 8. α(W) = (PW, TW, FW).
The mining algorithm constructs a net (PW , TW , FW ). Clearly, the set of transitions TW can be derived by inspecting the log. In fact, if there are no traces of length one, TW can be derived from >W . Since it is possible to find all initial
transitions TI and all final transitions TO, it is easy to construct the connections between these transitions and iW and oW. Besides the source place iW and the sink place oW, places of the form p(A,B) are added. For such a place, the subscript refers to the set of input and output transitions, i.e., •p(A,B) = A and p(A,B)• = B. A place is added in-between a and b if and only if a →W b. However, some of these places need to be merged in case of OR-splits/joins rather than AND-splits/joins. For this purpose the relations XW and YW are constructed. (A, B) ∈ XW if there is a causal relation from each member of A to each member of B and the members of A and B never occur next to one another. Note that if a →W b, b →W a, or a ∥W b, then a and b cannot both be in A (or B). Relation YW is derived from XW by taking only the largest elements with respect to set inclusion. If the α algorithm is applied to the log shown in Table 1, then the WF-net shown in Figure 1 is discovered. In fact, the α algorithm will detect this WF-net from any complete workflow log of this workflow model. In [4] we prove the correctness of the mining algorithm for a large class of workflow processes. However, a precise description of this class and correctness proofs are beyond the scope of this paper because we focus on timed workflow logs. The interested reader is referred to [4].
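A compact, purely illustrative rendering of Definition 3.6; it enumerates subsets of TW exhaustively, which is only feasible for tiny logs such as the one of Table 1:

```python
from itertools import chain, combinations

# Illustrative alpha algorithm (Definition 3.6) on the log of Table 1.
W = ["ABCDEG", "ABDCEG", "AFG"]

T_W = {t for s in W for t in s}              # step 1
T_I = {s[0] for s in W}                      # step 2: initial transitions
T_O = {s[-1] for s in W}                     # step 3: final transitions

succ = {(s[i], s[i + 1]) for s in W for i in range(len(s) - 1)}
def causal(a, b):    return (a, b) in succ and (b, a) not in succ   # a ->W b
def unrelated(a, b): return (a, b) not in succ and (b, a) not in succ  # a #W b

def subsets(s):
    s = sorted(s)
    return chain.from_iterable(combinations(s, n) for n in range(1, len(s) + 1))

# Step 4: pairs (A, B) with full causality A -> B and internal #-relations.
X_W = [(set(A), set(B)) for A in subsets(T_W) for B in subsets(T_W)
       if all(causal(a, b) for a in A for b in B)
       and all(unrelated(a1, a2) for a1 in A for a2 in A)
       and all(unrelated(b1, b2) for b1 in B for b2 in B)]

# Step 5: keep only maximal pairs with respect to set inclusion.
Y_W = [(A, B) for (A, B) in X_W
       if not any(A <= A2 and B <= B2 and (A, B) != (A2, B2)
                  for (A2, B2) in X_W)]

# Steps 6-8: places, arcs, and the resulting net.
P_W = {("p", frozenset(A), frozenset(B)) for (A, B) in Y_W} | {"i_W", "o_W"}
F_W = ({(a, ("p", frozenset(A), frozenset(B))) for (A, B) in Y_W for a in A}
       | {(("p", frozenset(A), frozenset(B)), b) for (A, B) in Y_W for b in B}
       | {("i_W", t) for t in T_I}
       | {(t, "o_W") for t in T_O})

print(len(P_W), len(T_W), len(F_W))  # 8 7 16 -- the net of Figure 1
```

The discovered net has 8 places and 7 transitions, i.e., the WF-net of Figure 1 (up to place naming).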
4 Mining Timed Logs
The mining algorithm presented in the previous section ignores timing information. Therefore, we extend the algorithm to incorporate time information. Event logs typically add one timestamp to every line in the log. Therefore, we use the following definition. Definition 4.1. (Timed workflow trace, Timed workflow log) Let T be a set of tasks and D a time domain (e.g., D = {. . . , −2, −1, 0, 1, 2, . . .} or any totally ordered domain with >, +, and − defined on it). σ ∈ (T × D)∗ is a timed workflow trace and W ∈ B((T × D)∗) is a timed workflow log.⁴ Each line in the workflow log has a timestamp indicating at what time the corresponding event took place. A timed workflow trace is simply a sequence of such timed events. Note that a timed log is a bag of traces rather than a set of traces. In contrast to the untimed case (cf. Definition 3.1), we have to take into account the frequencies of traces. Without these frequencies we cannot calculate estimates for probabilities and averages because these depend on how many times a specific trace occurred. Since each line in the workflow log has a single timestamp and no duration attached to it, we will associate time to places, i.e., the firing of a transition is an atomic action and tokens spend time in places. The time tokens spend in places is referred to as sojourn time or holding time. We distinguish between two kinds of sojourn time: waiting time and synchronization time. The waiting
⁴ T × D is the Cartesian product of T and D.
time is the time that passes from the enabling of a transition until its firing. The synchronization time is the time that passes from the partial enabling of a transition (i.e., at least one input place marked) until full enabling (i.e., all input places are marked). Note that these times should be viewed from a token in a place, i.e., when a token arrives in a place p, the synchronization time is the time it takes to enable one of the output transitions in p• and the waiting time is the additional time it takes to fire the first transition in p•. Besides sojourn times we also want to analyze other metrics such as the probability of taking a specific path and the flow time (i.e., the time from the arrival of a case until its completion). To calculate sojourn times, probabilities, flow times, and other metrics we first apply the α algorithm and then replay the log in the resulting WF-net. For each case in the log, we have a timed workflow trace σ ∈ (T × D)∗. We know that the case starts in marking [i]. Therefore, we will start by putting a token with a timestamp equal to that of the first firing transition in place i. Then, one by one, each transition in the timed workflow trace may fire, thus collecting tokens from its input places and placing them in its output places. Every time a transition fires, the waiting and synchronization times will be calculated for the input places in the following way: (1) the maximum m of the timestamps of tokens in all input places is calculated, (2) if the transition has more than one input place, then for each place a “synchronization-time observation” will be added which will be the difference between the timestamp of the token in that place and m, and (3) for each of the input places a “waiting-time observation” is added. The latter observation is equal to the difference between m and the time the transition fires according to the log. This is repeated until the case reaches marking [o].
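The replay described above can be sketched as follows; the place names p1–p6 and the timestamps of the example case are our own, hypothetical choices (the paper does not reproduce raw per-case times beyond footnote 1):

```python
from collections import defaultdict

# Replay sketch (Section 4) on the net of Figure 1; place names are ours.
pre  = {"A": ["i"], "B": ["p1"], "F": ["p1"], "C": ["p2"], "D": ["p3"],
        "E": ["p4", "p5"], "G": ["p6"]}
post = {"A": ["p1"], "B": ["p2", "p3"], "F": ["p6"], "C": ["p4"],
        "D": ["p5"], "E": ["p6"], "G": ["o"]}

def replay(trace):
    """trace: list of (task, completion time). Returns per-place
    waiting-time and synchronization-time observations."""
    waiting, sync = defaultdict(list), defaultdict(list)
    tokens = {"i": trace[0][1]}   # token in i stamped with the first event time
    for task, time in trace:
        stamps = {p: tokens.pop(p) for p in pre[task]}   # consume tokens
        m = max(stamps.values())                         # moment of full enabling
        for p, stamp in stamps.items():
            if len(stamps) > 1:
                sync[p].append(m - stamp)                # step (2)
            waiting[p].append(time - m)                  # step (3)
        for p in post[task]:
            tokens[p] = time                             # produce tokens
    return waiting, sync

# Hypothetical case: B completes at 0, D at 50, C at 119, E at 130, G at 200.
w, s = replay([("A", 0), ("B", 0), ("D", 50), ("C", 119), ("E", 130), ("G", 200)])
print(w["p4"], s["p4"])  # [11] [0] -- C's token enabled E last, then waited 11
print(s["p5"])           # [69]    -- D's token waited 69 to synchronize
```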
This analysis is done for each case, resulting in a number of synchronization-time observations and waiting-time observations per place. Based on these observations, metrics such as the average, variance, maximum, and minimum synchronization/waiting time can be calculated. By replaying the log in the discovered WF-net, other metrics, such as routing probabilities and flow times, can also be calculated. For example, if a place has multiple output transitions, then the probability that a specific transition will be chosen equals the number of occurrences of that transition in the log, divided by the total number of occurrences of all the enabled transitions. To conclude this section, we point out legal issues relevant when mining timed workflow logs. Clearly, timed workflow logs can be used to systematically measure the performance of employees. The legislation with respect to issues such as privacy and protection of personal data differs from country to country. For example, Dutch companies are bound by the Personal Data Protection Act (Wet Bescherming Persoonsgegevens), which is based on a directive from the European Union. The practical implications of this for the Dutch situation are described in [6, 16, 24]. Timed workflow logs are not restricted by these laws as long as the information in the log cannot be traced back to individuals. If information in the log can be traced back to a specific employee, it is important that the employee is aware of the fact that her/his activities are logged and the fact that this logging
is used to monitor her/his performance. Note that in the timed workflow log as defined in Definition 4.1 there is no information about the workers executing tasks. Therefore, it is not possible to distill information on the productivity of individual workers and legislation such as the Personal Data Protection Act does not apply. Nevertheless, the logs of most workflow systems contain information about individual workers, and therefore, this issue should be considered carefully.
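The routing-probability rule of Section 4 can be illustrated on the five cases of Table 1: at the place following A, task B is chosen three times (cases 2, 3, and 4) and F twice (cases 1 and 5):

```python
# Routing probability at the choice place after A: the probability of a
# transition equals its number of occurrences divided by the total
# occurrences of all transitions enabled at that place. The timed log is
# a bag of traces, so case frequencies count.
log = ["AFG", "ABCDEG", "ABDCEG", "ABCDEG", "AFG"]   # cases 1-5 of Table 1

choices = {"B": 0, "F": 0}            # output transitions of A's output place
for trace in log:
    follower = trace[trace.index("A") + 1]
    choices[follower] += 1

total = sum(choices.values())
print({t: n / total for t, n in choices.items()})  # {'B': 0.6, 'F': 0.4}
```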
5 EMiT: A Tool for Mining Timed Workflow Logs
This section introduces our tool EMiT (Enhanced Mining Tool). EMiT has been developed to mine timed workflow logs from a range of transactional systems, including workflow management systems such as Staffware and ERP systems such as SAP.
[Figure 3 shows a pipeline: transactional information systems feed product-specific translators (e.g., for Staffware) that produce the XML timed workflow log format; the (untimed) mining algorithm derives a WF-net; a "collect statistics" component generates DOT files (graphical WF-net) and HTML files (performance indicators), which a report generator serves via a web server.]
Fig. 3. The architecture of EMiT.
Figure 3 shows the architecture of EMiT. The mining starts from a tool-independent XML format. From any transactional information system recording event logs, we can export to this XML format. The DTD describing this format is as follows:
Note that the XML file not only contains timed workflow traces as defined in Definition 4.1 but also information about the source of the information and the type of event recorded. This information can be used to filter and extract additional knowledge. Using the α algorithm, EMiT constructs a WF-net without considering timing information. Then the component “collect statistics” replays the timed traces in the discovered WF-net and outputs both HTML files and DOT files. EMiT exports WF-nets to the .DOT format to visualize the discovered model and performance indicators. There are two ways to view results. First, it is possible to generate a static report containing the graphical model and all performance indicators: probabilities, sojourn times (average, variance, minimum, and maximum), synchronization times, waiting times, etc. A more sophisticated way to view the results is through a combination of HTML, JPG, and MAP files. This requires the use of a web server, but allows for clickable models and a dynamic hypertext-like report.
Fig. 4. EMiT screenshot.
EMiT has been developed using Delphi and provides an easy-to-use graphical user interface. Figure 4 shows one of the screens of EMiT while analyzing the log shown in Table 1. EMiT indeed discovers the correct sojourn times as indicated in Figure 2.
6 Application: Mining Staffware Logs
Although EMiT and the underlying analysis routines are tool-independent, we focus on a concrete system to illustrate the applicability of the results presented in this paper. Staffware [26] is one of the leading workflow management systems. We have developed a translator from Staffware audit trails to the XML format described in the previous section. Staffware records the completion of each task in the log. (Note that in Staffware tasks are named steps.) However, it does not record the start of the execution of a task. Instead it records the scheduling of tasks. In the EMiT XML file, completion events are distinguished from schedule events. Other events recorded by Staffware and stored in the XML format are withdraw, suspend, and resume events. Using different profiles, EMiT either ignores or incorporates the various events.
Fig. 5. Staffware process model.
We have tested EMiT on a wide variety of Staffware models. These tests demonstrate the applicability of the α algorithm extended with time. An example is shown in Figure 5. Note that in Staffware each step (i.e., task) is an OR-join/AND-split, and conditions (diamond symbol) and waits (sand-timer symbol) have been added to model respectively OR-splits and AND-joins. The workflow shown in Figure 5 starts with TASKA, followed by TASKD or TASKE,
TASKB followed by TASKF, and TASKC in parallel, and ends with TASKG. We have handled several cases using the Staffware model shown in Figure 5. By collecting the audit trails of this model and feeding these to EMiT, we obtained the WF-net shown in Figure 6. It is easy to verify that this WF-net indeed corresponds to the Staffware model of Figure 5. Figure 6 does not show metrics such as waiting times, etc. However, by clicking on the places one can obtain detailed information about these performance indicators. Examples such as shown in Figures 5 and 6 demonstrate the validity and applicability of our approach. It should be noted that it is not very useful to mine Staffware logs for discovering pre-specified workflow models. It is much more interesting to mine process models in the situation where the underlying model is unknown. However, even in the situation where the workflow specification is already available, it is interesting to compare the specified model with the discovered model. For example, it is useful to detect deviations between the actual workflow and the specified workflow. Moreover, EMiT also attributes performance indicators to a graphical representation of the real workflow. Clearly these features are not supported by contemporary workflow management systems.
[Figure 6 shows the mined WF-net, with transitions labeled by task and event type (e.g., TASKB complete, starttask normal, TerminationTask normal) for TASKA through TASKH, and routing probabilities (0.5000/0.5000) attached to a choice place.]
Fig. 6. The resulting model.
7 Related Work
The idea of process mining is not new [5, 7–9, 12–15, 21, 25]. Cook and Wolf have investigated similar issues in the context of software engineering processes. In [7] they describe three methods for process discovery: one using neural networks, one using a purely algorithmic approach, and one Markovian approach. The authors consider the latter two the most promising approaches. The purely algorithmic approach builds a finite state machine where states are fused if their futures (in terms of possible behavior in the next k steps) are identical. The Markovian approach uses a mixture of algorithmic and statistical methods and is able to deal with noise. Note that the results presented in [7] are limited to sequential behavior. Cook and Wolf extend their work to concurrent processes in [8]. They propose specific metrics (entropy, event type counts, periodicity, and causality) and use these metrics to discover models out of event streams. However, they do not provide an approach to generate explicit process models. Recall that the final
goal of the approach presented in this paper is to find explicit representations for a broad range of process models, i.e., we want to be able to generate a concrete Petri net rather than a set of dependency relations between events. In [9] Cook and Wolf provide a measure to quantify discrepancies between a process model and the actual behavior as registered using event-based data. The idea of applying process mining in the context of workflow management was first introduced in [5]. This work is based on workflow graphs, which are inspired by workflow products such as IBM MQSeries Workflow (formerly known as Flowmark) and InConcert. In [5], two problems are defined. The first problem is to find a workflow graph generating the events appearing in a given workflow log. The second problem is to find the definitions of edge conditions. A concrete algorithm is given for tackling the first problem. That approach is quite different from ours. Given the nature of workflow graphs there is no need to identify the nature (AND or OR) of joins and splits. Moreover, workflow graphs are acyclic. The only way to deal with iteration is to enumerate all occurrences of a given activity. In [21], a tool based on these algorithms is presented. Schimm [25] has developed a mining tool suitable for discovering hierarchically structured workflow processes. This requires all splits and joins to be balanced. Herbst and Karagiannis also address the issue of process mining in the context of workflow management [12–15]. The approach uses the ADONIS modeling language and is based on hidden Markov models where models are merged and split in order to discover the underlying process. The work presented in [12, 14, 15] is limited to sequential models. A notable difference with other approaches is that the same activity can appear multiple times in the workflow model.
The result in [13] incorporates concurrency but also assumes that workflow logs contain explicit causal information. The latter technique is similar to [5, 21] and suffers from the drawback that the nature of splits and joins (i.e., AND or OR) is not discovered. In contrast to existing work, we addressed workflow processes with concurrent behavior right from the start (rather than adding ad-hoc mechanisms to capture parallelism), i.e., detecting concurrency is the prime concern of the α algorithm. Moreover, we focus on the mining of timed workflow logs to derive performance indicators such as sojourn times, probabilities, etc. Some preliminary results for untimed logs have been reported in [4, 20, 28, 29]. In [28, 29] a heuristic approach using rather simple metrics is used to construct so-called “dependency/frequency tables” and “dependency/frequency graphs”. In [20] another variant of this technique is presented using examples from the health-care domain. The preliminary results presented in [20, 28, 29] only provide heuristics and focus on issues such as noise. The approach described in [4] differs from these approaches in the sense that for the α algorithm it is proven that, for certain subclasses, it is possible to find the right workflow model. This paper builds on [4, 20, 28, 29]. The main contribution of this paper, compared to earlier work, is the incorporation of time, practical experience with systems such as Staffware, and the introduction of EMiT.
8 Conclusion
This paper presented an approach to extract both a workflow model and performance indicators from timed workflow logs. The approach is supported by the EMiT tool, also presented in this paper, and has been validated using logs of transactional information systems such as Staffware. It is important to see the results presented in this paper in the context of a larger effort [4, 20, 28, 29]. The overall goal is to be able to analyze any workflow log without any knowledge of the underlying process and in the presence of noise. At this point in time, we are applying our workflow mining techniques to two applications. The first application is in health-care, where the flow of multidisciplinary patients is analyzed. We have analyzed workflow logs (visits to different specialists) of patients with peripheral arterial vascular diseases of the Elizabeth Hospital in Tilburg and the Academic Hospital in Maastricht. Patients with peripheral arterial vascular diseases are a typical example of multidisciplinary patients. The second application concerns the processing of fines by the CJIB (Centraal Justitieel Incasso Bureau), the Dutch Judicial Collection Agency located in Leeuwarden. For example, fines with respect to traffic violations are processed by the CJIB. However, this government agency also takes care of the collection of administrative fines related to crimes, etc. Through workflow mining we try to gain insight into the life cycle of, for example, speeding tickets. Some preliminary results show that it is very difficult to mine the flow of multidisciplinary patients, given the large number of exceptions, incomplete data, etc. However, it is relatively easy to mine well-structured administrative processes such as those within the CJIB. In both applications we are also trying to take attributes of the cases being processed into account. In this way we hope to find correlations between properties of a case and its route through the workflow process.
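As an illustration of the kind of performance indicator discussed above, the following sketch (our own illustration, not the EMiT tool) computes the average service time of an activity from a timed log. The log format, activity names, and timestamps are assumptions.

```python
from datetime import datetime
from statistics import mean

# Hypothetical timed log: (case id, activity, start, complete) records.
log = [
    ("c1", "register", "2002-01-01 09:00", "2002-01-01 09:10"),
    ("c1", "decide",   "2002-01-01 09:30", "2002-01-01 09:50"),
    ("c2", "register", "2002-01-01 10:00", "2002-01-01 10:05"),
    ("c2", "decide",   "2002-01-01 10:20", "2002-01-01 10:45"),
]

def avg_service_time(records, activity):
    """Average time between start and completion of an activity, in minutes."""
    fmt = "%Y-%m-%d %H:%M"
    durations = [
        (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60
        for _, act, start, end in records
        if act == activity
    ]
    return mean(durations)

assert avg_service_time(log, "register") == 7.5   # (10 + 5) / 2 minutes
```

Sojourn times and routing probabilities can be derived from the same records by additionally grouping per case and per branch.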
Acknowledgements The authors would like to thank Eric Verbeek, Ton Weijters, and Laura Maruster for contributing to this work.
References

1. W.M.P. van der Aalst. The Application of Petri Nets to Workflow Management. The Journal of Circuits, Systems and Computers, 8(1):21–66, 1998.
2. W.M.P. van der Aalst, J. Desel, and A. Oberweis, editors. Business Process Management: Models, Techniques, and Empirical Studies, volume 1806 of Lecture Notes in Computer Science. Springer-Verlag, Berlin, 2000.
3. W.M.P. van der Aalst and K.M. van Hee. Workflow Management: Models, Methods, and Systems. MIT Press, Cambridge, MA, 2002.
4. W.M.P. van der Aalst, A.J.M.M. Weijters, and L. Maruster. Workflow Mining: Which Processes can be Rediscovered? BETA Working Paper Series, WP 74, Eindhoven University of Technology, Eindhoven, 2002.
5. R. Agrawal, D. Gunopulos, and F. Leymann. Mining Process Models from Workflow Logs. In Sixth International Conference on Extending Database Technology, pages 469–483, 1998.
W.M.P. van der Aalst and B.F. van Dongen
6. College Bescherming Persoonsgegevens (CBP; Dutch Data Protection Authority). http://www.cbpweb.nl/index.htm.
7. J.E. Cook and A.L. Wolf. Discovering Models of Software Processes from Event-Based Data. ACM Transactions on Software Engineering and Methodology, 7(3):215–249, 1998.
8. J.E. Cook and A.L. Wolf. Event-Based Detection of Concurrency. In Proceedings of the Sixth International Symposium on the Foundations of Software Engineering (FSE-6), pages 35–45, 1998.
9. J.E. Cook and A.L. Wolf. Software Process Validation: Quantitatively Measuring the Correspondence of a Process to a Model. ACM Transactions on Software Engineering and Methodology, 8(2):147–176, 1999.
10. J. Desel and J. Esparza. Free Choice Petri Nets, volume 40 of Cambridge Tracts in Theoretical Computer Science. Cambridge University Press, Cambridge, UK, 1995.
11. L. Fischer, editor. Workflow Handbook 2001, Workflow Management Coalition. Future Strategies, Lighthouse Point, Florida, 2001.
12. J. Herbst. A Machine Learning Approach to Workflow Management. In Proceedings 11th European Conference on Machine Learning, volume 1810 of Lecture Notes in Computer Science, pages 183–194. Springer-Verlag, Berlin, 2000.
13. J. Herbst. Dealing with Concurrency in Workflow Induction. In U. Baake, R. Zobel, and M. Al-Akaidi, editors, European Concurrent Engineering Conference. SCS Europe, 2000.
14. J. Herbst and D. Karagiannis. An Inductive Approach to the Acquisition and Adaptation of Workflow Models. In M. Ibrahim and B. Drabble, editors, Proceedings of the IJCAI'99 Workshop on Intelligent Workflow and Process Management: The New Frontier for AI in Business, pages 52–57, Stockholm, Sweden, August 1999.
15. J. Herbst and D. Karagiannis. Integrating Machine Learning and Workflow Management to Support Acquisition and Adaptation of Workflow Models. International Journal of Intelligent Systems in Accounting, Finance and Management, 9:67–92, 2000.
16. B.J.P. Hulsman and P.C. Ippel.
Personeelsinformatiesystemen: De Wet Persoonsregistraties toegepast. Registratiekamer, The Hague, 1994.
17. S. Jablonski and C. Bussler. Workflow Management: Modeling Concepts, Architecture, and Implementation. International Thomson Computer Press, London, UK, 1996.
18. F. Leymann and D. Roller. Production Workflow: Concepts and Techniques. Prentice-Hall PTR, Upper Saddle River, New Jersey, USA, 1999.
19. D.C. Marinescu. Internet-Based Workflow Management: Towards a Semantic Web, volume 40 of Wiley Series on Parallel and Distributed Computing. Wiley-Interscience, New York, 2002.
20. L. Maruster, W.M.P. van der Aalst, A.J.M.M. Weijters, A. van den Bosch, and W. Daelemans. Automated Discovery of Workflow Models from Hospital Data. In B. Kröse, M. de Rijke, G. Schreiber, and M. van Someren, editors, Proceedings of the 13th Belgium-Netherlands Conference on Artificial Intelligence (BNAIC 2001), pages 183–190, 2001.
21. M.K. Maxeiner, K. Küspert, and F. Leymann. Data Mining von Workflow-Protokollen zur teilautomatisierten Konstruktion von Prozeßmodellen. In Proceedings of Datenbanksysteme in Büro, Technik und Wissenschaft, pages 75–84. Informatik Aktuell, Springer, Berlin, Germany, 2001.
22. T. Murata. Petri Nets: Properties, Analysis and Applications. Proceedings of the IEEE, 77(4):541–580, April 1989.
23. W. Reisig and G. Rozenberg, editors. Lectures on Petri Nets I: Basic Models, volume 1491 of Lecture Notes in Computer Science. Springer-Verlag, Berlin, 1998.
24. L.B. Sauerwein and J.J. Linnemann. Guidelines for Personal Data Processors: Personal Data Protection Act. Ministry of Justice, The Hague, 2001.
25. G. Schimm. Process Mining. http://www.processmining.de/.
26. Staffware. Staffware 2000 / GWD User Manual. Staffware plc, Berkshire, United Kingdom, 1999.
27. H.M.W. Verbeek, T. Basten, and W.M.P. van der Aalst. Diagnosing Workflow Processes using Woflan. The Computer Journal, 44(4):246–279, 2001.
28. A.J.M.M. Weijters and W.M.P. van der Aalst. Process Mining: Discovering Workflow Models from Event-Based Data. In B. Kröse, M. de Rijke, G. Schreiber, and M. van Someren, editors, Proceedings of the 13th Belgium-Netherlands Conference on Artificial Intelligence (BNAIC 2001), pages 283–290, 2001.
29. A.J.M.M. Weijters and W.M.P. van der Aalst. Rediscovering Workflow Models from Event-Based Data. In V. Hoste and G. de Pauw, editors, Proceedings of the 11th Dutch-Belgian Conference on Machine Learning (Benelearn 2001), pages 93–100, 2001.
Performance Equivalent Analysis of Workflow Systems Based on Stochastic Petri Net Models

LIN Chuang, QU [...]

[...] is presented as follows.

Definition. A pre-dispatching workflow model PreW is a two-tuple PreW = (TS, D), where TS is a set of tasks, TS = {T1, T2, ..., Tn}, n ∈ N (N is a natural number), and D is the bi-relation among tasks.
Jianxun Liu et al.
Definition 3. The task layer numbers are assigned starting from the end, but task "E" does not have a layer number because it is a dummy task. Therefore, in Fig. 1, the task in the bottom layer, "Getting Medicine", is defined to be in layer 1, and the layer number of "Cashier" is 2. In this way, each task can be assigned a layer number, layer by layer. The higher layer number should be chosen if there are conflicts when deciding layer numbers [12]. For example, when a task T has more than one immediate successor (in Fig. 1, "Cashier" is the immediate successor of "Diagnosis"), the layer number of T is assigned the maximal layer number among these immediate successors plus 1. Definition 4. A task T is the atomic scheduling unit of a workflow engine. T = (Id, Pre-trigger, Layer, Stride, KB, ted, tec, …), where Id is the identification of the task; Pre-trigger represents the triggering logic (pre-conditions) of the task; Layer is the task layer number defined in Definition 3; Stride represents how far (in layers) this task may be from its running predecessors when its pre-dispatching computation is done. For example, Stride = 2 means that when the task with layer number equal to T.Layer + 2 is running, T's pre-dispatching computation should be done. KB denotes the knowledge or guidelines for the actor on how to do this task; ted denotes the estimated duration of the task; tec represents the estimated time point when the task can be activated, i.e., the time point when all its pre-conditions are satisfied. The other attributes of a task, such as roles, are all left out of consideration in this paper. Table 1. Pre-triggers for tasks in Fig. 1
Registration:      ON Event S DO Action ST("Registration")
Diagnosis:         ON Event END("Registration") DO Action ST("Diagnosis")
Cashier:           ON Event END("Diagnosis") DO Action ST("Cashier")
Getting Medicine:  ON Event END("Cashier") DO Action ST("Getting Medicine")
                   ON Event Released("Diagnosis", "Prescription") DO Action PreD("Getting Medicine")
* Event S means the event of starting the process, END(T) denotes the event of the completion of task T, Released(T,x) denotes task T has released information x. Action ST(T) means start to execute T and PreD(T) means pre-dispatching T.
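The pre-triggers of Table 1 can be read as data for a tiny rule engine. The sketch below is our illustrative rendering, not the paper's implementation; the event strings and dispatch function are assumptions.

```python
# Each rule: (event, action, task) — a minimal ECA reading of Table 1,
# with the condition part omitted (all conditions taken as true).
rules = [
    ("S",                                 "ST",   "Registration"),
    ("END(Registration)",                 "ST",   "Diagnosis"),
    ("END(Diagnosis)",                    "ST",   "Cashier"),
    ("END(Cashier)",                      "ST",   "Getting Medicine"),
    ("Released(Diagnosis, Prescription)", "PreD", "Getting Medicine"),
]

def fire(event):
    """Return the (action, task) pairs triggered by an occurring event."""
    return [(action, task) for e, action, task in rules if e == event]

assert fire("END(Diagnosis)") == [("ST", "Cashier")]
assert fire("Released(Diagnosis, Prescription)") == [("PreD", "Getting Medicine")]
```

Note how the pre-dispatch of "Getting Medicine" is driven by the Released event, before END("Cashier") ever occurs — this is the overlap the paper aims for.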
An Agent Enhanced Framework to Support Pre-dispatching of Tasks

Though a task is an atomic scheduling unit of the workflow, from the actor's point of view it can still be divided into many steps. For example, the task "Getting Medicine" can be divided into two steps: "Collecting Medicine" and "Giving to the Patient". A question may arise here: why not further divide such a task into several tiny tasks while modeling, for instance into two tasks, to achieve such concurrency? The reasons are that 1) the integrity of a task may be destroyed and the process will become more complex, which makes management more difficult; and 2) it is not easy to model the physical and social world around us well, because the modeling of a business process depends on the workflow designer's policy, knowledge level, and workflow system environment [3].

Definition 5. A pre-trigger is a set of ECA rules which determine the state transitions and actions of a task instance, such as when the task should be started or pre-dispatched.

Definition 6. Each ECA rule r is a triple, r = (e, C, A), where e is the event; C is the set of conditions, which represent the different situations in the system environment; A is a set of actions or operations. The formal representation of an ECA rule is as follows:

Rule (rule_name) ON Event (Event_name) WITH (condition_expression) DO (ActionSet)

For instance, the pre-triggers for the tasks in Fig. 1 are shown in Table 1. In the table, only the task "Getting Medicine" has set up a trigger to pre-dispatch itself. We assume that the event END(T) implies that the information generated by T has been released. To support task pre-dispatching, a new state, Pre-dispatched, is added to the state set of a task instance defined by the WfMC [2]. The states are as follows:

- Waiting: the task within the process instance has been created but has not yet been activated (because the task entry conditions have not been met) and has no workitem for processing.
- Pre-dispatched: a workitem has been allocated. The workers can get information about the task and prepare for it. However, the task entrance condition is not satisfied and the task cannot be finished unless it turns into the Ready state. For example, in the clinic management process, a patient cannot take his medicine if he has not paid, even though the medicines have already been collected from the cabinets by the warehouseman.
- Ready: the task entrance condition is satisfied, and a workitem is allocated.
- Running: a workitem has been created and the task instance is being processed.
- Completed: execution of the task instance has completed (and the entrance conditions of its successors will be evaluated).

The transition of the states is described in Fig. 3. When a task is instantiated, it goes to the Waiting state (no workitem allocated). At this time, if the task is pre-dispatched, its state is changed into the Pre-dispatched state (a workitem has been allocated, and some steps can be processed). When all the pre-conditions of the task are satisfied, the state is changed from the Pre-dispatched or Waiting state to the Ready state, and the task instance waits for an actor to do it. As soon as the actor begins the job, it goes to the Running state. After the completion of the task, it goes to the Completed state. Each time a state changes, an event related to this change occurs.
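The life cycle above can be sketched as a small transition table. Only the five states come from the text; the event names are our assumptions.

```python
# Transition table for the task-instance life cycle with the added
# Pre-dispatched state. Keys are (state, event) pairs.
TRANSITIONS = {
    ("Waiting",        "pre_dispatch"):   "Pre-dispatched",
    ("Waiting",        "conditions_met"): "Ready",
    ("Pre-dispatched", "conditions_met"): "Ready",
    ("Ready",          "start_work"):     "Running",
    ("Running",        "finish"):         "Completed",
}

def step(state, event):
    """Apply one life-cycle event; illegal transitions raise an error."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"no transition for {event!r} in state {state!r}")

state = "Waiting"
for event in ("pre_dispatch", "conditions_met", "start_work", "finish"):
    state = step(state, event)
assert state == "Completed"
```

The table makes the key property explicit: Pre-dispatched is a detour between Waiting and Ready, so a task that is never pre-dispatched follows the ordinary WfMC life cycle unchanged.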
Fig. 3. State transitions for task instances with pre-dispatching (states: Waiting, Pre-dispatched, Ready, Running, Completed)
Definition 7. The state set of a task is S = {"Waiting", "Ready", "Pre-dispatched", "Running", "Completed"}.
Fig. 4. An example workflow process (tasks connected by causality, iteration, AND-split/AND-join, and OR-split/OR-join constructs)
Definition 8. The dependency constraint D among tasks is a bi-relation on TS: D ⊆ TS × TS. For every d_ij, <Ti, Tj> ∈ D, d_ij is the probability of task Tj being the immediate successor of task Ti: d_ij = 1 represents that Tj is the immediate successor of Ti, i.e., after the completion of Ti, Tj will be executed; d_ij = 0 represents that Tj is not the immediate

Fig. 5. Dependency constraint matrix
// Scheduling algorithm for task pre-dispatching
Begin
    STS = GetDownstreamTaskSet(Tcur, Tcur.Layer);  // obtain the downstream task set of Tcur
    for each task T ∈ STS
        if Tcur.Layer - T.Layer <= T.Stride then
            if the pre-dispatching conditions in T.Pre-trigger are satisfied then
                PreD(T);
End;

// GetDownstreamTaskSet(x, l): recursively collect the certain successors of task x
Begin
    dss = ∅;
    if l > 0 then
    Begin
        for j = 1 to n
            if d[x,j] = 1 then                     // d[x,j] denotes d_xj
                dss = dss ∪ GetDownstreamTaskSet(j, l-1);
    End
    else
        dss = dss ∪ {x};                           // add task x to the task set
    Return dss;
End;
Fig. 6. Scheduling algorithm for task pre-dispatching
successor of Ti; d_ij = p denotes that after the completion of Ti, the probability of executing Tj is p. Under this situation, there exist many possible choices (branches) after Ti, and as the process continues, d_ij will be changed according to the conditions and environment. When d_ij finally becomes 1, it is certain that Tj will be executed after the completion of Ti. Following this method, it is very convenient to predict the direction of branches in a process. For the workflow process in Fig. 4, the dependency constraint D is presented as a matrix, shown in Fig. 5; in this matrix a blank means zero, and the probability p of each OR branch is assumed to be 0.5. It is easy to obtain the immediate successors of Ti by examining all the elements in the row of the matrix to which Ti belongs.

Implementation Considerations

In the workflow engine, we can use the scheduling algorithm shown in Fig. 6 to pre-dispatch tasks according to the proposed model. This scheduling algorithm is triggered whenever a task is started. The algorithm first obtains the downstream
task (successor) set of the current task Tcur. Then for each task T in its downstream task set, it first determines whether the distance between Tcur and T, Tcur.Layer - T.Layer, is within T.Stride. If it is, it then determines whether the pre-dispatching conditions in T.Pre-trigger are satisfied. If they are, T is pre-dispatched and the estimated time point T.tec is set; please refer to [13] for how to estimate this time point. GetDownstreamTaskSet(x, l), where parameter x represents the task number, is a recursive function that searches the successors of a task according to the dependency matrix D. Only when d_ij = 1 (i.e., Tj is certain to be executed immediately after Ti) can Tj be added to the task set. The action PreD(T) updates states as follows: if T is in the Waiting state, its state is simply set to the Pre-dispatched state. If T has not been instantiated, it is first instantiated and turned into the Waiting state, and then set to the Pre-dispatched state.
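A simplified Python rendering of this scheduling idea (our interpretation, not the exact algorithm of Fig. 6): collect the tasks that are certain to follow the just-started task via the dependency matrix, then keep those whose layer distance is within their Stride. The matrix, layers, and strides below are made-up illustrative data.

```python
def downstream_task_set(d, x, depth):
    """Collect tasks certain to follow x (d[x][j] == 1), up to `depth` layers."""
    if depth <= 0:
        return set()
    dss = set()
    for j, dep in enumerate(d[x]):
        if dep == 1:
            dss |= {j} | downstream_task_set(d, j, depth - 1)
    return dss

def predispatch_candidates(d, layer, stride, cur):
    """Downstream tasks of cur whose layer distance is within their Stride."""
    return {t for t in downstream_task_set(d, cur, layer[cur])
            if layer[cur] - layer[t] <= stride[t]}

# Tiny linear process T0 -> T1 -> T2; layers counted from the end.
d = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]
layer = {0: 3, 1: 2, 2: 1}
stride = {1: 0, 2: 2}   # T2 allows pre-dispatching two layers ahead
assert predispatch_candidates(d, layer, stride, 0) == {2}
```

Here T1 is excluded (its Stride of 0 forbids early dispatch) while T2 qualifies; in the full scheme its pre-trigger conditions would then be checked before calling PreD.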
4 Conclusion and Further Study

So far, workflow has become a leading tool for modeling enterprise business rules, taking advantage of the continuous advancement of information technology [8]. Workflow also has a strong temporal aspect, and recently some researchers have paid attention to it, addressing issues such as reducing workflow instance duration, improving the efficiency of the WfMS, and time constraints. This paper introduces the concept of task pre-dispatching into the WfMS in order to improve its efficiency. The idea proposed here is based on the fact that it is possible to start a task even when only part of its pre-conditions are satisfied. Thus, some overlapping execution of tasks in a workflow process instance can be achieved; the whole life cycle of the process can be shortened and efficiency improved. A formalized workflow model which supports the idea is presented. With a multi-agent enhanced WfMS architecture, it is possible to make the pre-dispatching mechanism run smoothly without leading to errors, because the SA keeps the knowledge of what can and cannot be done while a task is pre-dispatched. Some extra benefits, such as cooperation between actors, can be achieved through this agent-enhanced architecture. With the pre-dispatching mechanism, even in the worst case, i.e., when overlapping execution cannot be achieved, it can still act as a messenger to inform the actor when a task will arrive. However, many research questions remain, such as optimization and exception handling. For example, as in Fig. 1, when the medicine warehouseman has collected the medicine but the patient changes his/her mind and leaves the hospital without "Cashier", an exception occurs, which may result in some losses, and a rollback of the task is needed. These questions should be studied further.
Acknowledgements This work was supported by the China National Science Foundation under grants No. 60073035 and 69974031, and by the China Super Science and Technology Plan 863/CIMS under grants No. 2001AA415310 and 2001AA412010.
References

1. Lee, M.K., Han, D.S. and Shim, J.Y. Set-based access conflict analysis of concurrent workflow definition. Information Processing Letters, 80 (2001): 189-194.
2. WfMC. Workflow Reference Model. http://www.wfmc.org/standards/docs. Jan 1995.
3. Son, J.H. and Kim, M.H. Improving the Performance of Time-Constrained Workflow Processing. The Journal of Systems and Software, 58 (2001): 211-219.
4. Chinn, S.J. and Madey, G.R. Temporal Representation and Reasoning for Workflow in Engineering Design Change Review. IEEE Transactions on Engineering Management, Vol. 47(4), 2000, pp. 485-492.
5. Pozewaunig, H., Eder, J. and Liebhart, W. ePERT: extending PERT for workflow management systems. In: The 1st European Symposium in ADBIS, Vol. 1, 1997: 217-224.
6. Panagos, E. and Rabinovich, M. Reducing escalation-related costs in WFMSs. In: NATO Advanced Study Institute on Workflow Management Systems and Interoperability, 1997: 106-128.
7. Hai, Z.G., Cheung, T.Y. and Pung, H.K. A timed workflow process model. The Journal of Systems and Software, 55 (2001): 231-243.
8. Avigdor, G. and Danilo, M. Inter-Enterprise workflow management systems. 10th International Workshop on Database & Expert Systems Applications, Florence, Italy, 1-3 September 1999: 623-627.
9. Applequist, G., Samikoglu, O., Pekny, J. and Reklaitis, G. Issues in the use, design and evolution of process scheduling and planning systems. ISA Transactions, Vol. 36(2), 1997, pp. 81-121.
10. Vanderwiel, S.P. and Lilja, D.J. Data Prefetch Mechanisms. ACM Computing Surveys, Vol. 32, No. 2, June 2000, pp. 174-199.
11. Jiang, Z.M. and Kleinrock, L. An Adaptive Network Prefetch Scheme. IEEE Journal on Selected Areas in Communications, Vol. 16(3), 1998, pp. 358-368.
12. Yan, J.H. and Wu, C. Scheduling Approach for Concurrent Product Development Processes. Computers in Industry, 46 (2001): 139-147.
13. Eder, J., Panagos, E., Pozewaunig, H. et al. Time management in workflow systems. 3rd Int. Conf. on Business Information Systems, pp. 265-280, invited paper, April 1999.
TEMPPLET: A New Method for Domain-Specific Ontology Design* Ying Dong and Mingshu Li Laboratory of Computer Science Institute of Software, Chinese Academy of Sciences
[email protected]

Abstract. Ontology is becoming a basic and important issue for knowledge management and semantic web applications. However, domain-specific ontology design is always difficult. On the one hand, the ontology has to fit the standards of the domain to ensure interoperability; on the other hand, it has to be applicable to individual organizations. To solve this problem, in this paper we provide a new method for domain-specific ontology design, "Tempplet", combining the two words "template" and "applet". The two steps of the method are: first, follow a "template ontology" of the domain; then fit an "applet ontology" into the chosen template when customizing. Moreover, a formal model is provided to answer whether there exists an applet ontology that both satisfies the constraints and conforms to the template ontology. Two cases of ontology design for different domains are given to illustrate how to apply the method in practice.

Keywords: ontology, domain, template, applet
1. Introduction More recently, the notion of ontology has attracted attention from fields such as intelligent information integration, cooperative information systems, information retrieval, electronic commerce, and knowledge management. Ontologies were developed in Artificial Intelligence to facilitate knowledge sharing and reuse. [1] The main purpose of an ontology is to enable communication between computer systems in a way that is independent of the individual system technologies, information architectures and application domain. Since ontology is becoming a base for
* Supported by the National High Technology Research and Development Program (863 Program) (No. 2001AA113180).
Y. Han, S. Tai, and D. Wikarski (Eds.): EDCIS 2002, LNCS 2480, pp. 90-103, 2002. © Springer-Verlag Berlin Heidelberg 2002
knowledge management and semantic web applications in the future, the first step in devising an effective knowledge representation system is to perform an effective ontological analysis of the field, or domain [2]. Domain-specific ontology building is always difficult. On the one hand, although it is useful to strive for the adoption of a single common domain-specific standard for content and transactions, this is often difficult to achieve, particularly in cross-industry initiatives, where companies cooperate and compete with one another. Commercial practices vary widely and cannot always be aligned, for a variety of technical, practical, organizational and political reasons. The complexity of describing organizations, their products and services, separately and in combination, and the interactions between them is considerable. Adoption of a single common standard can place limits on the cooperating models and potentially reduce partners' ability to fully participate in Internet cooperation. On the other hand, if every enterprise or business defines its own ontology, interoperability problems will arise at the business level. This conflict makes the problem difficult: a domain-specific ontology must be standardized in the domain to ensure interoperability, while also fitting the individual organization well enough to be applicable. To solve the problem, in this paper we provide a new method for domain-specific ontology design, called "Tempplet", combining the two words "template" and "applet". The two steps of the method are: first, follow a "template ontology" of the domain, to support basic interoperability; then fit an "applet ontology" into the chosen template when customizing, adding the personalities of the organization. This section has introduced the motivation for the method; related research on ontology design is summarized next.
In section 3, we provide the new method TEMPPLET for domain-specific ontology design. Moreover, a formal model is provided to answer whether there exists an applet ontology that both satisfies the constraints and conforms to the template ontology. Two cases of ontology design for different domains are given in section 5 to illustrate how to apply the method in practice. Finally, conclusions and future development close the paper.
2. Related Research

As for methodologies for building ontologies, few domain-independent methodologies have been reported until now. Mike Uschold's
methodology [3], Michael Grüninger and Mark Fox's methodology [4], and Methontology [5] are the most representative. These methodologies all start from the identification of the ontology's purpose and the need for domain knowledge acquisition. However, having acquired a significant amount of knowledge, Uschold's methodology and Grüninger and Fox's methodology propose coding in a formal language, while Methontology proposes expressing the idea as a set of intermediate representations; Methontology then uses translators to generate the ontology. From semantic web research, the recent progress on ontology is DAML+OIL. DAML stands for "DARPA Agent Markup Language" [6], and OIL for "Ontology Inference Layer" [7]. They are combined to support an open framework of ontology design, using a standard way of expressing ontologies based on new web standards like XML Schemas and RDF Schemas. A DAML+OIL ontology repository [6] has been set up, making it easy to submit pointers to new ontologies; to date there are 112 ontologies in the library. From the related work and research, we found that domain-specific ontology design lacks a rapid but efficient methodology. The methods above all require complete domain knowledge acquisition, which is a difficult task to conduct for every individual organization. The idea of ontology reuse has not been fully applied, which is a main disadvantage. With the recent development of the semantic web, many ontologies are emerging, even organized as a library, which provides the possibility for us to set up a methodology based on template ontologies.

3. The New Method for Domain-Specific Ontology Design

This section outlines the proposed methodology, named "TEMPPLET", which means "TEMPlate + ApPLET", for domain-specific ontology design.

3.1 Description

The proposed methodology is composed of the following components (Fig. 1):

- Template Ontology
- Applet Ontology
- Template Ontology Employing Rules
- Applet Ontology Fitting Mechanisms
- Reused Ontology Applets
Fig. 1 The Overview of TEMPPLET: “Template Ontology” + “Applet Ontology”
To explain the methodology clearly, here we give the first case study, the ontology for the software enterprise, as an example. For the details of the case study please refer to section 5. A. Template Ontology: The "template ontology" is the general and outline ontology of the domain. It is employed to ensure basic interoperability in the domain. For the example of designing a "Software Enterprise Ontology", the template ontology chosen is the E-Service Ontology (ESO), since the software enterprise is in an e-Service-intensive domain; for details please refer to [8]. It is composed of five main parts: "Product", "Service", "Enterprise Knowledge", "Administration" and "Customer". B. Applet Ontology: However, a template ontology alone is still not enough to cover the specific domain. For example, ESO is not applicable to the software enterprise directly, since its business model has to be customized for software enterprises. ESO provides the outline and covers a basic organization of e-Service-related ontologies, which can be customized for the software enterprise domain. To customize, we add software-related applet ontologies into ESO, including the "Software Product", "Software Service Component", and "Software Enterprise Knowledge" applet ontologies. An "applet ontology" is an independent ontology defining a certain part of the domain-specific ontology; e.g., "Software Product" is an applet ontology. It
defines the “Name”, “Price”, and “Software Description” for software products, which fit exactly to the software-domain. C. Template Ontology Employing Rules: After the introduction of both “template ontology” and “applet ontology”, the questions may appear as “how to choose or build template ontology” and “how to build or fit applet ontology”. In C and D, we try to answer these two questions. The rules to employ template ontology depend mainly generality, standardization, and popularity. There are already a large number of ontology libraries, e.g. the DAML+OIL ontology library, have covered a lot of domains, including business, service, computing, etc. We believe for the interoperability of applications, there will appear standard or popular ontology in each general mainstream domain, the same situation as the XML schemas. At least, the template ontology can adopt the XML schema’s name space strategy, to declare the adopted template ontology’s URI. Such as for the ESO which is towards general electronic service domain, it is to be submitted to the DAML+OIL ontology library, to get the popularity to become a template ontology coving the domain. D. Applet Ontology Fitting Mechanisms: About how to build or fit an applet ontology into the template ontology, the mechanism is a component-oriented and interface-specific way. That means, the applet itself is a component of encapsulation, but its interface is template ontology oriented. For example, the “Software Product” applet ontology for the software enterprise ontology depends on the abstract “Product” ontology in ESO, which requires to provide “Name”, “Price”, “Description” slots for a product; however the “Description” subclass for software product can be defined and customized for the software enterprise domain, including "version", "installation environment" and "product key" as properties. E. 
Reused Ontology Applets: To reuse the common parts of ontologies covering different domains, they can be built as applet ontologies to be reused across domains. For example, ontologies such as "Person", "Employee", etc. can be designed as reusable applet ontologies for most domains.

3.2 Steps

From the introduction of the components of the proposed method above, the method can naturally be applied in two steps (Fig. 2):
A. Template ontology employing: First, choose or build a template ontology for the domain. If a general and close ontology is available which can cover the specific domain, it can be chosen as the template ontology. For example, ESO can serve as the template ontology for the software enterprise ontology, since in the future a software enterprise is becoming a complete e-Service provider, providing software mainly on the Internet. Otherwise, if no single ontology can serve as a template ontology for the specific domain, one can be built by combining or integrating several general and popular ontologies, each of which covers a certain part of the specific domain. For example, to build a "Law Consulting Ontology", the template ontology may be created by integrating and combining ontologies such as the e-Service Ontology (ESO) and law-related ontologies. B. Applet ontology fitting: After settling on the template ontology, it is customized to the specific domain by fitting domain-specific applet ontologies. When building an applet ontology, the work is to customize the concepts, processes and axioms according to the specific domain, guided by the template ontology. As an applet, however, the interface or API between the applet ontology and the template ontology must match. These APIs include subclass relationships, range/domain/cardinality constraints for concepts, data types, etc.
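The two steps can be sketched as a conformance check between a template's required slots and an applet's interface. This is our illustrative rendering of the fitting mechanism, not an implementation from the paper; all slot names and types are assumptions echoing the "Software Product" example.

```python
# A template ontology declares the slots every conforming applet must fill.
template_product = {"required_slots": {"Name", "Price", "Description"}}

# An applet ontology fills the required slots and may customize them for
# its domain — here "Description" is specialized for software products.
software_product_applet = {
    "extends": "Product",
    "slots": {
        "Name": "str",
        "Price": "float",
        "Description": {
            "Version": "str",
            "InstallationEnvironment": "str",
            "ProductKey": "str",
        },
    },
}

def conforms(applet, template):
    """Check the applet's interface against the template's required slots."""
    return template["required_slots"] <= set(applet["slots"])

assert conforms(software_product_applet, template_product)
```

The subset test stands in for the richer APIs the text mentions (subclass relations, range/domain/cardinality constraints, data types), which a full formal model would also check.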
Fig. 2 The Steps to Apply the TEMPPLET Methodology (Step 1: Template Ontology Choosing, guided by the Template Ontology Employing Rules; Step 2: Applet Ontology Fitting, guided by the Applet Ontology Fitting Mechanisms)
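The two steps of Fig. 2 can be illustrated with a small driver. The library records and coverage sets below are invented for illustration; only the decision logic (use one covering template, otherwise combine partial ones) comes from the text.

```python
# Hypothetical sketch of the two TEMPPLET steps; the "covers" sets are
# illustrative stand-ins, not an API defined by the paper.

def choose_template(domain, library):
    # Step 1a: prefer a single general ontology that already covers the domain.
    covering = [o["name"] for o in library if domain in o["covers"]]
    if covering:
        return covering[:1]
    # Step 1b: otherwise combine several ontologies, each covering a part.
    words = set(domain.split())
    return [o["name"] for o in library if o["covers"] & words]

library = [
    {"name": "ESO", "covers": {"e-service", "consulting", "software enterprise"}},
    {"name": "Business-law-services", "covers": {"law"}},
]
print(choose_template("software enterprise", library))  # single template
print(choose_template("law consulting", library))       # combined templates
```

Step 2, applet fitting, then customizes the chosen template(s) subject to the interface constraints described in B above.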
96
Ying Dong and Mingshu Li
3.3 Rules and Principles The relationship between template ontology and applet ontology can draw on OO design rules. By analogy with OO design, the applet ontology can be considered a subclass of the template ontology. In this sense, some OO design rules apply: a)
Inheritance: An applet ontology can be customized from an existing ontology, as a subclass of the template ontology. This rule outlines the overall structure of our method, so that many of the advantages and much of the flexibility of OO design apply here as well.
b)
Abstract class: The applet ontology provides the actual implementation of the template ontology, in which the general and abstract concepts are defined. The advantage of applying this rule is that the same template ontology can be extended into completely different applet ontologies for different specific domains, which promotes the propagation of an existing general ontology.
c)
Multi-inheritance: An applet ontology can be built by integrating or combining several existing ontologies. The advantage of this rule is that a new applet ontology can be set up by inheriting from ontologies of different domains, which leads to wide adaptability and applicability of new ontologies.
In summary, the new TEMPPLET method provides the following principles: a)
Interoperability, by extending from standard or popular template ontologies;
b)
An applicable procedure for building a domain-specific ontology, by customizing the template ontology;
c)
Flexibility, by fitting applet ontologies that specify the particular domain;
d)
Cross-domain ontologies, by integrating or combining applet ontologies drawn from parts of several template ontologies;
e)
The ability to reuse ontologies.
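The three OO rules above map directly onto class machinery. A hypothetical sketch using Python's abstract classes as the analogy; the ontology names follow the paper's examples, but the code is an illustration, not a prescribed implementation.

```python
# Sketch of rules a)-c): template ontologies as abstract classes, applet
# ontologies as (possibly multiply inheriting) concrete subclasses.
from abc import ABC, abstractmethod

class ESO(ABC):                     # template ontology as an abstract class
    @abstractmethod
    def product_slots(self):
        """Each applet must realize the abstract template concepts."""

class LawOntology(ABC):             # a second template ontology
    @abstractmethod
    def applicable_laws(self):
        """Law-related concepts required from fitting applets."""

class SoftwareEnterprise(ESO):      # rule a) inheritance + rule b) abstract class
    def product_slots(self):
        return ["Name", "Price", "Description"]

class LawConsulting(ESO, LawOntology):   # rule c) multi-inheritance across domains
    def product_slots(self):
        return ["Name", "Price", "Description"]
    def applicable_laws(self):
        return ["business", "contract", "intellectual property", "taxation"]

print(SoftwareEnterprise().product_slots())
print(LawConsulting().applicable_laws())
```

As with abstract classes, the same template can be extended into unrelated applets, and one applet can inherit from templates of different domains.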
4. Formal Model Support An applet ontology usually comes from customizing a template ontology, which specifies how a domain-specific knowledge structure is organized. Thus, a specification of an applet ontology is always required to comply with both a template ontology and a set of constraints. The constraints are the interface, or APIs, between these two ontologies.
A legitimate question then is whether such a specification is consistent, or meaningful: that is, whether there exists a finite applet ontology that both satisfies the constraints and conforms to the template ontology. For these important questions, we can provide answers from formal models such as trees, graphs, logic, etc. Here we give some of the ideas, which come from the relationship between XML and related databases [9].
E.g., to illustrate the interaction between applet ontology, template ontology, and key constraints (interfaces/APIs), consider a template ontology O1, which specifies a nonempty collection of consultants and says that a consultant serves two customers:

<!ELEMENT consultants (consultant+)>
<!ELEMENT consultant (serve, research)>
<!ELEMENT serve (customer, customer)>

Assume that each consultant has an attribute name and each customer has an attribute served_by. Attributes are single-valued. That is, if an attribute l is defined for an element type t in a template ontology, then in an applet ontology conforming to the template ontology, each element of type t must have a unique l attribute. Consider a set of unary constraints Σ1:

consultant.name → consultant
customer.served_by → customer
customer.served_by ⊆ consultant.name

That is, name is a key of consultant elements, served_by is a key of customer elements, and it is also a foreign key referencing name of consultant elements. More specifically, referring to a template ontology tree T, the first constraint asserts that two distinct consultant nodes in T cannot have the same name attribute value: the string value of the name attribute uniquely identifies a consultant node. The second constraint states that the served_by attribute uniquely identifies a customer node in T. The third constraint asserts that for any customer node x there is a consultant node y in T such that the served_by attribute value of x equals the name attribute value of y. Since name is a key of consultant, the served_by attribute of any customer node refers to a consultant node. Obviously, there exists an applet ontology tree conforming to O1, as shown in Fig. 3. However, there is no applet ontology tree that both conforms to O1 and satisfies Σ1. To see this, let us first define some notations. Given an applet tree T and
an element type t, we use ext(t) to denote the set of all the nodes labeled t in T. Similarly, given an attribute l of t, we use ext(t, l) to denote the set of l attribute values of all t elements. Then immediately from Σ1 follows a set of dependencies:

|ext(consultant, name)| = |ext(consultant)|;
|ext(customer, served_by)| = |ext(customer)|;
|ext(customer, served_by)| ≤ |ext(consultant, name)|;

where |·| denotes the cardinality of a set. Therefore, we have

|ext(customer)| ≤ |ext(consultant)|.   (1)
Fig. 3 An applet ontology tree conforming to O1 (one consultant node with @name "Ying" and research value "IT", whose serve node has two customer nodes, "Company A" and "Company B", each with @served_by "Ying")
On the other hand, the template ontology O1 requires that each consultant must serve exactly two customers. Since no sharing of nodes is allowed in applet ontology trees and the collection of consultant elements is nonempty, from O1 follows:

1 ≤ |ext(consultant)|, 2 |ext(consultant)| = |ext(customer)|.   (2)

Thus |ext(consultant)| < |ext(customer)|. Obviously, (1) and (2) contradict each other, and therefore there exists no applet ontology tree that both satisfies Σ1 and conforms to O1. In particular, the applet ontology tree in Fig. 3 violates the constraint customer.served_by → customer. This example demonstrates that a template ontology may impose dependencies on the cardinalities of certain sets of objects in applet ontology trees. These cardinality constraints interact with other constraints. More specifically, the key constraints enforce classes of cardinality constraints that interact with those imposed by the template ontology. Because of the interaction, simple constraints (e.g., Σ1) may not be
satisfied by applet ontology trees that conform to certain template ontologies (e.g., O1).
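The counting argument above can be replayed mechanically on the Fig. 3 tree. A small sketch, assuming (for illustration only) that the tree's attribute values are encoded as flat lists of nodes:

```python
# Sketch: checking the Σ1 constraints against the Fig. 3 applet ontology
# tree. The flat-list encoding of the tree is an assumption.

consultants = [{"name": "Ying"}]           # one consultant node
customers = [{"served_by": "Ying"},        # it serves two customers,
             {"served_by": "Ying"}]        # as O1 requires

def key_holds(nodes, attr):
    """attr -> node: distinct nodes must carry distinct attr values."""
    values = [n[attr] for n in nodes]
    return len(set(values)) == len(values)

def foreign_key_holds(customers, consultants):
    """customer.served_by ⊆ consultant.name."""
    names = {c["name"] for c in consultants}
    return all(c["served_by"] in names for c in customers)

print(key_holds(consultants, "name"))            # True: name is a key
print(foreign_key_holds(customers, consultants)) # True: foreign key holds
print(key_holds(customers, "served_by"))         # False: served_by is not a key
```

The failed check is exactly the violated constraint customer.served_by → customer: O1 forces two customers per consultant, so served_by values must repeat.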
5. Case Study To explain how to apply the new method to build a domain-specific ontology, we provide two case studies in this section. The first builds the "Software Enterprise Ontology"; the second builds the "Law Consulting Ontology". 5.1 Software Enterprise Ontology In this case, we demonstrate the customization of a template ontology, the e-Service Ontology (ESO), by fitting applet ontologies for software enterprises into ESO. A. Template Ontology Employing: ESO is chosen as the template ontology for the software enterprise ontology, since a software enterprise is becoming a complete e-Service provider, mainly providing software over the Internet. ESO is a general ontology for the enterprise e-Service domain; for details, please refer to [8]. It is mainly composed of five parts: “Product”, “Service”, “Enterprise Knowledge”, “Administration”, and “Customer” (Fig. 4). B. Applet Ontology Fitting: To customize the template ontology ESO for the software enterprise domain, we add software-related applet ontologies, including “Software Product”, “Software Service Component”, and “Software Enterprise Knowledge”, to make it fit the software enterprise domain. E.g., “Software Product” is an applet ontology. It defines “Software Description” for software products, which fits the domain exactly (Fig. 5). C. Reused Ontology Applets: In the customization process, we reuse available general ontologies such as the "Employee Ontology". Meanwhile, our work also yields applet ontologies that can be reused in other domains in the future, e.g. the "Online Distribution Service Ontology" and "Update Service Ontology" in the "Software Service Ontology".
Fig. 4 "e-Service Ontology" as a Template Ontology for "Software Enterprise Ontology" (the diagram shows the five parts Customer, Service, Product, Knowledge, and Administration with their properties, e.g. Name/Type/Interest for Customer and Name/Price/Description for Product)
Fig. 5 "Software Product Ontology" as an Applet Ontology for "Product Ontology" in ESO (the diagram shows Software Product as a subclass of Product with the DataTypeProperties Name, Price, and Description; Version, Installation Environment, and Product Key are subproperties of Description, each restricted to cardinality 1 and typed with xsd ranges such as xsd:string, xsd:money, and xsd:date)
5.2 Law Consulting Ontology In this case, we demonstrate the multi-inheritance relationship between template ontologies and an applet ontology, as a concrete case of template ontology choosing and applet ontology building. A. Template Ontology Employing: Although there are already a large number of ontologies in the DAML+OIL ontology library, no candidate covers the law consulting domain exactly. In this case, a template ontology can be created by integrating and combining ontologies such as the e-Service Ontology (ESO) and law-related ontologies. The reason for choosing ESO as one of the template ontologies is that law consulting is becoming an Internet-based service, in the sense of both knowledge learning and service providing. Meanwhile, there are a number of law-related ontologies in the DAML+OIL library (Fig. 6). They are all candidates for the template ontologies to choose from, according to the business coverage of the law consulting firm. B. Applet Ontology Fitting: Customizing several template ontologies is a matter of merging, choosing, and conflict resolution. As for the relationship among these template ontologies, ESO serves as the basic organization for law service providing, while the other law ontologies guide the customization of ESO parts to the law consulting domain. Here we suppose the firm is engaged in the high-tech consulting domain, where laws for business, contract, intellectual property, and taxation mainly apply. As shown in Fig. 7, each part of "Enterprise Knowledge" ("Case", "Firm", "Lawyer", "Law and Regulation", "Report") is divided by these four kinds of law, as a customization of ESO and a merge of the two domains.

Tab. 1 Law-related Ontologies in the DAML+OIL Ontology Library

Law-related Class                         URI
Business-law-services                     http://www.ksl.stanford.edu/projects/DAML/UNSPSC.daml
Contract-law-services                     http://www.ksl.stanford.edu/projects/DAML/UNSPSC.daml
International-law-prescription-services   http://www.ksl.stanford.edu/projects/DAML/UNSPSC.daml
Patent-trademark-or-copyright-law         http://www.ksl.stanford.edu/projects/DAML/UNSPSC.daml
Property-law-services                     http://www.ksl.stanford.edu/projects/DAML/UNSPSC.daml
Taxation-law                              http://www.ksl.stanford.edu/projects/DAML/UNSPSC.daml
Regulation_or_Law                         http://phd1.cs.yale.edu:8080/umls/UMLSinDAML/NET/SRSTR.daml
LawEnforcementActivity                    http://www.cyc.com/2002/04/08/cyc.daml
LawEnforcementOrganization                http://opencyc.sourceforge.net/daml/cyc.daml
…                                         …
Fig. 7 "Enterprise Knowledge" as an Applet Ontology in "Law Consulting Ontology"
6. Conclusion In this paper we provide a new method, “TEMPPLET”, for domain-specific ontology design. The main purpose of the new method is to support both the interoperability and the particularity of future domain-specific applications for different individuals in a domain. Thus, the first step of the method, choosing a “template ontology” to work on, supports basic interoperability; the second step, fitting “applet ontologies” into the chosen template, customizes and adds particularity for specific scenarios. For this new method to be applied widely, a community managing an ontology library is expected. The ontology library would be managed as a component library, where mature and general domain-specific template ontologies wait to be approved as de facto standards, and other applet ontologies serve as encapsulated candidate components to be chosen and integrated with others to form new ontologies.
References:
[1] Fensel, D. (2001). "Ontologies: A Silver Bullet for Knowledge Management and Electronic Commerce": 11. Springer, ISBN: 3-540-41602-1.
[2] Chandrasekaran, B., Josephson, J. R. and Benjamins, V. R. (1999). "What Are Ontologies, and Why Do We Need Them?" IEEE Intelligent Systems, Vol. 14, No. 1: 20–26.
[3] Uschold, M. and Gruninger, M. (1996). "Ontologies: Principles, Methods, and Applications". Knowledge Engineering Review, Vol. 11, No. 2, Mar. 1996: 93–155.
[4] Grüninger, M. and Fox, M.S. (1995). "Methodology for the Design and Evaluation of Ontologies". In Proc. of Int'l Joint Conf. AI Workshop on Basic Ontological Issues in Knowledge Sharing.
[5] Fernández, M., Gómez-Pérez, A. and Juristo, N. (1997). "METHONTOLOGY: From Ontological Art towards Ontological Engineering". In Proc. of AAAI Spring Symp. Series: 33–40. AAAI Press, Menlo Park, Calif., 1997.
[6] DAML and its Ontology Library, http://www.daml.org, http://www.daml.org/ontologies/
[7] OIL, http://www.ontoknowledge.org/oil/
[8] Dong, Y. and Li, M. (2002). "ESO: A Prospective Ontology for e-Service". Paper in submission to the 2003 International Symposium on Applications and the Internet (SAINT 2003), Orlando, Florida, USA, January 27-31, 2003.
[9] Fan, W. and Libkin, L. (2001). "On XML Integrity Constraints in the Presence of DTDs". In Proc. of the 20th ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems (PODS'01), Santa Barbara, California, May 21-24, 2001.
An Environment for Multi-domain Ontology Development and Knowledge Acquisition Jinxin Si, Cungen Cao, Haitao Wang, Fang Gu, Qiangze Feng, Chunxia Zhang, Qingtian Zeng, Wen Tian, Yufei Zheng Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100080, China {jxsi, cgcao, htwang, fgu, qzfeng, cxzhang}@ict.ac.cn
Abstract. Ontology is used widely as an analytical tool in many domains in recent years, and adopted as a fundamental basis for knowledge acquisition and sharing. However, it becomes difficult for knowledge engineers to share a large ontology base created by many engineers from different domains, and thus ontologies may not genuinely support knowledge engineers in knowledge acquisition and analysis. In order to promote multi-domain ontology sharing and to facilitate knowledge acquisition and analysis, we have implemented an ontology and knowledge engineering environment (OKEE) for multi-domain ontology development and knowledge acquisition. OKEE provides a list of functionalities for ontology design and for knowledge acquisition.
1 Introduction Because of the rapid application of ontologies in developing systems for complex tasks (e.g. knowledge discovery, automated inference, natural language processing, and intelligent tutoring), an environment is strongly demanded for collaborative ontology design and manipulation and for ontology reuse. Furthermore, the diversity of the backgrounds of knowledge engineers, as well as the complexity of knowledge domains, often leaves formulated ontologies mixed with subjectivity to some extent, thus deteriorating ontology reusability and sharability [7, 9, 23]. A desirable environment not only supports ontology design, modification, manipulation, analysis, reuse, and sharing, but also helps knowledge engineers focus their attention on the areas that need to be resolved, and extends knowledge bases efficiently [18, 19, 20]. In other words, it assists knowledge engineers efficiently and provides an immediate response for every operation on ontologies. Because a large-scale ontology base is difficult for multiple knowledge engineers to manage manually, an ontology needs to be designed, analyzed, evaluated, merged, divided, and communicated in a uniform and shareable environment [12, 13]. There is no doubt that the usability of a knowledge base can be guaranteed, or at least improved, by knowledge verification, including both ontology analysis and knowledge analysis, as a joint effort.
Y. Han, S. Tai, and D. Wikarski (Eds.): EDCIS 2002, LNCS 2480, pp. 104-116, 2002 © Springer-Verlag Berlin Heidelberg 2002
An Environment for Multi-domain Ontology Development
105
There are several existing ontology management systems, e.g. Ontolingua/Chimaera [14], Protégé/PROMPT [21], OntoEdit [22], and WebOnto/Tadzebao [6]. OKEE incorporates some features of these systems in knowledge presentation, ease of use, multi-user collaboration, security management, and so forth. Meanwhile, OKEE contains a convenient editing environment in which ontologies can be visualized and modified interactively. It has two fuzzy retrieval methods: by slot name and by ontology name; knowledge engineers can retrieve slots or ontologies by supplying partial information. It also allows knowledge engineers to compare ontologies in order to identify their similarities and differences. The paper is organized as follows. Section 2 centers on domain-specific ontologies. Section 3 introduces the architecture of OKEE. Section 4 presents the method of ontology management and analysis. Section 5 introduces our ontology-based knowledge formalization and analysis. Section 6 introduces knowledge compilation, which turns knowledge frames into IO-models for efficient knowledge retrieval and inference. Section 7 concludes the paper and raises a few problems for our future research.
2 Domain-Specific Ontologies and Sharing Ontologies are fundamental to knowledge sharing between different agents and across different applications. In a general sense, an ontology is an explicit specification of a conceptualization [16]. Nevertheless, the term ontology has been controversial in current AI practice, and so far no formal definition exists. In the literature, we can identify two types of ontologies: engineering ontologies and formal ontologies. Formal ontologies are abstract and explicitly conceptualize a domain of discourse, while engineering ones are rather informal and often misleading. In our work, we have elected to use the term domain-specific ontologies (DSO), and have designed a theoretic system of such ontologies covering all domains (66 in total). The rationale for choosing domain-specific ontologies is twofold. First, domain-specific ontologies are not distant from concrete domains. This makes them useful in knowledge acquisition. Second, our experience has repeatedly shown that many ontological constraints or axioms are hard to formulate at a high level of abstraction, but are easy to identify and formalize within domain-specific ontologies. More importantly, these lower-level axioms are more useful in ontology-based knowledge analysis during knowledge compilation (see Sect. 6). Based on the discussion above, we have developed more than 580 domain-specific ontologies and around one million assertions and 300,000 concepts covering 16 domains, e.g. medicine, biology, history, geography, mathematics, music, ethnology, and archaeology [8, 10, 11, 15, 17, 24, 25, 26, 27]. These ontologies are domain-specific extensions of the Generic Frame Protocol [5]. Figure 1 illustrates a frame-based schema for categories in domain-specific ontologies. The schema consists of three parts: category header, category body, and inner-category axioms.
The category header begins with the keyword defcategory, which is followed by the name of the category to be defined. (In our work, we assume the uniqueness of category and slot naming.) A category may possibly inherit content from its super-categories. It may also implement what we call slot categories. A slot category does not correspond to any entity in the domain of interest; it only contains a set of slot definitions, which are shareable in different categories of different ontologies.

defcategory <my-category> [<relevant-categories>]
{
  {<slot-def>}
  {<inner-category-axiom>}
}

<relevant-categories> ::= inherits <super-categories>
                        | implements <slot-categories>
                        | inherits <super-categories> implements <slot-categories>

<slot-def> ::= <slot> <slot-name>
               type <slot-type>
               [synonym <values>]
               [parasynonym <values>]
               [antonym <values>]
               [unit <units>]
               [domain <value-domain>]
               [default <default-value>]
               [facets <facets>]
               [reverse <reverse-slots>]
               [property <properties>]
               [subslots <subslots>]
               [reslots <slots>]
               [comment <informal-comments>]

<slot> ::= attribute | relation | method

<inner-category-axiom> ::= <first-order-well-formed-formula>

Fig. 1. NKI Standard of Ontology Representation (keywords are in boldface)
Because there may eventually be a tremendous number of categories, the relationships inherits and implements play a critical role in organizing categories into a hierarchy of abstraction, where categories at higher levels of abstraction can be shared in different domains and reused in defining lower-level categories.
The category body consists of an unordered list of slot definitions. A slot may have a number of facets that provide additional information for the slot. We have summarized many facets from the different domains on which we are currently working. Common facets are:
− Type: The value type of the slot. This facet is mandatory for all slots.
− Synonym: The synonyms of the slot.
− Parasynonym: The parasynonyms of the slot.
− Antonym: The antonyms of the slot.
− Unit: The unit of the slot value.
− Domain: The possible set of values the slot can assume.
− Default: The default value of the slot.
− Facet: The domain-specific facets to which the slot is subject.
− Reverse: The reverse slot of the slot. For example, the reverse of the slot have-member is is-member-of.
− Property: The properties which the slot has. In our environment, a slot may possibly have the following properties: reflexive, irreflexive, symmetric, asymmetric, antisymmetric, and transitive.
− Subslot: The subslots of the slot.
− Reslot: The relevant slots of the slot. In a disease category, for example, the relevant slots of symptom are major-symptom and minor-symptom.
− Comment: The informal comment on the slot.
Category axioms are well-defined first-order formulae (WFF), representing a first-order theory on the categories and their relationships.
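As an illustration of the slot schema above, a category can be encoded as a nested dictionary and checked for the mandatory Type facet. The dictionary form and the particular disease/symptom slots (echoing the Reslot example) are illustrative assumptions, not the NKI file format itself.

```python
# Hypothetical encoding of a category following the Fig. 1 schema.
# Slot names echo the paper's examples; everything else is assumed.

disease = {
    "name": "Disease",
    "inherits": ["MedicalConcept"],      # illustrative super-category
    "slots": {
        "symptom": {"type": "relation",
                    "reslots": ["major-symptom", "minor-symptom"]},
        "mortality": {"type": "attribute", "unit": "percent",
                      "domain": (0, 100), "default": 0},
    },
}

def missing_type_facet(category):
    """Return slots violating the rule that the type facet is mandatory."""
    return [s for s, facets in category["slots"].items() if "type" not in facets]

print(missing_type_facet(disease))   # [] -- every slot declares its type
```

A checker of this shape is the simplest instance of the facet analysis that OKEE's ontology analysis performs (see Sect. 4.2).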
3 Architecture of OKEE
OKEE mainly consists of eight modules: the category editing module (CEM), category analysis module (CAM), ontology bases module (OBM), knowledge formalization module (KFM), knowledge analysis module (KAM), knowledge frames module (KFRM), IO-model generation module (IGM), and IO-model base module (IBM).

Fig. 2. The Architecture of OKEE (modules shown: Knowledge Engineers, Category Editing, Knowledge Formalization, Category Analysis, Knowledge Analysis, Ontology Bases, IO-model Generation, Knowledge Frames, IO-model Bases)
The whole architecture is depicted in Figure 2. The modules CEM, CAM, and OBM together compose the ontology management part of OKEE, and the modules KFM, KAM, and KFRM form the knowledge acquisition part. The last two modules, i.e. IGM and IBM, generate and represent the knowledge frames in a format of IO-models [...]. In Sections 4 to 6, we present the three main parts of OKEE, namely CEM, CAM, and OBM (Sect. 4), KFM, KAM, and KFRM (Sect. 5), and IGM and IBM (Sect. 6).
108
Jinxin Si et al.
4 Ontology Management and Analysis Ontology management and analysis are crucial components in our OKEE system. The main tasks are divided into two parts: ontology management and ontology analysis. 4.1 Ontology Management OKEE provides a set of basic operations on domain-specific ontologies. These operations fall into five basic types: category visualization; category retrieval; category creation, removal, and modification; ontology security policy maintenance; and view conversion of categories. All categories form a network through the two relationships inherits and implements. We implemented a subsystem in OKEE for visualizing the category network. This visualization is very useful for knowledge engineers to browse and retrieve existing categories when creating new categories or modifying old ones. Figure 3 depicts a partial view of our category network around the category biology, in which heavy lines represent inherits and light ones implements. OKEE has two retrieval methods: by slot name and by category name. In the first method, knowledge engineers can retrieve all categories with a given slot name, and in the second method they can find categories with a given name. For each retrieval method, fuzzy search is supported: knowledge engineers can retrieve slots or categories by supplying partial information. To create a new category, the developer takes the following steps. 1. Retrieve the whole ontology base to see whether the category already exists. If so, terminate. 2. Retrieve the whole ontology base to identify the super-categories for the new category. This can be done by supplying some common slots of the new category to OKEE, or by browsing the whole ontology base. If some super-categories are found, define the new category directly under them, and then go to 5. 3. Retrieve the ontology base to find whether an existing category can be modified to define the new one. If found, modify the existing category, and go to 5. 4. Create the new category from scratch through the editing facilities provided by OKEE. 5. Check the defined category for various errors (see Sect. 4.2). OKEE provides a simple security policy when a knowledge engineer modifies categories: 1. The knowledge engineer can perform the add-slot() or remove-slot() operation on his or her own categories. But when the operations are committed, OKEE checks whether the modified category is syntactically legal. If so, it continues to check whether the operations guarantee that every added slot is useful, or every removed slot is useless. Note that OKEE can pre-compile knowledge frames to collect all slot names, and these slots help check the usefulness or uselessness of slots in the category.
Fig. 3. Ontology Structure Surrounding the Category Biology
2. The knowledge engineer is authorized to browse categories designed by others, but cannot modify them in any manner. If some modification is necessary for a category, he or she can send a request to the engineer-in-chief, and the latter will verify the request. If the request is reasonable, the engineer-in-chief changes the category according to the request, and then sends a note to the owner of the original category. For editing categories conveniently, OKEE provides another step-by-step facility. Although this facility may take some extra time in defining the slots of a category, it can check the definitions online. 4.2 Ontology Analysis In the collaborative development of multi-domain ontologies and their categories, several problems may arise that must be corrected for ontologies to be used in formal analysis and knowledge acquisition. Ontology analysis is a key component of the OKEE system. It consists of two parts: ontology parsing and semantical analysis. Syntax checking ensures that syntactical errors do not occur. Syntax checking is quite straightforward, because ontologies follow a fixed frame format. Semantical analysis of categories, however, is quite tedious. OKEE considers many possible semantical errors; typical errors are:
1. Redundancy of inheritance
2. Circularity of inheritance
3. Multiple definitions of slots
4. Omission of slot and facet definitions
Redundancy and circularity in ontological inheritance are common errors in collaborative ontology design. Figure 4(a) depicts an inheritance structure in which one of the inherit relationships is actually redundant, because it is implied by the other two. Figure 4(b) shows an example of circularity, where an inherit relationship is mistakenly asserted that closes a cycle among the categories.

Fig. 4. Illustrating Redundancy (a) and Circularity (b) of Ontology Inheritance

To identify possible redundancy and circularity in ontology inheritance, we design two auxiliary functions, isParent(C1, C2) and isBrother(C1, C2). The basic ideas behind the two functions are as follows:
1. Initially, isParent(C1, C2) = 0 and isBrother(C1, C2) = FALSE.
2. For isParent(C1, C2), we have the following assignment rules:
   inherit(C1, C2) ⇒ isParent(C1, C2) = 1 and isParent(C2, C1) = -1;
   inherit(C1, C3) ∧ inherit(C3, C2) ⇒ isParent(C1, C2) = 1 and isParent(C2, C1) = -1.
3. For isBrother(C1, C2), we have the following assignment rules:
   isBrother(C1, C2) = TRUE ⇒ isBrother(C2, C1) = TRUE;
   inherit(C1, C3) ∧ inherit(C2, C3) ⇒ isBrother(C1, C2) = TRUE;
   inherit(C3, C1) ∧ inherit(C3, C2) ⇒ isBrother(C1, C2) = TRUE.
Let C1 and C2 be two categories. We distinguish the following cases:
(a) If isParent(C1, C2) ≠ 0 and isBrother(C1, C2) = FALSE, then there is an inheritance relationship between C1 and C2.
(b) If isBrother(C1, C2) = TRUE, then there is no inheritance relationship between C1 and C2.
(c) It is impossible that isParent(C1, C2) ≠ 0 and isBrother(C1, C2) = TRUE.
Multiple definitions of slots indicate redundancy and/or inconsistency between categories. In order to eliminate them, some rules must be set up in advance so that OKEE can guide the knowledge engineer in modifying slot definitions or transferring them to proper categories. We have considered two cases of multiple slot definitions:
1. It is not allowed that a slot is defined more than once in a category.
2. Slot S is defined in two different categories C1 and C2, but with different facets. In this case, the knowledge engineer is advised to merge the two separate definitions into one and place the merged definition in a proper category, or to design a super-category of C1 and C2 and put the merged slot into that super-category.

Fig. 5. An Example about Ontology Topology (a super-category C with subcategories C1, C2, C3, and C4)
As an illustration, Figure 5 depicts a situation where slot S occurs in C as well as in C1, C2, C3, and C4. In this case, the knowledge engineer is advised that S should be moved to the super-category C. There are other similar situations where advice should be given to the knowledge engineer. For example, suppose that S is defined in C, C1, C3, and C4, but not in C2. This is an unusual case, since the super-category and some of its subcategories have S as a slot, but S does not occur in C2. The causes might be that the user made an error of omission when defining C2, or that C2 is not a real subcategory. In either case, the knowledge engineer must be advised to examine the possible causes and modify the inheritance properly. Another, more complicated, situation is that S is defined in C1, C3, and C4, but neither in C nor in C2. One possible cause is that S was neglected by the knowledge engineer when designing C2. If this is true, the user is advised to consider that S may be a slot of C. Another possibility is that nothing is wrong, because S is a (private) slot of C1, C3, and C4. There may be other situations where slots are omitted in categories. But these situations cannot be detected within the categories themselves. In fact, they can only be detected during the procedure of knowledge analysis (see Sect. 5).
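The checks discussed in Sect. 4.2 can be sketched compactly: redundancy and circularity of inherit edges via a transitive closure (in the spirit of isParent), and slot-placement advice for a super-category and its subcategories. The encodings below are illustrative assumptions, not OKEE's actual data structures.

```python
# Sketch of the Sect. 4.2 checks over inherit edges (child, parent) and a
# slot table mapping category name -> set of slot names. Illustrative only.

def closure(edges):
    """Transitive closure of inherit edges."""
    reach = {n: set() for e in edges for n in e}
    for child, parent in edges:
        reach[child].add(parent)
    changed = True
    while changed:
        changed = False
        for node in reach:
            extra = set().union(*(reach[p] for p in reach[node])) - reach[node]
            if extra:
                reach[node] |= extra
                changed = True
    return reach

def redundant_edges(edges):
    """An inherit edge is redundant if its target is reachable without it."""
    return [e for e in edges
            if e[1] in closure([x for x in edges if x != e]).get(e[0], set())]

def has_cycle(edges):
    return any(n in r for n, r in closure(edges).items())

def hoisting_advice(super_cat, subcats, slots):
    """Advise moving a slot up, or flag a likely omission in one subcategory."""
    advice = []
    for s in sorted(set().union(*(slots[c] for c in subcats))):
        holders = [c for c in subcats if s in slots[c]]
        if len(holders) == len(subcats) and s not in slots[super_cat]:
            advice.append(f"move slot '{s}' up into '{super_cat}'")
        elif len(holders) == len(subcats) - 1:
            missing = (set(subcats) - set(holders)).pop()
            advice.append(f"check possible omission of '{s}' in '{missing}'")
    return advice

# Fig. 4-style errors: a redundant direct edge, and a cycle.
tree = [("C1", "C2"), ("C2", "C3"), ("C1", "C3")]
print(redundant_edges(tree))                                  # [('C1', 'C3')]
print(has_cycle([("C1", "C2"), ("C2", "C3"), ("C3", "C1")]))  # True

# Fig. 5-style advice: slot S occurs in every subcategory of C.
slots = {"C": set(), "C1": {"S"}, "C2": {"S"}, "C3": {"S"}, "C4": {"S"}}
print(hoisting_advice("C", ["C1", "C2", "C3", "C4"], slots))
```

The reachability formulation replaces the isParent/isBrother bookkeeping with an equivalent closure computation; either way, a redundant or cyclic edge is one the closure already accounts for.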
5 Ontology-Based Knowledge Acquisition and Analysis Although knowledge acquisition (KA) has been considered a critical task in AI system development, few practical KA systems have been developed in the past. In our work, we have divided the knowledge acquisition task into two separate phases, namely knowledge formalization and knowledge analysis. In the first phase, the knowledge engineer extracts domain knowledge from text sources. This phase is mainly done manually, with a frame representation method (see Figure 6 for an example). In the second
phase, OKEE analyzes the acquired knowledge to identify possible syntactical and semantical errors. In either phase, domain ontologies play a crucial role. In the knowledge formalization phase, domain ontologies serve as both guidelines and "place-holders": the knowledge engineer uses domain-specific ontologies to identify possible values in the sources to fill in those places.

defframe International Ultraviolet Explorer: Instrument
{
  is-a: Satellite
  orbit: around earth
  time: January 1978
  manufactured-by: Europe and USA
}

Fig. 6. An Example Frame
In the knowledge analysis phase, OKEE uses domain-specific ontologies to identify possible errors in the formalized knowledge frames. The procedure of knowledge analysis consists of the following steps:

1. Read in and compile the categories.
2. If the categories do not compile successfully, stop.
3. While the knowledge frame base is not empty, do the following:
   a) Select an arbitrary frame F from the frame base.
   b) Parse F according to the knowledge frame language. If syntactic errors are detected, report them to the knowledge engineer and go to e).
   c) Perform semantic analysis on F. If semantic errors are detected, report them to the engineer and go to e).
   d) Generate IO-models for F (see Sect. 6).
   e) Remove F from the frame base.

A knowledge frame may contain quite a number of semantic errors. In the following we mention only four typical cases:

1. (Omission of slot definition.) A slot is used in the frame but is not defined in the corresponding category. The frame parser therefore does not know its type and other information (see Sect. 4.2).
2. For a slot of the quantity type, no unit is given for the slot value, or the unit differs from the possible units defined for the slot in the corresponding category.
3. A slot value lies outside the slot domain as defined in the corresponding category.
4. Some facets required for a slot are not given any value in the frame, which makes the specification of the slot incomplete.
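The analysis procedure above can be sketched as a loop. The helper callables (`parse`, `check_semantics`, `generate_io_models`, `report`) stand in for OKEE internals and are hypothetical:

```python
def analyze(frame_base, categories_ok, parse, check_semantics,
            generate_io_models, report):
    """Sketch of the knowledge-analysis procedure (steps 1-3 above)."""
    if not categories_ok:                    # step 2: categories failed to compile
        return
    while frame_base:                        # step 3
        frame = frame_base.pop()             # a) select a frame (and e) remove it)
        syn_errors = parse(frame)            # b) syntactic analysis
        if syn_errors:
            report(frame, syn_errors)
            continue
        sem_errors = check_semantics(frame)  # c) semantic analysis
        if sem_errors:
            report(frame, sem_errors)
            continue
        generate_io_models(frame)            # d) compile to IO-models (Sect. 6)
```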
An Environment for Multi-domain Ontology Development
113
6 Knowledge Compilation

After the knowledge frames are parsed and analyzed, they are compiled into IO-models by the NKI compiler. An IO-model defines the input and output relationships between concepts and slots. It is subdivided into two parts: a declaration part and a statement part. In the declaration part, all relevant nodes are specified; in the statement part, all the relationships between the nodes are given. An IO statement has the following format:

?node.io ~ ?input-node ?output-node ?facet ?node-type ?thesaurus {:?input-node ?output-node ?facet ?node-type ?thesaurus}
defframe C {
  my-attr: V
    :F FV1 and FV2
    :G GV1
}
NKI compiler
@9 M01=and N01=C N02=my-attr N03=V N04=F N05=FV1 N06=FV2 N07=G N08=GV1
N01.io~* N02 * concept *
N02.io~N01 N03 M01-1 attribute *
N03.io~N02 * * concept *
M01-1.io~N02 N04 * and *:N02 N07 * and *
N04.io~M01-1 M01-2 * facet *
M01-2.io~N04 N05 * and *:N04 N06 * and *
N05.io~M01-2 * * concept *
N06.io~M01-2 * * concept *
N07.io~M01-1 N08 * concept *
N08.io~N07 * * concept *
Fig. 7. Illustrating NKI Knowledge Compilation
IO-models play a crucial role in speeding up knowledge retrieval [1, 2]. Every concept, slot, and facet has only one occurrence in our IO-model bases, and multiple occurrences are integrated so that a global search for a concept, slot, or facet is reduced to a local search. When a frame is analyzed, an IO-model is created for each slot in the frame, as illustrated in Fig. 7. Each IO-model has an elegant graphical view: the model nodes correspond to the vertices of a graph, and the IO statements are represented as arcs. Figure 8 depicts the graph for the IO-model in Fig. 7. Figure 9 presents a knowledge frame (a) and its three IO-models (b, c, and d).
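The mapping from IO statements to arcs can be sketched directly. The sketch below uses a stripped-down (node, input-node, output-node) reading of the statement format, with "*" meaning "absent"; it is our own illustration, not the NKI data structures:

```python
def io_model_to_arcs(statements):
    """Translate IO statements into graph arcs.

    Each statement is a (node, input_node, output_node) triple; '*' means
    the field is absent. An arc runs from the input node to the node, and
    from the node to the output node.
    """
    arcs = set()
    for node, inp, out in statements:
        if inp != "*":
            arcs.add((inp, node))
        if out != "*":
            arcs.add((node, out))
    return sorted(arcs)
```

Because arcs are collected into a set, the arc implied by one statement's output field and the matching arc implied by the next statement's input field coincide, which mirrors the single-occurrence property of the IO-model bases.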
7 Conclusion

Ontology development for multiple domains is a team job, and an integrated environment like OKEE facilitates the work greatly. Separating ontology design from knowledge acquisition would cause problems, and our practice strongly suggests that
Fig. 8. The Graphical View of the IO-model in Fig. 7

defframe International Ultraviolet Explorer: Instrument {
  is-a: Satellite
  orbit: around earth
    :time January 1978
  manufactured-by: Europe and USA
}

(a)
{ @3 N01=International Ultraviolet Explorer N02=is-a N03=Satellite
N01.io~* N02 * concept *
N02.io~N01 N03 * relation *
N03.io~N02 * * concept *
}
(b)
{ @5 N01=International Ultraviolet Explorer N02=orbit N03=around earth N04=time N05=January 1978
N01.io~* N02 N04 concept *
N02.io~N01 N03 * attribute *
N03.io~N02 * * concept *
N04.io~N01 N05 * facet *
N05.io~N04 * * time *
}

(c)

{ @5 M01=and N01=International Ultraviolet Explorer N02=manufactured-by N03=Europe N04=USA
N01.io~* N02 * concept *
N02.io~N01 M01-1 * relation *
M01-1.io~N02 N03 * and *:N02 N04 * and *
N03.io~M01-1 * * concept *
N04.io~M01-1 * * concept *
}

(d)

Fig. 9. Illustrating IO-model Generation
ontology development and knowledge acquisition are in fact two interacting tasks. OKEE provides convenient functions for both ontology development and knowledge acquisition by multiple knowledge engineers from various domains. Three problems remain on our research agenda. First, OKEE needs to strengthen its support for coordination and collaboration between domain-specific ontologies in a multi-user environment. Distributed participants can come to a common understanding and representation of ontologies through online sessions, ontology-modification logs, and an ontology-locking strategy; these will be implemented in the next version of OKEE. Second, OKEE does not have strong inference capabilities, which are indispensable in both ontology development and knowledge acquisition. Third,
knowledge analysis will be enhanced to identify other possible ontological errors. An ontological error is a kind of semantic error, but it is harder to detect. For example, one can say that Beijing is in the north of China, but saying "China is in the north of Beijing" would be an ontological error. Detecting such errors requires more inference based on ontological axioms (here, about geography).
Acknowledgements This work is supported by a grant from the Chinese Academy of Sciences (#20004010), a grant from the Foundation of Chinese Natural Science (#20010010-A), and a grant from the Ministry of Science and Technology (#2001CCA03000).
References

1. Cungen Cao: National Knowledge Infrastructure: A Strategic Research Direction in the 21st Century, Computer World, 1-3, 1998.
2. Cungen Cao: Knowledge Application Programming Interface 1.1 (KAPI 1.1), Technical Report, Institute of Computing Technology, Chinese Academy of Sciences, 2000.
3. Cungen Cao: Medical Knowledge Acquisition from Encyclopedic Texts, Lecture Notes in Computer Science, Vol. 2101, 268-271.
4. Cungen Cao and Qiuyan Shi: Acquiring Chinese Historical Knowledge from Encyclopedic Texts, Proceedings of the International Conference for Young Computer Scientists, 1194-1198, 2001.
5. V. Chaudhri, A. Farquhar, R. Fikes, P. Karp, and J. Rice: The Generic Frame Protocol 2.0, KSL-97-05, 1997. (http://www.ksl.stanford.edu/KSL_Abstracts/KSL-97-05.html)
6. J. Domingue: Tadzebao and WebOnto: Discussing, Browsing, and Editing Ontologies on the Web, Proceedings of the Eleventh Workshop on Knowledge Acquisition, Modeling and Management, Banff, Canada, 1998.
7. A. Farquhar, R. Fikes, W. Pratt, and J. Rice: Collaborative Ontology Construction for Information Integration, Knowledge Systems Laboratory, Department of Computer Science, KSL-95-63, August 1995.
8. Donghui Feng and Cungen Cao: Building Communication between User and Knowledge, International Conference for Young Computer Scientists, 1024-1026, 2001.
9. D. Fensel et al.: OIL in a Nutshell, Knowledge Acquisition, Modeling, and Management, Proceedings of the European Knowledge Acquisition Conference (EKAW-2000), R. Dieng et al. (eds.), Lecture Notes in Artificial Intelligence, Vol. 1937, Springer-Verlag, October 2000.
10. Donghui Feng and Cungen Cao: A Query Language for NKI Users, CJCAI 2000, 208-213, 2001.
11. Qiangze Feng, Cungen Cao, and Wen Tian: KTS-I: A Knowledge-to-Speech System, Proceedings of the Sixth International Conference for Young Computer Scientists, 262-266, 2001.
12. R. Fikes, A. Farquhar, and J. Rice: Tools for Assembling Modular Ontologies in Ontolingua, Knowledge Systems Laboratory, April 1997.
13. R. Fikes, A. Farquhar, and J. Rice: Large-Scale Repositories of Highly Expressive Reusable Knowledge, Knowledge Systems Laboratory, April 1997.
14. R. Fikes, A. Farquhar, and J. Rice: The Ontolingua Server: A Tool for Collaborative Ontology Construction, Knowledge Systems Laboratory, September 1996.
15. Fang Gu and Cungen Cao: Biological Knowledge Acquisition from the Electronic Encyclopedia of China, Proceedings of the Sixth International Conference for Young Computer Scientists, 1199-1203, 2001.
16. T. R. Gruber: A Translation Approach to Portable Ontology Specification, Knowledge Acquisition, Vol. 5, No. 2, 199-220.
17. Yuxia Lei and Cungen Cao: Acquiring Military Knowledge from the Encyclopedia of China, Proceedings of the Sixth International Conference for Young Computer Scientists, 368-372, 2001.
18. Deborah L. McGuinness: Conceptual Modeling for Distributed Ontology Environments, Proceedings of the Eighth International Conference on Conceptual Structures: Logical, Linguistic, and Computational Issues (ICCS 2000), Darmstadt, Germany, August 14-18, 2000.
19. Deborah L. McGuinness, Richard Fikes, James Rice, and Steve Wilder: The Chimaera Ontology Environment, Proceedings of the Seventeenth National Conference on Artificial Intelligence (AAAI 2000), Austin, Texas, July 30 - August 3, 2000.
20. Deborah L. McGuinness, Richard Fikes, James Rice, and Steve Wilder: An Environment for Merging and Testing Large Ontologies, Proceedings of the Seventh International Conference on Principles of Knowledge Representation and Reasoning (KR2000), Breckenridge, Colorado, USA.
21. Natalya F. Noy and Mark A. Musen: SMART: Automated Support for Ontology Merging and Alignment, Proceedings of the Twelfth Workshop on Knowledge Acquisition, Modeling and Management, Banff, Canada, July 1999.
22. Y. Sure, S. Staab, J. Angele, D. Wenke, and A. Maedche: OntoEdit: Guiding Ontology Development by Methodology and Inferencing, Prestigious Applications of Intelligent Systems (PAIS), ECAI 2002, July 21-26, 2002, Lyon, France, 2002.
23. H. Stuckenschmidt: Using OIL for Intelligent Information Integration, Proceedings of the Workshop on Applications of Ontologies and Problem-Solving Methods at ECAI 2000, 2000.
24. Suqin Tang: Research on a Cross-Discipline Intelligent Tutoring System Based on Domain-Specific Ontology, MS Thesis, Guangxi Normal University, 2002.
25. Wen Tian and Cungen Cao: A Framework for Extracting Knowledge of the Human Blood System from Medical Texts, Proceedings of the Sixth International Conference for Young Computer Scientists, 501-505, 2001.
26. Dehai Zhang and Cungen Cao: Acquiring Knowledge of Chinese Cities from the Encyclopedia of China, Proceedings of the Sixth International Conference for Young Computer Scientists, 1111-1193, 2001.
27. Yuxiang Zhang and Cungen Cao: Acquiring Knowledge of Chinese Mountains from the Encyclopedia of China, Proceedings of the Sixth International Conference for Young Computer Scientists, 1222-1224, 2001.
Applying Information Retrieval Technology to Incremental Knowledge Management*

Zhifeng Yang, Yue Liu, and Sujian Li

Software Division, Institute of Computing, The Chinese Academy of Sciences, Beijing 100080, China
Email: {zfyang, yliu, lisujian}@ict.ac.cn
Abstract. Knowledge management has been an important task in large organizations. It is rather costly to convert the huge amounts of unstructured information produced by legacy applications into formally described knowledge. In this paper we present a knowledge management framework based on improved information retrieval systems. By introducing ontologies into the IR domain, the framework can recognize the semantics of concepts and obtain limited reasoning ability. We expect that the framework will provide organizations with a practical and cost-effective way of doing incremental knowledge management.
1 Introduction

Knowledge management has been an important task in large organizations. Traditional term-based, full-text search engines built on information retrieval (IR) technology are not effective for knowledge storage and retrieval, because they cannot exploit the semantic content of documents. Further, the most popular document formats, such as PDF, DOC, TXT, and HTML, are themselves unstructured. Searchers must guess which terms the authors of the information sources may have used. If they fail to find the appropriate terms, the most relevant documents will not be retrieved. In the worst case, they will get many documents that are not relevant at all. For*

* Supported by the National Grand Fundamental Research 973 Program of China under Grants No. G1998030413 and G1998030510, and the Youth Foundation of the Institute of Computing Technology under Grant No. 20016280-9.
Y. Han, S. Tai, and D. Wikarski (Eds.): EDCIS 2002, LNCS 2480, pp. 117-129, 2002. © Springer-Verlag Berlin Heidelberg 2002
novices of a domain, it is rather difficult to find the right domain terms [Deerwester, 1990].

The emergence of XML offers a powerful tool to those who care more about the semantics of documents. The new language helps information providers structure their information manually, so that search engines can learn about parts of the document semantics. However, searchers generally need to know the exact data structure of the information sources, such as element hierarchies and tag names; thus it is difficult to develop large-scale, general-purpose search engines. Furthermore, XML itself does not provide standard semantics specifications; it is only a markup syntax standard (http://www.w3c.org/xml/).

The Artificial Intelligence (AI) community has been researching the knowledge representation and retrieval problems for a long time. Its approach is to define standard facilities to describe human knowledge at any granularity level and to obtain inference ability. The basis of the AI approach is ontologies, i.e., the consensus conceptualization of the real world; the term was initially defined by Tom Gruber [Gruber, 1993]. Once the needed ontologies are established, people can index the existing documents (unstructured knowledge) and convert them to formally represented knowledge. Then searchers can utilize some kind of intelligent agent to browse, modify, and query the knowledge bases. Nicola Guarino gives a discussion of the role of ontologies in information systems [Guarino, 1998]. There have been many formal knowledge representation languages. Many of them are based on XML, such as SHOE, Ontology Exchange Language (XOL), Ontology Markup Language (OML and CKML), Resource Description Framework Schema Language (RDFS), Riboweb, OIL [Fensel, 2001], and DAML+OIL. DAML+OIL combines the best features of SHOE, DAML, OIL, and several other markup approaches. There are also several representation languages based on first-order logic, such as KIF-based Ontolingua, Loom, and Frame-Logic (refer to http://www.semanticweb.org/knowmarkup.html for a survey).

The AI approach is a solid step towards machine-understandable and inferable representation of human knowledge. But presently there are two serious problems: how to construct the ontologies, and how to index the existing unstructured documents. Building ontologies is not as simple a process as it looks. Human knowledge is an extremely huge entity as a whole, and it is growing with acceleration. Extracting concepts and axioms from human knowledge is an arduous job. Consensus on concepts is another problem. Different communities may have different ideas about
concepts. To make things worse, although some semi-automatic tools have been developed, the indexing of most existing unstructured documents needs to be done manually by knowledge experts (Frank et al. give an example of extracting formal knowledge from a structured information source [Frank et al., 1999]). This makes knowledge representation by this approach very costly, if not impractical, at present. Compared with the AI approach, most steps of IR information processing can be performed automatically. IR technology has proved efficient and effective in dealing with huge amounts of unstructured documents. If we add semantic support to existing IR systems, we would get improved legacy systems that are automatic, semantics-aware, and capable of intelligent knowledge management at the document level. For organizations that have accumulated gigabytes of unstructured data (which can be called legacy knowledge), investing in such systems will enable them to begin their knowledge management within a short period of time. Although the improved IR scheme is only a transition to full-fledged knowledge management, it is cost-effective. In the following sections, we present such an improved IR system framework.
2 Ontologies for IR System

As discussed above, the full-text indices created by traditional IR systems are based on terms. From the semantic point of view, a term in a document always represents certain concepts. There is often more than one possible lexical form for a certain concept, and a document author has selected just one of them. If a searcher misses that one word, he will probably miss the document. In terms of IR evaluation, this phenomenon leads to a loss of recall. For large-scale search engines recall is not a serious problem nowadays, because there is so much information on the World Wide Web that searchers can almost always find pages of interest. But in an organizational environment, searching a knowledge base is somewhat like searching a database: the data may not be as redundant as data on the web, so recall is also critical. To improve recall we must append possibly relevant information to the original queries. In other words, we must complement the original queries so that the information needs of users are represented more comprehensively.
Term-based technology has other serious problems as well. First, when a query consists of only simple concepts, the information need sometimes cannot be determined. The recall of such queries may be satisfactory, but the precision will be poor. Second, a given term usually has several semantic meanings. When such a term appears in a query without context, it is difficult to determine which meaning applies. In the IR domain, queries and documents are generally matched literally, i.e., the semantics are discarded. The searcher who submits a certain term definitely has a particular information need, but the search result may include many irrelevant documents in which the submitted term appears with a meaning the searcher did not intend. In terms of IR evaluation this is a loss of precision. Nowadays precision is more important than ever, again because of the abundance of information: searchers would like most of the first 10 to 20 results to be highly relevant. There are two approaches to improving precision. The first is to limit the document collection to the relevant domain so that the search results are always highly relevant; introducing hierarchical category information into the data collection belongs to this approach. The other is to limit the context of the query terms. Generally speaking, the queries that users submit to search systems are quite short; they cannot provide definite context information. We must complement such queries with possible context information to represent users' information needs more accurately. We have introduced the concept of ontologies into traditional IR systems to boost their performance. The architecture is illustrated in Fig. 1. The core of the framework is the ontologies. Domain knowledge is based on the ontologies of certain domains and includes instances of concepts and their properties.
For example, "CEO" is a subclass of "People" in the ontologies of a certain organization, and "Mr. Smith" is the "CEO" of the organization. So "Mr. Smith" is an instance of "CEO", and correspondingly an instance of "People". As a "People", "Mr. Smith" has the properties assigned to "People", such as "Name", "Birthday", etc. When a query is submitted, the system parses it and tries to find concepts and other information defined in the domain knowledge base. Then the system analyzes the information need implied by the query; we call this analysis process concept reasoning. Finally the original query is translated into an appropriate description based on this understanding of the information need. So users of the new system are
allowed to search by concepts and instances, as well as by terms. They need not learn any complex knowledge retrieval language.
[Figure: terms link documents and queries; the inference engine, backed by the ontologies and domain knowledge, maps each query to a description of the information need]

Fig. 1.
Due to the difficulty of concept reasoning, we have not yet implemented a full-fledged inference engine. In Sect. 4 we present an alternative approach, which we found effective and easy to implement. It should be noted that we reason only with the concepts and other information found in queries; we do not try to understand the semantics contained in documents. As discussed above, our goal is to obtain automatic knowledge management capability in a cost-effective way, at the price of some features that the AI approaches provide.
3 Building of Ontologies

There is a substantial literature on ontology building [Uschold, ES 1996][Uschold et al., KER 1996][Jones, 1998]. We have also noticed that some semi-automatic tools have been developed to assist the ontology building process [Martin et al., 2000][Staab et al., 2001], and some general concepts have been defined [Uschold et al., 1998][Zhou, 2000]. We decided that our ontologies and knowledge base must be easy to establish from external data sources (especially databases and semi-structured data sources) and easy to utilize. We choose RDF/RDFS as our knowledge representation language. They are not as powerful as DAML+OIL, but they have enough descriptive ability for our system, which is mainly text-oriented. For evaluation purposes we have built a biography information knowledge base. The first step of the building process is the creation of the ontologies. We set up a simple concept hierarchy with only two levels. The top level is "People", and the second level consists of several roles such as "Scientist" and "Politician". We choose Triple as our inference engine, which accepts RDF and DAML+OIL information as input (http://www.semantic-web.org/inference.html). Its grammar differs from XML: an RDF statement (subject, predicate, object) corresponds to the Triple statement subject[predicate -> object]. The definitions of the namespaces are as follows:

rdf := "http://www.w3.org/1999/02/22-rdf-syntax-ns#".
rdfs := "http://www.w3.org/2000/01/rdf-schema#".
p := "http://www.ict.ac.cn/software/schema/people_schema#".
pi := "http://www.ict.ac.cn/software/rdf/people_instance#".

The definitions of the RDF schema are as follows (abridged):

p:People[rdfs:subClassOf -> rdf:Resource].
p:People[rdfs:label -> "People"].
p:Scientist[rdfs:subClassOf -> p:People].
p:Scientist[rdfs:label -> "Scientist"].

We define several properties for these concepts.
For example, "People" has the properties "Name", "Alias", "Country", "Age", etc. The definitions are as follows (abridged):

p:Name[rdf:type -> rdf:Property].
p:People[rdf:Property -> p:Name].
The next step is the creation of the domain knowledge base. We visited some web sites devoted to who's-who online services and decided to use the data on http://www.whosaliveandwhosdead.com/ and http://www.biography-center.com/. The former provides detailed information on about 1300 personalities, and the latter provides comprehensive biography information. One instance in our knowledge base may be (abridged):

pi:ETELLER[rdf:type -> p:Scientist].
pi:ETELLER[p:Name -> "Edward Teller"].
pi:ETELLER[p:Birthday -> "01/15/1908"].
pi:ETELLER[p:FamousFor -> "physicist; …"].

In this example, biography information about Edward Teller, one of the most famous American physicists, is recorded in the knowledge base.
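A knowledge base of this shape can be mimicked with a tiny in-memory triple store; the sketch below is our own illustration (plain tuples standing in for RDF statements), not the authors' actual tooling:

```python
# Namespaces as defined above.
P = "http://www.ict.ac.cn/software/schema/people_schema#"
PI = "http://www.ict.ac.cn/software/rdf/people_instance#"
RDF_TYPE = "rdf:type"

# A tiny triple store: each entry is a (subject, predicate, object) tuple.
kb = {
    (P + "Scientist", "rdfs:subClassOf", P + "People"),
    (P + "Scientist", "rdfs:label", "Scientist"),
    (PI + "ETELLER", RDF_TYPE, P + "Scientist"),
    (PI + "ETELLER", P + "Name", "Edward Teller"),
    (PI + "ETELLER", P + "Birthday", "01/15/1908"),
}

def instances_of(kb, cls):
    """All subjects typed as cls (no subclass closure in this sketch)."""
    return {s for (s, p, o) in kb if p == RDF_TYPE and o == cls}
```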
4 Translation of Query

The IR system can only accept term-based queries, so we must translate a concept-based query into a term-based one. At present our translation process is simplified and can be regarded as relevance expansion. If we find a concept in the original query, the values of the properties and the names of the instances of that concept are inserted into the query; if an instance name is found in the original query, the other property values of the instance and the property values of its class are inserted into the query. In our knowledge base, the property rdfs:label is used to identify concepts. Resources that do not have an rdfs:label property are regarded as instances or other kinds of resources. The following Triple statement detects the existence of the concept "Scientist":

FORALL X <- X[rdfs:label -> "Scientist"]@rdfschema(persons).

If Triple tells us that X = p:'Scientist', we can conclude that the term "Scientist" represents a concept. Then we can invoke the following Triple statement to find all the instances of "Scientist":

FORALL X,Y <- Y[rdf:type -> X[rdfs:label -> "Scientist"]]@rdfschema(persons).
A property called "Alias" is defined in our ontologies, which plays the role of synonym:

p:Alias[rdf:type -> rdf:Property].
p:People[rdf:Property -> p:Alias].

For a concept, "Alias" records synonyms of its name. For an instance, especially an instance in our experimental knowledge base, it records aliases of the instance. In both cases this property helps to improve retrieval results when its value is inserted into the original query. The following statement determines whether the term "politico" is a concept alias:

FORALL X <- X[p:Alias -> "politico"; rdfs:subClassOf -> p:People]@rdfschema(persons).

If we find that a term is neither a concept nor an alias of any concept, we can determine whether it is an instance by the following statement:

FORALL X,Y <- X[Y -> "AnyName"]@rdfschema(persons).

The last step is to determine whether a term is an instance alias:

FORALL X <- X[p:Alias -> "AnyName"; rdf:type -> p:People]@rdfschema(persons).

Finally, we obtain a term-based query by inserting the properties of the recognized concepts and instances into the original query. Note that "Name" and "Alias" are general properties, and the top-level concept "People" can be replaced by any other top-level concept, so the application of our translation process is not limited to our experiments. One query in our experimental runs reads: What about alexander graham bell? The standard description of the query requires that a relevant document must list an invention by Alexander Graham Bell and provide the date or approximate date of the invention or patent (http://trec.nist.gov/data/topics_eng/topics.501-550.txt). Obviously we cannot get much requirement information from the original query. A document that contains Bell's name may refer only to Bell's parents, education, and so on; such documents are certainly irrelevant.
Through this translation, the original query is enhanced with context information, especially the value of the property "p:FamousFor", which is "Inventor of the first telephone; reluctance microphone patent" in our knowledge base. When a query contains such definite information, the retrieval result will be satisfactory.
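The relevance expansion described in this section can be sketched as follows. The data shapes are our own illustration of the concept/instance lookup, not the system's actual structures:

```python
def expand_query(query_terms, concepts, instances):
    """Relevance expansion: a sketch of the translation described above.

    concepts:  {concept name: {"instances": [names], "properties": [values]}}
    instances: {instance name: {"class": concept name, "properties": [values]}}
    """
    expanded = list(query_terms)
    for t in query_terms:
        if t in concepts:
            # Concept found: insert its property values and instance names.
            expanded += concepts[t]["properties"] + concepts[t]["instances"]
        elif t in instances:
            # Instance found: insert its other property values and those
            # of its class.
            inst = instances[t]
            expanded += inst["properties"] + concepts[inst["class"]]["properties"]
    # A term already present is not inserted again (see the discussion of
    # term weighting below).
    seen, out = set(), []
    for t in expanded:
        if t not in seen:
            seen.add(t)
            out.append(t)
    return out
```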
So far we have not discussed the weights of the terms in the generated query. The basic idea is that the weight of a term should be determined by the information need implied by the original query. When we can safely deduce the information need from the original query, it is practical to assign different weights to the terms. At present we assign the same weight to every term. There is another reason we weight terms this way: as the above example shows, a user query sometimes lacks the critical context information by which alone we could determine the information need. For such queries we have to give terms from different information "aspects" the same weight, since it is not possible for us to ask users what they really want.
5 Results and Evaluation

Our system is based on an improved version of the SMART information retrieval system and runs on a Linux PC. The traditional TREC evaluation method is used to evaluate the system (http://trec.nist.gov/). It is not perfect for our evaluation purposes, but it has an obvious advantage: it provides a relatively objective and standard criterion of relevance. We choose WT10g (http://www.ted.cmis.csiro.au/TRECWeb/access_to_data.html), the standard data collection of TREC 10, as our test collection. We have analyzed the 50 topics (namely, queries) of the ad hoc task in TREC 10 and classified them into several categories. One of these categories concerns biography information of personalities, which can be seen as a special application domain. There are seven topics in this category. One of them is the topic about Bell, whose full description is as follows:

Number: 515
Title: what about alexander graham bell
Description: What did Alexander Graham Bell invent?
Narrative: A relevant document will list an invention by Alexander Graham Bell. The document must provide the date or approximate date of the invention or patent to be relevant.
The title field is the only information available for composing queries. The description and narrative fields are guidelines for the assessors who judge whether a retrieved document is relevant. The title field is extracted from the topic descriptions and directly submitted to the IR system. The results of this run are listed in the "Original" columns of Table 1. The original queries are then translated, and the generated queries are submitted to the same system; the results of this run are listed in the "Translated" columns of Table 1.

Table 1. The "Cmp" columns show the improvements
Topic     Recall@1000                        Average Precision
          Original  Translated  Cmp         Original  Translated  Cmp
1         0.5000    0.9583      +91.66%     0.0833    0.2223      +166.87%
2         0.8974    0.9231      +2.86%      0.4925    0.5040      +2.34%
3         0.6829    0.9024      +32.14%     0.0827    0.1407      +70.13%
4         0.1183    0.4624      +290.87%    0.0052    0.1195      +2198.08%
5         0.2500    0.2500      +0%         0.0063    0.0063      +0%
6         0.8966    0.8966      +0%         0.1021    0.1228      +20.27%
7         0.7188    0.6563      -8.70%      0.1725    0.1563      -9.39%
Average   0.5150    0.7068      +37.24%     0.1350    0.1817      +34.59%
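The two measures reported in Table 1 have standard TREC definitions and can be computed from a ranked result list as follows (our own sketch, independent of the SMART system):

```python
def recall_at_k(ranked, relevant, k=1000):
    """Fraction of the relevant set retrieved within the top k documents."""
    retrieved = set(ranked[:k]) & set(relevant)
    return len(retrieved) / len(relevant)

def average_precision(ranked, relevant):
    """Non-interpolated average precision: the sum of precision values at
    each rank where a relevant document appears, divided by the total
    number of relevant documents."""
    hits, total = 0, 0.0
    for i, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            total += hits / i
    return total / len(relevant)
```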
Recall at the 1000th document and Average Precision are listed in Table 1. The average improvement in Average Precision is 34.59%, and that in Recall at the 1000th document is 37.24%. Topic No. 5 is about artists and No. 6 about authors, but there is little information about them in our knowledge base; therefore the results of these two translated topics show little or no improvement. The result of the last topic, which is about Peter the Great, degraded after translation. We find that the terms for this topic in our knowledge base differ from those used by the authors of the relevant documents. There are two approaches to this problem. The first is to refine our knowledge base to reflect the actual documents, but this is not a stable approach. The other is to build ontologies and knowledge bases for the relevant domains, so that the properties of resources in our biography knowledge base can themselves be represented in the knowledge representation language. For example, we can describe the property "FamousFor" of Bell by the following statements.

// ip := "http://www.ict.ac.cn/software/schema/intell_property#"
// ips := "http://www.ict.ac.cn/software/schema/ip_schema#"
// obj := "http://www.ict.ac.cn/software/schema/object#"
Applying Information Retrieval Technology to Incremental Knowledge Management
127
<p:FamousFor rdf:resource="&ip;BELL" />

Once the semantics of properties can be recognized by the system, the problem arising from topic 7 is easily handled. This is also a step towards our goal: concept reasoning. The evolution of the system is incremental; that is, the more knowledge it has, the more satisfactory the effect of concept reasoning becomes. There is another problem associated with the system. It is based on the vector space model (VSM), and we utilize pivoted document length normalization for term weighting [Singhal et al., 1996]. If new terms are inserted into the original query, it is in theory possible to retrieve irrelevant documents that are relevant only to some of the new terms. We believe that a relevant document should contain at least most of the original query terms and some new terms, so we try to ensure that the weights of the original terms are not "diluted" by translation. Two factors are related to Singhal's term weighting: the average term frequency in the query, and the query length. We keep the average term frequency constant after translation: if a term is already in the query, it is not inserted again. To avoid the impact of the increased query length, we decrease the coefficient of document length (called the slope in the pivoted document length normalization formula). The retrieval experiments confirm that these considerations are reasonable.
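The expansion policy described above can be sketched in a few lines. This is a hedged illustration only; the function names and the simplified normalization factor are ours, not taken from the system described in the paper:

```python
def expand_query(query_terms, knowledge_base_terms):
    """Add knowledge-base terms to a query without re-inserting terms
    that are already present, so the average term frequency of the
    query stays constant after translation."""
    expanded = list(query_terms)
    seen = set(query_terms)
    for term in knowledge_base_terms:
        if term not in seen:       # never insert a duplicate term
            expanded.append(term)
            seen.add(term)
    return expanded

def pivoted_norm(doc_len, avg_doc_len, slope):
    """Pivoted document length normalization factor (after Singhal et
    al., 1996). A smaller slope weakens the influence of document
    length, compensating for the longer translated queries."""
    return (1.0 - slope) + slope * (doc_len / avg_doc_len)
```

With a smaller slope, long documents are penalized less steeply, which is the knob the authors turn after expansion increases query length.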
6 Conclusion

As we have stated, there is a large amount of legacy knowledge (unstructured documents and data) in organizations that have been applying information technology for a long time. This legacy is an asset for organizations, but it is too difficult to manage in a fully semantic way. By introducing ontologies into an IR system, we present a practical scheme for knowledge management, although it cannot recognize the semantics of document contents. With the accumulation of ontologies and domain knowledge, the performance of the IR system
128
Zhifeng Yang, Yue Liu, and Sujian Li
will improve. However, we believe that substantial performance improvements will mainly result from the semantic understanding of information needs. Our future work will focus on concept reasoning.
References

A. Abecker, A. Bernardi, K. Hinkelmann, O. Kuhn, and M. Sintek: Toward a Technology for Organizational Memories. IEEE Intelligent Systems, June 1998.
V. R. Benjamins, D. Fensel, and A. Gomez Perez: Knowledge Management Through Ontologies. Proceedings of the Second International Conference on Practical Aspects of Knowledge Management (PAKM'98).
U. M. Borghoff and R. Pareschi: Information Technology for Knowledge Management. Springer, 1998.
S. Deerwester, S. T. Dumais, G. W. Furnas, T. K. Landauer, and R. Harshman: Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391-407, 1990.
T. Erickson and W. A. Kellogg: Knowledge Communities: Online Environments for Supporting Knowledge Management and its Social Context. In: Beyond Knowledge Management: Sharing Expertise (M. Ackerman, V. Pipek, and V. Wulf, eds.). MIT Press, Cambridge, MA, in press, 2001.
D. Fensel, F. van Harmelen, I. Horrocks, D. L. McGuinness, and P. F. Patel-Schneider: OIL: An ontology infrastructure for the semantic web. IEEE Intelligent Systems, 16(2):38-44, 2001.
G. Frank, A. Farquhar, and R. Fikes: Building a large knowledge base from a structured source. IEEE Intelligent Systems, 14(1):47-54, 1999.
B. R. Gaines, D. H. Norrie, A. Z. Lapsley, and M. L. G. Shaw: Knowledge Management for Distributed Enterprises. Proc. KAW-96, Banff, Canada, 1996.
T. R. Gruber: A translation approach to portable ontologies. Knowledge Acquisition, 5(2):199-220, 1993.
N. Guarino: Formal Ontology and Information Systems. In N. Guarino (ed.), Proceedings of the 1st International Conference on Formal Ontologies in Information Systems (FOIS'98), Trento, Italy, pages 3-15. IOS Press, June 1998.
D. M. Jones, T. J. M. Bench-Capon, and P. R. S. Visser: Methodologies for Ontology Development. In Proc. IT&KNOWS Conference, XV IFIP World Computer Congress, Budapest, August 1998.
D. Koller and M. Sahami: Hierarchically classifying documents using very few words. In International Conference on Machine Learning, volume 14. Morgan Kaufmann, July 1997.
D. E. O'Leary: Enterprise knowledge management. IEEE Computer, 31(3):54-61, 1998.
P. Martin and P. Eklund: Knowledge Retrieval and the World Wide Web. IEEE Intelligent Systems, special issue on Knowledge Management and the Internet, pages 18-25, May-June 2000.
S. E. Robertson, S. Walker, M. M. Beaulieu, M. Gatford, and A. Payne: Okapi at TREC-4. The Fourth Text REtrieval Conference (TREC-4), NIST Special Publication 500-236, National Institute of Standards and Technology, Gaithersburg, MD, pages 73-86, October 1996.
J. J. Rocchio: Relevance feedback in information retrieval. In The SMART Retrieval System: Experiments in Automatic Document Processing, pages 313-323. Prentice Hall, 1971.
G. Salton and C. Buckley: Improving retrieval performance by relevance feedback. Journal of the American Society for Information Science, 41:288-297, 1990.
A. Singhal, C. Buckley, and M. Mitra: Pivoted document length normalization. In Proceedings of the 19th ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'96), pages 21-29, 1996.
K. Sparck Jones: Search term relevance weighting given little relevance information. Journal of Documentation, 35(1):30-48, 1979.
S. Staab and A. Maedche: Knowledge portals - ontologies at work. AI Magazine, 21(2), Summer 2001.
D. Stenmark: Information vs. Knowledge: The Role of Intranets in Knowledge Management. Proceedings of the 35th Hawaii International Conference on System Sciences, 2002.
M. Uschold: Building Ontologies: Towards a Unified Methodology. Proceedings of Expert Systems '96, the 16th Annual Conference of the British Computer Society Specialist Group on Expert Systems.
M. Uschold and M. Gruninger: Ontologies: Principles, Methods and Applications. Knowledge Engineering Review, 11(2), 1996.
M. Uschold, M. King, S. Moralee, and Y. Zorgios: The Enterprise Ontology. The Knowledge Engineering Review, Vol. 13, Special Issue on Putting Ontologies to Use, 1998.
P. Willett: Recent trends in hierarchic document clustering: A critical review. Information Processing & Management, 24(5):577-597, 1988.
Visualizing a Dynamic Knowledge Map Using Semantic Web Technology

Hong-Gee Kim1, Christian Fillies2, Bob Smith3, and Dietmar Wikarski4

1 Dankook Univ., Korea, [email protected]
2 Semtation GmbH, Schwabach, Germany, [email protected]
3 Cal. State University, USA, [email protected]
4 University of Applied Sciences Brandenburg, Germany, [email protected]
Abstract. Visual knowledge maps1 are being used to improve the communication processes within global organizations. Knowledge maps are graphical presentations of ontological knowledge as well as of business processes. Especially for enterprises working in a multi-cultural space, the explicit formalization of knowledge and business rules using graphical models seems to be a very promising approach for improving discussion and learning processes. Publishing and automatic inference or search techniques are becoming available thanks to the latest Semantic Web standards worked out by the W3C. This article gives an impression of how to create end-user interfaces for the "Corporate Knowledge Base" using MS Office and Visio with the modeling tool SemTalk. Several problems in capturing and maintaining large-scale knowledge bases are discussed. Specific attention is given to the problem of weighting and association of information from orthogonal ontologies, which arises when the same concepts are used in different graphical scenarios.
1 Graphical Representation of a Knowledge Map

Knowledge management in an organization is tightly connected with the ability to create business value and to generate a competitive advantage. However, knowledge is invisible by nature, so managing it is very difficult. Tacit knowledge embodied in the experiences of organizational members is easily lost unless it is transformed into a usable form. Knowledge mapping provides a framework for visualizing knowledge so that it can easily be examined, refined, and shared by non-expert knowledge users. A knowledge map can also be used as an interactive tool that links different conceptualizations of the world.
1 This research was partially supported by the Brain Science and Engineering Research Program sponsored by the Korean Ministry of Science and Technology, and was also conducted with the research fund of Dankook University in 2001.
Y. Han, S. Tai, and D. Wikarski (Eds.): EDCIS 2002, LNCS 2480, pp. 130-140, 2002 © Springer-Verlag Berlin Heidelberg 2002
Since every person has his or her own concept of knowledge in terms of its form and content, meaningful communication is difficult, especially when the number of communicating actors is large. There is a need to develop a common way of constructing and maintaining knowledge in a visual form [8]. Many methods of knowledge representation have already been developed for AI applications. Most of these techniques are machine-oriented and target specific systems. In contrast, the application discussed in this paper provides a user-friendly method for developing a knowledge map that helps knowledge users to visualize their implicit ontologies2 and workflows. Knowledge about the same object is represented differently depending on context. Since the visualizing tool proposed in this paper suits the dynamic nature of a knowledge map, it helps people to modify and to combine ontologies across domains. SemTalk, using Semantic Web technology, equips the user with a method for knowledge representation that is not only machine "understandable" but also human readable, because it includes both graphical and textual forms of information. Semantic nets are a powerful diagrammatic knowledge representation technique. Figure 1 is an example of a knowledge map represented as a semantic net. A knowledge map represents meaningful relationships between concepts in the form of propositions. A proposition represented in a knowledge map consists of two or more concepts linked by relational labels to form a semantic unit.
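A knowledge map of this kind can be held as a plain set of propositions. The sketch below uses concept and relation names taken from Figure 1; the helper function is illustrative and not part of any tool described here:

```python
# A knowledge map as a list of propositions: (concept, relation, concept)
# triples, in the spirit of Figure 1.
knowledge_map = [
    ("Plants", "have", "Roots"),
    ("Plants", "have", "Stems"),
    ("Plants", "have", "Leaves"),
    ("Roots", "absorb", "Water"),
    ("Leaves", "produce", "Oxygen"),
]

def related(concept, kmap):
    """All (relation, target) pairs leaving a given concept."""
    return [(r, o) for (s, r, o) in kmap if s == concept]
```

Each triple is one proposition ("Plants have Roots"); traversing the set outward from a concept reproduces the semantic-net view.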
2 Context Dependency of a Knowledge Map

As already pointed out, people conceptualize their world differently. Accordingly, a knowledge map of the same object may contain different contents and structures depending on the contexts for which it is generated. For example, a scientist usually holds the view that electrical 'current' is a kind of "constraint-based event", but in some contexts can share with others the naïve view that it is a material substance. We can have multiple views of a single concept depending on context [10]. As real-world objects have huge numbers of properties, there are many ways of conceptualizing a given object, each serving a particular goal. The concept 'car' may contain different information for a car dealer, a manufacturer, a driver, and a cartoonist. We tend to conceptualize an object as having a certain set of properties in the context of the kind of activity involved. For example, there are explanatory networks for a car's fuel systems, known only to engineers, that consist of many mechanically defined terms. A cartoonist could likewise have clusters of terms for the shapes and motions of cars.
2 We are using the term "Ontology" here in the sense explained by Tom Gruber: "In the context of knowledge sharing, I use the term ontology to mean a specification of a conceptualization. That is, an ontology is a description (like a formal specification of a program) of the concepts and relationships that can exist for an agent or a community of agents. This definition is consistent with the usage of ontology as set-of-concept-definitions, but more general. And it is certainly a different sense of the word than its use in philosophy." [9]
[Fig. 1. A knowledge map of plants: concepts such as Plants, Roots, Stems, Leaves, Flowers, Seeds, Water, Minerals, Soil, and Oxygen, linked by labeled relations (have, support, absorb, produce, transform, in).]

[Fig. 2. Ontology merge: concepts of Ontology A and Ontology B linked by cross-ontology relations.]
An object can be conceptualized and organized differently into ontologies, yet some information can be shared across these ontologies when it is needed [11]. For example, although the cartoonist's ontology of 'car' does not contain any mechanical information about a car, such information sometimes needs to be accessible to the cartoonist in a certain situation. Figure 2 depicts how two knowledge maps are merged by means of cross-ontological relationships. Further information about the merging of ontologies can be found in [4].
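If each ontology is held as a set of triples, the merge of Figure 2 can be approximated very simply. The namespace prefixes, concept names, and relation names below are invented for illustration only:

```python
def merge_ontologies(onto_a, onto_b, cross_links):
    """Merge two ontologies (sets of (subject, relation, object)
    triples) by taking their union and adding explicit cross-ontology
    relations, in the spirit of Figure 2. A sketch only: real merging
    (see [4]) must also discover which concepts correspond."""
    return set(onto_a) | set(onto_b) | set(cross_links)

# Two views of 'car' from the running example (names are illustrative).
cartoonist = {("cartoonist:Car", "has_shape", "Rounded")}
engineer = {("engineer:Car", "has_part", "FuelSystem")}
merged = merge_ontologies(
    cartoonist, engineer,
    [("cartoonist:Car", "same_concept_as", "engineer:Car")],
)
```

The cross-ontology link is what lets the cartoonist's model reach the engineer's mechanical information when a situation requires it.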
3 Knowledge Structure Represented in a Knowledge Map

Knowledge maps are defined as "graphical representations of the connections made by the brain in the process of understanding facts about something" [7]. While the static side of knowledge mapping represents the connections between properties identified during conceptualization, the dynamic side stands for the process of inferring values of those properties in problem-solving or decision-making contexts. Concepts can be transferred from one domain to another, either directly or by analogy. For example, the physical notion of transient state can be transferred into the domain of certain business management problems. The dynamic aspect of knowledge mapping is used to improve the communication processes within global organizations. Especially for enterprises working in a multi-cultural space, the explicit formalization of knowledge and business rules using knowledge maps seems to be a very promising approach for improving communication and learning processes. The actual knowledge does not have a static structure but is dynamically constructed by identifying and indexing pieces of information or knowledge components depending on context. Figure 3 describes how knowledge is represented in a knowledge map that shows the hierarchical structure of knowledge. Understanding is not just knowing an item of knowledge, but knowing how the supporting knowledge relates to each higher knowledge item [8]. A measure of importance represents how important each supporting piece of knowledge is to the higher one. The weighting of the association between knowledge components can vary depending on context. In the actual use of a knowledge map, granularity is applied flexibly, in the sense that a certain knowledge component, in this example 'cooking chicken', may consist of a deeper-level knowledge map with a greater granularity.
4 A Real World Use Case

Large and successful organizations today can afford to invest resources in formalizing corporate ontologies, but medium-sized organizations can seldom afford the time and resources to effectively execute these important projects. Our use case is a 105-year-old engineering and chemical testing lab employing over 100 people. The engineering and construction professionals have conflicts with the chemistry lab professionals. The IT department is very small and under-funded. They use WinWord and Excel on a network, and have used Visio for planning and training purposes in the past. SemTalk now offers a low-cost approach to visualizing
[Fig. 3. Knowledge structure about cooking chicken: knowledge items such as 'obtaining chicken', 'choosing fresh chicken', 'making taste for chicken', 'how chicken changes through heat', 'cooking chicken for kids', 'cooking chicken for people with diabetes', 'medical knowledge of diabetes', 'nutrition of chicken', and 'decorating food', linked to higher items by importance weights (2-7).]
workflows and their implicit ontologies, based on the W3C notation RDFS, using Visio shapes that are relevant to their business problems. An explicit examination of each group's ontology (and specialized jargon) significantly enhances the CEO/owners' ability to balance resources within the organization more effectively. Applying an importance measure to the specific issues found in both organizational sub-structures helps to make communication deficits explicit. As a direct result, a 20% increase in profit is expected 6 months after project completion.
5 SemTalk

SemTalk is being used in ontology projects to help people agree on a common language. As described in Fillies et al. [5], such graphical ontologies may be used in several ways, for example for terminology control for technical writers. Ontologies represented in an application-independent, XML-based format are an important building block for any knowledge management system, for business process modeling, and for the consistent definition of large projects, e.g. using MS Project. Ontology-based business process models can be maintained, translated, and reused with significantly less effort than conventional process models. This applies especially to process models describing web services.
[Fig. 4. Knowledge structure for the chemistry testing lab: semantic differences between key department managers. The Analytical Chemistry Dept. weights Document Controls (5), Procurement (3), Component Cost (4), and Processing Times (2); the Civil Engineering Dept. weights Scheduling Issues (2), Project Profitability (3), Component Cost (5), and Procurement (4).]
5.1 Architecture of SemTalk

SemTalk is integrated into MS Office. It has a Visio-based graphical user interface, which makes it easy to use for a broad range of users. Using Office XP SmartTag technology, Semantic Web glossaries can be used from all MS Office applications to look up words in an ontology or process model. SemTalk works on an RDF(S)-like XML data structure, to which diagramming information and object-oriented features such as methods and states have been added. SemTalk also has a structure optimized for basic inferences such as inheritance and graph traversals. An object engine provides a COM API so that the engine can be used within MS Office products. For the graphical presentation of models we have used MS Visio for two reasons: (i) the tool is widely used in industry, so people are used to it, and (ii) it is easily extensible through an API.
[Fig. 5. Architecture of SemTalk: MS Visio (method meta model) and MS Word (SmartTags) on top of the SemTalk object engine, which reads and writes RDFS flat files and RDFS web services.]
The SemTalk object engine is used to define semantics, in other words a meta model, for existing Visio shapes. You can graphically define which shapes are allowed to be connected with each other. SemTalk supplies an infrastructure for defining complete modeling methods inside Visio. Such methods have been created, e.g., for DAML [3], for Enterprise Resource Planning (ERP), for product modeling, and for Business Process Modeling (BPM). SemTalk has a couple of interfaces to CASE tools such as Rational Rose and to BPM tools. There is also a report generator that creates HTML tables using XSL for formatting.

5.2 Notation for Semantic Nets

With respect to the very broad audience, we want people to be able to read our models without learning a notation. We have had the best experience with the very simple bubble notation shown in some of the pictures below. It is important to label most of the links and not to use graphical encodings known from graphical languages such as Entity Relationship diagrams.
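As a sketch of such a graphical method definition, the permitted shape connections can be captured in a tiny meta model. The shape type names below are invented for illustration and are not SemTalk's actual method definitions:

```python
# A minimal diagramming meta model: which shape (class) types may be
# connected by an edge. Type names are hypothetical examples.
ALLOWED_CONNECTIONS = {
    ("Process", "Event"),
    ("Event", "Process"),
    ("Process", "Resource"),
}

def may_connect(source_type, target_type):
    """True if the meta model permits an edge from source to target."""
    return (source_type, target_type) in ALLOWED_CONNECTIONS
```

A modeling tool consults such a table whenever the user draws a connector, rejecting edges the method does not allow.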
For readers with a technical background, more complex notations with various shape types can be used. Examples are the DAML notation and, e.g., a user interface for a product configuration engine. One of the great advantages of using Visio is that it contains a large collection of predefined and extendable shapes. The shapes correspond quite naturally to classes. Using pictures improves the acceptance of the models, which is an important success factor in knowledge management.

5.3 Referencing External Knowledge Bases

WordNet®, which was developed by the Cognitive Science Laboratory at Princeton University under the direction of Professor George A. Miller, is a huge online lexical reference system whose design is inspired by current psycholinguistic theories of human lexical memory. English nouns, verbs, adjectives, and adverbs are organized into synonym sets, each representing one underlying lexical concept. Different relations link the synonym sets. SemTalk uses WordNet via Dan Brickley's RDF(S) web service for WordNet 1.6 [2][13]. Any SemTalk object in the namespace http://xmlns.com/wordnet/1.6 can be looked up at this RDFS WordNet server; its definition is imported, and subclasses and superclasses can be imported via right-click "Expand" and the button "External".
[Fig. 6. A small vehicle model built from WordNet: Vehicle ("a conveyance that transports people or objects") with subclasses such as Automotive_vehicle ("a self-propelled wheeled vehicle that does not run on rails"), Auto ("4-wheeled motor vehicle; usually propelled by an internal combustion engine"), Tractor, and Doodlebug.]
The models are built incrementally against external model repositories. Once you have used a class name in a model, you can look for related objects in external repositories and integrate them into your model (Figure 6). Using an external glossary basically ensures that people are talking about the same thing, with a well-defined Uniform Resource Name (URN) to identify objects and related hyperlinks to access their definitions. The other benefit users gain from such ontologies is that they get hints for related objects or subclasses to use in the model.
The objects remember their origin and can be refreshed (or replicated) from their external data source once the source has changed. In a very similar way, you can link one class to another class living in an external model that was created with SemTalk and published on a web server. This technology results in a web of hyperlinked models based on RDF(S) as a common standard.
6 Weighted Knowledge Maps in SemTalk

Cross-ontology integration is a very common problem, which arises as soon as multiple organizational units, such as different departments within one company, are involved. It becomes very important once business partners with an inhomogeneous cultural background and different communication strategies are forced to solve real-world problems together. This is in particular the case when corporations from Asia move to western markets or vice versa. Abstract and graphical models of knowledge and business processes have been used from the very first days of mankind to ease communication. SemTalk is a modeling tool designed to create knowledge structures in the Semantic Web format RDFS (Resource Description Framework Schema) [1]. The Semantic Web is a kind of distributed worldwide knowledge model. One of its basic ideas is to denote concepts of discourse by URNs. Once a group of users has agreed that they are talking about the same topic, they can refer to it from their specific application models by a public URN. This technique is used to disambiguate words by explicitly mentioning homonyms and assigning synonyms to concepts. Beyond URNs and synonyms, SemTalk relies on the manual clustering of information into diagrams, contexts, or scenarios.
[Fig. 7. Using line width to visualize importance: associations of the Analytical Chemistry Dept. to Document Controls (importance 5), Procurement (2), Processing Times (3), and Component Costs (4), drawn with line widths reflecting the importance factors.]
Each object and a subset of its associations can participate in multiple scenarios. The technique of weighting the importance of an association in a specific context first of all offers additional information. For larger projects, the importance factor of associations helps to reuse an object in contexts built by different people, because a statement made in one context may be less important in other contexts. A very simple but effective way to visualize importance is to use graphical properties such as line width or node size to emphasize specific aspects of the scenario. Adding weighting and importance factors to the RDFS class model was possible for two reasons: (1) RDFS is based on XML, and (2) from the tool builder's point of view, SemTalk has an open meta model, which allows the extension of associations (in RDF terms: "Properties") and regards them as first-class objects.
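One possible realization of the line-width visualization is a direct mapping from importance factor to drawing width. The linear scaling rule below is an assumption for illustration, not SemTalk's actual mapping:

```python
def line_width(importance, base=0.5):
    """Map an association's importance factor to a drawing line width,
    in the spirit of Figure 7. The linear rule (width = base *
    importance) is a hypothetical choice; any monotone mapping works."""
    return base * importance
```

With base = 0.5, the importance-5 association of Figure 7 is drawn five times as thick as it would be at importance 1, making the dominant issues visible at a glance.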
7 Conclusion and Future Research

In this paper we have shown how to apply dynamic knowledge maps to semantic nets, mainly for the purpose of improving communication and understanding between human readers of models. In our experience, the knowledge mapping tool SemTalk has shown itself to be more flexible and less constrained than semantic network systems, in the sense that any graphical form of knowledge representation can be modeled, including UML and Conceptual Graphs. The Semantic Web also opens great perspectives for the communication of programs and machines. Interpreting process descriptions by workflow engines, or executing processes with MS Project given a (fuzzy) measure of "importance", is an interesting issue that remains to be investigated.
References

1. Berners-Lee, T., Hendler, J., and Lassila, O.: "The Semantic Web", Scientific American, May 2001
2. Brickley, D.: RDF(S) web service for WordNet 1.6, cf. http://xmlns.com/2001/08/wordnet/
3. DARPA Agent Markup Language (DAML): cf. http://www.daml.org
4. Doan, A., Madhavan, J., Domingos, P., Halevy, A.: "Learning to Map between Ontologies on the Semantic Web", WWW2002, May 7-11, 2002, Honolulu, Hawaii, USA. ACM 1-58113-449-5/02/0005
5. Fillies, C., Wood-Albrecht, G., Weichhardt, F.: A Pragmatic Application of the Semantic Web Using SemTalk, WWW2002, May 7-11, 2002, Honolulu, Hawaii, USA. ACM 1-58113-449-5/02/0005; see also http://www.semtalk.com
6. Flynn, J.: DAML Visio Shapes, cf. http://www.daml.org/visiodaml/
7. Gomez, A., Moreno, A., Pazos, J., Sierra-Alonso, A.: Knowledge maps: An essential technique for conceptualization. Data & Knowledge Engineering 33 (2000) 169-190
8. Gordon, J.L.: Creating knowledge maps by exploiting dependent relationships. Knowledge-Based Systems 13 (2000) 71-79
9. Gruber, T.: http://www-ksl.stanford.edu/kst/what-is-an-ontology.html (2001)
10. Kim, H.-G.: "A Psychologically Plausible Logical Model of Conceptualization." Minds and Machines 7 (1997) 249-267
11. Kim, H.-G.: "Formalizing Perspectival Defeasible Reasoning." Proceedings of the 30th Hawaii International Conference on System Science, Vol. V (1997) 347-353
12. W3C: RDF Schema Specification. http://www.w3.org/TR/PR-rdf-schema/, 1999; O. Lassila and R. Swick: Resource Description Framework (RDF) Model and Syntax Specification. Technical report, W3C Recommendation, 1999, http://www.w3.org/TR/REC-rdf-syntax
13. WordNet: cf. http://www.cogsci.princeton.edu/~wn/
Indexing and Retrieval of XML-Encoded Structured Documents in Dynamic Environment

Sung Wan Kim1, Jaeho Lee2, and Hae Chull Lim1

1 Dept. of Computer Engineering, Hong Ik University, Sangsu-Dong 72-1, Mapo-Gu, Seoul, Korea, {swkim, lim}@cs.hongik.ac.kr
2 Dept. of Computer Education, Inchon National University of Education, Gyesan-Dong San 59-12, Gyeyang-Gu, Inchon, Korea, [email protected]
Abstract. In order to retrieve structured documents efficiently, much research has been done on designing indexing techniques that support fast, direct access to arbitrary elements as well as to whole documents. At the same time, a fast and efficient indexing technique supporting the dynamic update of structured documents in business domains is required. In this paper, we propose an inverted index structure that quickly supports dynamic updates, including both structure and content updates. In the proposed index structure, in addition to a horizontal term-based index as in a general inverted file structure, we add a vertical index that uses the element identifier as its key. Using this dual index structure, it is possible to support fast and efficient updates of parts of a document as well as of whole documents, while dramatically reducing re-indexing space and time.
1 Introduction

Traditional information retrieval systems, especially text retrieval systems, aim to access and retrieve whole text documents efficiently. Usually these systems are based on content retrieval using simple keywords (or terms). In the case of structured documents, however, such as books and journal articles, it is possible to access and retrieve arbitrary elements instead of whole documents. Structured documents include a logical structure, such as chapter, section, and paragraph, within themselves; using this logical structure, structure-based retrieval can be employed. In order to improve the efficiency of structured document retrieval, indexing techniques based on the inverted list index or the signature file, which have been used broadly in traditional information retrieval, have been proposed [1][2][3]. However, these index structures are adequate for static environments. Hence, when we use them in business domains such as inventory management, where documents need frequent updates, the whole index or large parts of it must be re-indexed. On the other hand, XML (eXtensible Markup Language) is considered a standard representation and exchange format for data over the web. The logical structure of XML-encoded structured documents can be defined by a DTD (Document Type Definition) or XML Schema. With the growth of XML-encoded structured documents, an index structure for their efficient retrieval should be developed. In this paper, we aim to develop a fast and efficient indexing and retrieval technique based on the inverted list index for XML-encoded structured documents. In particular, we focus on supporting both structure and content updates. To achieve this goal, in addition to the term-based horizontal index, we add a vertical index based on the element identifier. Using this dual inverted list index structure, it is possible to support fast and efficient updates of parts of a document as well as of whole documents, while dramatically reducing re-indexing space and time. The paper is organized as follows. Related work is described in Section 2. In Section 3, we present the index organization. Updating and retrieval are described in Sections 4 and 5, respectively. An implementation and experiments for the proposed index structure are presented in Section 6. Finally, we present conclusions and future work in Section 7.

Y. Han, S. Tai, and D. Wikarski (Eds.): EDCIS 2002, LNCS 2480, pp. 141-154, 2002
© Springer-Verlag Berlin Heidelberg 2002
2 Related Work

In this section, we describe retrieval types for structured documents and previous work on indexing structured documents.

2.1 Retrieval Types for Structured Documents

In [1][6], retrieval types for structured documents are classified into two major categories, namely content-based and structure-based retrieval. First, content-based retrieval is further classified into boolean and ranking retrieval. Boolean retrieval can be viewed as exact-match querying, and ranking retrieval as relevance querying; usually, term frequencies are used for ranking. Second, the logical structure is used for structure-based retrieval, and instead of the whole document, arbitrary elements can be the targets of a query.

2.2 Indexing for Structured Documents

Lee et al. [1] proposed five index structures based on the inverted list index for structured documents. They represent a structured document as a complete k-ary tree and call it a 'document tree'. In the document tree, 'k' is the maximum out-degree of the nodes. They assume that text contents are in leaf nodes only; the internal nodes represent only the structural relationships between elements. In order to access arbitrary elements in a document, a unique identifier must be assigned to each element. They assign each element node a UID (unique identifier) by level-order traversal. The parent UID of a node whose UID is 'i' can then be computed by a simple formula:

    parent(i) = floor((i-2)/k) + 1 .    (1)
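Formula (1) can be illustrated with a short sketch (the function name and the sample tree are ours, not from the paper): in a complete k-ary tree with level-order UIDs starting at 1, the children of node p are k*(p-1)+2 through k*p+1, and inverting that relation yields the parent formula.

```python
def parent(i: int, k: int) -> int:
    """Parent UID of node i in a complete k-ary document tree
    whose UIDs are assigned by level-order traversal (root = 1)."""
    return (i - 2) // k + 1

# In a binary (k = 2) document tree, nodes 4 and 5 are children of node 2:
assert parent(4, 2) == 2 and parent(5, 2) == 2

# Climbing from a leaf to the root visits each ancestor in turn:
uid, path = 7, []
while True:
    path.append(uid)
    if uid == 1:
        break
    uid = parent(uid, 2)
print(path)  # [7, 3, 1]
```

This constant-time parent computation is exactly what makes the complete-tree assumption attractive for bottom-up query evaluation, and what is lost once the tree is no longer complete (Section 3).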
Indexing and Retrieval of XML-Encoded Structured Documents
143
In order to directly access arbitrary elements, the terms at leaf nodes must also be regarded as terms of the internal element nodes, even though internal nodes carry no terms of their own. The authors focused on the ANOR (Inverted Index for All Nodes without Replication) scheme for its lowest storage overhead, in which the terms common to the child nodes of a parent node are moved into the parent node. ANOR is adequate for exact-match queries but cannot support ranking queries based on term-frequency weights, since the original terms of the child nodes are absorbed into the parent node. Although not mentioned in their paper, the LNON (Inverted Index for Leaf Nodes Only) and ANWR (Inverted Index for All Nodes with Replication) schemes are candidates for supporting ranking queries; the former scheme combined with formula (1) is the more efficient choice. In BUS (Bottom Up Scheme) [2][3], an index structure based on LNON was proposed to support ranking queries. The basic idea is that indexing is performed at the leaf level of the given structure, and query evaluation computes the similarity at higher levels by accumulating the term frequencies of the leaf level in a bottom-up manner using formula (1). In the BUS scheme, a GID (General Identifier), an extension of the UID, is assigned to each element. In addition to the UID, a GID includes a DID (document identifier), LEV (element level), and EID (element type). The DID identifies a document; LEV and EID are used to re-compute the term frequencies for the particular element the user wants. LEV determines the number of iterations of bottom-up processing from the difference between the target level of the user query and the indexed leaf level. EID is used to filter out elements whose type differs from the one the user wants. EIDs are determined from the element names in the document structure, i.e., from paths. Postings are composed as triples. Since the structural information, the EID, is included in each posting, it is possible to process structure-based queries.
2.3 Updating for Structured Documents

On the other hand, most of the proposed index schemes for structured documents do not consider dynamic updates, or consider them only at the granularity of whole documents. For content updates, even when only a part of a document changes, the whole document is deleted and the newly updated document is inserted. In BUS, after a document tree is represented as a complete tree, UIDs are assigned to all element nodes. When a new node is inserted or 'k' changes, the UIDs in the postings must be updated, which leads to re-indexing of the entire document. Moreover, since the LEV value is included in each posting, level changes also cause serious overhead. A dual-structure index was proposed to support incremental updates of inverted lists [7]; however, this scheme supports insertion of whole documents only, with no fundamental change to the index structure. In [8], an incremental updating scheme based on a persistent object store was proposed; it also considers whole-document insertion only, again without a fundamental change to the index structure. In [9], a traditional inverted index was combined with a path index to support more sophisticated retrieval of XML-encoded structured documents, such as queries with simple path expressions. Although dynamic updates were considered, they are supported at the whole-document granularity only.
144
Sung Wan Kim, Jaeho Lee, and Hae Chull Lim
Since the previously proposed index structures support dynamic updates only for whole documents, not for individual parts of a document, these index schemes incur heavy overhead and long re-indexing times when insertions, deletions, and updates are frequent.
3 Index Organization

In this section, we describe our new fast and efficient indexing structure. Basically, the proposed index structure is based on an inverted index file, as in the LNON and BUS schemes. However, when a certain part of a document is updated, it updates only the related parts, instead of the whole document or many parts of it. The basic idea is that, in addition to the term-based horizontal index (using a B tree), we add a vertical index based on element identifiers (also using a B tree) when constructing the inverted list index. Using the element-identifier-based vertical index structure, updating time is reduced dramatically. Figure 1 shows the index organization steps.
Fig. 1. Index Organization Steps
The first step in organizing the index structure is to recognize and extract each element and the terms within it after scanning the whole document. The extracted terms then pass through stop-word removal and stemming, and we compute some useful statistics, such as term frequencies within each element, the element level, and the element type. In contrast to the LNON and BUS schemes, we do not assume the document tree is complete; this is to avoid re-indexing the whole document. We assign any available UID to an element node, without any special ordering. Since we therefore cannot use formula (1), we maintain the parent UID information for all elements in a separate table. We also remove the level (LEV) of an element from the GID used in the BUS scheme and maintain it in the same table. As a result, we assign a GID triple <DID, UID, EID> to each element: the first constituent is the document identifier, the second is the unique identifier within the document, and the third is the element type. Figure 3 shows the document tree for the sample XML-encoded structured document in Figure 2. It is annotated with the GID, terms, and term frequencies for the text nodes at the leaf level. For example, the GID of the 'PAR' element that is the leftmost child of the element 'ABS' is a triple we call G5. The extracted terms for that element are 'anchor' and 'browse', and both of their frequencies are '2'.
Fig. 2. A Sample Structured XML Document
Fig. 3. Document Tree with GID, terms, and term frequency of Fig. 2
Fig. 4. Element Information Table
Fig. 5. Structure Information Table
Figure 4 shows the element information table that maintains the parent UID and level for each element node. Figure 5 shows the structure information table that maintains the element types (EIDs) in the document structure according to their paths. Tables similar to these two must also be maintained for query processing in other index schemes such as BUS. We show only a simple table using simple absolute paths [12], as in Figure 5. For efficiency, instead of simple absolute paths, other schemes can be
utilized, such as DataGuide [5], which summarizes the structural information. We omit these descriptions due to page limitations. The second step is to generate postings for each element node that contains textual content, using the GIDs, extracted terms, and term frequencies from the document tree, as shown in Figure 3. For example, for the 'PAR' element with UID '9', two posting triples are generated. The first constituent of each triple is the extracted term, the second is the GID of the element containing the term, and the last is the term frequency. In the proposed index structure, when the postings of an element are generated, all of them are connected to each other; that is, a linked posting list over all postings generated from the element is built. To do so, we add a new field to the posting structure that links all postings generated from the element with the same UID. Hence, a stored posting consists of a triple <GID, term frequency, link>; the term itself is removed from the posting, since it is inserted into the term-based horizontal index as a key. Once the posting list of an element has been generated, each posting in the list is first inserted into the term-based index, using the term of the posting as the key, in the same manner as when constructing an ordinary inverted list index; we call this index the term-based horizontal index.
Fig. 6. Proposed Index Structure
Then an additional step is executed. Since a posting list is generated from a single element with a given UID, that UID is inserted into a new index that uses the UID as the key; we call this index the UID-based vertical index. The newly inserted UID is connected to the first posting node of the posting list. Only the UIDs of elements having textual content are inserted into the vertical index. Applying these steps repeatedly, we can construct the whole index. Figure 6 shows the resulting dual index structure for the document tree of Figure 3. In this figure, the dotted directed links between postings indicate that the postings are linked to each other.
Hence, in the proposed index structure, there are two access methods for the posting files. The first, as in a traditional inverted index, accesses all postings with the same term sequentially, using the term as a key in the horizontal index. The other accesses all postings with the same UID sequentially, using the UID as a key in the vertical index. For example, the three postings generated from the element with UID '8' can be accessed sequentially using that UID as the key in the vertical index. The latter approach provides fast access to exactly the related postings, without touching unrelated ones, when we delete an element. This is an advantage for frequently updated structured documents.
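The dual index and its two access paths can be sketched with in-memory maps, a minimal illustration of the idea only: Python dictionaries stand in for the two B trees, each posting carries a link to the next posting of the same element, and element 8's terms and frequencies are assumptions of ours (the paper only says it has three postings).

```python
from collections import defaultdict

class Posting:
    def __init__(self, uid, freq):
        self.uid = uid               # UID constituent of the GID
        self.freq = freq             # term frequency in the element
        self.next_same_uid = None    # link field chaining postings of one element

horizontal = defaultdict(list)   # term -> postings (term-based horizontal index)
vertical = {}                    # uid  -> first posting (UID-based vertical index)

def index_element(uid, term_freqs):
    """Insert one element's postings into both indexes."""
    prev = None
    for term, freq in term_freqs.items():
        p = Posting(uid, freq)
        horizontal[term].append(p)   # reachable by term
        if prev is None:
            vertical[uid] = p        # vertical entry points at the chain head
        else:
            prev.next_same_uid = p   # chain postings of this element together
        prev = p

# Element G5 ('PAR') from Fig. 3 contains 'anchor' and 'browse', each twice:
index_element(5, {"anchor": 2, "browse": 2})
index_element(8, {"anchor": 1, "browse": 1, "link": 3})   # assumed contents

# Horizontal access: all postings for one term.
print([p.uid for p in horizontal["browse"]])   # [5, 8]

# Vertical access: walk element 8's postings without touching any others.
p, chain = vertical[8], []
while p:
    chain.append((p.uid, p.freq))
    p = p.next_same_uid
print(chain)   # [(8, 1), (8, 1), (8, 3)]
```

The vertical walk is what makes element deletion cheap: every posting of the element is reachable in one chain, so no per-term scan of the horizontal index is needed.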
4 Updating

Updates to a document can be classified into two cases: updates of a whole document and updates of a part of it. Since updating a whole document can be treated as the sum of updates of individual elements, an element being the minimal unit of the logical structure, in this paper we consider updates of single elements only. Updates of an element are classified into structure updates and content updates. Structure updates are further classified into element insertion and deletion, and each case may involve textual content or not. A content update means changing the textual content of an element. A simple and general approach is to delete all contents of the element and then insert the newly updated contents; hence, a content update is resolved into a sequence of deleting all contents of the element followed by inserting the new contents. In previously proposed index structures, the whole index, or large parts of it, must be re-indexed when an element is inserted into or deleted from the original structured document. In our proposed index structure, however, such updates are processed quickly and efficiently without re-indexing all or large parts of the index.

4.1 Changing Structure – Inserting an Element

Inserting an element falls into two cases: inserting an element without textual content, such as an internal element node, and inserting an element with textual content, such as a leaf element node. (In XML-encoded documents, internal nodes may also contain textual content.) Assume a document tree is represented as in Figure 3, the two additional tables of Figures 4 and 5 have been created, and the inverted list index of Figure 6 has been generated. Consider inserting a new leaf element 'SEC' with some textual content as the leftmost child of the element 'CHAP' with UID 4.
In the BUS scheme, all UIDs in the subtrees rooted at the right siblings of the inserted node must be changed, which forces large parts of the existing index to be re-indexed. In particular, if the value of 'k' changes, the entire index structure must be rebuilt. In our proposed index structure, however, we only have to assign any available UID, say '11', to the newly inserted node and append an entry to the element information table of Figure 4. Hence, since there is no UID updating
for other nodes, we only have to generate a posting list for the inserted element and insert it into the index structure as described in Section 3, without re-indexing the existing index. Even though it would be possible to maintain the parent UID within each posting, it is preferable to keep it in the element information table, since in general the number of postings is larger than the number of elements.

// input : an element node E
insert_element_node(element node E) {
    if (node E without textual content) then {
        assign GID to E
        update the element and structure information tables if needed
    }
    else {
        assign GID to E
        create postings with terms, GID, and frequencies
        link each posting (resulting in a posting list PL for E with a UID)
        for each posting in the posting list PL and its term do {
            search for an entry having the same term as key within horizontal index
            if (entry found) then {
                insert the posting at the end of the list linked from the entry
                sort the entries of the list if needed
            } else {
                insert the term into horizontal index as a new entry
                insert the posting at the front of the list from the new entry
            }
        }
        update the element and structure information tables if needed
        search for an entry having the same UID as E within vertical index
        if (entry found) then exit // impossible case, error
        else {
            insert the UID into vertical index as a new entry
            link from the new entry to the first posting of the posting list PL
        }
    }
}
Fig. 7. Algorithm for Inserting an Element
Consider now the other case: inserting a new element node without textual content, such as an internal node, at an arbitrary level. With the BUS scheme, the levels and UIDs of all nodes in the subtrees below the inserted node must be changed, which means changing all LEVs and UIDs in the corresponding postings, and the structure information table must be modified accordingly. In our proposed index structure, we removed the LEV field from the postings and maintain LEV in the element
information table for all elements. Hence, we only have to assign any available UID to the newly inserted node, compute its parent UID and level, and insert the triple <UID, parent UID, level> as a new entry of the element information table. In this table, the levels of the entries affected by the newly inserted node must be updated, and the structure information table must be updated as well. Figure 7 shows a brief algorithm for inserting an element. Inserting a whole document can be handled by calling this algorithm repeatedly, as shown in Figure 8.

// input : a document tree D with additional information as in Fig. 3
insert_document(document D) {
    for each element node E in the document tree D do
        call insert_element_node(E)
}
Fig. 8. Algorithm for Inserting a Whole Document
// input : an element node E with a UID
delete_element_node(element node E) {
    delete the entry having the UID of E from the element information table
    update the element and structure information tables if needed
    search for an entry having the same UID as E within vertical index
    if (entry not found) then exit // element E without textual content
    else {
        for each posting in the vertical posting list linked from the same UID do {
            delete the posting
            link the left sibling posting of the deleted posting to the
            right sibling posting in the horizontal posting list
        }
    }
}
Fig. 9. Algorithm for Deleting an Element
4.2 Changing Structure – Deleting an Element

Deleting an element is likewise classified into two cases: deleting an element without textual content and deleting an element with textual content. First, consider deleting the 'PAR' element with UID '9' in Figure 3. In the BUS scheme,
all postings of the subtrees rooted at its right siblings must be updated, and those of the deleted node itself must be removed. In our proposed index structure, however, we only have to remove the entry with the UID of the deleted node from the element information table, access all postings of the posting list generated from the deleted element sequentially via the vertical index, and delete them, without accessing or updating any other postings. Hence, deletion is fast. Second, consider deleting the 'CHAP' element with UID '4' in Figure 4, which implies a structural update as well. In the BUS scheme, the UIDs and levels of all nodes in the subtrees below the deleted node must be updated, which means updating many related postings. In our proposed index structure, however, we only have to remove the entry with the UID of the deleted node from the element information table, with no updating of postings at all. Of course, in the element information table, the parent UIDs of the entries that are children of the deleted node must be modified, and the structure information table must be adjusted accordingly. Figure 9 shows a brief algorithm for deleting an element node.

4.3 Changing Content

For a content update of an element node with textual content, we only have to delete all postings generated from the element and insert the newly generated postings into the index structure. For example, consider updating the 'SEC' element with UID '8' in Figure 3. First, using the UID value 8, we search for the entry with that UID in the UID-based vertical index and follow the link to the first posting connected to the entry. Since all postings generated from the element are linked to each other, we only have to delete them while following the list sequentially, using the algorithm in Figure 9. Then, after generating a posting list with the newly updated content for the element, we only have to insert this list using the algorithm in Figure 7.
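The delete-then-reinsert pattern for a content update can be sketched as follows. This is our own toy model, not the paper's implementation: plain dictionaries stand in for the two B trees, and the point is only that the vertical index gives direct access to exactly the postings that must go, so no other posting is touched.

```python
horizontal = {}   # term -> list of [uid, freq] postings (horizontal index)
vertical = {}     # uid  -> list of (term, posting) pairs (vertical index)

def insert_content(uid, term_freqs):
    """Create the element's postings and register them in both indexes."""
    chain = []
    for term, freq in term_freqs.items():
        posting = [uid, freq]
        horizontal.setdefault(term, []).append(posting)
        chain.append((term, posting))
    vertical[uid] = chain

def update_content(uid, new_term_freqs):
    """Content update = delete all old postings, then insert the new ones."""
    for term, posting in vertical.pop(uid, []):
        horizontal[term].remove(posting)   # found via vertical index, no scan
        if not horizontal[term]:
            del horizontal[term]
    insert_content(uid, new_term_freqs)

insert_content(8, {"anchor": 1, "browse": 1})   # assumed contents of element 8
insert_content(5, {"anchor": 2, "browse": 2})   # element G5 from Fig. 3
update_content(8, {"link": 4})                  # hypothetical new content

print(sorted(horizontal))        # ['anchor', 'browse', 'link']
print(horizontal["anchor"])      # [[5, 2]]  -- element 8's old postings are gone
```

Element 5's postings are never read or written during the update of element 8, which is the property the vertical index exists to provide.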
5 Query Evaluation and Retrieval

Query evaluation is performed in a bottom-up manner, as in the LNON and BUS schemes. However, since we do not assume the document tree is complete and removed the level from the posting structure, query evaluation uses the element information table of Figure 4 instead of formula (1). If the table is implemented as a simple two-dimensional array, obtaining the parent UID and level of an element with a given UID takes constant time. User queries can be classified into content-based and structure-based queries. Content-based queries use keywords, and both boolean and ranking queries are supported. Structure-based queries use the logical structure, namely a path. Mixtures of the two can also be posed. Figure 10 shows a brief algorithm for query evaluation.
// input : user query UQ
// output : result set RS of element uids ordered by their frequencies
// acc[] is a simple array to accumulate the frequencies for target elements
query_evaluation(string UQ) {
    decompose the user query UQ into a path part and a literal part
    analyze the path part
    compute a target element type set A and a target level using Figure 5
    compute the candidate posting set B from horizontal index using a term in
        the literal part
    for each posting in candidate set B do {
        if (EID of the posting within set A) then {
            match ← true
            compute the difference between the level of the posting and the target level
        }
        if (match = true) AND (difference >= 0) then {
            match ← false
            uid ← UID value of the current posting
            while (difference >= 0) do {
                acc[uid] = acc[uid] + frequency of the current posting
                if (difference > 0) uid ← parent UID of the posting
                difference = difference - 1
            }
            target_uid_set TS ← uid
        }
    }
    result set RS ← pair <uid, acc[uid]> for each uid in target_uid_set TS
    sort the result set RS by descending order of the frequency value in acc[uid]
}
Fig. 10. Algorithm for Query Evaluation

A user query can be decomposed into two parts, that is, a path part and a literal part [12]. For example, a query to retrieve all sections, namely DOC/CHAP/SEC, which contain the keyword 'browse' may be represented as 'DOC/CHAP/SEC[contains = 'browse']'. In this query, DOC/CHAP/SEC and '[contains = 'browse']' are the path part and the literal part, respectively. Of course, descendants of the element SEC which contain 'browse' should be included in the result set. The evaluation steps for the example query are as follows. After analyzing the query, extract a target level (3 in this example) and a target element type set using the structure information table shown in Figure 5. Then extract all postings whose terms are 'browse' using the horizontal
index in Figure 6. If the EID value of a posting is not in the target element type set, the posting is filtered out (the postings from non-matching element types are removed in this example). For each surviving posting, the level difference between the target level and the level of the posting is computed using the table in Figure 4. If the level difference is zero (0), the UID of the posting is added to the result set and its term frequency is added to the accumulator for that UID. If the level difference is one (1) or more, the parent UID of the posting is obtained from Figure 4 and the frequency is added to the accumulator for the parent UID. In this example, UIDs '7' and '8' are included in the result set, with frequencies 6 and 1, respectively. In this manner, query evaluation computes the frequency for a target element by accumulating the individual frequencies from the leaf level upward.
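The bottom-up accumulation described above can be sketched in a few lines. This is an illustration under assumptions of ours: the `elem_info` entries, the EID code 5 for the leaf type, and the per-leaf frequencies are invented so that the example reproduces the UIDs 7 and 8 with frequencies 6 and 1 mentioned in the text; for brevity, each posting here carries its level directly, whereas in the paper the level is looked up in the element information table.

```python
# Illustrative element information table (cf. Fig. 4): uid -> (parent_uid, level).
elem_info = {7: (4, 3), 8: (4, 3), 9: (7, 4), 10: (8, 4)}

def evaluate(postings, target_level, target_eids):
    """Bottom-up accumulation in the spirit of Fig. 10.

    postings: (uid, eid, level, freq) tuples for the query term.
    Returns (uid, accumulated frequency) pairs at target_level, best first.
    """
    acc = {}
    for uid, eid, level, freq in postings:
        if eid not in target_eids:
            continue                      # structure (EID) filter
        diff = level - target_level
        if diff < 0:
            continue                      # posting lies above the target level
        while diff > 0:                   # climb to the ancestor at target level
            uid = elem_info[uid][0]
            diff -= 1
        acc[uid] = acc.get(uid, 0) + freq
    return sorted(acc.items(), key=lambda kv: -kv[1])

# 'browse' occurs 6 times under UID 9 and once under UID 10 (leaf level 4);
# the targets are their level-3 ancestors, UIDs 7 and 8:
print(evaluate([(9, 5, 4, 6), (10, 5, 4, 1)], target_level=3, target_eids={5}))
# [(7, 6), (8, 1)]
```

Because each climb touches only the `elem_info` table, the cost per posting is proportional to the level difference, independent of the tree being complete.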
6 Implementation and Experiments

We partially implemented a prototype system to store and query XML-encoded structured document sets. The prototype runs on a Pentium III 866 MHz machine with 256 MB of memory under Windows 2000, and the index structure is implemented in C, in roughly 2,500 lines of code. A modified stemmer based on the open-source implementation [10] of the popular Porter stemming algorithm was used. As a test XML collection, we used parts of the XML-encoded New Testament [11] from the Religious Works prepared by Jon Bosak. The size of the test collection is 200 KB as a single file. Figure 11 shows the logical structure of the test collection; an element labeled '+' can be repeated one or more times. We implemented both the term-based horizontal index and the UID-based vertical index using B tree structures.
Fig. 11. Logical Structure for the New Testament
First, we measured the initial index organization overhead in terms of time and space. The number of postings generated is 8,367, the number of terms extracted is 1,647, and the number of elements recognized is 1,186. The initial index creation time is less
than 30 seconds. Each node holds 15 keys in both B tree indexes. The node sizes of the term-based horizontal B tree and the UID-based vertical B tree are 154 bytes and 94 bytes, respectively, and the size of a posting structure is 15 bytes. The numbers of nodes used in the two B trees are 161 and 137, respectively. Hence, the total sizes of the term-based horizontal B tree, the UID-based vertical B tree, and the postings are 25 KB, 13 KB, and 126 KB, respectively. In summary, the whole index occupies 164 KB, which amounts to 82% of the source. In particular, the UID-based vertical B tree amounts to only 7% of the source, a low overhead. Second, we measured the retrieval times for some simple mixed queries (combining content and structure parts), varying the target level from the leaf to the root. The experiment is summarized in Figure 12. The number of elements retrieved is the same for query types 3 and 4 because we include only one chapter per book in the test collection. The keyword 'christ' amounts to 1.6% of the total number of terms extracted and occurs more frequently than the other terms.

query type                                          target level   # of elem   time (sec)
tstmt[contains ='christ']                                1             1        0.494505
tstmt/bookcol[contains ='christ']                        2             1        0.439560
tstmt/bookcol/book[contains ='christ']                   3            23        0.384615
tstmt/bookcol/book/chapter[contains ='christ']           4            23        0.384615
tstmt/bookcol/book/chapter/v[contains ='christ']         5           128        0.329670

Fig. 12. Retrieval Results
Third, we measured the update times for deleting a leaf element and its postings. Deleting an element having 12 postings takes 0.054945 seconds; deleting an element having 19 postings takes 0.109890 seconds; and deleting an element having 11 or fewer postings is measured as 0.000000 seconds, i.e., below the resolution of the clock() function in the C library. As these results show, updating in our proposed index structure is very fast.
7 Conclusion and Future Work

In this paper, we proposed a fast and efficient indexing structure for XML-encoded structured documents that supports dynamic updates. To support structural updates efficiently, we do not represent the document tree as a complete tree and assign any available UID to each element. To support content updates efficiently, we add a UID-based vertical index to the traditional inverted list index and change the posting structure. In this paper, we did not consider attributes in XML-encoded documents; we are currently extending our index structure to support attribute components, including ID/IDREF attributes.
In addition, we plan to experiment with our index structure on various large text collections and to verify the proposed index structure from multiple aspects.
Acknowledgement

This research is supported by IITA (Institute of Information Technology Assessment), Korea, under contract IITA 2001-122-3. The first author (Sung Wan Kim) also works at the Computer Information Department of Sahm Yook College, Korea.
References
1. Y.K. Lee et al.: Index Structures for Structured Documents. Proc. of the 1st ACM Int'l Conf. on Digital Libraries (1996)
2. Dongwook Shin et al.: BUS: An Effective Indexing and Retrieval Scheme in Structured Documents. Proc. of the 3rd ACM Int'l Conf. on Digital Libraries (1998)
3. Dongwook Shin: XML Indexing and Retrieval with a Hybrid Storage Model. Knowledge and Information Systems (2001)
4. J. McHugh, J. Widom, S. Abiteboul, Q. Luo, A. Rajaraman: Indexing Semistructured Data. Technical Report (1998)
5. R. Goldman, J. Widom: DataGuides: Enabling Query Formulation and Optimization in Semistructured Databases. VLDB (1997)
6. Ron Sacks-Davis, T. Arnold-Moore, Justin Zobel: Database Systems for Structured Documents. ADTI (1994)
7. A. Tomasic, H. Garcia-Molina, K. Shoens: Incremental Updates of Inverted Lists for Text Document Retrieval. Stanford Univ. Technical Report STAN-CS-TN-93-1 (1993)
8. E.W. Brown, J.P. Callan, W.B. Croft: Fast Incremental Indexing for Full-Text Information Retrieval. VLDB (1994)
9. E. Kotsakis: Structured Information Retrieval in XML Documents. ACM SAC (2002)
10. Martin Porter: Porter Stemming Algorithm. Available at http://www.tartarus.org/~martin
11. J. Bosak: XML Examples. Available at http://www.ibiblio.org/bosak
12. T. Shimura et al.: Storage and Retrieval of XML Documents Using Object-Relational Databases. DEXA (1999)
13. XPath, http://www.w3c.org
Knowledge Management: System Architectures, Main Functions, and Implementing Techniques

Jun Ma and Matthias Hemmje

Fraunhofer-IPSI, Dolivostrasse 15, Darmstadt D-64293, Germany
{jun.ma,matthias.hemmje}@ipsi.fhg.de

Abstract. Based on the known theoretical models of knowledge management (KM), our investigation of existing KM systems, and a prototype system designed to support cooperative research in research institutes, we address three important issues in the design and implementation of KM systems: the software system architecture of a general KM system, the main functions of KM systems, and the information processing techniques and algorithms that can be used to implement KM systems.
1. Introduction

Knowledge management is considered very important in enhancing the competence of companies and organizations. Larry Prusak (IBM) noted that 80 percent of the largest global corporations now have KM projects [14]. It is therefore an open question whether, after the era of management information systems (MIS), KM will become a mainstream technology. The progress in semantic web research has made documents wrapped in XML or RDF [7,9,27] and ontology languages [22] machine-readable on the Internet. Clearly, these techniques provide KM system developers with a standard interface for knowledge representation and access. Furthermore, the progress in computer science and network technology has made it possible to develop distributed KM systems on today's Internet/Intranet platforms. However, some industry observers have said that KM is a vague concept that would neither deliver what it promised nor add to the bottom line [14]. For example, nowadays document management systems can be declared KM systems; computer-supported cooperative work (CSCW) systems are also considered KM systems. Computer systems consisting of document management systems, knowledge bases, CSCW environments, and tools for knowledge discovery are, of course, KM systems. Such descriptions of KM systems often make it difficult for people to form a clear concept of what a KM system is. In this paper, based on the known theoretical models of KM, our investigation of existing KM systems, and a prototype system designed to support cooperative research and software development in research institutes, we address three important issues in the design and implementation of KM systems: the software system architecture of a general KM system, the main functions of the subsystems of a KM system, and the information processing techniques and algorithms that can be used to implement

Y. Han, S. Tai, and D. Wikarski (Eds.): EDCIS 2002, LNCS 2480, pp. 155-167, 2002. © Springer-Verlag Berlin Heidelberg 2002
156
Jun Ma and Matthias Hemmje
the KM system. We propose a service-oriented software system architecture and demonstrate the methods and algorithms that can be used to implement general KM systems.
2. KM Theory

In KM theory, data are observations or facts; information is data organized and categorized into patterns to create meaning; and knowledge is information put to productive use, enabling correct action and helping people to produce innovations. Knowledge also refers to what we have become acquainted with, e.g., working experience, problem-solving methods, customers' profiles, etc. Usually, knowledge is classified into two categories, explicit and tacit knowledge, where explicit knowledge is knowledge that has been articulated in the form of text, tables, diagrams and so on, while tacit knowledge is knowledge that cannot be articulated and mainly resides in people's brains. However, there is no clear boundary between the two sets for any application domain; rather, the two sets have a nonempty intersection. The knowledge in this intersection is called implicit knowledge, i.e., knowledge that could be articulated but has not yet been. A KM project aims to establish an open culture and improve ways of cooperative work in an organization. A concrete goal is to establish a computer system, called a KM system, which is used to set up a framework and tool set for improving the organization's knowledge infrastructure. The most popular theoretical models of KM are summarized in Fig. 1 [26]. These models provide clear descriptions of knowledge life cycles.

[Figure 1 tabulates the knowledge-life-cycle stages of the KM value chain, from creation through storage and distribution to application, as described in the models of Holzner et al. (1979), Pentland (1995), Nonaka et al. (1995), Demarest (1997), Daal et al. (1998), Davenport et al. (1998), and Liebowitz (1999).]

Fig. 1. KM models and KM-Value-Chain.
KM: System Architectures, Main Functions, and Implementing Techniques
157
However, these models cannot be used to develop a KM system directly, because system developers need to know the software system architecture of a KM system, the basic functions of the KM system, and the information processing technologies that can be used to implement it.
3. Related Work

The software system architectures of KM systems and the techniques used in their design and implementation have been studied before. For example, KM systems have been classified as work-flow (processing) oriented [11,13], where knowledge creation, storage, distribution and reuse are implemented on top of the basic work-flows of enterprises; problem-solving oriented [7], where knowledge creation, distribution and reuse are carried out along with the process of decomposing the goal of problem-solving into sub-goals; and knowledge-creation oriented [3,6,21,24], where the main task of the KM system is to enhance organizational learning and to discover new knowledge. We think these three types of system architectures often depend on the working flows and the administration of concrete enterprises or organizations. Therefore we consider a service-oriented system architecture more suitable as a general one in the design of KM systems. We motivate this below.
Having investigated many successful and failed KM systems, we found that KM applications are not limited to innovative organizations, such as software companies and research institutes, where knowledge/information is considered the primary basis for business development and success. KM techniques have also been applied to the knowledge management of many public sectors and service organizations [1,3,4,7,9,11,12,13,14,20,21,23,25]. In most KM systems, knowledge usually does not refer to scientific or technical innovation directly, but to the management of staff members, cooperation, business transactions, and learning and training. Concretely speaking, the knowledge in these systems is often about customer profiles, synergy relationships, administration and business experience, and everything else that can help staff members act more successfully.
This kind of knowledge is used to enhance working efficiency, decrease expense, avoid faults and reinvention, reuse knowledge on new problems, and provide various services in the process of people's innovation activities. An investigation of KM benefits among the experts involved in the Social Security Administration's benefit rate increase process [16] is shown in Tab. 1. We think it is representative of the known KM systems.

Table 1. Major benefits of KM (percentage of responses)
  Increased innovation                                                  20%
  Practice and process improvement                                      60%
  Increased customer satisfaction                                       30%
  Enhanced employee capability and organizational learning              50%
  Improved efficiencies in writing reports and responding to inquiries  10%
  Lower learning gap                                                    10%
Jun Ma and Matthias Hemmje
4. A General System Architecture for the Design of KM Systems

Based on the discussion above and the classification of knowledge mentioned in Section 2, we consider that a complete KM system should consist of at least three subsystems: a subsystem named E-KM, which manages the explicit knowledge and provides basic knowledge management functions; a subsystem named T-KM, which manages the tacit knowledge by providing the cooperative working environment and intelligent services for staff members and partners to work together; and finally an alternative subsystem named I-KM, which provides the tools and software applications for knowledge discovery. The E-KM forms the knowledge infrastructure of an organization, while a T-KM provides intelligent services based on the information and knowledge stored in the corresponding E-KM. I-KM is used to find new knowledge by mining the web pages on the Internet/Intranet as well as the data in the databases and knowledge bases of the E-KM and T-KM. However, since the applications of today's knowledge discovery techniques are still limited, we believe it is better to consider I-KM an alternative subsystem consisting of separate techniques in the design of KM systems.
Clearly, for any KM system S, F(S) ⊆ F(E-KM) ∪ F(T-KM) ∪ F(I-KM), where F(A) represents the functions provided by a computer application system A.
Let us discuss the basic modules and the system architectures of E-KM and T-KM in detail. We think it is difficult to hold a similar discussion on I-KM, because the techniques used in knowledge discovery lack a tight connection for the time being.

4.1 A Design of an E-KM Subsystem

An E-KM subsystem mainly consists of document management systems, case bases, knowledge bases and intelligent services. The main functions are listed below:
• Information/knowledge sharing and the intelligent services for information/knowledge reuse.
• Knowledge audit, which looks at a targeted area and identifies which knowledge is needed and available for that area, which knowledge is missing, and who has the knowledge.
• Knowledge learning support, which helps users to find learning documents and
arrange study corresponding to their learning purposes and education backgrounds.
• Searching engines: the searching engine helps users to find former working experience matching their search requirements.
• Knowledge maps: graphic display of knowledge flows and knowledge synergy among partners. The nodes in the graph denote persons, research groups, divisions and partners worldwide; the arcs represent the knowledge flows or synergy relationships. Different colors describe different kinds of synergies or knowledge flows.
• […], i.e., represented in taxonomic graphs without directed cycles. The documents in the system are classified into several categories based on application domains, which are denoted by D1, D2, …, Dn, e.g., e-learning, multimedia applications, user interface design, e-business and KM systems. Projects, which mean the names of projects here, are linked to the domains they belong to, and the documents belonging to a project are classified further based on document types, e.g., reports, working plans, white papers, formal publications, group profiles, meeting minutes, system demonstrations, handbooks, as well as the frequently asked questions (FAQ). Subclass-of relations are used to set the semantic relationships of the concepts of categories and real documents in the system.
The documents about best practices, lessons and the projects carried out before are represented by metadata in RDF or XML, where the metadata annotate the semantic relationships among the entities in a document; an entity may be a demo, a literal report, or a piece of graphics and images. The real documents corresponding to the entity descriptions are dispersed across a number of servers.

4.2 A Design of a T-KM

The management of tacit knowledge refers to many disciplines, e.g., management and computer science as well as psychology. We only discuss this issue from the viewpoint of software system developers. In principle, a CSCW environment helps staff members and the partners in the community to exchange their know-how and
[Figure (labels): MIS, domains D1 … Dn, projects, publications, minutes, codes, service, learning support, knowledge map, searching pilot.]
[…]. Now we use them with a little change in the design of T-KM systems.
If the profiles of individuals, groups, documents, domains and cases are represented by keyword or phrase sets that describe research interests in special domains, the similarity of two profiles is calculated by formula (1):

  Similarity(A, B) = |A ∩ B| / |A ∪ B|  (1)

where A and B represent two keyword sets.
If the profiles are represented by frames f1 and f2, where a frame describes the basic personal information, research interests, working experience and education background, then the similarity of two profiles is calculated by formula (2):

  Similarity(f1, f2) = Σ(the weights of the matched slots) / Σ(the weights of all slots)  (2)

If a profile is represented by an m×n matrix R, where R is a relation matrix of members × interested domains, projects × domains, documents × domains, document types × domains, etc., such that r(i, j) = 1 if the i-th member has interests in the j-th domain, then the similarity of two members in research interests is calculated by formula (3):

  Similarity(ri, rj) = cos(ri, rj) = (ri · rj) / (|ri| × |rj|)  (3)

where "·" denotes the dot product of two row vectors of R, or
  Similarity(ri, rj) = corr(ri, rj) = Σk (rik − r̄i)(rjk − r̄j) / ( √(Σk (rik − r̄i)²) × √(Σk (rjk − r̄j)²) )  (4)

Formula (3) is called Cosine and formula (4) is called Correlation. Since the computation of Cosine is much simpler than that of Correlation, Cosine is adopted more often in practice. However, the efficiency of the above formulae depends on concrete problems, applications and data.
The main functions of a T-KM subsystem are listed below:
• Recommendation: (1) recommends experts in a special domain; (2) recommends team members for a new project; (3) recommends documents that represent the best practices and lessons to system developers who are undertaking a similar new project.
• Awareness: (1) Sharing a common calendar. In order to arrange meetings, discussions and all other common activities, a common calendar recording the activities of staff members in a division is set up. The calendar also records the common critical dates, e.g., the deadlines of conferences and the dates of outside visitors. (2) Document sharing. The members of a virtual team share a file space, where the members can upload documents, drawings, images, etc., and the system can inform others to download the documents in time.
• Retrieval: a searching engine for scientific publications through the Internet is available, which can help staff members to decrease the irrelevant URLs and increase the search speed by utilizing several famous search engines, e.g., Google […].
• Mapping: displaying various relationships graphically as well as depicting the roles of different people in the graph. These relationships include the synergy relationships among individuals and groups.
Although the figure above provides only a high-level description of T-KM, it shows the relationship between E-KM and T-KM. Clearly, E-KM is the fundamental subsystem of a KM system, because the design and implementation of T-KM is based on the document management of E-KM.
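As a concrete illustration, the Jaccard, cosine and correlation similarity measures discussed above can be sketched as follows; the interest rows and keyword sets are hypothetical data, not from the paper:

```python
import math

def jaccard(a: set, b: set) -> float:
    """Formula (1): Similarity(A, B) = |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(r_i, r_j) -> float:
    """Formula (3): dot product over the product of vector norms."""
    dot = sum(x * y for x, y in zip(r_i, r_j))
    norm = math.sqrt(sum(x * x for x in r_i)) * math.sqrt(sum(y * y for y in r_j))
    return dot / norm if norm else 0.0

def correlation(r_i, r_j) -> float:
    """Formula (4): Pearson correlation of two row vectors of R."""
    mi = sum(r_i) / len(r_i)
    mj = sum(r_j) / len(r_j)
    num = sum((x - mi) * (y - mj) for x, y in zip(r_i, r_j))
    den = math.sqrt(sum((x - mi) ** 2 for x in r_i)) * \
          math.sqrt(sum((y - mj) ** 2 for y in r_j))
    return num / den if den else 0.0

# Two members' binary interest rows over four domains (hypothetical):
print(jaccard({"KM", "CSCW"}, {"KM", "e-learning"}))  # 1 shared of 3 keywords
print(cosine([1, 0, 1, 1], [1, 1, 1, 0]))             # 2 shared domains
```

Cosine needs only one pass over the two rows, which is one reason the paper notes it is adopted more often than Correlation in practice.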
5. Our KM Practice

Our KM practice mainly refers to the project called Delite-Online, which is a virtual information and knowledge management system for the Delite division of Fraunhofer IPSI […]. Fig. 4 shows the portal of the system and Fig. 5 is the GUI for the calendars and internal schedules of divisions.
The software system architecture and the main functions provided by the system have been discussed in the preceding sections. For the time being, most of the functions have been implemented based on methods and technologies similar to those described in […]. Therefore we only introduce an algorithm for knowledge filtering and a mechanism supporting learning and training in enterprises, which seem essentially different from the known ones.
Fig. 4. The portal of Delite-Online.
Fig. 5. Calendars and internal schedules of divisions.
5.1 A Novel Knowledge Filter

The knowledge filter is used for (1) ranking files based on the comments of experts or peers, and (2) recommending expertise and files. Although many filters or recommender systems have been studied for e-commerce [5], it is still hard to use them directly for document management, because only one evaluation factor is considered in these
algorithms. In document management, multiple factors have to be considered in document evaluation, say originality, creativity, presentation, relativity and so on. Furthermore, some factors are often more important than others in document retrieval for a special purpose, e.g., the factor of originality is more important than presentation. We propose an algorithm that considers both multiple factors and the weights of the factors. The formal description is given below.
Let P represent a file, e.g., a proposal or business strategy. We design an m×n matrix to collect the comments for P from experts. Let U = {u1, u2, …, un}, where ui (1 ≤ i ≤ n) is an evaluation factor such as originality, creativity, presentation, relativity and so on. The comment set V = {v1, v2, …, vm}, where vi (1 ≤ i ≤ m) is a comment, e.g., "bad", "normal", "good" and "excellent". If an expert chooses a comment vi for evaluation factor uj of P, then he/she gives a mark at position (i, j) of the table. The following algorithm gives a synthetic comment by fuzzy and statistical computation to determine whether the knowledge is good enough to be added to the knowledge bases. The details are discussed below.

Algorithm 1: synthetic comment computation
Input:
(1) A file P;
(2) The evaluation factor set U = {u1, u2, …, un}, where ui (1 ≤ i ≤ n) is an evaluation factor;
(3) The comment set V = {v1, v2, …, vm}, where vi (1 ≤ i ≤ m) is a comment;
(4) A fuzzy vector X = (x1/u1, x2/u2, …, xn/un) such that Σi xi = 1, xi ≥ 0, i = 1, …, n; xi represents the importance of the factor ui in the evaluation;
(5) The experts' comments on P.
Output: A synthetic comment on P.
Step 1: After collecting the comments from all referees for a file P, calculate the following fuzzy sets:

  (μi1/v1, μi2/v2, …, μim/vm),  i = 1, …, n,

where μij = (the number of persons who give vj to ui) / (total number of referees). The n fuzzy sets are represented in a matrix E = (μij)n×m.
Step 2: Calculate the following vector
  (y1/v1, y2/v2, …, ym/vm)

by the following formula:

  yj = ∨ i=1..n ( xi ∧ μij ),  j = 1, …, m,
where ∨ and ∧ represent suitable operations, e.g., a ∨ b = max(a, b) and a ∧ b = min(a, b).
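With the max–min choice of ∨ and ∧, Steps 1 and 2 and the final selection of the comment with the largest yj can be sketched as follows; the referees, factors and marks are hypothetical data, not from the paper:

```python
# Evaluation factors U, comment set V, and importance weights X (sum to 1)
factors = ["originality", "creativity", "presentation"]
comments = ["bad", "normal", "good", "excellent"]
X = [0.5, 0.3, 0.2]

# marks[r][i] = index into `comments` chosen by referee r for factor u_i
marks = [[2, 3, 1], [2, 2, 1], [3, 2, 2]]  # three referees

n, m, R = len(factors), len(comments), len(marks)

# Step 1: mu[i][j] = fraction of referees giving comment v_j to factor u_i
mu = [[sum(1 for r in marks if r[i] == j) / R for j in range(m)]
      for i in range(n)]

# Step 2: max-min composition y_j = max_i min(x_i, mu_ij)
y = [max(min(X[i], mu[i][j]) for i in range(n)) for j in range(m)]

# Output the comment with the maximum y_j
print(comments[y.index(max(y))])  # → good
```

Here "good" wins because two of three referees rate both originality and creativity as "good", and those factors carry the largest weights.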
Step 3: If yk is the maximum in the set {y1, y2, …, ym}, then output vk as the synthetic comment. P will be added to the knowledge base if vk is good enough.
The principle of the algorithm can be stated as follows. We do the statistical computation in Step 1, where the percentage of the comment vj among all given comments on evaluation factor ui is calculated (1 ≤ i ≤ n and 1 ≤ j ≤ m). In Step 2, based on the principle of fuzzy processing, a fuzzy vector X = (x1/u1, x2/u2, …, xn/un) is used, where xi represents the importance of evaluation factor ui in the computation of a synthetic comment on a proposal P; xi also acts as a weight value in the calculation of the mathematical expectation of each vj (1 ≤ j ≤ m). Clearly, the computation of the mathematical expectation of vi (1 ≤ i ≤ m) considers both the comments from all referees and the importance of each evaluation factor. We choose the one with the maximum mathematical expectation among all comments as the final comment on P.
To our knowledge, this algorithm is the most complicated computation among the designs of the filters given in […]. For the time being, the algorithm is only used to rank files based on the comments of experts or peers and to recommend expertise. In the future, we want to recommend documents according to the ranking and measure the accuracy of the ranking in practice.

5.2 A Mechanism for Supporting Learning & Training

The system architecture of the mechanism for supporting e-learning in our system design is shown in Fig. 6, where the module Scheduling produces a learning material list called a Curriculum Schedule (CS) for a user, based on his or her learning purposes obtained by human-computer interaction through the module Interface. The education knowledge, represented by Precedence Relation Graphs (PRGs), and the descriptions of the learning materials (textbooks, white papers, etc.) are stored in Databases. A CS tells the user what materials he or she should study based on his or
[Figure (labels): user interface, PRGs, Databases, Scheduling]
Fig. 6. An architecture of the mechanism supporting e-Learning.
her education background and learning purpose, and suggests learning the courses in the order listed in the CS.
A term Ti consists of the following data, which agree with the metadata of Ti stored in Databases:
(1) Name of the learning material;
(2) Author name;
(3) Type identifier (book, paper, video, …, manual);
(4) Publisher;
(5) Published time (mm/dd/yy);
(6) Level identifier;
(7) Internal number;
(8) Application domain;
(9) URL for accessing the syllabus of the document.
The precedence relation "<" on these documents is defined based on the education and training knowledge, e.g., the relationships among the contents of the materials and the empirical recommendations from experts and lecturers. A PRG is a directed graph G(V, A) consisting of a vertex set V and an arc set A. A vertex is a triple (No, Name, Level), where No denotes a vertex of V, Name is the document name in the databases, and Level (1 ≤ Level ≤ 3) represents the depth of the content of the document, i.e., 1 = elementary level, 2 = middle level and 3 = professional level. An arc <u, v> ∈ A iff u < v.
Definition: An operation called Topology Sort on a directed graph G(V, A) without directed cycles generates a permutation of the vertices of G such that vertex x is ahead of y in the output iff arc <y, x> is not in A.
An O(|V|+|A|) topological sort algorithm was presented in [2]. Let TS(G, L) denote a topology sort on a G(V, A), where L represents the output, and further assume that the profiles of users, e.g., the courses studied before, can be accessed. Algorithm 2 describes the main steps to generate a CS. Its correctness is obvious, and it is easy to prove that its time complexity is O(|V|+|A|) as well.

Algorithm 2: Scheduling
Output: A curriculum schedule.
(1) Get an ID, an integer LEVEL and the purpose of learning/training of a user, then determine a PRG Gi ∈ PRGs meeting the learning requirements of the user.
(2) Delete the courses ∈ Gi ∩ Course(ID), as well as the connected arcs, from Gi, where Course(ID) denotes the courses that the user has studied.
(3) Further delete all textbooks whose levels ≠ LEVEL from Gi.
(4) Call TS(Gi, L) to generate a topology sort on Gi, where the table L consists of the vertices of Gi.
(5) Output a CS based on L, where each item of the CS consists of the nine metadata that we just discussed above.
The algorithm will output a solution based on the acquired searching requirement,
unless the system cannot find a corresponding PRG in the knowledge bases of our system. However, since PRGs are established based on general education and empirical knowledge, the output of the algorithm cannot be guaranteed to be the optimal learning arrangement for each system user.
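A minimal sketch of Algorithm 2, with the PRG encoded as adjacency data and the topological sort done by Kahn's O(|V|+|A|) method; the course names and levels are hypothetical:

```python
from collections import deque

def schedule(vertices, arcs, studied, level):
    """Prune studied courses and off-level materials, then topologically
    sort the remaining PRG to produce a curriculum schedule (CS)."""
    # vertices: {no: (name, level)}; arcs: list of (u, v) with u preceding v
    keep = {v for v, (name, lv) in vertices.items()
            if name not in studied and lv == level}       # steps (2) and (3)
    adj = {v: [] for v in keep}
    indeg = {v: 0 for v in keep}
    for u, v in arcs:                                     # arcs within the pruned graph
        if u in keep and v in keep:
            adj[u].append(v)
            indeg[v] += 1
    queue = deque(v for v in keep if indeg[v] == 0)       # step (4): Kahn's algorithm
    order = []
    while queue:
        u = queue.popleft()
        order.append(vertices[u][0])
        for w in adj[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return order                                          # step (5): the CS

prg = {1: ("XML basics", 1), 2: ("RDF", 1), 3: ("Ontologies", 1), 4: ("Logic", 2)}
print(schedule(prg, [(1, 2), (2, 3), (1, 3)],
               studied={"XML basics"}, level=1))  # → ['RDF', 'Ontologies']
```

Pruning before sorting keeps the overall cost at O(|V|+|A|), matching the complexity claimed for the algorithm.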
6. Conclusions and Future Work

In this paper we address three important issues in the design and implementation of KM systems: the software system architecture, the main functions of the subsystems of KM systems, and the techniques used to implement KM systems based on this software system architecture. Although the examples given in the discussion come from our prototype system designed to support KM in research institutes in the application domain of software system development, the software system architecture and implementation techniques are, in fact, independent of our research institute. The system architectures of KM systems for other applications can be derived from the given software architecture, say by changing the organization of documents to suit the situation of those organizations.
The software architecture of general KM systems is proposed based on our investigation of known successful KM systems and our own KM practice. We argued that general software system architectures for KM systems should be service-oriented and human-centered. Based on this viewpoint, a general KM system consists of three subsystems: the subsystem named E-KM, which manages the explicit knowledge and provides basic knowledge management functions; the subsystem named T-KM, which manages the tacit knowledge by providing the cooperative working environment and intelligent services for staff members and partners to work together; and the subsystem named I-KM, which provides the tools for knowledge discovery. The software system architecture shows that E-KM is the knowledge infrastructure of an organization and that T-KM is established based on E-KM. I-KM provides the basic services for supporting human innovation activities based on both E-KM and T-KM; however, it is a relatively independent subsystem. We presented our knowledge filter and a mechanism for supporting self-learning in the implementation of our prototype in this paper.
We think that the knowledge filter can also be used for e-commerce and that the mechanism for supporting learning can be utilized independently in enterprise learning and training. Our future work is to complete the implementation of the remaining modules belonging to T-KM and I-KM in our prototype system, and to add more functions belonging to I-KM to the prototype system.
References
[1] Abecker A., Bernardi A., Hinkelmann K., Kühn O. and Sintek M., Toward a Technology for Organizational Memories, IEEE Intelligent Systems, May/June 1998, 40-48.
[2] A.V. Aho, J.E. Hopcroft and J.D. Ullman, The Design and Analysis of Computer Algorithms, Addison-Wesley Publishing Company, Inc., 1974, pp 70.
[3] Borghoff U.M. and Pareschi R., Information Technology for Knowledge Management, Springer-Verlag Press, 1998, 1-8.
[4] Caldwell N.H.M., Clarkson P.J., Rodgers P.A. and Huxor A.P., Web-Based Knowledge Management for Distributed Design, IEEE Intelligent Systems, May/June 2000, 40-47.
[5] Communications of the ACM, March 1997, 40(3).
[6] Fayyad U.M. et al. (eds.), Advances in Knowledge Discovery and Data Mining, AAAI Press, California, USA, 1996.
[7] Gresse von Wangenheim C., Althoff K.D. and Barcia R.M., Goal-Oriented and Similarity-Based Retrieval of Software Engineering Experienceware, Proceedings of the 11th International Conference on Software Engineering and Knowledge Engineering, eds. G. Ruhe and F. Bomarius, Kaiserslautern, Germany, June 1999, Lecture Notes in Computer Science 1756, pp 118-141.
[8] Guarino N., Understanding, Building and Using Ontologies, Int. J. Hum. Comput. Stud., 46(2/3): 293-310, 1997.
[9] Guerrero L.A. and Fuller D.A., A Web-based OO platform for the development of multimedia collaborative applications, Decision Support Systems, 27, 1999, 255-268.
[10] L. Huang, M. Hemmje and E.J. Neuhold, ADMIRE: An Adaptive Data Model for Meta Search Engines, Computer Networks, 33(1-6), 2000, 431-448.
[11] Kappel G., Schott S.R. and Retschitzegger W., Coordination in Workflow Management Systems: A Rule-Based Approach, in: Coordination Technology for Collaborative Applications, eds. Wolfram Conen and G. Neumann, Singapore, Springer-Verlag Press, 1997, 100-119.
[12] Khalifa M. and Kwok R.C.W., Remote learning technologies: effectiveness of hypertext and GSS, Decision Support Systems, 26, 1995, 195-207.
[13] Kim K.H. and Paik S.K., Practical Experiences and Requirements on Workflow, in: Coordination Technology for Collaborative Applications, eds. Wolfram Conen and G. Neumann, Singapore, Springer-Verlag Press, 1997, pp 145-160.
[14] G. Lawton, Knowledge Management: Ready for Prime Time? IEEE Computer, 34(2), 2001, 12-14.
[15] Leary D.E.O., Using AI in knowledge management: Knowledge Bases and Ontologies, IEEE Intelligent Systems, May/June 1998, 34-39.
[16] Liebowitz J., Knowledge Management - Learning from Knowledge Engineering, CRC Press LLC, 2001, 93-102.
[17] Ma J. and Hemmje M., Developing Knowledge Management Systems Step by Step, Proceedings of the 2nd European Conference on Knowledge Management, ed. Dan Remenyi, Bled, Slovenia, 2001, 301-310.
[18] Mandviwalla M. and Khan S., Collaborative Object Workspaces (COWS): exploring the integration of collaboration technology, Decision Support Systems, 27, 1999, 241-254.
[19] McDonald D.W. and Ackerman M.S., Expertise Recommender: A Flexible Recommendation Architecture, Proceedings of the ACM Conference on Computer-Supported Cooperative Work (CSCW '00), 2000, 231-240.
[20] Mertins K., Heisig P. and Vorbeck J., Knowledge Management - Best Practices in Europe, Springer-Verlag, Berlin, Germany, 2001.
[21] I. Nonaka and H. Takeuchi, The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation, New York: Oxford Univ. Press, 1995.
[22] Rabarijaona A., Dieng R., Corby O. and Inria R.O., Building and Searching an XML-Based Corporate Memory, IEEE Intelligent Systems, May/June 2000, 56-63.
[23] E. Motta, Reusable Components for Knowledge Modeling, IOS, Amsterdam, The Netherlands, 1999.
[24] C. Rouveirol, A Knowledge-Level Model of a Configurable Learning System, IEEE Expert, August 1996, 50-58.
[25] P.M. Schiefloe and T.G. Syvertsen, Coordination in Knowledge-Intensive Organizations, in: Coordination Technology for Collaborative Applications, eds. Wolfram Conen and G. Neumann, Singapore, Springer-Verlag Press, 1997, 9-23.
[26] Shin M., Holden T. and Schmidt R.A., From knowledge theory to management practice: towards an integrated approach, Information Processing & Management, 37, 2001, 335-355.
[27] XML and RDF, www.w3.org/TR/REC.
[28] Yang C., Steinfield C. and Pfaff B., Virtual team awareness and groupware support: an evaluation of the TeamSCOPE system, Int. J. Human-Computer Studies, 56, 2002, 109-126.
A Dynamic Matching and Binding Mechanism for Business Service Integration Fengjin Wang, Zhuofeng Zhao, and Yanbo Han Institute of Computer Technology, Chinese Academy of Sciences {wfg,zhaozf,yhan}@software.ict.ac.cn
Abstract. The dynamic nature of business applications over the Internet requires that distributed business services be dynamically composed. Dynamic matching and binding of business services, through which activities specified in a process definition can be dynamically bound to an appropriate service at runtime, becomes a key issue. This paper presents a mediation mechanism that supports dynamic resource matching and binding. Having modified and extended the traditional workflow philosophy, we propose a flexible composition model that coordinates business services according to user requirements. Besides functional aspects, we add QoS templates for business services in order to make service selection more rational. We have implemented a prototype system based on the proposed concepts of dynamic service matching and binding to show its applicability in achieving flexible business service integration.
1 Introduction

With the development of B2B applications and service grids [Kra02, Fos02], service-oriented architecture has gained popularity. We broadly refer to all B2B-related, loosely coupled distributed components communicating over the Internet as business services. Business services provided by individual business organizations can be accessed and shared across a wide-area network using standard protocols, such as the standard stack of Web services.
In an Internet-based environment, we are increasingly confronted with problems of change. Business organizations change more frequently with the trend of globalization; their roles in inter-organizational processes as well as the business services they provide may change from time to time. For various use cases, distributed business services need to interact and cooperate flexibly and adaptively to provide integrated and value-added business services over the Internet. In such a rapidly changing environment, the integration of legacy systems as well as new services has become a long-lasting challenge and undertaking. These trends give rise to the demand for more automated service matching and binding.
Recently, extending workflow technology to manage business service integration has drawn some attention [Alo99, Str00, Fab00, WSFL]. Efforts have been made to integrate services provided by different business organizations to conduct a joint
Y. Han, S. Tai, and D. Wikarski (Eds.): EDCIS 2002, LNCS 2480, pp. 168-179, 2002 © Springer-Verlag Berlin Heidelberg 2002
business venture with the help of inter-organizational workflows. The dynamism caused by inter-organizational cooperation requires that process models be finalized at runtime and that activities specified in a process model be dynamically bound to an appropriate business service at runtime [Han98, Men02]. For this purpose, traditional workflow technology can offer some help, but not much.
The project WISE [Alo99] (Workflow based Internet SErvices) aims at designing, building, and testing a commercially viable infrastructure for developing distributed applications over the Internet. The infrastructure includes an Internet-based workflow engine acting as the underlying distributed operating system controlling the execution of distributed applications, a set of brokers enabling the interaction with already existing systems that are to be used as building blocks, and tools for programming in the large that allow final users to configure and develop distributed applications. However, the workflow system they provide does not offer all the facilities needed to capture and manage the dynamic properties of business service integration.
eFlow [Fab00] is a system that supports the specification, enactment, and management of composite e-services, which are modeled as processes enacted by a service process engine. In eFlow the selection of the desired service is done according to a set of service selection rules, which are defined in a service-broker-specific language.
WSFL (Web Services Flow Language) [WSFL] is a newly proposed standard that describes the composition of Web services in an XML-based language. It takes a flow model approach to defining and executing business processes and defines a public interface that allows business processes to advertise themselves as Web services. However, WSFL does not support dynamic composition and selection of services without a predefined workflow.
The dynamic selection rules for Web services can only be defined via UDDI APIs, which do not support the semantics of QoS (Quality of Service), such as legal obligations of a service, costs and prices for performing an activity, etc.
Based on the above-stated observations, we have tried to provide a workflow-based mediation mechanism for business service integration, through which workflow processes can be assembled and activities specified in a process definition can be dynamically bound to an appropriate service at runtime. This paper presents a flexible, workflow-based model that supports the mediation mechanism. Dynamic matching and binding of resources are realized on the basis of constraint templates and service templates. Constraint templates are used to describe the requirements and constraints of activities, and service templates provide a uniform specification of business services.
The paper is organized as follows. In section 2, we introduce the conceptual design of our business service integration system, together with the architecture of the mediation server. In section 3, we introduce an extended workflow model for business service integration and define the proposed dynamic resource matching and binding mechanism, focusing mainly on the concepts and implementation of the constraint templates and business service templates. In section 4, a prototypical implementation of the business service integration system is presented. Finally, in section 5, we sum up our work and discuss some future work.
2 Concepts and Mechanisms of Mediated Business Service Integration
To address the above-stated requirements in B2B and service grid scenarios, we need an infrastructure that facilitates the integration of business services, which provides: (1) a customizable way through which users can make use of business services in an integrated way; (2) a dynamic matching and binding mechanism to support business service integration at runtime; (3) a uniform way to describe and manage distributed business services. Based on the above-stated considerations, we have tried to build a prototypical business service integration system. The conceptual diagram of our business service integration system is shown in Figure 1.
Fig. 1. The Conceptual Diagram of the Business Service Integration System

On the client side, the clients utilize a portal interface provided by the Mediation Server to express their requirements. With the help of the customization tools and constraint templates, the requirements are converted into process definitions in which no concrete business services are specified. Requirements at the activity level are not fulfilled until runtime, when actual services are determined through dynamic matching and binding of business services. To the right of Figure 1, business organizations provide their services in a uniform and standard way and advertise them in the directory server; we used Web services in our prototypical system. The directory server provides facilities for the registration and publication of business services provided by distributed business organizations over the Internet, and provides templates to describe various business services. The mediation server is the core part of this system; it implements the dynamic matching and binding mechanism for business service integration according to user
requirements. It looks up the directory server and invokes business services. Service mediation is an essential step in the flexible integration of business services.
Figure 2 shows the architecture of a mediation server. It is made up of several key elements, such as a user portal, a workflow-based service composer and an enactment engine. Based on a set of tools for template matching, process definition and customization, the service composer composes a process model for integrating distributed business services. The extended workflow modeling language is based on WfMC's workflow definition [WfMC96, WfMC99] and will be discussed in the subsequent section. The enactment engine executes the assembled process models. For each activity, it realizes our dynamic service matching and binding mechanism at runtime to achieve flexibility. After an appropriate service is bound, it calls that service to perform the activity. The enactment engine will also mediate the process flow in case of exceptions, e.g., when the requested services do not exist.
Fig. 2. The Architecture Diagram of Mediation Server
3 Dynamic Matching and Binding of Business Services: Models and Techniques
Fengjin Wang, Zhuofeng Zhao, and Yanbo Han

We extend the XML-based WfMC workflow definition in several ways to accomplish service mediation. We also introduce constraint templates for the activities in the workflow model, which describe the functional and QoS constraints an activity should satisfy. Business services are bound to process activities at runtime using the dynamic resource matching and binding mechanism, which matches the service templates in the directory server with the constraint templates of the activities. In this section, we first discuss the templates, especially how they are used within the extended workflow model to accomplish dynamic resource matching and binding at runtime.

3.1 Constraint Templates

Constraint templates are used to describe the request constraints of activities in a process definition. Typically, activities represent business tasks and interactions between service providers. As such, besides functional properties such as functional descriptions, parameters and constraints, they also need to specify additional business semantics described by properties like the legal identifier of the service provider, the cost and price of performing the service, the average time needed to complete the task, and so on. These properties are here simply referred to as QoS properties. For example, a query like "Do you provide a book-ordering service?" can only find a book-ordering service. This mechanism proves inflexible and insufficient when the client needs to issue more complex and more precise queries: without the QoS properties, for example, it is impossible for a client to specify that he/she is attempting to discover a book-ordering service whose cost is below $5 per service and whose provider is in China. The solution we propose seeks to provide sufficient functional and QoS semantics in the service descriptions and activity requests, respectively, which are described in the templates. The constraint template in our experimental system is basically defined as follows:

<RequestConstraints>
  <QoS>        //** specify QoS constraints **//
    <Cost> </Cost>
    <Time> </Time>
    <Company> </Company>
  </QoS>
  <Func>       //** specify functional constraints **//
    <Constraint> </Constraint>
  </Func>
</RequestConstraints>
The contents in the <RequestConstraints> tag define the request constraints of an activity. Under the <QoS> tag, the <Cost> tag specifies the cost that the activity brings about, the <Time> tag specifies the duration that the activity requires to finish its job, and the <Company> tag specifies the company that provides the expected service. Under the <Func> tag, the <Constraint> tag specifies the functional constraints of the activity. An example is shown below. There is a "BookOrder" activity in a workflow model, which orders books according to the names and numbers of the books. The user can specify functional constraints and QoS constraints for the activity.
<RequestConstraints>
  <QoS>
    <Cost>200</Cost>
    <Time>3 days</Time>
    <Company> </Company>
  </QoS>
  <Func>
    <Constraint>ProcessOrder.BookOrder</Constraint>
  </Func>
</RequestConstraints>
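For concreteness, a request-constraints document of this shape can be read with a few lines of code. The following is a minimal sketch in Python, not part of the authors' prototype; the tag names follow the template structure described in the text and are otherwise an assumption.

```python
import xml.etree.ElementTree as ET

# A request-constraints document shaped like the "BookOrder" example above.
# The exact schema used by the authors' prototype is an assumption here.
REQUEST = """
<RequestConstraints>
  <QoS>
    <Cost>200</Cost>
    <Time>3 days</Time>
    <Company></Company>
  </QoS>
  <Func>
    <Constraint>ProcessOrder.BookOrder</Constraint>
  </Func>
</RequestConstraints>
"""

def parse_request_constraints(xml_text):
    """Extract the QoS and functional constraints of an activity."""
    root = ET.fromstring(xml_text)
    return {
        "cost": (root.findtext("QoS/Cost") or "").strip(),
        "time": (root.findtext("QoS/Time") or "").strip(),
        # An empty <Company> element means the company is unconstrained.
        "company": (root.findtext("QoS/Company") or "").strip(),
        "func": (root.findtext("Func/Constraint") or "").strip(),
    }

constraints = parse_request_constraints(REQUEST)
print(constraints)
```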
The request constraints shown above mean that the activity wants to implement the function of the ProcessOrder.BookOrder template, and specify that the cost should be under 200, that the time to completion should be within 3 days, and that no constraint confines the company. The ProcessOrder.BookOrder template is registered in the directory server as a business service template.

3.2 Service Templates

In order to describe the functional and QoS capabilities of a business service provided by an individual business unit, and to support the dynamic matching and binding of services while executing a workflow, we need to register business service templates in the directory server. In our experimental system, we build the directory server on UDDI [UDDI, Pet01, Fen01]. We construct business service templates using the tModel mechanism of the UDDI specification to describe the capabilities of a Web service. A tModel is a concept of service type in the UDDI specification; the tModel structure takes the form of keyed metadata and provides a reference system based on abstraction. The set of tModels a Web service references forms the business service templates of that Web service. We can judge the functional and QoS capabilities of a business service through the templates it references. In accord with the structure of constraint templates, a functional template mainly describes the functional classification as well as operations and parameters, and a QoS template mainly describes the cost of the service, its duration, and its owner. In the directory server, functional templates and QoS templates are registered as tModels first. When a service provider registers a business service as a service entry in the directory server, they should set, under the service entry, the parameters of the functional template and the QoS template of the business service according to its function and QoS capability. For example, a service provider registers a Web service that implements the "BookOrder" function in the directory server.
The functional template referenced by the Web service is ProcessOrder.BookOrder, which means that the Web service can implement the function the ProcessOrder.BookOrder template requires. In the example shown above, we have a simple QoS template that a Web service can reference; it has the parameters "200; 2days; ICT", which mean that the cost of the Web service is $200, the task can be finished in 2 days, and the provider is ICT.
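The comparison between such a QoS template and an activity's request constraints can be sketched as follows. The "cost; duration; provider" parameter format follows the example above, while the dictionary shape of the request is our own illustrative assumption, not the prototype's format.

```python
import re

def parse_days(text):
    """'2days' or '3 days' -> 2 or 3 (a duration in days)."""
    m = re.search(r"(\d+)\s*days?", text)
    return int(m.group(1)) if m else None

def qos_satisfies(service_params, request):
    """Check a service QoS template ('cost; duration; provider') against
    an activity's request constraints (a hypothetical dict shape)."""
    cost, duration, provider = [p.strip() for p in service_params.split(";")]
    if request.get("max_cost") is not None and int(cost) > request["max_cost"]:
        return False
    if request.get("max_days") is not None and parse_days(duration) > request["max_days"]:
        return False
    # An empty company constraint means any provider is acceptable.
    if request.get("company") and provider != request["company"]:
        return False
    return True

# The "BookOrder" request from section 3.1: cost under 200, within 3 days, any company.
request = {"max_cost": 200, "max_days": 3, "company": ""}
print(qos_satisfies("200; 2days; ICT", request))   # the ICT service from section 3.2
print(qos_satisfies("250; 2days; ICT", request))   # too expensive
```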
3.3 The Extended Workflow Model

The extended workflow model is based on XPDL [XPDL]. We differentiate between two aspects of the workflow model: the build-time aspect and the run-time aspect [Sad98]. The build-time aspect relates to the semantics of the process and is captured by the process model. The run-time aspect relates to the process instances and is handled by the process instances and execution tools.

We can view the workflow model W as a directed graph, W = (C, A, R), where
C: the set of connector nodes;
A: the set of activity nodes;
R: the set of edges between nodes, representing the transitions between nodes, R ⊆ (C × A) ∪ (A × C) ∪ (A × A).

In the flexible workflow model, the Join and Split constructs and their constraints, which are defined as a part of the activity definitions in XPDL, are treated as a new kind of node (connector node) in order to separate the specification of control information from the activity definition and thereby provide better flexibility and modularity. Thus, changes made to the relationships between activities may only affect the connector nodes, without changing the activity nodes.
Definition 1. Activity type for node n:
ActivityType(n) ∈ {Route, Implement}, n ∈ A.
There are two kinds of activity nodes in the process model: "Route" and "Implement". The "Route" activity is a "dummy" activity that permits the expression of "cascading" transition conditions; it has neither a performer nor an application, and its execution has no effect on the workflow relevant data. The "Implement" activity is a "normal" activity that performs some operation and changes the workflow relevant data.
Definition 2. ActivityExeType for an "Implement" activity node n:
ActivityExeType(n) ∈ {HumanActivity, AutoActivity, QoSActivity}, n ∈ A and ActivityType(n) = Implement.
An "Implement" activity may be implemented in one of the three ways shown above: HumanActivity, AutoActivity, and QoSActivity. HumanActivity means that the implementation of this activity is not supported by the workflow through automatically invoked applications or procedures but is accomplished manually. AutoActivity means that the activity is implemented automatically by a specified application or a specified business service, which cannot be changed at process runtime. QoSActivity means that the activity is to be implemented by an application or a business service in general: the constraint template of the activity is defined in the process definition, and the activity is not actually bound to a suitable business service over the Internet until runtime, through the dynamic resource matching and binding mechanism.
Definition 3. RequestConstraints for a QoSActivity node n:
RequestConstraints(n) = {s | s ∈ FuncConstraints or s ∈ QoSConstraints}, n ∈ A and ActivityType(n) = Implement and ActivityExeType(n) = QoSActivity.
As for the QoSActivity node, the RequestConstraints for the node contain the functional constraints (FuncConstraints) and the QoS constraints (QoSConstraints), which specify, respectively, the functional requirement and the QoS capability the activity requires, based on the constraint templates described in Section 3.1. We define the grammar of the activity RequestConstraints in accord with the constraint template. The schema of the activity looks like:

<Activity ID Name>
  <ActivityType> Route | Implement </ActivityType>
  <ImplementType> No | Tool | QoS </ImplementType>
  <Performer> ParticipantID </Performer>
  <AppID> ID </AppID>
  <Priority> </Priority>
  <ExtendedAttributes>
    <RequestConstraints>
      <QoS>
        <Cost> </Cost>
        <Time> </Time>
        <Company> </Company>
      </QoS>
      <Func>
        <Constraint> </Constraint>
      </Func>
    </RequestConstraints>
  </ExtendedAttributes>
</Activity>
Definition 4. Let INS be the set of process instances, which represents all process instances of the workflow model W.
The state of a node is NodeState: A × INS → {Scheduled, Active, Completed, Terminated}, where A is the set of activity nodes of W.
NodeState(n, i), n ∈ A, i ∈ INS, represents the state of node n in the process instance i.
Scheduled represents a state where the node is scheduled for execution but has not been allocated to the workflow client. For example, in the case where ActivityExeType(n) = HumanActivity, it represents that the activity has not yet been chosen.
Active represents a state where the node is being performed.
Completed represents a state where the activity assigned to the node has been successfully completed, and the workflow engine can decide the next node according to the process definition.
Terminated represents a state where some error or exception emerges in the performing of the activity.

Definition 5. The workflow data WFData:
WFData = {v1, v2, ..., vn}, where vi is a variable of workflow data, i = 1, ..., n.

Definition 6. InstanceData: INS → {(v, Value(v))}, v ∈ WFData.
Value(v) represents the current value of the workflow data v. When an activity is active, the parameters of the invoked business service should be bound to the workflow data.
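The graph and node-state definitions above can be encoded directly. The sketch below is our own illustrative code, with invented node names; it enforces the edge restriction R ⊆ (C × A) ∪ (A × C) ∪ (A × A) and the four node states.

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowModel:
    """W = (C, A, R): connector nodes, activity nodes, transition edges."""
    connectors: set = field(default_factory=set)
    activities: set = field(default_factory=set)
    edges: set = field(default_factory=set)

    def add_edge(self, src, dst):
        # R ⊆ (C × A) ∪ (A × C) ∪ (A × A): no connector-to-connector edges.
        if src in self.connectors and dst in self.connectors:
            raise ValueError("connector-to-connector transitions are not allowed")
        self.edges.add((src, dst))

STATES = {"Scheduled", "Active", "Completed", "Terminated"}

# NodeState: A × INS -> state, kept per (node, instance) pair.
node_state = {}

def set_state(node, instance, state):
    if state not in STATES:
        raise ValueError(state)
    node_state[(node, instance)] = state

w = WorkflowModel(connectors={"and-split"}, activities={"BookOrder", "Pay"})
w.add_edge("and-split", "BookOrder")
w.add_edge("BookOrder", "Pay")
set_state("BookOrder", "ins-1", "Scheduled")
set_state("BookOrder", "ins-1", "Active")
print(node_state[("BookOrder", "ins-1")])
```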
3.4 Basic Steps towards Dynamic Matching and Binding of Business Services

On the basis of the extended workflow model and the templates, we can describe how the workflow engine realizes the dynamic matching and binding mechanism at runtime. When NodeState(n,i) changes from "Active" to "Completed", the workflow engine needs to determine the next activity to be activated, which depends on the process model definition and the current values of the workflow data. In this process, if the next activity's type is "Implement" and its ActivityExeType is QoSActivity, the workflow engine binds the activity to the right business service using the DynServiceBinding function. The DynServiceBinding function does the following:

1. Search the directory server for the business services whose business service templates can satisfy the RequestConstraints of the activity, that is, match the functional templates and QoS templates of the business services against the functional constraints and QoS constraints of the activity, which are defined based on constraint templates.

2. After comparing the parameters of the service templates with the RequestConstraints of the activity, return the matching business services to the workflow engine.

3. If there is more than one matching business service, the workflow engine chooses the most appropriate service with the help of an interactive dialogue; by default, the engine selects the pre-specified one. Then, the workflow engine binds the corresponding parameters of the activity and the business service.

For example, we can define a workflow process model including a "BookOrder" activity node that is an "Implement" activity whose ActivityExeType is QoSActivity. This activity wants to implement the function of the ProcessOrder.BookOrder template, and specifies that the cost be below 200 and the time to completion be within 3 days.
This activity can be defined with functional constraints and QoS constraints as shown in the example in Section 3.1, without being assigned to a concrete business service. At runtime, the workflow engine picks up the RequestConstraints of the activity and searches the directory server for Web services with the ProcessOrder.BookOrder template. (We assume that all providers of "BookOrder" services have registered their services in advance, according to the service templates defined in the directory server.) After comparing the functional templates, the engine takes the parameters of the QoS template of each candidate service. Let us assume that the Web service example defined in Section 3.2 is found, and that the parameters of its QoS template are "200; 2days; ICT", which mean that the cost of the Web service is $200, the task can be finished in 2 days, and the provider is ICT. The DynServiceBinding function will return this Web service, since the parameters of the service's QoS template are compatible with the RequestConstraints of the activity. The workflow engine will then bind the parameters of the business service to the corresponding workflow data defined in the workflow process model. In the matching operation, there are two possible results. First, one or more services satisfy all the RequestConstraints of the activity. In this case, a service is chosen to implement the activity according to the preference rules stated
above. Second, the engine cannot find any service that satisfies the RequestConstraints of the activity. In this case, the matching operation fails, an exception is triggered, and the user is notified to take appropriate measures.
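The three steps of DynServiceBinding, together with the two possible outcomes just described, can be sketched as follows. This is a simplified illustration rather than the prototype's code: the directory is a plain dictionary and the template fields are assumptions standing in for the UDDI tModel data.

```python
class NoMatchingService(Exception):
    """Raised when no registered service satisfies the request constraints."""

def dyn_service_binding(directory, request, choose=None):
    """Steps 1-3 from the text, simplified. `directory` maps a service name
    to its template; the dict shapes are illustrative assumptions."""
    # Step 1: match the functional templates.
    candidates = [(name, t) for name, t in directory.items()
                  if t["func"] == request["func"]]
    # Step 2: keep only services whose QoS parameters satisfy the request.
    matches = [(name, t) for name, t in candidates
               if t["cost"] <= request["max_cost"] and t["days"] <= request["max_days"]]
    if not matches:
        # The enactment engine mediates the flow: an exception notifies the user.
        raise NoMatchingService(request["func"])
    # Step 3: with several matches, let the caller choose; default to the first
    # (standing in for the interactive dialogue / pre-specified preference).
    return choose(matches) if choose else matches[0]

directory = {
    "ICT-BookOrder":  {"func": "ProcessOrder.BookOrder", "cost": 200, "days": 2},
    "Slow-BookOrder": {"func": "ProcessOrder.BookOrder", "cost": 100, "days": 5},
}
request = {"func": "ProcessOrder.BookOrder", "max_cost": 200, "max_days": 3}
name, template = dyn_service_binding(directory, request)
print(name)
```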
4 The Implementation of the Prototype System

We have implemented a prototype system realizing the concepts and mechanisms discussed above, based on a J2EE platform and tools for implementing Web services. Users can use the graphical Process Definition Editor to design a business process template or an actual process model, in which the RequestConstraints of an activity can be specified to reflect the service requirements of the activity node. The workflow model serves as glue for integrating distributed business services.
Fig. 3. The Process Runtime Interface

The workflow engine provides the dynamic resource matching and binding mechanism at runtime. The service requests specified in the process definition phase are bound to suitable business services over the Internet during the enactment of the business process instances, through the dynamic resource matching and binding mechanism, which matches the business service templates in the directory server with the constraint templates. At present, the parameter bindings between the workflow data and a business service's parameters are specified manually at runtime; we are working to achieve partially automatic binding. A screen shot of the runtime binding is shown in Figure 3.
5 Concluding Remarks and Future Work

Coping with the intrinsic dynamism of business service integration over the Internet is a big challenge. This paper presented an approach to providing integrated services over the Internet that supports dynamic matching and binding of distributed business services, realized using constraint templates in the process definition and business service templates in the service entries of the directory server. We also introduced the architecture of a business service integration system based on an extended workflow model. Changes of business services do not affect the workflow process model; that is, the actual workflow process instances can be more easily modified to adapt to the changing business environment. Several issues are to be addressed in future research. The first question is how to describe more abstract task requests and let the mediation server better understand them. Because users' requests for services vary in nature, we should provide a common and meaningful way through which users can represent their requests exactly and the system can understand them. The second question concerns semantics: in this paper, we assume that standardized business service templates are defined by business organizations of the same business type and that constraint templates are based on these business service templates. This assumption may not be realistic, because different organizations and users are in different domains and attach different semantics to concepts. The third question is that, in a real business community, the relationships between business services in cooperative work are more complex. The extended workflow model in our paper should be developed further to support complex, cooperative work among business services and to realize more flexible and just-in-time integration.
References

[Alo99] Alonso, G., Fiedler, U., Hagen, C., Lazcano, A., Schuldt, H., and Weiler, N., "WISE – Business to Business E-Commerce," Proceedings of the 9th International Workshop on Research Issues on Data Engineering: Information Technology for Virtual Enterprises, Sydney, Australia, March 1999.
[Fab00] Fabio Casati, Ski Ilnicki, LiJie Jin, Vasudev Krishnamoorthy, Ming-Chien Shan, "Adaptive and Dynamic Service Composition in eFlow".
[Fen01] Fennivel Chai, "tModel Architecture and Public tModels", http://www-900.ibm.com/developerWorks/cn/webservices/ws-tmodel/part2/index.shtml
[Fos02] I. Foster, C. Kesselman, J. Nick, and S. Tuecke, "The Physiology of the Grid: An Open Grid Services Architecture for Distributed Systems Integration", January 2002, http://www.globus.org/research/papers.html#OGSA
[Kra02] K. Krauter, R. Buyya, and M. Maheswaran, "A Taxonomy and Survey of Grid Resource Management Systems," Software: Practice and Experience, Vol. 32, No. 2, Feb. 2002.
[Men02] Meng, J., Su, S.Y.W., Lam, H., and Helal, A., "Achieving Dynamic Inter-Organizational Workflow Management by Integrating Business Processes, Events, and Rules," to appear in the Proceedings of the 35th Hawaii International Conference on System Sciences (HICSS-35), Hawaii, USA, January 2002.
[Han98] Han Yanbo, Sheth Amit, "A Taxonomy on Adaptive Workflow Management," Towards Adaptive Workflow Systems, Workshop at the Conference on Computer Supported Cooperative Work, Seattle, WA, USA, November 1998.
[Pet01] Peter Brittenham, Francisco Curbera, Dave Ehnebuske, Steve Graham, "Understanding WSDL in a UDDI Registry", http://www-106.ibm.com/developerworks/library/ws-wsdl/
[Sad98] Sadiq Shazia Wasim, Orlowska Maria E., "Dynamic Modification of Workflows," Technical Report No. 442, University of Queensland, Brisbane, Australia, October 1998.
[Ste00] Steve Burbeck, "The Tao of e-business Services", http://www-106.ibm.com/developerworks/webservices/library/ws-tao/
[Str00] Stricker, C., Riboni, S., Kradolfer, M., and Taylor, J., "Market-Based Workflow Management for Supply Chains of Services," Proceedings of the 33rd Hawaii International Conference on System Sciences, Hawaii, USA, 2000.
[UDDI] UDDI Technical White Paper, http://www.uddi.org.
[WfMC96] Workflow Management Coalition, The Workflow Management Coalition Specifications – Terminology and Glossary, Issue 2.0, Document Number WFMC-TC-1011, 1996.
[WfMC99] WfMC, "Interface 1: Process Definition Interchange V 1.1 Final (WfMC-TC-1016-P)," 1999.
[WSFL] IBM Corporation, "Web Services Flow Language (WSFL 1.0)", http://www-4.ibm.com/software/solutions/webservices/pdf/WSFL.pdf, May 2001.
[XPDL] WfMC, "Workflow Process Definition Interface – XML Process Definition Language," Version 0.0.3a, WFMC-TC-1025.
A Uniform Model for Authorization and Access Control in Enterprise Information Platform

Dongdong Li, Songlin Hu, and Shuo Bai

Software Division, Institute of Computing Technology, The Chinese Academy of Sciences, Beijing 100080, China
E-mail:
[email protected]
Abstract. An enterprise information platform (EIP) is an enterprise model-based platform aiming at model-driven enterprise design, analysis, and evaluation. One of its roles is to build up a framework for the easy integration of different systems representing the processes, structures, activities, goals, information, etc., of businesses, governments, or other enterprises. The topic of this paper is not the data integration or application integration of EIP, but the integration of authorization. The paper focuses on integrating the authorizations of the workflow management system and the resource management system of an EIP. Workflow management and resource management in current EIPs usually have their own models of authorization and access control. This kind of separate authorization and access control mechanism causes many security problems. Previous studies focus on each authorization system individually; their integration has hardly been deeply discussed. Here the paper presents a unified authorization and access control model, so as to represent the privileges authorized by different systems in the same format, and to avoid conflicts and other security problems as a consequence.
1 Introduction and Motivation
By definition, an enterprise information platform (EIP) enables an enterprise to unlock internal and external stored information and exposes internal business processes to both employees and external audiences. Therefore, it is very important to control which user can access which resource, and a comprehensive authorization and access control model becomes a critical component of an EIP. In this paper, we focus on the authorization and access control of workflow management and resource management in an EIP. In an enterprise information platform, the workflow system and the resource management system are two essential parts, and authorization and access control play key roles in both. In general, authorizations in the two parts exist in the following aspects:

Task authorization: Allocate roles or users to each task (or activity) in the process of a workflow.

Application data authorization: Authorize application data in workflow applications to different roles or users. Because users of different roles perform different operations on the same application data of a workflow task, they must be authorized with different privileges.

Resource authorization: Authorize resources of the EIP to different roles or users. Though application data also belongs to system resources, this paper treats resource authorization differently from application data authorization. Resource authorization is not involved in workflow processes; it is defined in the resource management system, and users who have been authorized can access the resource directly. Application data authorization does not use the same method: it is defined in the course of workflow definition, and the privileges defined for application data must be used in the execution process of a workflow.

Currently, most EIPs have two or more models of authorization and access control to deal with the authorizations mentioned above. Usually, both the workflow system and the resource management system have their respective authorization models, which causes many security and administration problems. Some are listed here:

Multiple authorization models make it more difficult to implement a globally consistent policy. For example, if a user is permitted to access a certain resource in the workflow module, while the same user is forbidden the same resource in the resource management module, there is an authorization conflict.

It is very difficult to solve authorization conflicts. Each authorization and access control model has its own conflict resolution policies. When conflicts occur, it is difficult to decide which conflict resolution policies of which model to adopt.

Expressing complex authorization constraints is a troublesome problem, especially when authorizations defined in one model must depend on authorizations defined in another model. Each individual authorization model is likely to be simple, making it more difficult to express complex authorization constraints.

The management of several authorization and access control models is complex.

Y. Han, S. Tai, and D. Wikarski (Eds.): EDCIS 2002, LNCS 2480, pp. 180-192, 2002. © Springer-Verlag Berlin Heidelberg 2002
When the security rules of an enterprise change, administrators may need to revise the relevant authorization and access control policies of all the models. Without a unified model, this task becomes an administrative nightmare, for they must maintain all the policies of all the models at the same time. As stated above, it is necessary to build a unified authorization and access control model for EIP that represents the privileges permitted by different modules in the same format. Using the unified model avoids all the problems mentioned above, namely inconsistent authorizations, difficult conflict resolution, hard specification of complex authorization constraints, and complex administration of authorization models.
2 Related Work
Authorization and access control have been widely discussed, and many methods have been proposed to model authorization and access control properties. Here is some related work on authorization and access control in workflow systems and information management systems.
For workflow security, previous research focused on task assignment constraints. Using task assignment constraints, assignment methods for workflow systems are specified in terms of constraints on the permissible assignments of users to tasks and roles. Because the role-based model is a natural choice for implementing security in workflow systems, most of the discussions are based on it. [ ] proposed a formal logical authorization model for assigning users and roles to tasks with both static and dynamic authorization constraints. [ ] presented a summary of the security adopted in commercial workflow systems, including IBM MQSeries, Staffware, InConcert, and Cosa. All these products provide certain kinds of security mechanisms for supporting task assignment constraints. IBM MQSeries, Staffware, and Cosa allow duty-constraint binding, while InConcert allows external applications that are invoked at task assignment time to determine the role-to-task assignment. [ ] considered another issue of WFMS security: data access control. They discussed key access control requirements for application data in workflow applications using examples from the healthcare domain, introduced a classification of application data used in workflow systems by analyzing their sources, and proposed a comprehensive data authorization and access control mechanism for WFMSs.

For information management systems, several authorization models have been provided. [ ] presented a unified framework that can enforce multiple access control policies within a single system; the framework is based on a language through which users can specify security policies to be enforced on specific accesses. [ ] proposed an algebra of security policies together with its formal semantics, and illustrated how to formulate complex policies in the algebra and reason about them.

Previous studies focus either on the authorization of workflow systems or on the authorization of information resource management systems, but a uniform mechanism for the representation of the security policies of an EIP system has hardly been deeply discussed. This paper presents a unified model of authorization and access control for EIP. It represents the privileges authorized by different modules of the EIP in the same format and provides better security for both the workflows and the resources of the EIP. Standardizing the way the resource management module and the workflow module define their security requirements provides the means for integrating local and distributed security policies and for translating security policies of different formats into the same format.
3 A Unified Model of Authorization and Access Control
In this section, the architecture of the unified model and a brief introduction of all components of the model will be presented first. Then the basic elements of the model, and a figure showing the relationships among them, will be given.
3.1 Architecture of the Model
The architecture of the unified authorization and access model, shown in Figure 1, consists of the following components:
- A constraint specification and management module that (1) specifies the authorization constraint conditions given by system administrators, and (2) assembles all the constraints relevant to an object (a task or a resource), analyses them, then generates a basic authorization rule and stores this rule into the authorization base.
- A conflict check and resolution module that specifies how to check and resolve authorization conflicts occurring in the system. The module is used whenever system administrators grant authorizations or users request permissions to access objects.
- A decision and enforcement module that determines the system's response (either granted or denied) to every possible access request, according to all the relevant information, such as the basic authorization rules in the authorization base, all types of hierarchies, historical information, current system states, and the conflict resolution policies.
- A history base whose rows describe the accesses executed. When specifying authorization constraints that are related to activities executed before, the historical information must be taken into consideration.
- An authorization base whose rows are basic authorization rules that are specified explicitly by the system administrators.
- Current system states, which are needed by authorization requirements that contain system state variables as parameters. Examples of this kind of authorization requirement are "At most n copies of a program P can be running concurrently in all nodes of the system" and "User A is allowed to execute program P only if the current system load is less than a given threshold".
Fig. 1. Architecture of the Uniform Model of Authorization and Access Control
3.2 Basic Elements of the Model
The basic elements of our model and their relationships are shown in Figure 2. The model has eight entity sets, called users (U), roles (R), objects (O), privileges (P), object types (OT), administrators (A), constraints (C), and authorization rules (AR). Descriptions of these elements are listed below:
- U: a set of users.
- R: a set of roles. Roles are hierarchically organized into a role-subrole relationship. Authorizations specified for a role will apply to all its subroles (users).
- S: a set of subjects to which authorizations can be granted. Subjects can be either users (i.e., elements of U) or roles (i.e., elements of R): S = U ∪ R.
- O: a set of objects defined in the system on which some actions can be carried out, such as tasks and resources. Similar to roles, objects are organized into a part-of hierarchy, for example, files and directories, workflow process and subprocess, etc.
- OT: a set of object types, including file, directory, database table, process, task, etc.
- P: a set of privileges denoting the access modes that subjects can exercise on the objects in the system. In real situations, interactions exist among privileges. For instance, it is reasonable to assume that the write privilege is stronger than the read privilege, that is, it subsumes the read privilege. For this reason, the set of privileges P is organized into a hierarchy too.
- A: a set of system administrators, denoting those users who can grant authorizations and revoke them. Usually, administrators are hierarchically organized, for example, security officials > department system administrators > operation system administrators. Each administrator has his own control scope and special privileges. He must not grant or revoke authorizations beyond his privileges and control scope.
- C: a set of constraints. Each constraint is a Boolean expression on which an authorization rule depends (for details, see Section 4.1).
- AR: a set of authorization rules. Each authorization rule is a tuple denoting that an administrator grants (or denies) a subject the exercise of some privilege on an object under some constraint (for details, see the definition of authorization rules).
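The role-hierarchy rule above ("authorizations specified for a role apply to all its subroles") amounts to a reachability computation over the hierarchy. A small sketch, with role names invented for illustration:

```python
# Subrole edges: each key's authorizations also apply to the roles listed.
SUBROLES = {
    "employee": {"clerk", "manager"},   # clerk and manager are subroles of employee
    "manager": {"department-head"},
}

def covered_roles(role):
    """A role plus all roles below it in the hierarchy (transitively)."""
    seen, stack = set(), [role]
    while stack:
        r = stack.pop()
        if r not in seen:
            seen.add(r)
            stack.extend(SUBROLES.get(r, ()))
    return seen

# An authorization rule granted to "employee" therefore also applies to
# "manager" and, transitively, to "department-head".
print(sorted(covered_roles("employee")))
```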
4 Implementation of the Unified Authorization and Access Control Model
In our architecture, authorization is the granting of rights for a subject to access an object, while access control is the enforcement of this authorization.

The steps of authorization are as follows. First, authorization constraints are specified through the constraint specification and management module. Then the conflict check and resolution module verifies whether conflicts occur during the process of constraint specification and execution. If there are conflicts, they are resolved according to the conflict resolution policies, either automatically or manually by an administrator. At last, the finally generated authorization rule is stored into the authorization base.
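The authorization steps just described (specify constraints, check and resolve conflicts, store the rule) can be sketched as below. The (subject, object, privilege, sign) rule tuple and the deny-overrides policy are our illustrative assumptions; the paper defines its own rule structure and resolution policies.

```python
authorization_base = []

def conflicts_with(rule, base):
    """A conflict: the same (subject, object, privilege) with opposite signs."""
    s, o, p, sign = rule
    return [r for r in base if r[:3] == (s, o, p) and r[3] != sign]

def grant(rule, resolve="deny-overrides"):
    """Check conflicts, resolve them, then store the rule (steps 2 and 3)."""
    clashing = conflicts_with(rule, authorization_base)
    if clashing and resolve == "deny-overrides" and rule[3] == "+":
        return False            # an explicit denial already exists; keep it
    for r in clashing:          # a new denial overrides an earlier grant
        authorization_base.remove(r)
    authorization_base.append(rule)
    return True

grant(("alice", "report.doc", "read", "-"))
ok = grant(("alice", "report.doc", "read", "+"))   # conflicts with the denial
print(ok, authorization_base)
```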
Authorization and Access Control in Enterprise Information Platform
Fig. 2. Basic elements of the authorization model and their relationships
Steps of access control are as follows: When a subject "s" requests permission to execute a privilege "p" on object "o", the decision and enforcement module will be invoked to check if the authorization (s, o, +p) can be derived from the authorization base, the history base, all types of hierarchies, current system states, and the conflict resolution policies in force. If so, the access is allowed; otherwise, the access is denied. Figure 3 shows the process of authorization and the process of access control.
Fig. 3. Process of authorization and process of access control
Aspects of implementation will be discussed in more detail in the next subsections.
Dongdong Li, Songlin Hu, and Shuo Bai
4.1 Constraint Specification and Management
One of the main goals of the unified authorization is to specify all the authorizations in a unified format. Many authorizations, especially authorizations in workflow systems, are not independent: they must rely on some additional constraint conditions, such as other authorization rules, historical information, system states, etc. Here these conditions are called authorization constraints.
EXAMPLE 1. Consider the following rules with constraints:
- "In a given workflow WF1, tasks T1 and T2 must not be executed by the same role."
- "Role R1 can access the files in directory DIR1 only in a given period of time."
- "Role R2 can execute task T2 of workflow WF2 only after task T1 of WF2 was executed by role R1."
How to specify different authorization constraints is the first problem to be resolved for a unified authorization and access control model. Here we give the syntax of authorization constraints:
<CR> ::= <conjunctive item> {AND|OR <conjunctive item>}
<conjunctive item> ::= <compare predicate> {AND|OR <compare predicate>} | <function> | <authorization rule>
<compare predicate> ::= <left value> <operate> <right value>
<left value> ::= <variable> | <function>
<right value> ::= <constant> | <variable> | <function>
<operate> ::= '=' | '<>' | '>' | '>=' | '<' | '<='
<function> ::=
function1 | function2 | … | functionn
Here are some additional explanations:
- A constant can be of any type: string, float, integer, etc.
- A variable can be a system variable predefined by administrators, or an attribute variable of a subject or an object. For example, CurTime denotes the current system time, CurTaskName denotes the current task's name, Object.Createtime denotes the creation time of an object, User.Location denotes the location of a user, etc.
- An authorization rule can be a basic authorization rule in the authorization base or a derived authorization rule obtained according to the additional information.
- A function is a predefined function of the system. The following are some functions used by the examples in our paper:
belongto(u, r): is "true" if user u is assigned to role r, "false" otherwise (u∈U, r∈R).
owner(o, s): is "true" if object o is owned by subject s, "false" otherwise (o∈O, s∈S).
done(u, r, o, p): is "true" if user u, as role r, has accessed object o by privilege p, "false" otherwise (u∈U, r∈R, o∈O, p∈P).
type(o, ot): is "true" if o's type is ot, "false" otherwise (o∈O, ot∈OT).
count(x, n): counts the number of different answers of x and returns this value as n.
cando(s, o, p, c): is "true" if subject s has privilege p on object o under constraint c, "false" otherwise. Constraint c being null means that there is no access
constraint (s∈S, o∈O, p∈P, c∈C). This function can be computed using the authorization decision and enforcement algorithm given below: if the return value is "granted", cando(s, o, p, c) is "true"; if the return value is "denied", cando(s, o, p, c) is "false".
After the authorization constraints are specified, the system will assemble all the constraints relevant to the object, analyze them, and then generate a basic authorization rule into the authorization base.
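The predefined functions above can be sketched as ordinary predicates over toy data; the data and the renaming of `type` to `type_of` (to avoid shadowing a Python builtin) are illustrative assumptions, not the paper's implementation.

```python
# Toy data for the sketch (hypothetical, not the paper's implementation).
role_members = {"Managers": {"Alice"}, "DeveloperManagers": set()}
owners = {"form1": "Alice"}
history = {("Alice", "Employees", "form1", "fill")}
obj_types = {"form1": "task"}

def belongto(u, r):
    # "true" if user u is assigned to role r
    return u in role_members.get(r, set())

def owner(o, s):
    # "true" if object o is owned by subject s
    return owners.get(o) == s

def done(u, r, o, p):
    # "true" if user u, acting as role r, has accessed o with privilege p
    return (u, r, o, p) in history

def type_of(o, ot):
    # "true" if o's type is ot ('type' shadows a builtin, hence the rename)
    return obj_types.get(o) == ot

assert belongto("Alice", "Managers") and not belongto("Alice", "DeveloperManagers")
assert done("Alice", "Employees", "form1", "fill")
```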
4.2 Authorization Base
The authorization base stores all the basic authorization rules specified explicitly by administrators. Definition 1 below gives the definition of an authorization rule.
Definition 1 (Authorization Rule). An authorization rule is a tuple of the form (a, s, o, <sign>p, c), where a∈A, s∈S, o∈O, p∈P, c∈C, sign∈{+, -}, denoting that administrator "a" grants (+) or denies (-) subject "s" to exercise privilege "p" on object "o" under constraint "c". If "c" is null, it means that there is no authorization constraint with this rule.
Here are some examples of authorization rules.
EXAMPLE 2. Consider the following authorization rules:
- (a1, Alice, o, +write, type(o, Docs))
- (a1, Employees, o, +read, o.createtime = "…")
- (a2, u, ApproveTask, +execute, belongto(u, Managers) AND ¬belongto(u, DeveloperManagers))
- (a2, GroupLeaders, ApplyTask, +execute, ¬cando(GroupLeaders, ApproveTask, execute, null))
- (a3, Alice, form, -fill, done(Alice, Employee, form, fill))
The first rule states that administrator a1 grants that Alice can write all objects of type Docs. The second rule states that administrator a1 grants that role Employees can read all objects created at the given time. The third one states that all users belonging to Managers but not to DeveloperManagers are authorized to execute ApproveTask by administrator a2. The fourth one states that role GroupLeaders is authorized to execute ApplyTask by administrator a2 if it has no permission to execute ApproveTask. Finally, the last one states that if Alice as role Employee has filled the form, administrator a3 denies her to fill it again.
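Such rules can be encoded as plain tuples, one sketch of which follows; the constraint expressions are kept as source strings, and the administrator names are illustrative.

```python
# Example rules encoded as (admin, subject, object, sign, privilege,
# constraint) tuples; constraints are kept as source strings for now.
rules = [
    ("a1", "Alice",        "o",           "+", "write",   'type(o, Docs)'),
    ("a1", "Employees",    "o",           "+", "read",    'o.createtime = "..."'),
    ("a2", "u",            "ApproveTask", "+", "execute",
     'belongto(u, Managers) AND NOT belongto(u, DeveloperManagers)'),
    ("a2", "GroupLeaders", "ApplyTask",   "+", "execute",
     'NOT cando(GroupLeaders, ApproveTask, execute, null)'),
    ("a3", "Alice",        "form",        "-", "fill",
     'done(Alice, Employee, form, fill)'),
]

# The sign field separates grants from denials.
denials = [r for r in rules if r[3] == "-"]
assert len(denials) == 1 and denials[0][1] == "Alice"
```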
4.3 Conflict Resolution
Because authorization rules are created by different administrators and derived from different paths in the hierarchies being considered, this will probably lead to contradictory authorization rules in the system. Here are some examples.
EXAMPLE 3. Consider the following rules:
- (a1, r, file1, +read), (a2, r, file1, -read)
The first rule states that role r is authorized to read file1 by administrator a1, while the second states that administrator a2 denies role r to read file1. Then there are inconsistent authorization rules in the system.
- (a, r1, file1, +read), (a, r2, file1, -read)
Suppose there is a user who is assigned both role r1 and role r2. According to rule propagation (see the rule propagation section below), the user will have all the privileges derived from both role r1 and role r2. This also leads to inconsistency.
As stated above, conflicts may arise in authorization specifications for a variety of reasons. How to resolve these conflicts is critical to an authorization system. There are many different conflict resolution policies; we give some below.
- Denials take precedence. In this case, negative authorizations are always adopted when a conflict occurs. In other words, the principle says that if we have one reason to authorize an access and another to deny it, then we deny it.
- Permissions take precedence. In this case, positive authorizations are always adopted when a conflict occurs. In other words, the principle says that if we have one reason to authorize an access and another to deny it, then we authorize it.
- Priorities of different administrators take precedence. In this case, conflicts are resolved according to the priorities of different administrators. If one grantor prevails over the other, the conflicts are resolved in favor of the former, i.e., the rules that have been signed by the stronger grantor prevail.
In our model, authorization conflicts are resolved according to the following policies.
First, the grantors of the conflicting rules are taken into account. Here the administrator hierarchy is considered: if one grantor is superior to the other, the system adopts the rules authorized by the stronger one.
Second, if the grantors are incomparable, resource types and security levels are exploited. If the security level of one resource type is quite high, the "Denials take precedence" policy is taken; otherwise, the "Permissions take precedence" policy is taken. Here security levels are predefined by system administrators, such as "no secret", "secret", "common secret", and "high secret", etc.

4.4 Rule Propagation along Hierarchies

As stated above, the object hierarchy (where the object includes subject, object, and privilege) represents a part-of relationship. Thus, according to the object-oriented approach, rules defined for a given object are automatically inherited by all the subobjects. This means that if,
for instance, a user belongs to more than one role, he/she receives all the authorizations specified for all the roles to which he/she belongs. Similarly, if a user has an authorization to read a given directory, he/she is by default authorized to read all the files that belong to that directory, unless an explicit denial is specified.
Now we give all the hierarchy types existing in the model.
Definition 2 (Subject Hierarchy). The subject hierarchy is a partial order ≤S on S, called the subject inclusion relationship. Given two subjects s and s′, s ≤S s′ implies that s is a subrole of s′, or s′ is a user assigned to role s. In other words, ≤S represents both the role-subrole hierarchy and the membership of users to roles.
Definition 3 (Object Hierarchy). The object hierarchy is a partial order ≤O on O, called the object inclusion relationship. Given two objects o and o′, if o ≤O o′ we say that o′ is a subobject of o.
Definition 4 (Privilege Hierarchy). The privilege hierarchy is a partial order ≤P on P, called the privilege inclusion relationship. Given two privileges p and p′, we say privilege p subsumes privilege p′ if p ≤P p′.
Obviously, when authorizations propagate along these hierarchies, conflicting derived authorizations may be generated. We note that authorization rules propagate regardless of the possibility of generating conflicts, for the generated conflicts can be resolved by the conflict resolution method provided above.
Assume there exists a basic authorization rule (a, s, o, sp, c), where sp is either +p or -p. Then the following derived authorizations will be generated:
- derived authorizations along subject hierarchies: (a, s′, o, sp, c) if s ≤S s′;
- derived authorizations along object hierarchies: (a, s, o′, sp, c) if o ≤O o′;
- derived authorizations along privilege hierarchies: (a, s, o, sp′, c) if sp ≤P sp′.
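The three derivation rules can be sketched as follows, assuming each hierarchy is given as a toy "descends-to" mapping; this is an illustration under those assumptions, not the paper's implementation.

```python
# Sketch of rule propagation along the subject, object, and privilege
# hierarchies; each hierarchy is a toy "descends-to" mapping.
subject_h   = {"Managers": ["Alice"]}     # s ≤S s': s' inherits from s
object_h    = {"dir1": ["dir1/file1"]}    # o ≤O o': o' is a subobject of o
privilege_h = {"write": ["read"]}         # write subsumes read

def propagate(rule):
    a, s, o, sp, c = rule
    derived = [rule]
    for s2 in subject_h.get(s, []):       # along the subject hierarchy
        derived.append((a, s2, o, sp, c))
    for o2 in object_h.get(o, []):        # along the object hierarchy
        derived.append((a, s, o2, sp, c))
    for p2 in privilege_h.get(sp.lstrip("+-"), []):  # along privileges
        sign = "-" if sp.startswith("-") else "+"
        derived.append((a, s, o, sign + p2, c))
    return derived

out = propagate(("a1", "Managers", "dir1", "+write", None))
assert ("a1", "Alice", "dir1", "+write", None) in out
assert ("a1", "Managers", "dir1", "+read", None) in out
```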
4.5 Authorization Decision and Enforcement
Authorization decision and enforcement is in fact a type of access control: the system decides whether to allow a given subject to access a given object. It determines the system's response (either granted or denied) to every possible access request according to all the relevant information, such as the basic authorization rules in the authorization base, all types of hierarchies, historical information, current system states, and the conflict resolution policies.
In this paper, an authorization decision and enforcement algorithm is given. The algorithm takes as input a user's access request Req(s, o, p), the authorization rules, historical records, and current system states, and decides whether granted or denied should be output. If "granted" is output, the access request is granted; if "denied" is output, the request is denied.
ALGORITHM 1 (Authorization Decision and Enforcement Algorithm)
INPUT: an access request Req = (s, o, p), where s∈S, o∈O, p∈P; the authorization base; the history base; current system states.
OUTPUT: granted/denied.
1) Check whether there exists a matching basic authorization rule (a, s, o, +p, c) in the authorization base; here s, o, and p are the input values, a∈A, c∈C. If there is such a matching rule, go to step 2; otherwise, go to step 3.
2) For the matching rule (a, s, o, +p, c), compute the constraint c and check whether it is "true". If c is "true", "granted" is returned and the algorithm ends. If c is "false", go to step 3.
3) Compute all the derived authorization rules according to rule propagation along all the hierarchies.
4) Check whether conflicts exist among all the derived authorization rules relevant to s, o, and p. If there are conflicts, resolve them according to the relevant conflict resolution policies. Finally, obtain an authorization rule of the form (a′, s, o, +p, c′) or (a′, s, o, -p, c′); here s, o, and p are the input values, a′∈A, c′∈C.
5) If the final rule is (a′, s, o, +p, c′), compute whether the constraint c′ is "true". If c′ is "true", return "granted"; otherwise return "denied", and the algorithm finishes.
6) If the final rule is (a′, s, o, -p, c′), return "denied". The algorithm finishes.
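The six steps can be compressed into a short sketch. Here conflict resolution is simplified to "denials take precedence", and the `derive` and `holds` callbacks are assumed helpers standing in for hierarchy propagation and constraint evaluation; none of this is the paper's actual implementation.

```python
# Compressed sketch of the decision algorithm: check for a matching basic
# rule, else derive rules along hierarchies, resolve conflicts with
# "denials take precedence", then evaluate the final constraint.
def decide(req, auth_base, derive, holds):
    s, o, p = req
    # Steps 1-2: a matching basic positive rule whose constraint holds.
    for (a, s2, o2, sp, c) in auth_base:
        if (s2, o2, sp) == (s, o, "+" + p) and holds(c):
            return "granted"
    # Step 3: derived rules along all hierarchies.
    derived = [r for base in auth_base for r in derive(base)]
    relevant = [r for r in derived
                if (r[1], r[2]) == (s, o) and r[3].lstrip("+-") == p]
    # Step 4 (simplified conflict resolution) and step 6.
    if any(r[3] == "-" + p for r in relevant):
        return "denied"
    # Step 5: a positive derived rule whose constraint holds.
    for r in relevant:
        if r[3] == "+" + p and holds(r[4]):
            return "granted"
    return "denied"

base = [("a1", "Managers", "dir1", "+read", None)]
derive = lambda r: [r, (r[0], "Alice", r[2], r[3], r[4])]  # toy subject hierarchy
holds = lambda c: c is None or c()
assert decide(("Alice", "dir1", "read"), base, derive, holds) == "granted"
assert decide(("Bob", "dir1", "read"), base, derive, holds) == "denied"
```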
5 Integration into an EIP System
Figure 4 shows how to integrate the unified authorization and access control model into our EIP system. In our EIP system, all permissions are authorized to users through the unified module, and all access requests of different users are also decided by this module. If there exist authorization rules defined outside of the EIP whose formats differ from that of our module, translations or adaptations should be done. Roles of our EIP system may browse and search data directly, or gain information indirectly through executing workflow tasks. But all accesses to resources must go through a unified resource access interface and are constrained by the authorization rules in the authorization base. The system uses a unified resource directory to realize resource management and resource authorization, and provides users with a unified resource access interface to access the resources they need. The data access interface calls the unified authorization and access control module to check the resource authorization information and decide whether the current access to resources is permitted. Using the unified resource access interface simplifies system management and cuts down the possibility of security leakage during resource use. During the execution of a workflow, authorizations of workflow tasks should also be checked. System administrators use the unified authorization module to specify the tasks and system resources accessible to each role. When a role tries to execute a task, both the authorization rules related to this task and the data access authorization of the application data used by this task will be checked.
Fig. 4. Integration of the unified model into the EIP system
6 Conclusions
Authorizations in current EIPs are usually handled through different, separate authorization models, which leads to many troublesome problems, such as inconsistent authorizations, conflict resolution, the expression of complex authorizations, and so on. In this paper, we put forward a unified authorization and access control model so as to solve these security problems resulting from separate authorization models. First, we gave the architecture of the model and its components. Then we discussed five topics in detail, namely constraint specification and management, the authorization base, conflict resolution, rule propagation along hierarchies, and authorization decision and enforcement. Future work should focus on proving the integrity and completeness of the model and on improving the model. Another important task is the combination of this model with PMI (Privilege Management Infrastructure) technology, which might bring about an approach to implement the exchange of authorizations through credentials.
References
1. Bertino E., Ferrari E., and Atluri V.: An Approach for the Specification and Enforcement of Authorization Constraints in Workflow Management Systems. ACM Transactions on Information and System Security, Vol. 2, No. 1, February 1999
2. Castano S., Casati F., and Fugini M.: Managing Workflow Authorization Constraints through Active Database Technology. Information Systems Frontiers, 3(3), September 2001
3. Shengli Wu, Amit Sheth, John Miller, Zongwei Luo: Authorization and Access Control of Application Data in Workflow Systems. Journal of Intelligent Information Systems, January 2002, pp. 71-94
4. S. Jajodia, P. Samarati, M.L. Sapino, and V.S. Subrahmanian: Flexible Support for Multiple Access Control Policies. ACM Transactions on Database Systems, Vol. 26, No. 2, June 2001, pp. 214-260
5. P. Bonatti, S. De Capitani di Vimercati, P. Samarati: A Modular Approach to Composing Access Control Policies. In Proceedings of the 7th ACM Conference on Computer and Communications Security, pages 164-173, Athens, Greece, November 2000. ACM Press
6. T.Y.C. Woo and S.S. Lam: Authorizations in Distributed Systems: A New Approach. Journal of Computer Security, 2(2-3):107-136, 1993
7. P. Samarati, M.K. Reiter, S. Jajodia: An Authorization Model for a Public Key Management Service. ACM Transactions on Information and System Security (TISSEC), Vol. 4, No. 4, November 2001, pp. 453-482
8. Bertino E., Ferrari E.: Administration Policies in a Multipolicy Authorization System. Proc. 11th IFIP Working Conference on Database Security, Lake Tahoe (CA), August 1997, pp. 15-26
9. R. Sandhu, E. Coyne, H.L. Feinstein, C.E. Youman: Role-Based Access Control Models. IEEE Computer, pages 38-47, February 1996
10. M. Blaze, J. Feigenbaum, M. Strauss: Compliance Checking in the PolicyMaker Trust Management System. In FC: International Conference on Financial Cryptography. LNCS, Springer-Verlag, 1998
11. N.H. Minsky, V. Ungureanu: Unified Support for Heterogeneous Security Policies in Distributed Systems. In Proceedings of the 7th USENIX Security Symposium (SECURITY-98), pages 131-142, Berkeley, Jan. 26-29, 1998. USENIX Association
12. G.-J. Ahn: The RCL 2000 Language for Specifying Role-Based Authorization Constraints. PhD Thesis, George Mason University, January 2000
Constraints-Preserving Mapping Algorithm from XML-Schema to Relational Schema* Hongwei Sun1, Shusheng Zhang2, Jingtao Zhou3, and Jing Wang4 1,2,3,4
National Specialty Laboratory of CAD/CAM, Northwestern Polytechnical University, Xi’an, China, 710072 1
[email protected]
Abstract. XML is fast emerging as the dominant standard for representing data on the Internet, so there are increasing needs to store it efficiently. One potential path to this goal is transforming XML data into relational databases. But existing XML-to-RDB algorithms focus only on the structure and largely ignore semantic constraints; in addition, their input is DTD rather than XML-Schema, which is recommended by W3C as the standard XML Schema language. In this paper, we present an algorithm for mapping XML-Schema to relational schema. Our main ideas are as follows: 1) On the basis of regular tree grammar, we propose a concise and precise formal representation method for XML-Schema, FD-XML, which can derive and describe both the structure and the semantic constraints of a given XML-Schema; 2) We extend the traditional entity-relational model to EER (Extended Entity-Relational model); 3) We map FD-XML to EER and then EER to relational schema. During the mapping, both data structures and semantic constraints are correctly preserved in the relational schema. With the above procedures, the mapping algorithm comes into being. Based on the algorithm, XML data is stored into a relational database. Experimental results are also presented.
1 Introduction

As the World Wide Web becomes a major means of disseminating and sharing information, the Extensible Markup Language (XML) is emerging as a potential candidate data format, because it is simpler than SGML and more powerful than HTML. In the near future, XML will assuredly become the standard data format on the Internet and the main carrier for data exchange between distributed heterogeneous application systems. There is a substantial increase in the amount of data in XML format. To store and query XML documents, several projects have proposed alternative strategies, classified according to the underlying systems they use: file system, database system, or object manager. Among these, one potential and feasible way to manage XML data is to reuse the effective and mature relational database techniques by converting and storing XML data in relational storage. To this end, several XML-to*
We differentiate two terms, XML Schema(s) and XML-Schema; the former is a general term for a schema for XML, while the latter refers to one of the XML Schemas proposed by W3C.
Y. Han, S. Tai, and D. Wikarski (Eds.): EDCIS 2002, LNCS 2480, pp. 193-207, 2002. © Springer-Verlag Berlin Heidelberg 2002
Hongwei Sun et al.
RDB algorithms have been proposed (see [1][2][3][4][5]), and they all transform a given XML Document Type Definition (DTD) to relational schema, but there are still many problems to be solved. Firstly, they are incomplete in the sense that they focus only on data structures and largely ignore semantic constraints. Secondly, they are all XML DTD based and have not touched upon XML-Schema, which is much more complex and powerful than DTD and has been recommended by W3C to replace DTD as the unique standard for XML Schema languages. Thirdly, the DTD-based mapping algorithms cannot be straightforwardly utilized to map XML-Schema to relational schema because of the many differences between XML-Schema and DTD. Therefore, there is an urgent need to study a new algorithm whose mapping input is not DTD but XML-Schema. Aiming at the above problems, we propose a mapping algorithm from XML-Schema to relational schema that preserves both the logical data structures and the integrated semantic constraints. Firstly, a formal representation method named FD-XML is proposed for XML-Schema based on the regular tree grammar. FD-XML can integrally represent the information of XML-Schema. At the same time, we put forward the extended ER model, EER, which includes two parts: a diagram and accessories. FD-XML is then converted to the EER model, whose diagram represents the data structure and whose accessories contain the integrated data constraints. Besides, to avoid the "data fraction" problem occurring in the converting process, we adopt the equivalence transformation method of graph theory in simplifying the EER diagram by reducing entities. Then, the simplified EER model is converted to the general relational schema; in this step, the normal form theory is introduced to optimize the general relational data model with a correct and logical data structure, and the data constraints in the EER accessories are converted to the data constraints of the relational schema to realize a non-loss conversion.
The remainder of this paper is organized as follows. In Section 2, we give the background and related works; the EER model is introduced in the background part. In Section 3, the mapping algorithm is introduced in detail. In Section 4, as an experiment, we store an XML instance document into the resultant relational database. Finally, some concluding remarks and future works are given in Section 5.
2 Background and Related Works
2.1 Background: Extending the ER Model to EER

E-R is a powerful and widely used approach to describe the real world, and it has become the most commonly used expression tool for conceptual models (see [13]). But there exist many problems in directly mapping XML-Schema to E-R because of the many differences between them. For example, it is easy to express the parent-children relationship between elements in XML-Schema, while it is not so easy to express it explicitly in the traditional E-R model. So we extend the E-R model to EER (Extended Entity-Relation) before XML-Schema is mapped.
The extension is carried out in three aspects (see Fig. 1). Firstly, although the relationships represented by the E-R model are abundant, there does not exist a clear parent-children relationship, which is the main relationship in XML-Schema. We use an arrowhead starting from the parent element and ending at the subelement to express the parent and subordinate elements in the parent-children relationship. Secondly, (n, m) is used to represent the occurrence time in XML-Schema, where n means the minimum occurrence time and m means the maximal one. Finally, as the E-R model cannot express data semantic constraints perfectly, while XML-Schema holds a powerful data constraint representation mechanism (which is represented in FD-XML by some sets), accessories are given to EER in order to preserve the integrality of the data constraints.
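The three extensions can be mirrored as a pair of small data structures: entities with (min, max) attribute occurrences, arrowed "has" links, and the constraint sets kept aside as accessories. Field names and sample contents are illustrative assumptions.

```python
# Hypothetical sketch of the extended EER model: entities, attributes
# with (min, max) occurrence, arrowed parent-children links, and the
# constraint sets kept aside as "accessories".
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    attributes: dict = field(default_factory=dict)  # name -> (min, max)

@dataclass
class EER:
    entities: dict = field(default_factory=dict)
    has: list = field(default_factory=list)         # (parent, child, (min, max))
    accessories: dict = field(default_factory=dict) # C, U, K constraint sets

eer = EER()
eer.entities["purchaseOrder"] = Entity("purchaseOrder", {"orderDate": (1, 1)})
eer.entities["items"] = Entity("items")
eer.has.append(("purchaseOrder", "items", (1, 1)))  # parent -> child arrow

assert ("purchaseOrder", "items", (1, 1)) in eer.has
```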
Fig. 1. Basic structure of the EER diagram. The diagram is composed of elements, their relationships, and attributes, along with their occurrence times. The parent-children relationship is expressed by "has" in a diamond box. EER accessories are not represented in this figure.
2.2 Related Works

Which is the best way of storing XML documents: in a text file, in an RDB, or in an OODB? It is still an open question. Although storing XML in a text file is now the standard approach, one potential and feasible way to manage XML data is to reuse the effective and mature relational database techniques; after all, most data is still stored in RDBs today. More recently, towards conversion from XML to RDB, a number of studies have addressed the particular issues. On the commercial side, database vendors are busily extending their databases to adopt XML types. On the research side, Table 1 partly shows the classification of such related work.

Table 1. Partial classification of methods from XML schema to relational schema
Conversion method               | Data-structure-oriented                               | Data-constraints-oriented
DTD to relational schema        | Deutsch et al. (1998); Shanmugasundaram et al. (1999) | Lee and Chu (2000b); Lee and Chu's CPI (2001)
XML-Schema to relational schema | This paper                                            | Attempt to do

Data-structure-oriented conversion from DTD to relational schema: Work done in STORED by Deutsch et al. is one of the first significant and concrete attempts to this end. Shanmugasundaram et al. presented three inlining algorithms that focus on the table level of the schema conversions in 1999 (see [2][3]).
Data-constraints-oriented conversion from DTD to relational schema: In 2000, Dongwon Lee presented a constraint-oriented algorithm in which the hidden semantic constraints in DTD are systematically found and integrally preserved in relational formats. This algorithm was improved and perfected by the same authors in later work.
Up to now, to the best of our knowledge, there is not any algorithm for mapping XML-Schema to relational schema. In this paper, we will attempt to establish a mapping algorithm from XML-Schema to relational schema, accounting for both correct data structure and integrated data constraints.
3 From XML-Schema to Relational Schema
3.1 Formal Description for XML-Schema: FD-XML

XML is founded on the basis of two essential ideas: (1) using trees to represent documents and data; (2) using tree grammars to represent document types and data structures. In this section, as a mechanism for describing permissible XML instance documents, we borrow the definitions of regular tree language and tree automata from the regular tree grammar literature, but unlike those definitions, we allow trees with infinite arity; that is, we allow a node to have any number of subnodes, and we allow the right-hand side of a production rule to contain a regular expression over nonterminals.
We assume the existence of a set 𝒩 of nonterminal names, a set 𝒯 of terminal names, and a set 𝒟 of atomic data types. We also use the following notations: ε denotes the empty string, "|" stands for union, "," for concatenation, "a?" for zero or one occurrence, "a*" for the Kleene closure, and "a+" for "a,a*". Now an RTG-based formalization expression for XML-Schema, FD-XML, can be given.
Definition (FD-XML). An FD is denoted by an 8-tuple FD = (N, T, S, E, A, C, U, K), where:
1) N is a set of nonterminal symbols, where N ⊆ 𝒩;
2) T is a set of terminal symbols, where T ⊆ 𝒯;
3) S is a set of start symbols, where S ⊆ 𝒩;
4) E is a set of element production rules of the form "X → a(RE)", where X∈N, a∈T, and RE is the content model of this production rule:
RE ::= ε | n | τ | RE* | RE,RE | RE|RE | RE? | RE+ | (RE), where n∈𝒩, τ∈𝒟;
5) A is a set of attribute production rules of the form "X → a(RE)", where X∈N, a∈T, and RE ::= ε | â | RE? | RE,RE, where â is an attribute;
6) C is a set of terminal-symbol data types and their constraints. ∀ci∈C, ci is a tuple ci = (Ci, Bi, Li, Lmaxi, Lmini, Pi, Ei, WSi, MAXIi, MINIi, MAXEi, MINEi, TDi, FDi, Fixi, Defi, Opti, Proi, Reqi), where Bi denotes the basic or user-defined data type of Ci; meanwhile, Li stands for the length constraint, Lmaxi for the maximal length constraint, Lmini for the minimum length constraint, Pi for the character pattern, Ei for the enumeration set, and WSi
for the whiteSpace constraint, MAXIi for the maximal value including itself, MINIi for the minimum value including itself, MAXEi for the maximal value not including itself, MINEi for the minimum value not including itself, TDi for the totalDigits attribute, FDi for the fractionDigits attribute, Fixi for the Fixed attribute, Defi for the Default attribute, Opti for the Optional attribute, Proi for the Prohibited attribute, and Reqi for the Required attribute;
7) U is a set of primary keys and unique constraints. ∀pki∈U, pki is a tuple pki = (Kmi, XPsi, XPfi), where Kmi denotes the name of pki, XPsi denotes the selector domain (expressed by XPath) of pki, and XPfi the field domain (expressed by XPath) of pki;
8) K is a set of foreign keys. ∀fki∈K, fki is a tuple fki = (Kfi, Kmi, XPsi, XPfi), where Kfi denotes the name of fki, Kmi denotes the referenced primary key in U, XPsi denotes the selector domain (expressed by XPath) of fki, and XPfi denotes the field domain (expressed by XPath) of fki.
To distinguish the symbols in instance documents from the same ones in XML-Schema, we enclose the symbols in instance documents in angle brackets; for example, the symbol "<purchaseOrder>" in instance documents is equivalent to "purchaseOrder" in XML-Schema. Also, to distinguish elements from attributes, we add "@" before attributes.

<xsd:element name="purchaseOrder" type="PurchaseOrderType"/>
<xsd:complexType name="PurchaseOrderType">
  <xsd:sequence>
    <xsd:element name="shipTo" type="USAddress"/>
    <xsd:element name="billTo" type="USAddress"/>
    <xsd:element name="purchaseComment" type="comment" minOccurs="0">
      <keyref name="kf_purchase" refer="k_comment">
        <selector xpath="r:purchaseOrder/r:PurchaseOrderType/r:purchaseComment"/>
        <field xpath="@value"/>
      </keyref>
    </xsd:element>
    <xsd:element name="items" type="Items"/>
  </xsd:sequence>
  <xsd:attribute name="orderDate" type="xsd:date"/>
</xsd:complexType>
<xsd:complexType name="USAddress">
  <xsd:sequence>
    <xsd:element name="name" type="xsd:string"/>
    <xsd:element name="street" type="xsd:string"/>
    <xsd:element name="city" type="xsd:string"/>
    <xsd:element name="state" type="xsd:string"/>
    <xsd:element name="zip" type="xsd:decimal"/>
  </xsd:sequence>
  <xsd:attribute name="country" type="xsd:NMTOKEN" fixed="US"/>
</xsd:complexType>
<xsd:complexType name="Items">
  <xsd:sequence>
    <xsd:element name="item" minOccurs="0" maxOccurs="unbounded">
      <xsd:complexType>
        <xsd:sequence>
          <xsd:element name="productName" type="xsd:string"/>
          <xsd:element name="quantity">
            <xsd:simpleType>
              <xsd:restriction base="xsd:positiveInteger">
                <xsd:maxExclusive value="100"/>
              </xsd:restriction>
            </xsd:simpleType>
          </xsd:element>
          <xsd:element name="USPrice" type="xsd:decimal"/>
          <xsd:element name="itemComment" type="comment" minOccurs="0">
            <keyref name="kf_item" refer="k_comment">
              <selector xpath="r:items/r:item/r:itemComment"/>
              <field xpath="@value"/>
            </keyref>
          </xsd:element>
          <xsd:element name="shipDate" type="xsd:date" minOccurs="0"/>
        </xsd:sequence>
        <xsd:attribute name="partNum" type="SKU" use="required"/>
      </xsd:complexType>
    </xsd:element>
  </xsd:sequence>
</xsd:complexType>
<!-- Stock Keeping Unit, a code for identifying products -->
<xsd:simpleType name="SKU">
  <xsd:restriction base="xsd:string">
    <xsd:pattern value="\d{3}-[A-Z]{2}"/>
  </xsd:restriction>
</xsd:simpleType>
<complexType name="comment">
  <complexContent>
    <restriction base="anyType">
      <attribute name="value" type="string"/>
    </restriction>
  </complexContent>
</complexType>
<key name="k_comment">
  <selector xpath="r:comment"/>
  <field xpath="@value"/>
</key>

Fig. 2. An example of XML-Schema. This is a modified version of the purchase order example in the W3C XML Schema Primer. The example consists of a main element purchaseOrder and the subelements shipTo, billTo, comment, and items. These subelements (except comment) in turn contain other subelements, and so on, until a subelement such as USPrice contains a number rather than any subelements.
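The FD 8-tuple defined above can be mirrored as a small Python structure; the field names follow the definition, while the sample contents (a fragment of the purchase order example) and the chosen representations are illustrative assumptions.

```python
# A minimal sketch of the FD-XML 8-tuple as a Python structure;
# field names follow the definition, contents are illustrative.
from dataclasses import dataclass, field

@dataclass
class FDXML:
    N: set = field(default_factory=set)    # nonterminal symbols
    T: set = field(default_factory=set)    # terminal symbols
    S: set = field(default_factory=set)    # start symbols
    E: dict = field(default_factory=dict)  # element production rules: X -> (a, RE)
    A: dict = field(default_factory=dict)  # attribute production rules
    C: dict = field(default_factory=dict)  # data types and constraints per terminal
    U: list = field(default_factory=list)  # primary keys / unique constraints
    K: list = field(default_factory=list)  # foreign keys

fd = FDXML(
    N={"purchaseOrder", "items"},
    T={"<purchaseOrder>", "<items>"},
    S={"purchaseOrder"},
    E={"purchaseOrder": ("<purchaseOrder>", "PurchaseOrderType")},
)
assert fd.S <= fd.N
```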
The XML-Schema in Fig. 2 is formalized as FD = (N, T, S, E, A, C, U, K), where:

N = {purchaseOrder, PurchaseOrderType, shipTo, billTo, Items, purchaseComment, USAddress, item, name, street, city, state, zip, productName, quantity, USPrice, itemComment, shipDate, comment}

T = {<purchaseOrder>, <PurchaseOrderType>, <shipTo>, <billTo>, <Items>, <purchaseComment>, <orderDate>, <USAddress>, <item>, <name>, <street>, <city>, <state>, <zip>, <country>, <productName>, <quantity>, <USPrice>, <itemComment>, <shipDate>, <partNum>, <comment>, <SKU>, <value>}

S = {purchaseOrder}

E = {purchaseOrder → <purchaseOrder>(PurchaseOrderType),
PurchaseOrderType → <PurchaseOrderType>(shipTo, billTo, purchaseComment?, Items),
shipTo → <shipTo>(USAddress), billTo → <billTo>(USAddress),
USAddress → <USAddress>(name, street, city, state, zip),
purchaseComment → <purchaseComment>(comment),
Items → <Items>(item*),
item → <item>(productName, quantity, USPrice, itemComment?, shipDate?),
itemComment → <itemComment>(comment),
comment → <comment>(ε), name → <name>(ε), street → <street>(ε), city → <city>(ε), state → <state>(ε), zip → <zip>(ε), productName → <productName>(ε), quantity → <quantity>(ε), USPrice → <USPrice>(ε), shipDate → <shipDate>(ε)}

A = {purchaseOrder → <purchaseOrder>(ε),
PurchaseOrderType → <PurchaseOrderType>(@orderDate),
shipTo → <shipTo>(ε), billTo → <billTo>(ε),
USAddress → <USAddress>(@country),
purchaseComment → <purchaseComment>(ε), Items → <Items>(ε),
item → <item>(@partNum),
itemComment → <itemComment>(ε), comment → <comment>(@value),
name → <name>(ε), street → <street>(ε), city → <city>(ε), state → <state>(ε), zip → <zip>(ε), productName → <productName>(ε), quantity → <quantity>(ε), USPrice → <USPrice>(ε), shipDate → <shipDate>(ε)}

C = {<orderDate> = (date, ε, …, ε), <name> = (string, ε, …, ε), <street> = (string, ε, …, ε), <city> = (string, ε, …, ε), <state> = (string, ε, …, ε), <zip> = (decimal, ε, …, ε), <country> = (string, ε, …, "US", …, ε), <productName> = (string, ε, …, ε), <quantity> = (positiveInteger, ε, …, ε), <USPrice> = (decimal, ε, …, ε), <shipDate> = (date, ε, …, ε), <value> = (string, ε, …, ε), <partNum> = (SKU,
200
Hongwei Sun et al.
ε,ε,ε,ε,ε,ε,ε,ε,ε,ε,ε,ε,ε,ε,ε,ε,"required") , (string,ε,ε,ε,"\d{3}-[A-Z]{2}",ε,ε,ε,ε,ε,ε,ε,ε,ε,ε,ε,ε,ε) } U ={("k_comment",xpath="r:comment" ,xpath="@value")} K={("kf_item”,"k_comment" xpath="r:items/r:item/r:itemComment",xpath="@value"/>),("kf_purchase","k_com ment",xpath="r:purchaseOrder/r: PurchaseOrderType /: purchaseComment ", xpath="@value")} Obviously, from the example above, FD-XML can wholly describe both the data structure and data constraints. In FD =(N, T, S, E, A, C, U, K), tuples of N,T,S,E,A represent the XML-Schema structure information and tuples of C,U,K represent the information of data type and data constraints. 3.2 From FD-XML to EER In this section we convert FD-XML to EER. The detailed procedure is as follows: 1) Represent every element in the tuple N of FD-XML as an entity, which is denoted by rectangle box in EER model. 2) Build up the relationships between entities according to the tuple E of FD-XML, hereinto the relationships are denoted by diamond boxes, the parent and subordinate elements in parent-children relationship are distinguished by arrowhead’s starting and ending point. 3) Build up the entities’ attributes and confirm their occurrence times according to 4) Keep the tuples C, U and K of FD-XML as the accessories of EER. pur chaseOr der 1, 1
[Fig. 3 diagram: entities purchaseOrder, PurchaseOrderType, shipTo, billTo, USAdress, purchaseComment, Items, item, itemComment and comment, together with their leaf entities (orderDate, country, name, street, city, state, zip, productName, quantity, USPrice, shipDate, partNum, value), connected by "has" relationships with cardinalities such as (1,1), (0,1) and (0,m).]

Fig. 3. EER diagram of the illustrated FD1. The original elements in N are converted to the entities in rectangle boxes, the parent-children relationships are built up with the "has" symbols along with arrowheads according to E, and the entities' attributes are built up according to A.
After the above four steps, we can get the resultant EER diagram of the illustrated FD1 (see Figure 3). The EER diagram and its accessories constitute the integrated EER model, where the EER diagram is mainly produced by N, T, S, E, A and represents the data structure, and the accessories include the three sets C, U, K and represent the data constraints.

3.3 Simplify EER

The direct mapping from EER to relational schema is unscientific, for it will result in many piecemeal and exiguous relations possessing few or even no attributes, which is called "data fraction" in this paper. To avoid it, we propose a simplifying method for the EER diagram according to the equivalence transformation method of graph theory. With this simplifying method we can simplify the EER to make it fit for the mapping and keep semantic equivalence at the same time. The simplification has two main aspects:

1) An entity is converted to its parent entity's attribute if it satisfies the following conditions: a) the entity has a unique parent entity; b) the entity possesses no sub-entity; c) the parent-children entities meet the qualification Element ⎯(1,1)→ subelement or the qualification Element ⎯(0,1)→ subelement. After conversion, the former gets the attribute occurrence time (1,1) and the latter gets (0,1).

2) A sub-entity is removed from its parent entity if it satisfies the following conditions: a) the entity possesses no attribute; b) the entity has no or just one parent entity; c) the entity possesses only one sub-entity and meets the qualification Element ⎯(1,1)→ subelement or the qualification Element ⎯(0,1)→ subelement.

After the above two procedures, the EER diagram illustrated in Figure 3 can be simplified to the EER diagram in Figure 4.
[Fig. 4 diagram: entities PurchaseOrder, USAdress, item and comment connected by "has" relationships; PurchaseOrder carries the attribute orderDate; USAdress carries country, name, street, city, state, zip; item carries productName, quantity, USPrice, shipDate, partNum; comment carries value.]

Fig. 4. Simplified EER diagram of Fig. 3. The entities name, street, city, state, zip, productName, quantity, USPrice and shipDate in Fig. 3 are respectively converted to USAdress's and item's attributes in step 1); the entities PurchaseOrderType, shipTo, billTo, Items, purchaseComment and itemComment in Fig. 3 are removed in step 2).
Besides, the accessories of the EER model are simplified. According to the primary key and unique constraint set U in the EER accessories, the XPath expressions are parsed to find the entity's unique constraint, which is marked as primary-key ⎯pkey→ entity. Similarly, according to the foreign key constraint set K in the EER accessories, the XPath expressions are parsed to find the entity's foreign key, which is marked as (referencing-entity, foreign-key) ⎯fkey→ (referenced-entity, primary-key).

3.4 From EER to General Relational Schema

The simplified EER model is perfectly fit to be mapped to relational schema. Therefore, the pending problems we face when mapping the EER model to relational schema are: 1) how to convert the entities as well as their relationships to relations; 2) how to confirm the attributes and keys of the entities; 3) how to create data constraints from the EER accessories.

The EER diagram can fully represent the original data structure of the XML-Schema with concision and precision, and the accessories amply denote the apparent data constraints in the XML-Schema; hence the two parts of the EER should be mapped respectively during the whole mapping procedure. That is, convert the EER diagram to the data structure of the relational schema, then convert the constraint information of the accessories to the corresponding data constraints of the relational schema, where the primary key set U in the EER accessories is converted to the entity integrality constraint, the foreign key set K to the reference integrality constraint, and the data types along with their constraints to the user-defined integrality constraint.

One thing to be explained is that some data constraints necessary for a relational schema are not apparently presented by the XML-Schema; they are concealed in the EER diagram. For example, some entities possess keys in the EER accessory U while some do not, which falls short of the entity integrality constraint, so we must supplement the integrality constraint for them. Besides, for the latent reference integrality between parent-children entities, there are only apparent constraints without the concealed references in the EER accessory K, so we have to dig out the reference constraints to represent the constraints between entities integrally.

In addition, the data structure and constraints set by an XML-Schema are satisfied by a single instance document; if we merge two or more instance documents together, the resultant document won't always satisfy the constraints. From the point of view of set theory, XML-Schema is not closed under the "Union" operation. For instance, an element may possess a "be unique" constraint, so the data item of this element is non-recurring in a single instance document, but the same data items may exist in two or more documents. With these documents merged, the element's "be unique" constraint is surely broken. Here we call it "union mutex". Yet the relational schema mapped from an XML-Schema is generally used to store two or more XML instance documents abiding by the XML-Schema. To drastically avoid the "union mutex", we append an identification code to the original entity. Accounting for the above troubles, the method to convert the EER model to relational schema is:

1) Map an entity to a relation, and convert the entity's attributes to the relation's attributes.

2) Add an identification code of long type for every entity relation to ensure the entity's integrality. The identification code name = entity name + "ID", and this attribute is merged into U as the primary key of the entity.
From the above two steps, we get the following four relations:

R1: PurchaseOrder(PurchaseOrderID, orderDate)
R2: Comment(CommentID, value)
R3: USAdress(USAdressID, country, name, street, city, state, zip)
R4: item(itemID, productName, quantity, USPrice, shipDate, partNum)

3) Map a relationship between entities in the EER to a relation, and convert all the entity keys as well as the attributes of the EER relationship to the attributes of the relation. The relationship may appear in three cases: a) 1:1, where both the key of the parent entity and that of the sub-entity can serve as the candidate key for the relation, and we select the key of the parent entity here; b) 1:n, where we set the n-side entity key as the relation key; c) n:m, where we set the combination of all the entity keys as the relation key. From the above step, we get five more relations:

R5: Has_PurchaseOrder_USAdress_ShipTo(PurchaseOrderID, USAdressID-shipTo)
R6: Has_PurchaseOrder_USAdress_BillTo(PurchaseOrderID, USAdressID-billTo)
R7: Has_PurchaseOrder_comment(PurchaseOrderID, CommentID)
R8: Has_PurchaseOrder_item(PurchaseOrderID, itemID)
R9: Has_item_comment(itemID, CommentID)

4) Merge the relations possessing the same primary key: R5, R6 and R7 are merged into R1, and R8 and R9 are merged into R4. Then four relations remain, as follows, which are marked as RM:

R1: PurchaseOrder(PurchaseOrderID, USAdressID-shipTo, USAdressID-billTo, orderDate, CommentID)
R2: Comment(CommentID, value)
R3: USAdress(USAdressID, country, name, street, city, state, zip)
R4: item(itemID, productName, quantity, USPrice, shipDate, CommentID, partNum, PurchaseOrderID)

5) According to the primary key constraint set U in the EER, establish the primary key constraint, add the primary key of every entity to U, and delete redundant keys; then we get U as follows:

U = {PurchaseOrderID ⎯pkey→ purchaseOrder, itemID ⎯pkey→ item, CommentID ⎯pkey→ comment, USAdressID ⎯pkey→ USAdress}

6) According to the foreign key constraint set K in the EER, establish the foreign key constraint. For the relations meeting certain given requests, we can dig out the foreign keys and establish the reference integrality constraint. The given requests are: a) a key A1 is added for relation A to ensure the entity integrality; b) another relation B takes A1 as an attribute. So we can establish the reference constraint from the A1 in relation B to the A1 in relation A, mark it as (B, A1) ⎯fkey→ (A, A1), and add it to K; we get K as follows:

K = {(item, CommentID) ⎯fkey→ (Comment, CommentID), (PurchaseOrder, CommentID) ⎯fkey→ (Comment, CommentID), (item, PurchaseOrderID) ⎯fkey→ (PurchaseOrder, PurchaseOrderID), (PurchaseOrder, USAdressID-shipTo) ⎯fkey→ (USAdress, USAdressID), (PurchaseOrder, USAdressID-billTo) ⎯fkey→ (USAdress, USAdressID)}

7) Establish all the attribute data types as well as their constraints according to the set C in the EER, and present the data type of the relation key IDs automatically generated in step 2); then merge them into C:

C = {(long, ε, …, ε), (long, ε, …, ε), (long, ε, …, ε), (date, ε, …, ε), (string, ε, …, ε), (string, ε, …, ε), (string, ε, …, ε), (string, ε, …, ε), (decimal, ε, …, ε), (string, ε, …, "US", ε, …, ε), (long, ε, …, ε), (string, ε, …, ε), (positiveInteger, ε, …, 100, ε, …, ε), (decimal, ε, …, ε), (date, ε, …, ε), (string, ε, …, ε), (string, ε, ε, ε, "\d{3}-[A-Z]{2}", ε, …, ε)}

Thus we get the resulting relational schema, which is made up of four tuples: RM obtained in step 4), U in step 5), K in step 6), and C in step 7). Herein, RM denotes the data structure of the relational schema, U the entity integrality, K the reference integrality, and C the user-defined semantic integrality; the latter three constitute the integrated data constraints for the relational schema.

4 Application and Experiment: Storing XML Instance Document into Relational Database
In Section 3, the mapping algorithm from XML-Schema to a general relational schema was achieved. In our implementation we generate a concrete DBMS-based relational schema for Microsoft SQL Server 2000 from the general relational schema and store the XML instance document into it; the detailed implementation is not described for reasons of paper length, and only the results are given. The DDL of the generated relational schema is omitted, and its visual model is shown in Figure 5.
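Since the generated DDL is omitted in the paper, the following sketch shows how such DDL could be emitted from the general relational schema. Table and column names follow Section 3, but the SQL type choices (e.g. bigint for the long identification codes, varchar lengths) are our assumptions, not the authors' implementation:

```python
# Two of the four relations of RM, with SQL types inferred from C
# (long -> bigint, string -> varchar, zip's decimal -> decimal). Illustrative only.
relations = {
    "Comment": [("CommentID", "bigint"), ("value", "varchar(255)")],
    "USAdress": [("USAdressID", "bigint"), ("country", "varchar(16)"),
                 ("name", "varchar(64)"), ("street", "varchar(64)"),
                 ("city", "varchar(64)"), ("state", "varchar(16)"),
                 ("zip", "decimal(9)")],
}
pkeys = {"Comment": "CommentID", "USAdress": "USAdressID"}  # from the set U

def create_table(name, cols):
    """Emit a CREATE TABLE statement with the primary key taken from U."""
    body = ",\n  ".join(f"{col} {typ}" for col, typ in cols)
    return f"CREATE TABLE {name} (\n  {body},\n  PRIMARY KEY ({pkeys[name]})\n);"

for name, cols in relations.items():
    print(create_table(name, cols))
```

The foreign keys of K would be appended in the same way as ALTER TABLE ... FOREIGN KEY clauses.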
Constraints-Preserving Mapping Algorithm from XML-Schema to Relational Schema
Fig. 5. Visual model of the DBMS-based relational schema generated from the general relational schema in section 3
The XML instance document (see Figure 6) is stored into the generated relational database (see Tables 2 to 5).

[Fig. 6 listing: a purchaseOrder instance document. Its shipTo address is Alice Smith, 123 Maple Street, Mill Valley, CA 90952; its billTo address is Robert Smith, 8 Oak Avenue, Old Town, PA 95819; its purchase comment is "Hurry, my lawn is going wild!"; its items are a Lawnmower (quantity 1, USPrice 148.95, with the item comment "Confirm this is electric") and a Baby Monitor (quantity 1, USPrice 39.98, shipDate 1999-05-21).]

Fig. 6. An example of an XML instance document that conforms to the XML-Schema in Figure 2
Hongwei Sun et al.
Table 2. PurchaseOrder

| PurchaseOrderID | USAdressID-ShipTo | USAdressID-BillTo | orderDate  | CommentID |
|-----------------|-------------------|-------------------|------------|-----------|
| 1               | 1                 | 2                 | 1999-10-20 | 1         |

Table 3. Comment

| CommentID | Value                           |
|-----------|---------------------------------|
| 1         | "Hurry, my lawn is going wild!" |
| 2         | "Confirm this is electric"      |

Table 4. USAdress

| USAdressID | country | name         | street           | city        | state | zip   |
|------------|---------|--------------|------------------|-------------|-------|-------|
| 1          | "US"    | Alice Smith  | 123 Maple Street | Mill Valley | CA    | 90952 |
| 2          | "US"    | Robert Smith | 8 Oak Avenue     | Old Town    | PA    | 95819 |

Table 5. item

| ItemID | ProductName  | Quantity | USPrice | ShipDate   | PartNum | PurchaseOrderID | CommentID |
|--------|--------------|----------|---------|------------|---------|-----------------|-----------|
| 1      | Lawnmower    | 1        | 148.95  |            | 872-AA  | 1               | 2         |
| 2      | Baby Monitor | 1        | 39.98   | 1999-05-21 | 926-AA  | 1               |           |
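The shredding of Figure 6 into Table 3, for instance, can be sketched as follows. The document fragment and the traversal are our reconstruction (per the tuple A, the paper's schema places the comment text in a value attribute):

```python
import xml.etree.ElementTree as ET

# Reconstructed fragment of the instance document (element names from N/E).
doc = """<purchaseOrder>
  <purchaseComment><comment value="Hurry, my lawn is going wild!"/></purchaseComment>
  <items>
    <item><itemComment><comment value="Confirm this is electric"/></itemComment></item>
  </items>
</purchaseOrder>"""

# Shred every <comment> into a Comment(CommentID, value) row, numbering the
# rows with a generated identification code as in step 2) of the mapping.
rows = [(i + 1, c.get("value"))
        for i, c in enumerate(ET.fromstring(doc).iter("comment"))]
print(rows)  # [(1, 'Hurry, my lawn is going wild!'), (2, 'Confirm this is electric')]
```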
5 Conclusion and Future Works

The experimental results indicate that our algorithm works well on both the logical data structure and the integrated data constraints. Still, we will improve FD-XML to make it fit for other XML schema languages such as DTD, RELAX, etc. With this amelioration achieved, we can get a universal XML-to-relational mapping algorithm.
References

1. Florescu, D., Kossman, D.: Storing and Querying XML Data Using an RDBMS. IEEE Data Engineering Bulletin, Vol. 22, No. 3 (1999) 27-34
2. Shanmugasundaram, J., Gang, H., et al.: Relational Databases for Querying XML Documents: Limitations and Opportunities. VLDB'99, Proceedings of 25th International Conference on Very Large Data Bases, Edinburgh, Scotland (1999) 302-304
3. Shanmugasundaram, J., et al.: Querying XML Views of Relational Data. The VLDB Journal (2001) 261-270
4. Lee, D. W., Chu, W. W.: Constraints-preserving Transformation from XML Document Type Definition to Relational Schema. International Conference on Conceptual Modeling / the Entity Relationship Approach (2000) 323-338
5. Lee, D. W., Chu, W. W.: CPI: Constraints-preserving Inlining Algorithm for Mapping XML DTD to Relational Schema. Data & Knowledge Engineering (2001) 3-25
6. Lee, D. W., Chu, W. W.: Comparative Analysis of Six XML Schema Languages. ACM SIGMOD Record (2000) 76-87
7. Thompson, H., et al.: XML Schema Part 1: Structures. http://www.w3.org/TR/xmlschema-1 (2001.5)
8. Biron, P. V., Malhotra, A.: XML Schema Part 2: Datatypes. http://www.w3.org/TR/xmlschema-2 (2001.5)
9. Clark, J., DeRose, S.: XML Path Language (XPath) Version 1.0. http://www.w3.org/TR/xpath (1999.11)
10. Fallside, D. C.: XML Schema Part 0: Primer. http://www.w3.org/TR/xmlschema-0 (2001.5)
11. Murata, M., Lee, D., Mani, M.: Taxonomy of XML Schema Languages using Formal Language Theory. Extreme Markup Languages, Montreal, Canada (2001)
12. Mani, M., Lee, D., Muntz, R. R.: Semantic Data Modeling using XML Schemas. Proc. 20th Int'l Conf. on Conceptual Modeling (ER), Yokohama, Japan (2001)
13. Batini, C., Ceri, S., Navathe, S. B.: Conceptual Database Design: An Entity-Relationship Approach. The Benjamin/Cummings Pub. (1992)
14. Comon, H., Dauchet, M., Gilleron, R., Jacquemard, F., Lugiez, D., Tison, S., Tommasi, M.: Tree Automata Techniques and Applications (1997)
Study on SOAP-Based Mobile Agent Techniques*

Dan Wang, Ge Yu, Baoyan Song, Derong Shen, and Guoren Wang

Department of Computer Science and Engineering, Northeastern University, Shenyang 110004, China
{wangdan,yuge}@mail.neu.edu.cn
Abstract. SOAP is a new-generation distributed computing protocol for the Internet. After analyzing traditional distributed-object-based mobile agent systems, this paper introduces an approach to developing mobile agent systems based on the SOAP protocol, proposes the architecture of a SOAP-based mobile agent system, and presents the implementation techniques on the .NET platform, including the migration mechanism, the SOAP-based communication mechanism, the XML-based message presentation, the interoperation support of Web services, etc. Such a system has better flexibility and extensibility, and is suitable for loosely coupled Web-based computing environments.
1 Introduction

The heterogeneity of network structures and the blooming of network resources bring new challenges to the management and interoperability of these resources. How to make use of the resources efficiently has become a hot topic drawing lots of attention, and mobile agent technology provides new solutions for it. A mobile agent is a software entity with autonomy and certain intelligence, which has the ability to migrate independently in network environments and can carry out tasks on behalf of its users [1]. Since an agent can migrate dynamically to the server where resources are available, asynchronous computation can be achieved to improve system performance. Network computation based on mobile agent technology provides an open distributed application development framework, which is a rival to the traditional network programming patterns. There are many implemented mobile agent systems at present, such as Aglet [2], Telescript [3], Concordia [4], D'Agent [5], Voyager [6] and Grasshopper [7]. Some systems are implemented using specific programming languages (e.g. Telescript); others rely on the platform-independent Java language. However, the designs and their implementations vary a lot, which makes it hard to rebuild and reuse the systems and hinders interoperation between different platforms. As far as system architecture is concerned, the management of distributed objects in current Java-based mobile
* This work is supported by the 863 Hi-Tech Program of China (No. 2001AA415210), the National Natural Science Foundation of China (No. 60173051) and the Shenyang City Foundation of China.
Y. Han, S. Tai, and D. Wikarski (Eds.): EDCIS 2002, LNCS 2480, pp. 208-219, 2002 © Springer-Verlag Berlin Heidelberg 2002
agent systems relies, for example in Aglet, on the Java RMI mechanism. The RMI model only provides a Java-based object locating mechanism; no further services or techniques are provided. In addition, distributed applications using RMI must guarantee a Java VM running on both sides of the network connection, which restricts the system from being implemented on different platforms. The distributed computing platform standards released in the 1990s, for example the CORBA platform [8], provide interoperability between heterogeneous systems. They bring efficiency in developing mobile agent systems by introducing a software bus to simplify intercommunication access. MASIF [9] (Mobile Agent System Interoperation Facility), released by OMG, has defined the CORBA IDL specification that supports agent transportation and management, and interoperation among mobile agent systems developed by different corporations. However, these distributed computing platforms have some limits. CORBA can work efficiently for applications that run in tightly coupled LAN environments, such as the organization of a single corporation, but it cannot work well for distributed heterogeneous applications on the Internet, which are loosely coupled. CORBA requires strong coupling between clients and services provided by the system, i.e., it requires specific protocols of the same type between clients and system-provided services. CORBA usually requires every machine to run the same ORB product; applications have to access the distributed components on the server side through the specific skeleton and stub modules. Systems developed on these platforms have little flexibility: a change of the executing mechanism on one side can cause a breakdown on the other. If the interface of the server-side application is altered, the clients must make the same alteration, since their implementation techniques are fully dependent on those of the server side. The development of mobile agent systems encounters the same problem.
Along with the popularization of Internet applications, the relationships among users distributed in Web environments are becoming even looser. The users cannot guarantee the required fundamental structure, and may even have no knowledge of the operating systems, object models and programming languages on the other side of the communication. This problem has restricted the extension of the traditional distributed object computing pattern into Web environments. Hence, traditional distributed platforms are not suitable for the development of mobile agent systems in loosely coupled Internet environments. In the year 2000, the international organization W3C released a new distributed object computing protocol, SOAP [10]. SOAP, which supports platform-independent and language-independent Web invocations, makes it possible to develop new mobile agent systems for Web applications by providing a new mechanism of communication and cooperation among different mobile agent systems. We think that the new SOAP protocol will provide better support for the development of mobile agent systems in Web environments. Communication and corresponding service invocation can be achieved by means of SOAP message passing and the SOAP RPC request/response mechanism, instead of binding to specific stubs and skeletons as in CORBA. A SOAP-based mobile agent system can easily implement interoperability, and the XML [11] based SOAP protocol makes communication messages more flexible and rich.
In this paper, we discuss the issues in developing SOAP-based mobile agent systems, study the key techniques and architecture of designing such systems and provide the implementation paradigm. Finally, the comparisons are made between our approach and the traditional ones.
2 SOAP Protocol Based Communication and Service Invocation

SOAP is a simple protocol to exchange messages in distributed environments. It can implement data communication or an RPC mechanism among distributed objects. The SOAP protocol defines simple rules for service requests and the message format. A SOAP message is an XML document with <Envelope> as its document element, which includes the sub-elements <Header>, <Body> and <Fault>. <Header> is an optional element used to express auxiliary explanations. <Body> is the main body of a SOAP message: it includes the data contents if the message is to pass data, or the name of the invoked method and its parameters if the message is used for RPC. The <Fault> element is used to return error messages due to message sending abnormity.
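A minimal sketch of this envelope structure, built here with Python's standard XML library (the payload vocabulary AgentMessage/content is an illustrative assumption, not part of SOAP):

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def make_envelope(payload: ET.Element) -> str:
    """Wrap a payload in <Envelope><Body>...</Body></Envelope> (Header omitted)."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    body.append(payload)
    return ET.tostring(envelope, encoding="unicode")

# A data-passing message: the agent message vocabulary here is hypothetical.
msg = ET.Element("AgentMessage")
ET.SubElement(msg, "content").text = "status report"
print(make_envelope(msg))
```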
2.1 Message Passing Communication Based on SOAP

In mobile-agent-based distributed applications, mobile agents must have the ability to communicate with other entities, including users and mobile agent systems, in order to cooperate to accomplish complex tasks. To transfer data between mobile agents or mobile agent systems, we can transform the message format into an XML document and encapsulate it in the <Body> of SOAP. XSD can both define the structure of an XML document and check whether a given XML document conforms to the requirements. It can also exactly describe data type information and make it easy for computers to process XML. The process of message passing is as follows: first, the two sides of the interaction define a common XSD outline which accords with the complex information structure among agents; then the sender generates a normative XML document according to the XSD outline, encapsulates it into a SOAP message, and sends it to the receiver over the Internet. When the receiver gets the SOAP message, it checks it against the XSD outline to get the XML document; thus a communication is completed.

2.2 RPC among Mobile Agent Systems

The RPC mechanism in the SOAP protocol can implement interoperability among mobile agent systems. The SOAP protocol defines how remote methods, the naming scheme, parameters and return values are encapsulated in a SOAP message. SOAP messages include SOAP request messages, which describe the invoked remote methods and their destination, and SOAP response messages, which describe the execution results of the invoked methods and are returned to the caller.
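The request side of such an RPC exchange can be sketched as follows; the method name and parameters are invented for illustration:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def rpc_request(method: str, **params) -> str:
    """Encode an RPC call: the Body carries the method name and its parameters."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, method)
    for name, value in params.items():
        ET.SubElement(call, name).text = str(value)
    return ET.tostring(envelope, encoding="unicode")

# Hypothetical invocation: dispatch an agent to a destination MA server.
print(rpc_request("DispatchAgent", agentId="42", destination="http://host/MAServer"))
```

The response would be encoded the same way, with the execution result in place of the parameters.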
When generating a SOAP message, the specification of the SOAP protocol must be strictly complied with. On the sender side, users write SOAP request messages; on the receiver side, requests are received and the SOAP messages are parsed, then the corresponding operations are invoked; finally a SOAP response message is returned to the sender. The whole process is facilitated with the help of a message generator.

2.3 XML Based KQML Communication Protocol

KQML, the most commonly used agent communication language (ACL), defines a series of basic primitives, such as Inform, Request, Offer, Accept, Refuse, and Command. KQML is not only a message format, but also a message processing protocol to support agent runtime knowledge sharing [12]. There is no specific stipulation on the content of a message in KQML, so many languages can be selected to exchange knowledge. The components of KQML are illustrated in Fig. 1.

(performative
  :sender       // the sender of the performative
  :receiver     // the receiver of the performative
  :language     // language used in expressing the content
  :reply-with   // acknowledge identifier of this performative
  :in-reply-to  // identifier this performative replies to
  :content      // content of this message
)

Fig. 1. KQML Component
XML is a platform-independent data exchange specification. Different distributed component protocols can communicate with various application servers by means of XML. XML is a set of definition rules for semantic markups that divide a document into parts and then mark them. XML is also a meta-markup language and has the ability to define markup languages for specific domains. The structure and content of an XML document can be described through the XSD mechanism. In agent systems, the complexity of the data contents and formats of agent interaction brings problems to semantic expression. We think XML provides a good way of expressing KQML messages: it can support the complex information exchange among mobile agents, and an integration of XML into KQML will facilitate KQML semantic analysis and interactions with other Web applications. The layering of XML-based KQML communication can thus be described as in Fig. 2: KQML on top of SOAP on top of HTTP, on both sides of the communication.

Fig. 2. Layering of XML-based KQML

The process is as follows: first, define an XSD outline according to the KQML grammar specification and the structure of the information to be exchanged; then generate an XML document from the XSD outline; finally, take the XML document as a KQML implementation and encapsulate the XML document into a SOAP message for HTTP transportation. Following is a definition of an XSD outline according to the KQML structure. In the example, Content is simply a string, while we can define more complex structures according to specific applications.
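The XSD listing itself did not survive reproduction here. As a substitute sketch, the following Python builds the XML form a document conforming to such an outline could take, with one element per KQML field of Fig. 1; this element vocabulary is our assumption, not the paper's outline:

```python
import xml.etree.ElementTree as ET

# One child element per KQML field of Fig. 1 (assumed vocabulary).
FIELDS = ["sender", "receiver", "language", "reply-with", "in-reply-to", "content"]

def kqml_to_xml(performative, sender, receiver, language,
                reply_with, in_reply_to, content) -> str:
    """Encode a KQML performative as an XML document."""
    msg = ET.Element("KQMLMessage", performative=performative)
    values = [sender, receiver, language, reply_with, in_reply_to, content]
    for tag, text in zip(FIELDS, values):
        ET.SubElement(msg, tag).text = text
    return ET.tostring(msg, encoding="unicode")

print(kqml_to_xml("inform", "agentA", "agentB", "XML", "id-1", "id-0", "task finished"))
```

A receiver would validate such a document against the agreed XSD outline before interpreting the performative.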
2.4 SOA Architecture Based Service Invocation
The SOAP protocol supports SOA (Service-Oriented Architecture). A Web service is a loosely coupled, reusable software module [13]. Any language can be selected to program and use Web services. SOA is built on XML-message-based communication, and the messages must conform to the assumptions of the Web service specification. The service specification, an XML document described in WSDL, defines the message format that can be apprehended by the Web service. WSDL defines the interface and mechanism of service interaction [14].
Some general modules can be registered as Web services when implementing mobile agent systems, for example the upload and download of agent class code. The upload of class code means that the class code of dispatched agents generated by users must be uploaded to a specific class code server so that it can be executed when the agents are resumed at other sites. The download of class code means that when agents migrate to some other site, the mobile agent server restarts the agent by invoking the corresponding Web service. Moreover, mobile agents are entities with certain knowledge; they can migrate independently and accomplish specific tasks for users, since they have the ability to respond to their surroundings. Hence agents with various abilities can act as Web service providers, with WSDL describing the agents' knowledge and abilities. When users propose a service request to solve some problem, they can submit the request by SOAP to a certain agent that is able to provide effective services. The process of using a Web service is in fact the binding of users to the Web service and the invocation of its Web methods. In Fig. 3, service binding is implemented by a pre-fetched static WSDL document instead of dynamic UDDI, since the mobile agent server and the Web service are developed together.

Fig. 3. Binding Model of Web Service: the mobile agent system binds to the service supplier through SOAP according to the WSDL service specification

So the SOAP protocol offers a simple mechanism for exchanging structured and stylized information by means of XML in distributed environments. To summarize, the SOAP-based communication and service mechanism brings great flexibility and extensibility to the communications and invocations among mobile agent system modules. XML combined with KQML acting as the agent communication language makes information transfer free of platform and language restrictions. Additionally, SOAP is used to encapsulate the XML information and implement communication combined with the HTTP protocol. XSD enables platform-independent data types in XML documents when transmitting Web service data. These characteristics provide brand-new communication patterns and implementation mechanisms.
3 Design and Implementation of a Mobile Agent Framework

.NET is developed by Microsoft according to the SOAP protocol to establish and support Web service platforms. It supports development in different programming languages and meets the requirements of many Web applications. The .NET architecture supports SOAP, WSDL and UDDI, and provides the programming language C#. We have chosen the .NET platform to develop our mobile agent system, called MA.NET, since it is a very popular platform supporting SOAP.
3.1 Architecture Design
MA.NET basically includes three parts: Mobile Agent (MA), Mobile Agent Server (MA Server) and Client. Architecture for MA.NET mobile agent system is illustrated in Fig.4. ! The mobile agent(MA) is the executor of user's tasks. it migrates from one to another node in the network. ! MA Server provides mobile agent runtime environments, including such functions as migration, communication and security and management, etc.. For example, mobile agent can migrate from one MA Server to another by invoking migration module.at the same time, MA server can invoke other Web services. ! A GUI is designed for client to control agent’s execution. The user can make decision to dispatch, retract, stop the execution of MA. For example, when the mobile agent want to migrate, its class code that is dispatched by users is uploaded from Client to codebase which is located in MA server by invoking corresponding Web service, then SOAP is encapsulated according to the migrating request message and sent to the first site of the itinerary to accomplish agent's tasks. .NET Server
Fig. 4. MA.NET Architecture (the Client and MA Servers communicate via SOAP; each MA Server contains security, migration, communication, and management modules and exposes Web services)

3.2 Implementation of Migration Mechanism
Agent migration is classified into two types according to whether the runtime state is transferred or not [15]: strong migration and weak migration. Strong migration means that when an agent requires migration, both the runtime state and the data state are saved and transferred to the next site together with the agent code; after arrival, the agent is restarted in exactly the same state as it was in before migration. Weak migration means that only the data state is transferred to the next site with the agent code, and the agent is restarted from the program entry point. The latter is easier to implement and is thus adopted by most systems. We illustrate the specific implementation of MA.NET's weak migration as follows.
Study on SOAP-Based Mobile Agent Techniques
(1) Migration Mechanism of Agent's Class Code
In the process of agent migration, transferring all the code required to restart the agent on the new site would burden agent migration and increase network load. We therefore introduce a Pull method: agent class code is stored on a specific code server and a unique global identifier is allocated to each agent; if an agent's required class code is not found on the local server, it can be downloaded from the code server immediately. .NET does not provide a dynamic ClassLoader like Java's, so we implement class code migration as follows: first, the mobile agent class code is compiled into a dynamic link library, for example MobileAgent.dll. Then the corresponding code-transfer service is invoked to upload the agent to the class code server. When an agent is restarted on a new site, the local mobile agent system is informed to invoke the corresponding Web service to download the class code from the codebase if the code does not yet exist on that site.
(2) Migration Mechanism of Agent's Data State
In most Java-based systems, serialization and de-serialization mechanisms are adopted to implement data state migration. Serialization transforms object state into a stream for persistent storage and transfer; de-serialization resumes objects from the stream. Together, the two processes provide the ability to store and restore object state. .NET offers a binary object serialization mechanism [16], which is very useful for preserving object state across different types of storage in an application. This binary mechanism is adopted in MA.NET to transmit data state. The process by which an agent migrates from site S1 to site S2 and resumes its task at S2 is illustrated as follows:
– A piece of code for serializing the object:

    myMobileAgent myagent = new myMobileAgent();   // create agent entity
    IFormatter formatter = new BinaryFormatter();  // create binary formatter
    Stream temp = new FileStream("SrcMAfile", FileMode.Create,
        FileAccess.Write, FileShare.None);         // create file stream
    formatter.Serialize(temp, myagent);            // serialization
    temp.Close();

– A piece of code for de-serializing the object:

    IFormatter formatter = new BinaryFormatter();  // create binary formatter
    Uri WebSite = new Uri("http://SrcSiteIP/CodePath/SrcMAfile");
    HttpWebRequest myRequest = (HttpWebRequest)WebRequest.Create(WebSite);
    HttpWebResponse myResponse = (HttpWebResponse)myRequest.GetResponse();
    Stream temp = myResponse.GetResponseStream();  // obtain network data stream
    myMobileAgent myagent =
        (myMobileAgent)formatter.Deserialize(temp); // de-serialization
    temp.Close();

The agent is restarted as a thread executing its Run method once it is completely resumed at site S2. After the agent thread finishes execution, the MA Server sends a SOAP request to the next site according to the agent's itinerary. Thus the agent's tasks on that site are completed.
3.3 Implementation of Communication Mechanism

(1) KQML-Based Complex Communication among Agents
Besides the binary serialization mentioned above, .NET provides another mechanism, XML serialization, based on a predetermined common XSD outline, which transforms objects to and from XML documents. SOAP-based XML documents can therefore be used to transfer complex messages between agents. A communication process among agents runs briefly as follows: first, a KQML-based XSD outline conforming to the structure of the information exchanged among agents is defined by both sides of the interaction. Then objects containing the data are serialized into XML documents by the sender according to the XML serialization mechanism, encapsulated into SOAP messages, and sent to the receiver over the Internet. When the receiver receives the SOAP messages, the XSD outline is checked and the objects are extracted from the XML documents through an XML de-serialization process. This process is illustrated in Fig. 5.

Fig. 5. SOAP-Based Message Communication
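The serialize-then-extract round trip described above can be illustrated with a small Python analogue of XML serialization. The AgentMessage type, its fields, and the helper names below are invented for illustration and are not the paper's actual schema.

```python
# Rough analogue (not .NET code) of the XML serialization round trip: the
# sender serializes an object to an XML document, the receiver de-serializes
# it back into an object. Field names are assumptions for illustration.
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class AgentMessage:
    performative: str
    content: str

def serialize(msg):
    """Sender side: object -> XML document."""
    root = ET.Element("AgentMessage")
    ET.SubElement(root, "performative").text = msg.performative
    ET.SubElement(root, "content").text = msg.content
    return ET.tostring(root, encoding="unicode")

def deserialize(doc):
    """Receiver side: XML document -> object."""
    root = ET.fromstring(doc)
    return AgentMessage(root.findtext("performative"), root.findtext("content"))

original = AgentMessage("tell", "(status ok)")
restored = deserialize(serialize(original))
```

In the paper's setting the serialized document would additionally be validated against the shared XSD outline before de-serialization; that validation step is omitted here.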
(2) RPC in Mobile Agent Systems
In SOAP-based systems, an MA Server can be treated as either a service caller or a service provider. As a service provider it provides services according to client requests; on the other hand, it also requests services, as a client, from other MA Servers. The SOAP RPC request/response model is responsible for this information exchange: a SOAP request message representing the method call is sent to the remote computer, and a SOAP response message representing the invocation results is returned to the caller. The interaction between MA Servers by RPC, shown in Fig. 6, is as follows:
– Users write descriptions of the agent's tasks, such as the itinerary, routing strategy, and constraints. A SOAP request message is then generated and a SOAP RPC request is sent to the MA Server of the first site in the itinerary.
– The MA Server invokes the services named in the received RPC request, creates an agent to fulfill the tasks, and then sends a SOAP response message back to the Client. During this interaction, the Web services relevant to class code are invoked to achieve agent migration.
– If a mobile agent has not completed its tasks and needs to continue migrating according to its itinerary, the MA Server where the agent is currently located sends a SOAP RPC request to the next site so that the agent may continue its tasks. In this case, the MA Server acts as a Web client. This process continues until the agent completes its itinerary and sends back the final results.

Fig. 6. Interaction between MA Servers by SOAP RPC
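The dispatch step above — a Client sending a SOAP RPC request to the first MA Server of the itinerary — might look roughly like the following Python sketch. The DispatchAgent method and all element names are hypothetical, not the MA.NET wire format.

```python
# Sketch of building a SOAP 1.1 RPC request body such as a Client might POST
# to the first MA Server in the itinerary. The DispatchAgent method name and
# all element names are assumptions for illustration.
def build_dispatch_request(itinerary, agent_id):
    """Construct a SOAP envelope for a hypothetical DispatchAgent RPC call."""
    sites = "".join("<site>%s</site>" % s for s in itinerary)
    return (
        '<?xml version="1.0"?>'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        "<soap:Body><DispatchAgent>"
        "<agentID>%d</agentID><itinerary>%s</itinerary>"
        "</DispatchAgent></soap:Body></soap:Envelope>" % (agent_id, sites)
    )

body = build_dispatch_request(["hostA", "hostB"], 42)
# A real client would POST this body over HTTP with Content-Type: text/xml
# and a SOAPAction header, and read the SOAP response message in reply.
```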
3.4 Construction and Implementation of Corresponding Services

In order to integrate application software running on different operating systems and hardware platforms, and written in different programming languages, over the Internet, the development and use of Web services should be independent of operating systems, programming models, and languages. The WSDL specification defines a uniform format and grammar rules for Web services: services are described by WSDL documents, and their users can make correct requests and responses according to those documents. Users mainly interact with Web services by means of SOAP. Microsoft and IBM have developed tools for deploying Web services, and most software vendors offer broad technical support for SOAP and related Web services. For example, Microsoft .NET provides a tool, wsdl.exe, used to create a Web service proxy class. The proxy class is a local class generated from the WSDL document of a Web service, including declarations of the class and its methods. The client uses the information in the proxy class, and the SOAP protocol is employed in the bottom layer to access the Web service and implement method invocation.
(1) Web Service for Uploading and Downloading Code
As described previously, mobile agent class code can be downloaded from specific MA Servers so that an agent can resume execution after migration. The upload and download of class code can be treated as Web services that are invoked to transfer class code to and from the servers. This mechanism reduces the burden on mobile agent systems and facilitates the transfer of class code among users, agents, and mobile agent servers. The construction of such a service is illustrated as follows. When a user prepares to send an agent with specific tasks into the network, the Web service must be bound and the class code uploaded to the server by invoking the code upload method PutCode. The local virtual path of the class code is supplied in the input parameter; the method fetches the class code from that path and saves it on the server.
Finally the method returns to the user an agent identifier corresponding to the path on the class code server.

    [WebMethod]
    public int PutCode(string VirtualPath) {
        System.Net.WebClient Client = new WebClient();
        Client.DownloadFile("http://" + VirtualPath,
            ServerAgentClassPath);              // fetch the uploaded code
        AllocateAgentID(ServerAgentClassPath);  // allocate agent ID myAgentID
                                                // according to the path
        return myAgentID;                       // return agent ID
    }

When the agent is to be restarted on a new site, the Web service method GetCode is invoked for the resumption. The first input parameter is the agent identifier of the class code, myAgentID, and the second is the path where the downloaded agent class code is to be saved.

    [WebMethod]
    public void GetCode(int myAgentID, string desPath) {
        string ServerAgentClassPath =
            FindAgentPath(myAgentID);           // find the class code path
                                                // by agent identifier
        System.Net.WebClient Client = new WebClient();
        Client.UploadFile("http://" + desPath, ServerAgentClassPath);
    }

(2) Miscellaneous Services
Network connections are created according to users' requirements, and a user may disconnect after sending an agent. In this case, when the agent has completed its tasks and finds the user offline, it migrates to a site called the Dock Server [17] and suspends itself. When the user's computer reconnects, it informs the Dock Server of its network address, or the Dock Server queries the connection state of the user's computer to decide whether to wake up the agent. Once woken, the agent sends the results back to its user. MA.NET provides a Web service that temporarily saves the results on the Dock Server and takes charge of this monitoring function.
4 Conclusions and Future Work

In this paper, we have discussed the issues in developing SOAP-based mobile agent systems and have presented the key techniques of the mobile agent system MA.NET, implemented on the Microsoft .NET platform. SOAP-based mobile agent systems have the following advantages over traditional ones:
– Mobile agents can use Web services written in different languages and running on different platforms. The ability of agents to accomplish tasks is improved through the transparent interoperability of Web services.
– The semantics of communication become richer than in traditional systems through the SOAP mechanism and the combination of KQML and XML.
– Applications based on SOAP RPC add little workload and are easier for users to develop.
– Since SOAP-based mobile agent systems can themselves be treated as Web services, the technique provides transparent support for users and is well suited to distributed application development.
However, current SOAP-based mobile agent systems do not yet provide security or transaction management. The security of such systems depends on HTTP security, since the SOAP protocol is built on HTTP; if HTTP security is compromised, SOAP is insecure as well. Extending the SOAP message elements may be a means to secure mobile agent systems. To summarize, SOAP-based mobile agent systems can improve interoperability among mobile agent systems, simplify implementation, and make it easy to implement communication with complex semantics. Web services can support mobile agent execution, which has great application value. In addition to communication security and transaction management, our future work includes performance issues of mobile agent systems and interoperability between different mobile agent systems.
References

1. Lange, D.B., Oshima, M.: Seven Good Reasons for Mobile Agents. Comm. of the ACM 42(3) (1999) 88–89
2. Lange, D.B.: Java Aglet Application Programming Interface. IBM Tokyo Research Lab., http://www.trl.ibm.co.jp/aglets, 1997
3. White, J.: Telescript Technology: An Introduction to the Language. General Magic White Paper GM-M-TSWP3-0495-V1, General Magic Inc., Sunnyvale, CA, 1995
4. Wang, D., et al.: Concordia: An Infrastructure for Collaborating Mobile Agents. Proc. of the 1st Int. Workshop on Mobile Agents, Berlin, 1997
5. Dartmouth Workshop on Mobile Agents (MA'97), Berlin, Germany, 1997
6. Voyager Core Package Technical Preview. ObjectSpace, Inc., http://www.objectspace.com
7. Grasshopper: An Intelligent Mobile Agent Platform Written in 100% Pure Java. http://www.ikv.de/products/grassHoper/, 1998
8. OMG: The Common Object Request Broker: Architecture and Specification. Revision 2.0, 1995
9. Milojicic, D., Breugst, M.: MASIF: The OMG Mobile Agent System Interoperability Facility. Proc. of the 2nd Int. Workshop on Mobile Agents, LNCS 1477, Springer-Verlag, Berlin, Germany, pp. 50–67
10. World Wide Web Consortium: SOAP 1.1. http://www.w3.org/TR/, 2001
11. World Wide Web Consortium: XML Schema Specification. http://www.w3.org/XML, 2001
12. Finin, T., Weber, J., et al.: KQML as an Agent Communication Language. In: Bradshaw, J. (ed.): Software Agents, Menlo Park, 1995
13. Tidwell, D.: Web Service Architecture. http://www6.software.ibm.com, 2001
14. Meredith, G., Curbera, F.: WSDL Specification. W3C, http://www.w3c.org/TR/wsd1, 2001
15. Torsten, I., Frank, K.: Migration of Mobile Agents in Java: Problems, Classification and Solutions. MAMA'2000, Australia
16. Microsoft: .NET Framework SDK Documentation: Serializing Objects. http://www.microsoft.com, 2002
17. Shi, Z.Z.: Intelligent Agent and Its Applications. Scientific Press, Beijing, China, 2000
Securing Agent Based Architectures

Michael Maxim and Ashish Venugopal

School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213
{mmaxim,ashishv}@cs.cmu.edu
Abstract. Agent based architectures provide significant flexibility and extensibility to software systems that attempt to model complex real world interactions between human users and functional agents. Such systems allow agents to be seamlessly published into the system providing services to human agent consumers. Securing agent based architectures in permissions based environments while still maintaining extensibility involves establishing a pathway of trust between the agent producer, container and consumer. This paper focuses on the final trust step, verifying the identity of an agent consumer in order to bound the capability of an agent by the capabilities of the agent consumer. We present an innovative application of zero knowledge proofs to inexpensively authenticate agents and grant them the restricted permissions of their consumer operator. Our scheme’s theoretical foundation guarantees inexpensive detection of “rogue” agents and defends against replay attacks in environments where performance is critical.
1 Introduction
Despite the inherent flexibility of the software development practice, the maintenance and extensibility of large-scale systems depend on the underlying design and architecture. As the functional requirements for such systems change over time, modifications must be possible without restricting the current operation of the system, while still maintaining the potential for future modification. As the complexity of the real world system being modeled in software grows, the difficulties in maintenance and extensibility do as well, motivating the transition to component based development models. In the component model, the system is defined to be the union of its functional components, where each component completely encapsulates its functionality [1]. The system publishes some set of interfacing criteria that must be implemented by the component before it can be integrated with the system as a whole. In addition, criteria for the interaction and combination of such components define a methodology for the creation of subsystems of components which leverage the existing component base to extend the system's functionality. A well-defined set of interfacing criteria is critical to reaping the benefits of the component model. The advantages to the engineering process are clear.

Y. Han, S. Tai, and D. Wikarski (Eds.): EDCIS 2002, LNCS 2480, pp. 220–231, 2002.
© Springer-Verlag Berlin Heidelberg 2002
System maintainability is improved since changes to the specifications can be implemented simply by modifying the underlying components. Extensibility is facilitated by publishing an interfacing criterion that allows for new components to be added without causing friction to the operation of the system as a whole. The model encourages specialization and distribution of the development process, allowing development teams and even third-party developers to focus on the issues of each component, rather than those of the whole system. Languages have been developed to rigorously specify the interfacing and interaction between components and can be considered a mature field of study [2]. The agent based model extends the component architecture by attributing beliefs and desires to the functional components [3]. Each agent has a degree of autonomy, by maintaining its goals and beliefs over time, as well as the ability to interact with human end users as well as other agents. More detailed discussion regarding the foundations of agent based systems can be found in [3]. The features that discriminate agents from common components are helpful to model real world complex systems effectively. Agents that conform to the interfacing criteria of a given system can provide some set of services to all end users of the system. An area of recent interest in agent based modeling, involves creating agents whose operation evolves over time based on their interactions, yielding more dynamic and unpredictable agent behavior. As more powerful agents are built, it becomes increasingly important to define a pathway of trust that extends from the introduction of the agent into the system to its operational capabilities. The pathway of trust involves several entities that must cooperate to introduce and operate an agent. We define the developer of the agent as the agent producer, and the human operator as the agent consumer. 
From an agent's perspective the consumer is referred to as an operator. The space in which these agents operate and interact is referred to as the container. The first step of the trust pathway requires that the agent producer be trusted by the container to allow the agent to be made available in the system. Once the agent is in the container, both the agent consumer and the container need to trust the runtime operation of the agent to ensure that operating the agent will lead to predictable results. In a shared agent environment, an agent can be operated by multiple human users, who have been assigned permissions or roles by the system administrator. The capabilities of the agent must be bounded by the capabilities of the human consumer who is operating the agent. This is especially important in applications where privacy matters; the agent's access to sensitive data must be limited by the access rights of its operator. Completing the trust pathway provides the infrastructure for dynamically evolving agent based architectures. The first two steps in the pathway (the agent screening process) have theoretically sound, efficiently implemented solutions that we will discuss; this paper, however, focuses on presenting an inexpensive method for performing the final step of verifying the operator identity of an agent. The remainder of this paper is structured as follows: we begin by describing two possible solutions to the first two steps of the trust pathway problem, followed by a detailed discussion of the identity verification problem which is the focus of this paper. We then present several naïve solutions and discuss their
limitations, thereby motivating our ticket verification scheme, which is followed by the implementation details. We will show that our technique, an application of Shamir’s zero knowledge authentication protocol, delivers better performance as well as security against most common malicious masquerading techniques. The theoretical foundations of our scheme are formally explored, and shown to address the problem specifications. Next, we describe a current application of our scheme within an operational agent based architecture. Finally, we conclude with the implications of our scheme to the software engineering process and our expectations for future work.
2 Agent Screening
Solutions to the first two components of the problem are well documented and for the most part viable. The first step of the agent trust pathway comes at the point when the agent desires to enter the system: we must admit only those agents that carry some sort of proof that trusted third parties created them [4]. Digital certificates are a popular and practical means of admitting only agents from trusted developers into the agent container. A digital certificate acts as a watermark on a potential agent attempting admittance into the container. At the entrance to the container a verifier must be able to confirm both that the agent has provided a valid certificate and that the certificate originates from a trusted third party. The fundamental tenet of the certificate system is that certificates should be extremely difficult to generate unless specifically granted by a trusted certificate vendor. In addition, these certificates ought to be quickly verifiable and are generally system independent. Digital certificates have been applied very effectively for the distribution of ActiveX controls via the World Wide Web [5]. When a user is presented with a control, the user can be sure the control is safe because it is loaded with a certificate identifying the developer. It is up to the user whether or not to trust the developer, but the crucial point is that the user can be sure that the certificate is real and that the developer listed on the certificate is truly the developer of the control. Once we have established that an agent has a valid certificate from a trusted developer, we should also confirm that the agent acts in a responsible way: the second step of the agent trust pathway. Recently there has been intense research into the problem of augmenting machine code with a formal proof of functionality, so that every code module contains both the executable machinery and a formal proof regarding its functionality [6].
Such designs are commonly referred to as proof carrying code (PCC) schemes. In this sense we can guarantee that agents entering the system must provide some sort of proof of functionality, and more importantly a proof that they do not carry out certain forbidden actions. In general, any agent system will have a clearly defined security paradigm, most likely written out in logic or another formal language. Agents entering the system must then produce a formal proof in the language of the system to convince the system that they behave according to the rules [3,6]. Researchers have developed systems of this nature, most impressively the so-called certifying compilers, which auto-generate proofs of safety [6,7]. In this sense the PCC scheme fits seamlessly into the agent development cycle. Applications of PCC are numerous but are most prominent for mobile code. Mobile devices oftentimes download code from some source and execute it. The device needs a mechanism for knowing the code is safe without relying on a certificate or restricting the speed of the program (speed restrictions result from possibly executing the code through an interpreter) [7]. Here PCC becomes a natural way to secure mobile agent systems, in that agents can move from a distributed system to a mobile device where they can be trusted as a result of the accompanying formal proof.
3 Tickets and Permissions
The area that this paper focuses on is the security of the system after an agent has gained access. Particularly, we are concerned with the behavior of the agent with respect to the access rights of its consumer operator (human user): the third stage in the agent trust pathway. In order for an agent to make any useful queries or actions upon the data store, the agent needs to be associated with some user in order to determine its access rights. Clearly, agents do not have static access rights; their rights are dynamic with respect to a changing user set and user permissions. We define a ticket as a unique identifier for an agent operator. An agent possesses a ticket if a trusted ticket producer has initialized the ticket and loaded the agent with it. A trusted ticket producer is a static entity in the system that has the responsibility of correctly creating tickets for operators (this typically occurs when operators authenticate into the system). The entity is trusted because it is part of the original system design and does not change over time, even though the set of users could vary dramatically. Each ticket contains in its state some security information unique to the operator it corresponds to. This security information is private and cannot be viewed by any entity in the system. It is this information that the ticket needs in order to prove that it is not a forgery. Figure 1 illustrates the state of an agent. Now that we have a concrete notion of tickets and how agents obtain them, we next consider how best to implement the permission engine. Much research [8] has gone into creating permission schemes that are both secure and flexible. The point of our system is not to create or defend any of these schemes, but instead to augment them by adding a ticket verifier component. The permissions engine is composed of two main components: the verifier and the permissions processor.
The verifier here is the component of the system that is responsible for verifying the validity of the tickets presented to the permissions engine. Once the verifier has accomplished this, it then sends the request or action to the permissions processor that will decide what actions to take based on the access rights of the operator associated with the ticket. The processor can then either perform the request or forward it in some formal language to another component that
Fig. 1. Agent and Permission Engine Layout.
Fig. 2. Trust Pathway.
accesses the data store. Figures 1 and 2 both help illustrate the permission engine layout as well as the interaction between an agent and the permissions engine. It is clear from this scheme that the ticket the agent currently carries completely determines the scope of the operations the agent can perform. The lifespan of an agent in the system reduces to acting out beliefs and desires (functional definitions) with respect to the ticket it holds. Tickets can be granted for certain periods of time as well, allowing for agents to act independently [1] as well as in response to operator requests.
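The verifier/processor split described above can be sketched as a toy Python illustration. The names here (issue_ticket, handle_request, the permission table) are invented for illustration and are not the authors' implementation.

```python
# Toy sketch of the permission engine described above: a trusted ticket
# producer issues tickets; the verifier rejects forged tickets; the
# permissions processor bounds each action by the operator's access rights.
# All names and the permission table are assumptions for illustration.
from dataclasses import dataclass

# Operator permissions assigned by the system administrator (illustrative).
PERMISSIONS = {"alice": {"read"}, "admin": {"read", "write", "delete"}}
VALID_TICKETS = {}  # ticket id -> operator, populated by the ticket producer

@dataclass(frozen=True)
class Ticket:
    ticket_id: int
    operator: str

def issue_ticket(operator, ticket_id):
    """Trusted ticket producer: initialize a ticket for an operator."""
    VALID_TICKETS[ticket_id] = operator
    return Ticket(ticket_id, operator)

def handle_request(ticket, action):
    # Verifier: reject tickets not issued by the trusted producer.
    if VALID_TICKETS.get(ticket.ticket_id) != ticket.operator:
        return "rejected: invalid ticket"
    # Permissions processor: bound the agent by its operator's rights.
    if action not in PERMISSIONS.get(ticket.operator, set()):
        return "rejected: insufficient rights"
    return "ok: %s performed" % action

t = issue_ticket("alice", 1)
```

In the real scheme the verifier does not inspect the ticket's secret directly — that is precisely the gap the zero knowledge verification of Section 6 closes; this sketch only shows the engine layout.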
4 Ticket Verification Problem
We now return to the problem of fully implementing the trust pathway. We have already presented viable solutions to the first two parts of the problem, initially screening agents from un-trusted sources, using PCC and digital certificate technology. What we need now is a way to prevent agents from “forging”, or maliciously generating tickets. To help make the problem more concrete, we introduce a scenario that could take place if such a safeguard was not in place. Suppose we have a rogue developer working for a trusted company developing agents. This rogue wishes to cause damage to the system by developing an agent that enters the system, obtains administrator tickets, and deletes the information store. In this case the rogue’s agent depends on being run with administrator operator tickets at some point, so that the agent can figure out how to forge them. Notice the rogue does not need administrator status himself, but his whole plan is for an agent to forge these tickets and then compromise system integrity on behalf of the rogue. The rogue is fully able to get a certificate that will pass the screening process and activate his agent in the container. The rogue wins if he can figure out a way to get these tickets. The rogue scenario begs the question: how do we ensure that agents cannot forge tickets? We need to develop a system in which even if an agent passes the initial screening, that it must not be able to extend beyond the capabilities encapsulated in its ticket, and furthermore that it cannot create arbitrary tickets by itself. The problem reduces to creating tickets such that a verifier can confirm with high probability that the tickets are valid, and doing this in a way that the verifier’s verification step reveals no information about the security information inside a ticket. Thus if a verifier knows nothing about a ticket after inspecting it (other than the knowledge that the ticket is valid), neither does the agent.
5 Ticket Verification Naïve Solutions
Before presenting the solution that we believe is optimal, we first explore some lesser solutions.
– Passwords: In this scheme each ticket is loaded with a password (the ticket's secure information) unique to the operator the ticket corresponds to. For a verifier to determine whether the ticket is legitimate, it need only obtain the password from the ticket and confirm that it matches that operator. The problem with this scheme is immediately apparent from a rogue perspective: if an agent ever determines that it currently holds administrator tickets, it just queries the ticket for the password. From that point on it can create perfectly legal tickets, because it knows the secure information of administrative operator tickets. A possible fix is to encrypt the password in the ticket's state; we discuss below why we believe cryptography should be omitted from the verification scheme.
– PCC Only: If the system relies exclusively on the formal PCC proof that the agent presented upon admittance to the system, then it fails to cover all possible permutations of agent behavior over time. Permissions in the system change; no formal PCC proof can encapsulate the complete dynamic nature of all the possible relations among permissions, operators, and agents at every point in time. In addition, an evolutionary agent may provide a perfectly sound PCC proof but then proceed to evolve in the system in an unforeseen way. It could then compromise the security of the system by learning how to create tickets. We do not deny the importance or usefulness of PCC, but it alone does not solve the problem at hand.
– Cryptography: Cryptography does present itself as a viable solution in a traditional distributed computing environment. However, when we speak of agent development and system adaptability we often want agents that can scale to mobile devices as well. In this case we must conserve both code space and speed as much as possible. A large-scale cryptography system such as DES or RSA would require significant code space and computational power [9]. Because of this we generally want to avoid cryptography as a solution to the identity verification problem. The decision to avoid cryptography is a sentiment echoed by much of the PCC community for many of the same reasons [7]. In addition, attempts to leverage RSA on low-power mobile devices have not been successful in producing signature keys [10].
6 Zero Knowledge Ticket Verification
The solution to the problem that this paper introduces is based on research done mostly in the 1980s on the topic of zero-knowledge proofs. A zero-knowledge proof consists of a prover P proving to a verifier V that it knows some information x, without revealing anything about x explicitly [11,12]. The most important application of this theory has been personal identity verification. System users are given private personal identity keys; to prove their identity to the system, a person must show that they know the key without revealing any information to the verifier about what the key actually is. This has been used in so-called smart credit cards to confirm that the owner of a card is the one using it [13,14]. The motivation for using zero knowledge in ticket verification is that the problem reduces to confirming that a ticket holds certain information without learning what that information is, which is exactly the thesis of zero-knowledge proof theory. Because we can consider agents as possessing an identity (the agent's ticket), we can use standard authentication schemes to verify the validity of the ticket. In our case we want the authentication to be zero knowledge, and Shamir's authentication protocol meets this specification. Shamir's algorithm provides a method of authenticating human users to systems in a secure, lightweight fashion using zero-knowledge techniques [15]. We can generalize the algorithm to apply not just to human users, but to any entity that has a unique identity, which includes
Securing Agent Based Architectures
agents. We now outline Shamir's algorithm in detail to show that it is zero knowledge and, furthermore, that it solves the problem we have outlined in a way that the solutions discussed above failed to. The general flow of the algorithm is as follows. We consider a prover P and a verifier V as described above; P holds a private identifier s_P, which it must prove it knows to V. 1. The system as a whole generates a random m-bit composite n = pq, where p and q are large primes. This is a one-time cost for the entire system. 2. For each user U in the system, we define a private key s_U. This is the "security information" referenced in various places in this paper. In addition, the system calculates r_U ≡ s_U^2 (mod n) and publishes this value r_U. Again, this is a one-time cost per user. Once this setup has been completed, V can confirm that P knows s_P via the following conversation. 1. P starts the conversation by generating a random m-bit number q (for a large value of m, e.g. 1024; this q is a fresh random number, not the prime factor of n above) and computing x ≡ q^2 (mod n). That is, x is a quadratic residue modulo n. P sends x to V. 2. V now flips a coin, assigning the value b = 0 if it is heads and b = 1 if it is tails, and sends this value back to P. 3. After receiving b from V, P computes X = q · s_P^b (mod n) and sends this value to V. 4. V confirms that P generated the right response by checking that X^2 ≡ x · r_P^b (mod n). A technical stipulation of this procedure is that the values x that P initially sends must actually be drawn from a random distribution. V can ensure this by rejecting any x from P that it has already seen and simply asking P for another one. The chance of a legitimate P repeating a number from a space of 2^m numbers is low, so this condition primarily affects imposters; if P keeps failing this invariant, V can reject P as an imposter. The first thing to notice about this algorithm is that if P knows s_P then it can pass the test every time.
This is clearly true: (y_0 = q ∧ y_1 = q · s_P) ⇒ (y_1/y_0 = s_P), where y_0 and y_1 are the two possible responses P may have to give. Secondly, an imposter prover P_I who does not know s_P fails the conversation with probability 1/2. This holds because in order for P_I to pass the test, it must guess the value b that V generates; the only other option P_I has is to find a square root of r_P, which is provably equivalent in hardness to factoring n (believed to be intractable) [16]. Therefore, running the conversation between P and V k times implies that the probability of V catching an imposter P_I is 1 − 1/2^k ≈ 1 for sufficiently large values of k. The third key point about this algorithm is that P reveals no information about s_P to V, even when V verifies that P is legitimate. This is explained in detail in Shamir's original publication [15]; it suffices here to say that any observer M of this conversation sees nothing but an exchange of random numbers. In other words, M could generate the conversation by talking
to himself by following the protocol, which implies that he learns nothing about the sensitive information s_P that he seeks.
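The setup and one round of the conversation above can be sketched as follows. This is an illustrative toy implementation: the primes are far too small to be secure, and the commitment nonce is named t here (the text calls it q) to avoid clashing with the prime factor q of n.

```python
# One round of the identification protocol described above.
# A real deployment uses an m-bit composite for a large m (e.g. 1024).
import secrets

# System setup (one-time): n = p * q. NOT secure at this size.
P_PRIME, Q_PRIME = 1000003, 1000033
n = P_PRIME * Q_PRIME

# Per-user setup (one-time): private key s_P; publish r_P = s_P^2 mod n.
s_P = secrets.randbelow(n - 2) + 2
r_P = pow(s_P, 2, n)

def prover_commit():
    """Step 1: P picks a fresh random t and sends x = t^2 mod n."""
    t = secrets.randbelow(n - 2) + 2
    return t, pow(t, 2, n)

def prover_respond(t, b):
    """Step 3: P answers X = t * s_P^b mod n."""
    return (t * pow(s_P, b, n)) % n

def verifier_check(x, b, X):
    """Step 4: V accepts iff X^2 = x * r_P^b (mod n)."""
    return pow(X, 2, n) == (x * pow(r_P, b, n)) % n

# Run k rounds: a legitimate P passes every time, while an imposter who
# must guess V's coin survives each round only with probability 1/2.
for _ in range(40):
    t, x = prover_commit()
    b = secrets.randbelow(2)   # Step 2: V's coin flip
    assert verifier_check(x, b, prover_respond(t, b))
```

The correctness check is term-by-term the one in the text: X² = t²·s_P^(2b) = x·r_P^b (mod n).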
7 Implementation Analysis
As evidenced by the detailed discussion of the protocol above, it should be clear that this scheme is very lightweight and efficient to implement. On the agent side we need only be able to produce random m-bit numbers and multiply them together; on the verifier side, all that needs to be done is squaring of m-bit numbers, which can be implemented efficiently. Furthermore, the code space required to implement the scheme is very small: we only need to multiply a few numbers together and implement random number generation for each ticket, as opposed to implementing an entire cryptosystem. The scheme is also speed-efficient [17]. If we run the conversation k times, and assuming multiplication takes roughly O(1) time (since m is fixed), the running time of the procedure is O(k), where k is the number of conversations between verifier and ticket. Because we presume that the ticket is valid, we can cut off its computation at some designated time interval if we suspect the ticket is trying to brute-force a response (for instance, attempting to factor n), justifying the O(1) response/multiplication time. This decision filters out those tickets that can only answer in a brute-force way (false tickets, which must perform the hard calculations our system relies upon for security) and, if the time limit is calibrated properly, does not filter out any valid tickets. Security is guaranteed in this system as long as taking square roots modulo n is hard [16] (equivalent in complexity to factoring). The fact that the protocol is zero knowledge makes an attack like the so-called "replay attack" [18] infeasible. The replay attack consists of a malicious observer M watching the channel of the conversation between a ticket and the verifier (M could possibly be a rogue agent). M observes the channel long enough that it can completely simulate the prover P by replaying exactly what P transmitted.
However, in this scheme M stands little chance of being asked the same questions as P was asked while M was observing the channel. The random nature of the verifier's questions during the conversation, together with the condition that P must produce randomly distributed initial quadratic residues, negates this possibility. In order for M to successfully replay the conversation, it would need to know all possible answers to all possible questions; that is, it would have to observe the channel ad infinitum. Otherwise, the only remaining option for M is to find an algorithm for taking square roots modulo the large m-bit composite n, which is of course known to be very difficult [15,17,16]. In short, any observer M of the conversation can learn no more about the security information of the ticket than it could by carrying on the conversation with itself. This key point makes the creation of tickets by an agent's fiat impossible unless the agent knows the secret security information.
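The verifier-side safeguards discussed above (rejecting repeated commitments x, and cutting off provers that take suspiciously long to answer) might be sketched like this; the class and the timeout value are illustrative assumptions, not from the paper.

```python
import time

class Verifier:
    """Illustrative verifier-side bookkeeping: enforce freshness of
    commitments and bound the prover's response time."""

    def __init__(self, timeout_seconds=0.5):
        self.seen = set()          # commitments x already observed
        self.timeout = timeout_seconds

    def accept_commitment(self, x):
        # A repeated x may indicate a replayed transcript, so reject it
        # and ask the prover for another one.
        if x in self.seen:
            return False
        self.seen.add(x)
        return True

    def timed_response(self, respond, challenge):
        # A legitimate ticket answers with a single multiplication, so a
        # long delay suggests it is brute-forcing (e.g. factoring n).
        start = time.monotonic()
        answer = respond(challenge)
        if time.monotonic() - start > self.timeout:
            return None            # treat as a failed round
        return answer
```

A legitimate prover repeats a commitment out of a space of 2^m values with negligible probability, so in practice only imposters trip the freshness check.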
8 Weaknesses
No system, regardless of how well defined, is completely secure; there will always be unintended loopholes or weaknesses in the specification that can cause it to go awry. With this in mind, we set out to predict the pitfalls of the system we have just described and to offer ways of avoiding them. The first area of concern is the location of a ticket's security information in memory. Technically speaking, each ticket exists in memory with the security information as part of its state. It is conceivable that a particularly clever rogue agent could introspect the memory region around its own instance (which it is perfectly able to obtain) and sniff out the security information without being caught by the verifier. Such an attack is not covered by the specification of the system as currently described. Nevertheless, we believe that such an attack can be avoided in a couple of ways. – Use a "safe" programming language: By safe we mean languages that run on virtual machines, like the popular Java programming language, or functional languages like Standard ML. Using these languages severely limits the kind of direct memory access that could compromise the integrity of our ticket verification system. Ideally, a system could use a special agent development language with this sort of memory safety built in. – Encrypt the security information: We have already discussed why cryptographic techniques are probably not feasible in our verification scheme. However, if the system being developed has no plans for mobile support, then this method is very reasonable and would largely alleviate the problem [19] (provided the agent does not figure out how to decrypt the security information).
– Utilize PCC: Using PCC techniques, we could add to our system a security rule specifying that agents (or code in general) cannot make direct memory accesses outside certain well-defined regions (such as the memory space of the agent's own instance). The proof would then guarantee that agents do not mount this sort of attack at all, making defenses against it unnecessary. Alternatively, the ticket state could be centralized in some privileged location in memory. This location would be known to the PCC proof checker, which would in turn check that agents prove they do not attempt to access it. Using such privileged locations makes generating the proof much easier than proving the absence of direct memory access in general. An obvious drawback is that some systems may wish to give agents direct access to memory, in which case the PCC rule would not apply; for systems of this type, it would be more practical to implement either of the first two suggestions. The second potential problem is the notion of a trusted ticket producer. Such a producer must be infallible in the eyes of our security paradigm; otherwise we sacrifice a key assumption of the security scheme. If a rogue developer
ever got access to this producer and was able to modify it in a way detrimental to the integrity of the software system, our security system would be broken. The only available solution is for the administrators of the system to maintain control of the ticket producer, making sure it does not change and that no one has tampered with it. General administrative conscientiousness comes into play here and is, for the most part, the only guard against such an attack.
9 Application
The solution to the identity verification problem discussed in this paper was successfully implemented in Circa, a collaborative scheduling and calendaring tool. Circa is built upon an agent based architecture that relies on the development of third party agents to complement system functionality. It manages schedule and event information for university-sized communities, and this information is used by agents to provide complex functionality such as meeting time negotiation and information retrieval on published events. The need for fine-grained permissions is immediately apparent when dealing with personal schedules and commitments, and it is therefore critical that agents introduced into the system adhere to the permissions granted to their operators. As described in our scheme's implementation details, a 1024-bit modulus was generated by the system and 1024-bit keys were assigned to each of the system's 100 users. We did not implement the agent screening steps, since our focus was to investigate the performance and effectiveness of our solution to the identity verification problem. There was no noticeable degradation in performance, and we verified the correctness of our solution by creating some simple masquerading and replay attack scenarios. The server machine hosting the agent container was a 400 MHz Pentium with 256 MB of RAM, and all testing was performed with 25 users concurrently making requests for agent operations.
10 Conclusion
In providing an efficient solution to the identity verification problem, we feel that we have addressed a fundamental issue in the application of agent based architectures. By bridging the final step in the trust pathway between the agent producer and the agent consumer, we have laid the foundations for significant progress in third party agent development, allowing systems to leverage increased specialization while coping with the manageability burden of large-scale systems. In addition, recent work in agent based technologies has focused on mobile agent environments in which agents migrate between systems, making it even more important to bridge the trust pathway. Further work includes more extensive testing with all components of the trust pathway and measuring the performance of the verification scheme on handheld devices, where computation and memory are at a premium. We expect this technique to prove an effective solution in such environments, and we hope to see its widespread application in systems where data sensitivity and privacy are critical.
References
1. H. Weber, A. Sunbul, and J. Padberg, "Evolutionary Development of Business Process Centered Architectures Using Component Technologies," 2000.
2. Felix Bübl, "Towards Designing Distributed Systems with ConDIL," in Engineering Distributed Objects (EDO 2000), Wolfgang Emmerich and Stefan Tai, Eds., Berlin, November 2000, LNCS 1999, pp. 61–79, Springer.
3. Michael Wooldridge and Paolo Ciancarini, "Agent-Oriented Software Engineering: The State of the Art," in AOSE, 2000, pp. 1–28.
4. Mary Thompson, William Johnston, Srilekha Mudumbai, Gary Hoo, Keith Jackson, and Abdelilah Essiari, "Certificate-based Access Control for Widely Distributed Resources," pp. 215–228.
5. Kevin Fu, Emil Sit, Kendra Smith, and Nick Feamster, "Dos and Don'ts of Client Authentication on the Web," in Proceedings of the 10th USENIX Security Symposium, Aug. 2001.
6. George C. Necula, "Proof-Carrying Code," in Conference Record of POPL '97: The 24th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, Paris, France, Jan. 1997, pp. 106–119.
7. J. Feigenbaum and P. Lee, "Trust Management and Proof-Carrying Code in Secure Mobile-Code Applications," 1997.
8. Ravi S. Sandhu, Edward J. Coyne, Hal L. Feinstein, and Charles E. Youman, "Role-Based Access Control Models," IEEE Computer, vol. 29, no. 2, pp. 38–47, 1996.
9. R. L. Rivest, A. Shamir, and L. Adleman, "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems," Communications of the ACM, vol. 21, no. 2, pp. 120–126, 1978.
10. N. Modadugu, D. Boneh, and M. Kim, "Generating RSA Keys on a Handheld Using an Untrusted Server," 2000.
11. Uriel Feige and Joe Kilian, "Zero Knowledge and the Chromatic Number," in IEEE Conference on Computational Complexity, 1996, pp. 278–287.
12. P. Kaski, "Special Course on Cryptology / Zero Knowledge: Rudiments of Interactive Proof Systems," 2001.
13. Martin Abadi, Michael Burrows, C. Kaufman, and Butler W. Lampson, "Authentication and Delegation with Smart-cards," in Theoretical Aspects of Computer Software, 1991, pp. 326–345.
14. Joan Feigenbaum, Michael J. Freedman, Tomas Sander, and Adam Shostack, "Privacy Engineering for Digital Rights Management Systems," in Proceedings of the ACM Workshop on Security and Privacy in Digital Rights Management, November 2001.
15. Safuat Hamdy and Markus Maurer, "Feige-Fiat-Shamir Identification Based on Real Quadratic Fields."
16. I. Biehl, J. Buchmann, S. Hamdy, and A. Meyer, "A Signature Scheme Based on the Intractability of Computing Roots," 2000.
17. M. J. Jacobson, Jr., R. Scheidler, and H. C. Williams, "The Efficiency and Security of a Real Quadratic Field Based Key Exchange Protocol."
18. T. Aura, "Strategies Against Replay Attacks," in PCSFW: Proceedings of the 10th Computer Security Foundations Workshop, IEEE Computer Society Press, 1997.
19. Victor Boyko, Philip D. MacKenzie, and Sarvar Patel, "Provably Secure Password-Authenticated Key Exchange Using Diffie-Hellman," in Theory and Application of Cryptographic Techniques, 2000, pp. 156–171.
Service and Network Management Middleware for Cooperative Information Systems through Policies and Mobile Agents
Kun Yang(1), Alex Galis(1), Telma Mota(2), Xin Guo(1), and Chris Todd(1)
(1) Department of Electronic and Electrical Engineering, University College London, Torrington Place, London WC1E 7JE, United Kingdom {kyang, agalis, xguo, ctodd}@ee.ucl.ac.uk
(2) Portugal Telecom Inovação, S.A., Rua Eng. José Ferreira Pinto Basto, 3810-106 Aveiro, Portugal
[email protected]
Abstract. As cooperative information systems (CIS) grow into large-scale systems involving many geographically distributed cooperating sub-systems, the management technologies that support them gain in importance. This paper focuses on two underlying technologies that are key to the success of cooperative information systems, namely service management and network management, delivered in the form of middleware, to ensure inter-domain cooperation within a large CIS in a flexible, automated and secure way. Flexibility and automation are achieved through the integration of Policy-based Network Management with Mobile Agent Technology, while security is largely enhanced by extending the mobile agent supporting environment, Grasshopper, with additional security facilities. As a case study, an IP Virtual Private Network tunnel between two geographically separated domains within a large-scale CIS is set up and configured automatically and securely using the service and network management middleware, showing that a management-enhanced CIS can operate with greater flexibility, automation and security, and at a larger scale.
1. Introduction To succeed in the new economy, corporations need to cooperate with partners or their own branches across the world in a flexible and automated way, and this cooperation should be seamlessly integrated with the internal cooperative information workflow that is usually enforced by a traditional Cooperative Information System (CIS). These partners may be geographically distributed across a large area or even globally, and are thus connected via networks, typically the Internet. To satisfy these new requirements, a new CIS needs to take into account the service management and network management of the Internet to improve cooperation among corporations. Much research on enabling CIS is currently under way [1]; it more or less focuses on the solution and improvement of the cooperative logic of CISs themselves, e.g., process management systems and workflow management systems, using either middleware or multi-agent systems, and less attention has been given to the Y. Han, S. Tai, and D. Wikarski (Eds.): EDCIS 2002, LNCS 2480, pp. 232-246, 2002. © Springer-Verlag Berlin Heidelberg 2002
supporting environment of these systems, i.e., the network and its management. But with the expansion and refinement of CIS, driven by the increasing variety of cooperation requirements and the rapidly growing accessibility and broad reach of the Internet, it is becoming imperative to take network features into account. Responsive support from the network, the transport channel for a CIS, can obviously improve the efficiency and flexibility of the CIS. It is also possible to tune the networks to benefit the CIS substantially without degrading other services, since this CIS-oriented network management can take place in the edge routers within the enterprise networks that are controlled by the corporations themselves. For certain time- and security-critical cooperation, mechanisms to assure the successful fulfillment of the cooperation, such as Quality of Service (QoS) and IP Security (IPSec), have to be provided in the network. Furthermore, cooperation may take place at any time and amongst any group of sub-domains or partners, according to the progress of the cooperation workflow; this requires that the services for the cooperation, and their downloading and deployment, be handled without human intervention, and that the network environment for the execution of these services be configured on the fly. The expansion of CIS also requires cooperation across enterprises, i.e., horizontal rather than just vertical or internal cooperation, which again requires consideration of the configuration and management of the networks that connect the cooperating parties. The integration of mobile agent technology (MAT) and policy-based network management (PBNM) provides a promising means to implement a CIS system that covers both CIS-oriented service management and network management.
Policy-based network management technology offers a more flexible management solution, tailored to a specific application or customer, in large-scale network environments [2]. Nevertheless, this flexibility does not come easily: the current PBNM architecture can only address fairly limited issues and usually requires human intervention. Mobile agents, as an enabling technology, can resolve many of these problems. The mobile agent paradigm aims to bring increased performance and flexibility to distributed systems by promoting "autonomous code migration" (mobile code moving between places) instead of traditional remote procedure calls (RPC) [3]. With code migration, the actual code or script moves from place to place and executes locally, achieving lower latency, little need for remote interaction, and highly flexible control. Mobile agents can easily represent any of the roles involved in the CIS and the underlying network management, such as cooperative partner, domain manager, service provider, connectivity provider, resource or end-user, and act on their behalf based on established policies. Mobile agents are widely used in IT, for instance in telecommunications and network management, as they can effectively take over the burden of complex interactions between different business/network players, such as negotiations or new service injection, so as to automate the system. Among the long list of mobile agent platforms developed [4], whether for academic or commercial purposes, Grasshopper [5] attracts particular attention from the IT and network management communities due to its conformance with the Object Management Group (OMG)'s Mobile Agent System Interoperability Facility (MASIF) [6], which allows interoperability of different mobile agent platforms and the deployment of mobile agents in CORBA environments.
In addition, the latest Grasshopper version is also compliant with the specifications of the Foundation for Intelligent Physical Agents
(FIPA) [7]. Furthermore, Grasshopper is a commercial product with extensive documentation and ongoing development, and is therefore well suited to commercially oriented CIS; a version for non-commercial use is also available. Despite its many practical benefits, mobile agent technology introduces significant new security threats from both malicious agents and malicious hosts. A great deal of research has addressed these problems [8], but most of it is theoretical and standalone, and has yet to be put into a real environment typical of mobile agent applications and their security problems, such as a commerce-oriented CIS. This paper gives a concrete solution to the mobile agent security problems that occur when mobile agents are used in a CIS. This paper aims to provide a CIS system, consisting of CIS-oriented service management middleware and CIS-sensitive network management middleware, that can quickly, automatically, intelligently and securely deploy and manage network-oriented CIS actions. The paper is organized as follows. Section 1, this section, describes the existing challenges of CIS and proposes middleware to cope with them. Section 2 describes the enhanced CIS architecture and one of its middleware components, the Service Management Middleware, whilst Section 3 details the design and implementation of the other, the Network Management Middleware, which is fundamentally based on policy management and technically enabled by mobile agents. The Mobile Agent Security Facility extension for Grasshopper is discussed in Section 4, taking into account both the mobile agent itself and the environment it runs on. The features covered by these two management middleware components are further integrated and verified in a practical scenario, inter-domain IP VPN (Virtual Private Network) configuration within a large cooperative information system, which is the subject of Section 5. Finally, Section 6 concludes the paper.
2. Enhanced CIS Architecture and Service Management Middleware
2.1 A CIS Architecture Enhanced by Middleware
In a large cooperative information system, the use of network services according to cooperative workflow requirements is inevitable. Providing some services may involve multiple domains within a large CIS and therefore requires support from the network layer, such as reconfiguration of routers. Here, a domain means a relatively autonomous system owning a self-contained network; several such domains can loosely constitute a CIS. All the services provided by the network are registered and managed by the Service Management Middleware (SMM), waiting to be used by the Workflow Management System (WMS), whilst the Network Management Middleware (NMM) provides the configuration, monitoring and management of network elements. The enhanced modular CIS architecture is illustrated in Fig. 1.
Fig. 1. Enhanced CIS Architecture with Middleware
The SMM aims to provide a generic service management framework sitting between the WMS and the NMM, enabling the network administrator to manage services in a user-friendly way; at the same time, APIs are furnished for the WMS to use these services. The services currently provided by the network are registered in the service database. If a service required by the WMS is not available in the service database, the SMM tries to negotiate with the NMM to obtain it. Based on monitoring results, the NMM decides whether it can provide such a service during the required period of time. If it can, the service is registered in the service database so that the WMS can use it. Otherwise, a suggestion, such as a period of time during which the service could be provided, may be proposed and transmitted to the SMM, which can pass this message on to the WMS. This functionality gives the workflow planning system in the WMS more flexibility and a greater chance of success, which in turn increases resource utilization. Once the WMS decides to use a certain service, a corresponding Service Providing Agreement (SPA) is signed and stored in the SPA repository, which can reside in the same physical database as the service registry. The purpose of introducing the SPA is bilateral: on one hand, the WMS has to pay for the service even if it fails to use it; on the other hand, the NMM is obliged to provide exactly the service defined in the SPA, in terms of bandwidth, delay, etc. This feature reflects the real world and enables cooperation between different enterprises. The implementation of service registration/deregistration is standard, and mechanisms such as yellow pages can be used; thus only SPA management within the SMM is discussed in the remainder of this paper.
2.2 Service Providing Agreement (SPA) Management System
An SPA basically defines which user or kind of user (i.e., user group) is given the privilege to run which application or kind of application (i.e., application group), on which server/router, at which time point or during which period of time. If
the underlying network supports QoS, user-level QoS parameters such as gold, silver and bronze can also be specified in the SPA. The SPA management system architecture is shown in Fig. 2.
Fig. 2. SPA Management System Architecture
QoS parameters are divided into two levels: the user level, which is technology-independent and easy for a consumer to understand, and the network level, which is technical. User-level QoS parameters include gold, silver and bronze, and administrators are free to add further parameters or delete existing ones. Of course, if a new user-level QoS parameter is introduced, the mapping between this parameter and the underlying network-level QoS parameters has to be given. Network-level QoS parameters include bandwidth, delay, jitter and packet loss, which are further mapped to different DiffServ behaviors such as Best Effort (BE), Assured Forwarding (AF) and Expedited Forwarding (EF). This mapping can be defined differently at different servers or routers in the network. SPA Control is the most important component because it provides the means by which the WMS defines and updates SPAs automatically. An SPA management GUI is also provided for the network administrator to view, edit and delete SPAs offline. A selected SPA is translated by the Policy Generating Module into one or more XML-based policies, which are handed (via the Policy Sending Module) to the policy- and mobile-agent-based network management middleware and enforced there, so as to provide the service required by the WMS.
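As a sketch of the path just described, the two-level QoS mapping and the Policy Generating Module's SPA-to-XML translation could look like the following. All concrete values, element names and function names are assumptions; the paper does not reproduce its XML schema or mapping tables.

```python
# Illustrative two-level QoS mapping (user level -> network level ->
# DiffServ per-hop behavior) and SPA-to-policy translation. Every
# concrete number and element name here is an assumption.
import xml.etree.ElementTree as ET

USER_TO_NETWORK = {
    "gold":   {"bandwidth_kbps": 2048, "phb": "EF"},
    "silver": {"bandwidth_kbps": 1024, "phb": "AF"},
    "bronze": {"bandwidth_kbps": 256,  "phb": "BE"},
}

def spa_to_policy(user, application, router, qos_level):
    """Translate one SPA entry into an XML-based policy string.
    A new user-level QoS class would require adding its mapping to
    USER_TO_NETWORK first; otherwise this raises KeyError."""
    net = USER_TO_NETWORK[qos_level]
    policy = ET.Element("policy")
    cond = ET.SubElement(policy, "condition")
    ET.SubElement(cond, "user").text = user
    ET.SubElement(cond, "application").text = application
    ET.SubElement(cond, "router").text = router
    action = ET.SubElement(policy, "action")
    ET.SubElement(action, "phb").text = net["phb"]
    ET.SubElement(action, "bandwidth_kbps").text = str(net["bandwidth_kbps"])
    return ET.tostring(policy, encoding="unicode")

print(spa_to_policy("alice", "video-conference", "edge-router-1", "gold"))
```

The actual policy syntax in the paper is an XML-schema extension of the IETF PCIM; this sketch only shows the shape of the translation step.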
3. Network Management Middleware (NMM) Powered by Policies and Mobile Agents
For the deployment of policy-based network management systems in the Internet, a standardization process is required to ensure interoperability between equipment from different vendors and PBNM systems from different developers. Both the
Internet Engineering Task Force (IETF) [9] and the Distributed Management Task Force (DMTF) [10] are currently working for the standards of Policy Based Network Management. The most important contribution of DMTF is that it has defined the Common Information Model (CIM) management schema. But the CIM is an information model for general management information and does yet have a direct relation with policy based network management. DMTF is also working in association with the IETF policy workgroup, to extend the CIM syntax and make the CIM usable in real PBNM systems. Policy Core Information Model Extensions [11] has also been defined by IETF, which means that IETF policy architecture has covered almost all the components required by a PBNM system. The IETF policy architecture gains more popularity and is followed in the policy based CIS middleware. The architecture of policy and mobile agent based NMM and its components are shown in Fig. 3, which is fully in line with the PBNM architecture proposed by IETF but with detailed design and implementation of every component. Other Management Station
Fig. 3. PBNM Architecture enabled by Mobile Agents (Policy Tools with Policy Receiving Module and Policy Editor GUI; Policy Repository accessed via JDBC; PDP Manager with Credential Check, Policy Parser and Resource Monitoring; PDPs and Sub-policies Generator; Secure MA Factory with Mobile Agents Generator; PEP Manager using COPS or SOAP; Mobile Agent Execution Environment; PEPs with Element Wrappers speaking SNMP, CMIP or IIOP to Cisco and Linux routers)
3.1 Policy Tools
Policy Tools provide the mechanisms to receive policies from the Service Management Middleware. Policies are specified in XML, and the extension of the Policy Core Information Model (PCIM) [11] defined by the IETF, in the form of an XML schema, is
238
Kun Yang et al.
used to express the syntax of policies. The eXtensible Markup Language (XML) is suitable for flexibly expressing data and their structure, and it has built-in syntax checking. The widely available XML parsers make it usable across heterogeneous platforms. Using XML to represent policies, and thus the cooperation requirement, can easily and greatly enlarge the scope of cooperation; XML is commonly regarded as one of the key technologies for data exchange across the Internet. Policy Tools mainly comprise the Policy Receiving Module and the Policy Editor GUI. The Policy Receiving Module is implemented as a static agent that receives XML-based policies from an upper management station, e.g. the SMM. A Policy Editor GUI is also designed and implemented to provide another means to input policies directly. The Policy Editor GUI provides a user-friendly way for the administrator to input some simple information, such as source and destination host names, and to select the QoS parameter required by the user, such as gold, silver or bronze; it then generates the XML-based policy automatically and stores it in the Policy Repository.

3.2 Policy Repository Components
The Policy Repository stores policies after they have been defined and validated by the policy management tool. The general IETF framework does not require a specific implementation of the policy repository or of the repository access protocol. In this paper, the SQL relational database management system PostgreSQL [12] is used for the policy database, which is connected to the Policy Tools and the Policy Decision Point (PDP) via Java DataBase Connectivity (JDBC).

3.3 Policy Decision Point (PDP)
The PDP is the component that retrieves policies from the repository, parses and evaluates them, and eventually sends the necessary commands to the policy target. Additionally, the PDP performs a local conflict check, covering only those devices that are controlled by the specific PDP. The PDP also checks whether the resources needed for a specific policy are available in all controlled devices. The main role of the PDP Manager is to coordinate the PDPs to support integrated scenarios and to resolve possible conflicts. The PDP Manager can also dynamically download PDP code, according to the availability of the code and the decision of the PDP. The PDP Manager serves as the coordinator of the PDPs, the Policy Parser and the Credential Check Module. It reads a policy from the policy database, parses it with the help of the Policy Parser, and then calls the Credential Check Module to check the validity of the policy user. The existence of the PDP Manager makes the whole policy management middleware extensible to future PDPs. The Credential Check Module is in charge of checking the privileges for services granted to any actor. Each actor involved in cooperation should also submit a credential. The Credential Check Module then takes this credential and looks in the meta-policy database for a meta-policy related to that credential, to check whether the intended management actions (policies) are available to the actor that presented the
credential. Finally, if the credential is correct, i.e., the actor has the corresponding privileges, these policies are passed to a PDP, e.g., the VPN PDP. Since management systems are typically hierarchical, policies can mirror this hierarchy, resulting in hierarchical policies when and where appropriate. The PDP Module, together with the Sub-policies Generator, can translate a higher-level policy into sub-domain level policies, using information from the monitoring service. After receiving the policies in an XML file that has passed the credential check, the PDP Module extracts the sub-domain level policies from the XML file. Then it needs to decide when the policy should be applied by looking at the policy's conditions, thus deciding whether it needs any information to make a decision. If so, it asks the Monitoring Service to register the condition to be monitored. Otherwise, it asks the Resource Monitoring Service whether there are enough resources to apply the policy. If the answer is positive, the policy is passed to the PEP Manager Module to be fulfilled. All policies are based on a fixed schema, so that they are understandable at different levels. The Resource Monitoring module, which exists in both the PDP and the PEP, receives registrations for resource monitoring according to the requirements of policies and makes sure that all registered resources can be monitored. If the necessary metering code (daemon) is not currently instantiated, it tries to download it by querying the specific resource monitoring service, and installs it. In this NMM, the PDP and PEP code takes the form of mobile agents, so that they can move themselves to wherever they are needed, which makes the whole structure more flexible and dynamic. Active network technology is another option for dynamically installing new functional modules on network elements, but it usually positions itself at the network layer.
Mobile agent technology can sit at any layer and provides a more flexible means to deploy new services, though it is less efficient than active packets. According to the given policy, the PEP Manager selects the PEP enforcement protocol, such as COPS (Common Open Policy Service) or SOAP (Simple Object Access Protocol), together with the parameters to be used to fulfil the policy. COPS and SOAP are two standard, well-implemented protocols for policy enforcement. They are based on a client-server structure and thus are not as flexible and adaptable as mobile code technology, pioneered by mobile agents. In this paper, we evaluate the use of mobile agent technology, which is also exploited as a means for PDP and PEP code deployment, as another alternative for enforcing policy. Based on the parameters given by the PEP Manager, the Mobile Agent Generator can automatically generate the corresponding mobile agents, which migrate themselves to the specific PEPs to enforce the policy. Grasshopper is used as the mobile agent platform, and its agency constitutes the Mobile Agent Execution Environment (MA EE) shown in Fig. 3.

3.4 Policy Enforcement Point (PEP)
The policy target is the managed device, where the policy is finally enforced. A transport protocol is needed for communication between the PDP and the PEP, so that the consumer can send policy rules or configuration information to the target, or read
configuration and state information from the device. There is no specific protocol requirement for this operation. Apart from the COPS protocol, which is defined by the IETF Resource Allocation Protocol workgroup, mobile agent technology is mainly used for policy enforcement in this paper. In order for mobile agents to communicate with different controlled elements, such as Cisco routers and Linux routers, corresponding element wrappers need to be provided; these constitute the adaptation layer shown in Fig. 1. For instance, as long as the corresponding element wrappers for a WDM (Wavelength Division Multiplexing) network are given, this architecture can easily be adapted to the management of IP over WDM.
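The element-wrapper adaptation layer described above can be sketched as a small dispatch over device-specific wrappers. The class names, command strings and `enforce` helper below are hypothetical; a real wrapper would issue actual SNMP set operations or driver commands:

```python
# Minimal sketch of the element-wrapper adaptation layer: each wrapper
# translates an abstract enforcement request into a device-specific
# action. All names and returned strings are illustrative.
from abc import ABC, abstractmethod

class ElementWrapper(ABC):
    @abstractmethod
    def apply(self, policy: dict) -> str:
        """Enforce a parsed policy on the managed element."""

class CiscoRouterWrapper(ElementWrapper):
    def apply(self, policy: dict) -> str:
        # A real wrapper would issue SNMP set operations here.
        return f"SNMP set on Cisco: phb={policy['phb']}"

class LinuxRouterWrapper(ElementWrapper):
    def apply(self, policy: dict) -> str:
        # A real wrapper might drive Linux traffic control instead.
        return f"tc config on Linux: phb={policy['phb']}"

WRAPPERS = {"cisco": CiscoRouterWrapper(), "linux": LinuxRouterWrapper()}

def enforce(element_type: str, policy: dict) -> str:
    # A PEP mobile agent selects the wrapper matching its landing host.
    return WRAPPERS[element_type].apply(policy)
```

Supporting a new element type, e.g. a WDM switch as the text suggests, would mean registering one more `ElementWrapper` subclass, leaving the rest of the architecture untouched.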
4. Mobile Agent Security Facility (MASF) Extension for Grasshopper
Security is critical to cooperative information systems. Since the network service requirement of the CIS, based on the Service Providing Agreement and expressed by policies, is transported and, in particular, carried out by mobile agents, it is essential to assure mobile agent security. Grasshopper does provide a certain degree of security services, which mostly come from the security mechanisms supplied by Java 2 (mainly JDK 1.2). Based on these security services and an analysis of the security threats that may occur during the whole lifecycle of a mobile agent, which also reflect the security concerns of the cooperative process, a security extension of Grasshopper, called MASF (Mobile Agent Security Facility), has been developed to cope with these potential threats. MASF takes into account the potential security threats to both the mobile agent and the host the mobile agent intends to move to, making Grasshopper more secure; the result is therefore called s-Grasshopper, for secure Grasshopper.

4.1 Security Threats and Strategies
Considering the whole lifecycle of an MA and the cooperation process, the potential threats, from both the mobile agent and the host point of view, are:

Threat A: In the policy system architecture discussed above, the PDP Manager and PEP Manager can take the form of mobile agents so as to allow dynamic installation. These mobile agent classes are stored in the mobile agent repository, which might be invaded, and the class for a mobile agent might be changed before the agent is initiated.

During transit, a mobile agent is highly exposed to attacks, such as:

Threat B1: When a mobile agent transports confidential business information, disclosure of the data can be fatal. The migration of a mobile agent often takes place across networks that are beyond the control of both sender and receiver and thus cannot be physically secured.

Threat B2: The execution logic of a mobile agent might also be changed by an interceptor, leading to damage to the destination host of the mobile agent. This is especially dangerous in a fully automated CIS environment as illustrated in Fig. 1,
where mobile agents enabled to fulfill business tasks act on behalf of the business partners and are sometimes granted the right to configure routers or firewalls; in this case the Simple Network Management Protocol (SNMP), whose commands have root permission on the accessed elements, is used to configure the network elements. After a mobile agent's arrival at its destination, the following threats may arise:

Threat C1: The "destination" might not be the correct destination. It may be a counterfeit created by a business rival to steal the important information carried by the mobile agent.

Threat C2: Even if the destination is correct, a malicious destination host may still deceive the mobile agent. For example, the agent might not be provided with the contracted services or resources, or might even be maliciously changed before it goes on another hop to fulfill another part of the business transaction.

Threat C3: At the same time, the landing host of a mobile agent needs to be sure that the mobile agent comes from the correct service contractor and will not cause any damage to the host.

Threat C4: Even if the mobile agent does come from the correct peer, the host still needs to watch the mobile agent's behavior in case it does something beyond the SPA.

To address these threats, the MA platform used in the CIS middleware must provide the following strategies:

Authentication: checks that the agent was sent by a trustworthy role and also enables the mobile agent to learn the real identity of the receiver, i.e., the proper Service Level Agreement (SLA) contractor. Authentication mainly addresses Threat C1 and Threat C3. It can also be used to check users who want to access the mobile agent repository, which is involved in Threat A.

Confidentiality: implemented by encryption/decryption, copes with the potential data disclosure of Threat B1.
Encryption can also protect the mobile agent repository from attack, i.e., Threat A.

Integrity check: prevents the code modification attack on the mobile agent described in Threat B2.

Authorization: determines the mobile agent's access permissions to host resources and, backed by access control, can defeat the potential threat of C4.

Logging: a mechanism to keep track of security-relevant events, such as an agent trying to access system resources or the system itself, as well as authentication failures. These events should be logged to a file for later analysis. Logging can, to some degree, detect and thus later prevent a host cheating a mobile agent, as described in Threat C2.

The implementation of these features, for the protection of both mobile agent and host, is carried out in the Mobile Agent Security Facility (MASF) service.

4.2 Mobile Agent Security Facility (MASF) Architecture
Fig. 4 illustrates the architecture of the Mobile Agent Security Facility (MASF), together with the major dependencies among components during the lifecycle of a multi-hop mobile agent. The MASF architecture is functionally divided into two layers: the higher function layer and the lower base service layer. The components and services in the base service layer are common functionalities used by the function layer.
Fig. 4. Mobile Agent Security Facility (MASF) Architecture (function layer: Agent Receiving Module with Integrity, Authentication and Authorization Checks, Access Control, Rights Adjusting, Signer/Encryptor, Agent Sending Module over SSL, Secure MA Repository; base service layer: Cryptography Library, Key Management, Policy Management, Resource Usage Logging, Location Service; built on J2SDK 1.4.0 (JCE), IAIK iSaSiLk 3.04 (SSL), the jarsigner/keytool/policytool utilities and the Grasshopper Common API)
Obviously, many services of the function layer depend on cryptographic functions, based on either symmetric or asymmetric keys, to encrypt/decrypt and sign data. Therefore, MASF integrates a cryptography library in its base service layer. The Key Management service enables roles to administer their own public/private key pairs and associated certificates, for use in self-authentication (where the user authenticates himself/herself to other users/services) or in data integrity and authentication services using digital signatures. The authentication information includes both a sequence (chain) of X.509 certificates and an associated private key, which is usually referenced by an "alias". To achieve security, the MASF framework supports flexible security policies to govern the interactions of agents both with other agents and with the resources available at the execution sites. This function is performed by the Policy Management service in the base service layer. The definition and enforcement of appropriate security policies can only proceed after a precise identification of the principals, i.e., the roles that can be authenticated. The Resource Usage Logging service fulfils the logging requirement mentioned in the previous section. Although not specific to MASF, a location service is sometimes used by MASF to identify the role.

4.3 MASF Workflow
All mobile agent class code is stored in JAR files that are digitally signed either with jarsigner or via the Java API. Whenever the NMM wants to fulfill a network management task using mobile agents, it first has to get its signature verified before it can access the mobile agents stored in the protected agent repository. Then the NMM signs the agent to record the initiator, the first agent system, etc. The last step is to start the mobile agent. The NMM can also supply the agent with the necessary rights if no complex access control, i.e., security policy, is applied. When a mobile agent system, i.e., an Agency in Grasshopper, receives a mobile agent from the communication network via ATP (Agent Transport Protocol), it decrypts the agent and tests the integrity of the received data by checking the signature that the sender
has appended. After the integrity check has been passed successfully, the next step is authentication. The mobile agent system verifies the signature and certificates attached to the mobile agent and extracts information such as who wrote the mobile agent and who sent it, at the very beginning or at intermediate locations. This information can later be used for authorization and access control. This step involves the security database. Once the agent is authenticated, MASF authorizes it: it either takes the rights attached to the mobile agent or determines rights based on security policies defined in advance, i.e., via the SPA. Security policy based access control is more flexible, though it may cause some performance deterioration. The mobile agent can then be executed, under the supervision of access control, to carry out the network management task. When the mobile agent has finished its work and wants to migrate to another location, the mobile agent system stops the execution of the agent and packs it with its current state, as it normally does. Depending on the network management task, the Rights Adjusting module may be called at this moment to adjust the current rights of the mobile agent, e.g., to give it more rights at its next location. Then the Signer/Encryptor module is called by the Agency to sign the mobile agent, confirming the execution and any changes to the agent; encryption may be applied by this module as well. Finally, the Agency opens a communication channel to the new Agency (or Place) and sends the agent. The channel can be secured with the Secure Socket Layer (SSL).

4.4 Implementation Issues
Regarding implementation, MASF uses X.509 certificates for authentication, which ascertain the role of the agent principal before authorizing any interaction with resources. The management of certificates and other related administrative tasks has not yet been integrated, but a commercial Public Key Infrastructure (PKI) provided by Entrust [14] is expected to be used. Confidentiality is achieved by encrypting/decrypting communications with SSL. The Secure Socket Layer (SSL) provides a means of securing communication interchanges between agents; it authenticates and encrypts TCP streams. The protocol is popular and appears to have become a standard for secure network communication, and a number of third-party implementations are available in Java, e.g., IAIK SSL [15]. The integrity check can employ either MD5 or SHA-1, which are fully provided in J2SDK 1.4.0. Access control of agent actions can be performed using the Java access control mechanism. Furthermore, three tools in J2SDK 1.4 are used: keytool creates public/private keys; displays, imports and exports certificates; and generates X.509 certificates. jarsigner signs JAR (Java Archive Format) files. policytool creates and modifies the external policy configuration files of a role such as a service provider.
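The integrity check described above (a digest over the agent's code, recomputed by the receiving agency) can be sketched as follows. The helper names `package_agent` and `integrity_ok` are hypothetical, and MASF actually works with signed JAR files rather than bare digests; SHA-1 is used here to match the paper, though both MD5 and SHA-1 are considered cryptographically broken today:

```python
# Digest-based integrity check, modelled on the MASF step where the
# receiving agency recomputes the hash the sender appended before
# executing the agent. Helper names are illustrative.
import hashlib

def package_agent(code: bytes) -> dict:
    """Sender side: attach a digest to the agent's code."""
    return {"code": code, "digest": hashlib.sha1(code).hexdigest()}

def integrity_ok(package: dict) -> bool:
    """Receiver side: recompute and compare before accepting the agent."""
    return hashlib.sha1(package["code"]).hexdigest() == package["digest"]

agent = package_agent(b"class VpnPep { /* PEP agent code */ }")
assert integrity_ok(agent)

# Modification in transit (Threat B2) is detected:
agent["code"] = b"class VpnPep { /* tampered */ }"
print(integrity_ok(agent))  # False
```

In the real system the digest is covered by a digital signature, so an attacker cannot simply recompute it after tampering.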
5. Case Study: IP VPN Configuration between Two Cooperative Information Domains
Based on the middleware-enhanced CIS architecture given above, this section presents a case study to evaluate the architecture: safe inter-domain IP VPN provisioning within a largely distributed CIS, as shown in Fig. 5.
Fig. 5. Inter-domain IP VPN Configuration Case Study (the WMS issues a service requirement to the SMS (SMM); XML policies and mobile agent code are downloaded over SSL into CIS domains A and B, where Linux machines running s-Grasshopper use SNMP to configure the Cisco routers at both ends of the IP VPN)
According to the workflow requirement, the Workflow Management System (WMS) hands a service requirement, i.e., setting up an IP VPN tunnel between its two sub-domains, to the Service Management Middleware (SMM) on the Service Management Station (SMS), which checks the availability of this service in the service registry. If the service is not currently available, the SMM transfers the requirement to the Network Management Middleware (NMM), which, with the knowledge of its own domain, can decide whether it can provide the service. If the networks cannot provide the service, the WMS either delays the requirement or finds another resolution. In this scenario, the required service is available in the SMS service registry, so the SPA (Service Providing Agreement) is signed and stored in the SPA database. At the same time, the SPA is translated into an XML-based policy following the PCIM schema, digitally signed, and transported to the sub-domain PBNM station. The sub-domain PDP (Policy Decision Point) manager can download the proper PDP (if it is not available at this sub-domain network management station) via SSL, in the form of a digitally signed mobile agent, to make the policy decision. After this, the selected and/or generated policies are handed to the PEP (Policy Enforcement Point) manager, which, also sitting on the sub-domain PBNM station, downloads the PEP code, e.g., for configuring the new IP VPN, according to the requirement given in the XML file. The PEP, also in the form of a (digitally signed) mobile agent, moves itself to the Linux machine, where it uses SNMP to configure the Cisco router so as to set up one end of the IP VPN tunnel. The same procedure takes place at the other end, completing the
IP VPN tunnel. At this point, the network service required by the WMS is satisfied and the workflow goes on, possibly using this secure channel for cooperation. Security mechanisms unavoidably cause performance deterioration. In order to achieve a better trade-off between security needs and required performance, the following guideline is adopted in the above scenario: agents in trusted environments, i.e., intra-domain, such as the private intranet of a division, may directly access resources after the authorization check, without using SSL; whereas agents moving in un-trusted environments, such as a network belonging to another autonomous system, i.e., inter-domain, have to pass the confidentiality and integrity checks in addition to authentication and authorization, i.e., using SSL.
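The guideline above amounts to selecting the set of required security checks by trust level; a minimal sketch (the function name is assumed):

```python
# Sketch of the security/performance trade-off: the checks an arriving
# agent must pass depend on whether it crossed a domain boundary.
def required_checks(intra_domain: bool) -> list[str]:
    if intra_domain:
        # Trusted environment (e.g. a division's private intranet):
        # lighter path, no SSL, only authorization.
        return ["authorization"]
    # Un-trusted, inter-domain path: full pipeline over SSL.
    return ["confidentiality", "integrity", "authentication", "authorization"]

print(required_checks(True))   # ['authorization']
print(required_checks(False))  # ['confidentiality', 'integrity', 'authentication', 'authorization']
```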
6. Conclusions

Despite the advantages of current cooperative information systems, wider application of this technology is greatly limited by the lack of flexibility and automation in large-scale cooperative information systems consisting of multiple autonomous systems. To cope with this problem, a CIS needs to take the underlying service and network features into account and needs the ability to use network services transparently. This paper presents a practical solution to this challenging problem by introducing two cooperative middleware layers: service management middleware and network management middleware. Both are based on policy-based network management technology, to obtain flexibility of management, and are enabled by mobile agent technology, to maintain automation. Furthermore, a Mobile Agent Security Facility (MASF), which supports a wide span of security mechanisms including authentication, integrity, authorization, confidentiality and logging, is provided to guarantee security during service and network management. Most of the work presented in this paper has been developed in the framework of the EU IST project MANTRIP [15]. A commercially oriented test-bed has been set up. As a case study, and also as a practical application in a big CIS environment, an IP Virtual Private Network tunnel between two geographically separated domains within a large-scale CIS is set up and configured automatically and securely using the service and network management middleware, which shows that a CIS enhanced with service and network management can operate with more flexibility, automation and security, and in a larger information system. Moreover, the policy-based network management system presented in this paper also covers most of the network management issues that commonly arise in another EU IST project, WINMAN [16]. This policy and MAT based network management system is intended to provide a ubiquitous network management system regardless of the underlying network resources, be it pure IP, as in the main scope of MANTRIP, or IP over WDM (Wavelength Division Multiplexing), as in the mainstream of WINMAN. Building on the service and network management middleware, the workflow management system is expected to gain more intelligence, so as to react reasonably and adaptively to the different service requirement results provided by the middleware, especially when the result of a service requirement is negative. Negotiation may be a way to meet this expectation.
Acknowledgements

This paper describes part of the work undertaken in the context of the EU IST projects MANTRIP and WINMAN. The IST programme is partially funded by the Commission of the European Union.
References

1. Batini, C., Giunchiglia, F., Giorgini, P., Mecella, M. (eds.): Proceedings of the 9th International Conference on Cooperative Information Systems (CoopIS 2001). Lecture Notes in Computer Science, Vol. 2172. Springer-Verlag, Berlin Heidelberg New York (2001)
2. Damianou, N., Dulay, N., Lupu, E., Sloman, M.: The Ponder Specification Language. In: Sloman, M., Lupu, E., Lobo, J. (eds.): Workshop on Policies for Distributed Systems and Networks (Policy 2001). Lecture Notes in Computer Science, Vol. 1995. Springer-Verlag, Berlin Heidelberg New York (2001)
3. Kotz, D., Gray, R.S.: Mobile Agents and the Future of the Internet. ACM Operating Systems Review, Vol. 33, Issue 3. ACM Press, New York (1999) 7-13
4. Mobile agent system list: http://mole.informatik.uni-stuttgart.de/mal/mal.html, 1999
5. Grasshopper website: http://www.grasshopper.de, 2002
6. Milojicic, D., Breugst, M., Busse, I., Campbell, J., et al.: MASIF: The OMG Mobile Agent System Interoperability Facility. In: Rothermel, K., Hohl, F. (eds.): Proceedings of the 2nd International Workshop on Mobile Agents. Lecture Notes in Computer Science, Vol. 1477. Springer-Verlag, Berlin Heidelberg New York (1998)
7. FIPA home page: www.fipa.org, 2002
8. Yang, K., Guo, X., Liu, D.Y.: Security in Mobile Agent System: Problems and Approaches. ACM SIGOPS Operating Systems Review, Vol. 34, Issue 1. ACM Press, New York (2000) 21-28
9. IETF Policy Workgroup website: http://www.ietf.org/html.charters/policy-charter.html
10. DMTF website: http://www.dmtf.org/
11. IETF PCIM extensions draft: http://www.ietf.org/internet-drafts/draft-ietf-policy-pcim-ext-08.txt, 2002
12. PostgreSQL website: http://www.postgresql.org, 2002
13. Entrust website: http://www.entrust.com, 2002
14. IAIK website: http://www.iaik.tu-graz.ac.at/, 2002
15. EC IST Project MANTRIP website: http://www.solinet.com/mantrip, 2002
16. EC IST Project WINMAN website: http://www.winman.org, 2002
Research on Enterprise Modeling of Agile Manufacturing

Hongxia Xu, Li Zhang, and Bosheng Zhou

Software Engineering Institute, Beijing University of Aeronautics and Astronautics, Beijing, China 100083
[email protected]
Abstract. In order to react rapidly to market changes, enterprise processes must be agile. In this paper, a multi-dimensional enterprise modeling method is presented, covering multiple views, lifecycle management and hierarchical modeling. This research provides a foundation for the development of an enterprise modeling support environment for agile manufacturing.
1. Introduction
Agile manufacturing is an advanced manufacturing technology whose background is the internationalization of economic development and the diversification and individuation of market requirements. Enterprises supporting agile manufacturing should have virtual and agile characteristics. The virtual characteristic describes the ability of several enterprises to cooperate dynamically in order to respond quickly to market requirements. The agile characteristic is RRS, namely reconfigurability, reusability and scalability. An enterprise model can describe the above characteristics formally and visually; moreover, based on the results of model analysis and optimization, enterprise reengineering can reduce risk. In this paper, an enterprise modeling method for agile manufacturing based on an extended COSMOS is presented, which has the following features:
1. It can support the foundation and enactment of a virtual agile enterprise, and the RRS features of a dynamic alliance.
2. It can describe the comprehensive enterprise process from different views, such as process, infrastructure, coordination and information.
3. It can support the total life cycle of the enterprise process, including planning and design, enactment, and maintenance.
4. It can support the dynamic adjustment and reengineering of the virtual agile enterprise process in response to changing market requirements, through the analysis and optimization of the process model at different stages.
5. It can support process reuse technology based on model components.
In a word, the development of enterprise modeling for agile manufacturing should be driven by market opportunity and able to respond quickly to user requirements.

Y. Han, S. Tai, and D. Wikarski (Eds.): EDCIS 2002, LNCS 2480, pp. 247-256, 2002. © Springer-Verlag Berlin Heidelberg 2002
2. Enterprise Model Architecture
In recent years, many methods for enterprise modeling of agile manufacturing have been presented by scholars at home and abroad. For example, many scholars have extended enterprise modeling architectures such as CIMOSA, ARIS, GIM and PERT to adapt them to the integration of different enterprises. Based on an analysis of the above modeling methods and the experience of cooperating with the China Aviation Industrial Corporation, the authors propose a three-dimensional enterprise modeling architecture for agile manufacturing, which comprises a view dimension, a lifecycle dimension and a hierarchy dimension, as shown in Fig. 1.
Fig. 1. Architecture of enterprise model for agile manufacturing (view-dimension: process, infrastructure, coordination and information models; lifecycle-dimension: plan, design, enactment, maintenance; hierarchy-dimension: virtual, real, business)
View-dimension describes the comprehensive character of the virtual agile manufacturing enterprise. It is based on the process model, combined with the infrastructure model, the information model and the coordination model. Lifecycle-dimension reflects the periodicity of enterprise modeling, which ranges from planning to maintenance. Hierarchy-dimension depicts the unique hierarchical structure of the virtual agile manufacturing enterprise: a virtual enterprise is built up dynamically by different cooperating enterprises, driven by market requirements, and it establishes its holistic model based on the local models of all cooperating enterprises.

2.1 Multi-view Model

A virtual agile manufacturing enterprise has the features of a common real enterprise, such as business processes, organization structure, resource support, and so on. An enterprise model can describe an enterprise formally and structurally. Modeling methodologies usually describe an enterprise from multiple views that reflect different aspects of the enterprise. In addition, a virtual enterprise has its own features. From the point of view of process, a virtual enterprise process is a set of many related processes; from the point of view of organization, a virtual enterprise is built up dynamically by different independent enterprises for one common market-competitive goal. To support the above characteristics, based on the COSMOS model proposed by Prof. Raymond T. Yeh, and after analyzing and comparing some typical methods, such as the ProVision model of the American company Proforma and the ARIS model proposed by Prof. A.-W. Scheer, we put forward an extended enterprise modeling model, EE_COSMOS, in which an enterprise model is a comprehensive model based on the process model and combined with the infrastructure model, the information model and the coordination model. Because an enterprise always has more than one business process, an enterprise model should include one infrastructure model, one comprehensive information model, one coordination model and more than one process model. These process models share the one infrastructure model and the comprehensive information model, and are coordinated by the coordination model. So an enterprise model can be described as a set with four elements:

EM = {{PM}, IM, CM, IFM}    (1)
Figure 2 illustrates the constitution of the enterprise model and the relations between its parts.

Fig. 2. Constitution of the EE-COSMOS model (the process models are combined with the coordination model, the comprehensive information model and the infrastructure model (organization/resource))
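As a concrete illustration of the four-element set EM = {{PM}, IM, CM, IFM} of Eq. (1), the composition might be sketched as follows; all class and field names here are hypothetical, not part of EE-COSMOS itself:

```python
from dataclasses import dataclass

# Illustrative sketch of Eq. (1): EM = {{PM}, IM, CM, IFM}.
# Names are hypothetical; EE-COSMOS prescribes only the four parts.

@dataclass
class ProcessModel:          # PM: one model per business process
    name: str

@dataclass
class EnterpriseModel:       # EM
    process_models: list     # {PM}: more than one process model
    infrastructure: str      # IM: organization/resource, shared by all PMs
    information: str         # IFM: comprehensive information model, shared
    coordination: str        # CM: coordinates the process models

em = EnterpriseModel(
    process_models=[ProcessModel("order handling"), ProcessModel("production")],
    infrastructure="IM (organization/resource)",
    information="IFM (comprehensive information model)",
    coordination="CM")
print(len(em.process_models))  # → 2
```

The point of the sketch is only structural: several process models, each referring to the same single infrastructure, information and coordination parts.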
Based on the EE-COSMOS model we designed the latest version of the Visual Process Modeling Language (VPML). VPML is a modeling language aimed at enterprise processes, and it combines good visual effect with formality. It includes three layers, as shown in figure 3. VPML describes such aspects of an enterprise as process, infrastructure, information and coordination by combining graphics with text. The graphic part supports the sequence, condition and loop control structures that are essential for a programming language, and so can describe the structure of a process more visually. The text part supports the attribute definition of the process objects.
Hongxia Xu, Li Zhang, and Bosheng Zhou
Fig. 3. Architecture of VPML (presentation layer: graphics describing the structure of the process and text describing the attributes of each object; syntax layer: the object primitives and the relations between them; semantics layer: the dynamic interpretation of process model execution)
The syntax layer describes the object primitives that can be built in enterprise models and the relations between them, and includes the syntax rules of the process model, the infrastructure model, the information model and the coordination model. The semantics layer describes the dynamic interpretation of VPML process model execution based on an event-driven mechanism and queueing theory. Thus a model built with VPML is visible and, moreover, can be interpreted and executed; that is to say, the model can be simulated, optimized and operated.

2.1.1 Process Model

The process model depicts all activities in an enterprise process, the resources required, the input and output products, the management control over activities, and the relations between activities. Since several businesses are usually carried out simultaneously, an enterprise model should include several process models. Each process model can be described as a connected set that includes activities and the relations between them. So each process model (PM) can be described as follows:

PM = <A, P, R, Input, Output, Support, Control>
(2)
Here PM denotes the process model; A the activity set; P the product set; R the resource set; Input the input relation between activities and products; Output the output relation between activities and products; Support the relation between activities and resources; and Control the controlling relation over the model. Each activity has its predecessors and successors, the supporting conditions required for it to run, its input conditions, and the output results after execution. A virtual enterprise process includes two parts. One part is the special process of the virtual enterprise itself, which involves the cooperation, control and restrictions between the different enterprises. The other part consists of the processes of the cooperative enterprises, describing those processes of each cooperative enterprise that are concerned with the virtual enterprise.
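The tuple PM = <A, P, R, Input, Output, Support, Control> of Eq. (2) can be sketched as plain data; the sample activities, products and resources below are hypothetical, and only the element names come from the text:

```python
# Illustrative encoding of Eq. (2): PM = <A, P, R, Input, Output, Support, Control>.
# The relations are sets of pairs; all concrete entries are made up for the sketch.

PM = {
    "A": {"design", "machine", "assemble"},            # activity set
    "P": {"drawing", "part", "product"},               # product set
    "R": {"CAD tool", "lathe", "team1"},               # resource set
    "Input":   {("machine", "drawing"), ("assemble", "part")},   # activity consumes product
    "Output":  {("design", "drawing"), ("machine", "part"),
                ("assemble", "product")},                        # activity produces product
    "Support": {("design", "CAD tool"), ("machine", "lathe")},   # activity uses resource
    "Control": set(),                                            # management control relations
}

def predecessors(pm, activity):
    """Activities that output a product this activity takes as input."""
    needed = {p for (a, p) in pm["Input"] if a == activity}
    return {a for (a, p) in pm["Output"] if p in needed}

print(predecessors(PM, "machine"))  # → {'design'}
```

With such an encoding, each activity's predecessors and successors fall out of the Input and Output relations, as the text describes.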
Research on Enterprise Modeling of Agile Manufacturing
2.1.2 Infrastructure Model

The infrastructure model (IM) depicts the organization structure and the resources held by the enterprise, together with the relations between them, and provides the resources required by the activities of the process models. IM can be described as:

IM = <OM, RM>
(3)
The organization model (OM) is a hierarchical relation diagram that describes an organization and its different levels of subordinate organizations. An organization can possess manpower and non-manpower resources. The resource model (RM) depicts in detail the resources held by the organization units at all levels. In fact, OM and RM describe the infrastructure of the same enterprise from two different views.

Flexibility of the interior organization and dynamic alliance of exterior enterprises are characteristics of an agile manufacturing organization. The organization structure mainly includes two levels: the upper level is the virtual enterprise, composed of several virtual groups, and the lower level is the team. Each virtual group is composed of teams that come from several cooperative enterprises. Driven by market opportunities and user requirements, a virtual enterprise is formed by several partners that intercommunicate using advanced network technology; one core partner is in charge of the selection and coordination of the other partners. A team is composed of several basic organization units based on a certain business or production. That is to say, a project team should be configured according to the business of the enterprise, and each team is responsible to its users rather than to its superior. A project team means more than a closed department: the resources held by one team may vary between processes, and one resource may belong to several businesses at the same time. A project team is not only autonomous but also cooperates with others regardless of its level. Thus the organization is dynamic and flexible. In the latest version of VPML there are five sub-resource types, which describe the person roles, application tools, instruments (machines), physical locations and groups of an enterprise respectively.
Fig. 4. Framework of Resource & Organization (the resource type specializes, by inheritance, into Role, Machine, Location, Tool and Group; a Group aggregates resources in a one-to-many relation)
A group is a set of all sorts of resources; it is typically held by one organization unit or one project team and, moreover, it can include other groups. Thus the structure of groups is hierarchical.
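The hierarchical group structure described above can be sketched as a recursive composite; the class and resource names are illustrative only:

```python
# Hypothetical sketch: a group holds resources (roles, machines, locations,
# tools) and may also contain other groups, so the structure is hierarchical.

class Group:
    def __init__(self, name, resources=(), subgroups=()):
        self.name = name
        self.resources = list(resources)
        self.subgroups = list(subgroups)

    def all_resources(self):
        """Flatten the hierarchy: resources of this group and of all subgroups."""
        found = list(self.resources)
        for g in self.subgroups:
            found += g.all_resources()
        return found

team = Group("team1", resources=["engineer", "lathe"])
virtual_group = Group("vg1", resources=["coordinator"], subgroups=[team])
print(virtual_group.all_resources())  # → ['coordinator', 'engineer', 'lathe']
```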
Since all process models of an enterprise are supported by only one infrastructure, it is important to describe how one resource can be shared by more than one process yet used at different times. To resolve resource conflicts, we define a resource from two aspects: capability and usability. Capability is described through efficiency, cost, number, and so on. Usability is described through status, such as sharable, local, idle, usable or busy, and is involved in the control and assignment of resources. The same resource can be assigned to different processes in the modeling environment, but cannot be used by them simultaneously in the simulating and executing environments. A sharable resource can be used by a process only when its status is not busy, namely, when no other process is using it at that time.

2.1.3 Coordination Model

A virtual agile manufacturing enterprise is an extended enterprise: from order transmission to product maintenance, not only the interior organization structure but also the enterprise's cooperators and competitors affect its business processes. So the coordination model (CM) describes enterprise strategy, business policy and management policy and, furthermore, the cooperating and constraining relations within and external to the enterprise. The following expressions illustrate it:

CM = <Rdecision, Rinteraction, Rcommunication>
(4)
Rdecision = {gi(parameterj)}, i = 1, 2, …, m; j = 1, 2, …, n
Rinteraction = <Rconsumer, Rcooperation, Rcompeting>
Rcommunication = <Rdirect, Rindirect>

Rdecision is a multi-factor, multi-goal system; moreover, it is an open system that can be customized based on consumer requirements. Rinteraction describes the interaction between interior and exterior organizations; the interaction styles include supply chain, subcontract, joint venture, virtual cooperation, and so on. In the latest VPML we extend the meaning of source products and non-source products, so supply chain management can be executed through the business interaction model. Rcommunication describes the constraints among interior organization units, which include direct and indirect communication relations. The communication model is complementary to the organization model.

2.1.4 Information Model

The information model (IFM) describes the products and intermediate products that are produced or used during the enterprise process, the relations between them, and all sorts of data that need to be processed in process simulation and operation. IFM is the foundation for information integration, together with communication technology, database technology and PDM technology. It is impossible to realize information sharing between different application systems without a comprehensive information model.
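A minimal sketch of the coordination model tuple of Eq. (4), with hypothetical decision rules and relation entries (nothing here is prescribed by VPML itself):

```python
# Illustrative sketch of Eq. (4): CM = <Rdecision, Rinteraction, Rcommunication>.
# All concrete rules, styles and unit names are hypothetical placeholders.

CM = {
    # Rdecision = {g_i(parameter_j)}: an open, customizable set of decision rules
    "Rdecision": {
        "accept_order": lambda cost, budget: cost <= budget,
    },
    # Rinteraction = <Rconsumer, Rcooperation, Rcompeting>
    "Rinteraction": {
        "consumer": [],
        "cooperation": ["supply chain", "subcontract"],
        "competing": [],
    },
    # Rcommunication = <Rdirect, Rindirect>: constraints among interior units
    "Rcommunication": {
        "direct": [("team1", "team2")],
        "indirect": [],
    },
}

print(CM["Rdecision"]["accept_order"](80, 100))  # → True
```

Because Rdecision is just a set of parameterized functions here, it stays open: new goals can be added or customized per consumer without touching the other two relations.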
2.2 Lifecycle-Dimension Model

Agile manufacturing has four phases during its whole business process: market requirement analysis and planning, design, enactment, and maintenance. This is an iterative process, and the lifecycle-dimension model describes the whole procedure as shown in figure 5.

Fig. 5. Agile manufacturing lifecycle-dimension model (PLAN: analysis of market opportunities and existing competition advantages, market strategy planning, selection of cooperative enterprises; DESIGN: building the virtual agile enterprise model, simulating and optimizing the model, recombining core resources, testing the model; ENACT: mapping the model to a practical system, building a unified data source, training; MAINTAIN: practical application of the virtual enterprise, process monitoring and control, tracing, knowledge-base maintenance and update, dissolution of the virtual agile enterprise)
During the plan phase, based on the analysis of market opportunities and user requirements, the market strategy goal is established, and based on the evaluation of core resources and competition advantages, cooperative enterprises are selected. During the design phase, the multi-view model of the virtual agile enterprise is built, covering process, information, infrastructure and coordination; the model can be simulated and optimized, and supports the reengineering of the enterprise. During the enactment phase, the multi-view model is practically realized, a unified data source and data interfaces are built, and training is carried out before delivery. During the maintenance phase, the virtual agile enterprise system is applied in practice, the process is monitored and controlled, and production is traced. After the dissolution of the virtual enterprise, experience can be summarized and the knowledge base updated.

2.3 Hierarchy-Dimension Model

The hierarchy-dimension model describes the special recursive hierarchy structure of agile manufacturing, as shown in figure 6. Any enterprise achieves its business goal by combining a series of related processes, and so does agile manufacturing. The distinction is that the process sets of an agile virtual enterprise come from different real enterprises, which yields an agile, open and distributed architecture.
The business (process) layer describes all activities of each real enterprise that are involved in the agile virtual enterprise, the resources required, the input and output products, and the logical relations between activities. The real enterprise (task) layer describes all tasks of the cooperative real enterprises after process optimization and recombination; a task is a functional description of a series of correlative processes, undertaken by an independent enterprise organization. The virtual agile enterprise (project) layer describes all projects taken on by the virtual enterprise based on market opportunities and user requirements; a project is a set of tasks controlled by constraining conditions.

Fig. 6. Hierarchy structure of the virtual agile enterprise (virtual agile enterprise (Project) – real enterprise (Task) – business (Process))
The processes coming from each real enterprise are the foundation of the virtual agile enterprise: agile manufacturing accomplishes its business goal by effective integration of processes. Thus enterprise modeling of agile manufacturing can be supported by process component technology.

2.4 The Relations between the Different Dimension Models

The multi-view model describes the enterprise from different views and should support the total lifecycle of the enterprise process. That is, the process-driven multi-view model can not only be simulated but also enacted; moreover, the model can be analyzed and optimized at different periods. As shown in figure 7, during the plan phase, market requirements are analyzed and the current ability of the enterprise is determined by defining and building the multi-view model of the original processes, so that the gap can be identified and the market strategy and tasks planned. During the design phase, partner enterprises are selected; first the local multi-view model of each enterprise is built, analyzed and optimized to meet the requirements of agile manufacturing; second, these local enterprise models are coordinated and synthesized to form the global enterprise model of agile manufacturing. During the enactment phase, the physical environment of the multi-view model is realized: based on the projects and tasks assigned, the partner enterprises organize teams, supply supporting resources, and so on. During the maintenance phase, the enterprise model of agile manufacturing is applied, the multi-view model is monitored and controlled and, according to the methods of ABC and ABM, the assignment of benefit and risk among the partners is confirmed. Based on the tracing of enterprise model application, the structure of the virtual enterprise can also be reconfigured: on one hand, a partner can reconfigure its own enterprise model; on the other hand, the virtual enterprise can reconfigure its partners and its enterprise model. After dissolution, modeling methods and new requirements can be picked up, which will induce a more perfect model.

In extended VPML, the model supports such techniques as abstraction and concretion, decomposition and aggregation, so the multi-view model can be applied to modeling at different hierarchy levels. At the level of the virtual agile enterprise, the multi-view model describes the holistic model of agile manufacturing, including the projects, the virtual groups coming from different partners, and the constraining conditions and coordination between different partners. At the level of the real enterprise, the multi-view model describes the processes of a partner that are relevant to agile manufacturing, including the tasks assigned, the teams that support those tasks, and the information flowing between the tasks. At the bottom level, the multi-view model depicts in detail the activities that can be executed directly and cannot be concretized further, the roles or equipment that support the execution of each activity, and the inputs and outputs between an activity and its preceding or succeeding activities.
Fig. 7. Modeling the virtual agile enterprise during the total lifecycle (plan: market requirement analysis, current enterprise modeling, difference analysis; design: partner enterprise modeling, virtual enterprise modeling, partner model analysis and optimization, enterprise model analysis and optimization; enactment: apply the virtual enterprise model; maintenance: model monitoring and control, new requirement acquisition)
3. Conclusion
This paper presents an enterprise modeling architecture for agile manufacturing. The multi-view model integrally describes the enterprise from the points of view of process, infrastructure, coordination and information. Lifecycle modeling supports the full evolution process of the enterprise. Hierarchy modeling reflects a modeling method ranging from the global to the local to the particular. VPML, developed by the Software Engineering Institute of Beijing University of Aeronautics and Astronautics, can support this kind of enterprise modeling methodology. In future work we shall carry out deeper research on the basis of this architecture and develop an integrated modeling support environment.
References

1. A.W. Scheer: ARIS – Business Process Frameworks. Springer-Verlag, Berlin, 1998
2. Fan Yushun, Wang Gang: Introduction to Enterprise Modeling Theory and Methodology. Tsinghua University Press, 2001.10
3. Dai Yiru, Yan Junwei: Modeling Technology for Virtual Agile Manufacturing Enterprise. Journal of Tongji University, 2001.11
4. Raymond T. Yeh, Whit Knox: An Integrated Approach to Business Process Reengineering. Draft
5. Tan Wenan, Zhou Bosheng: Integrated Enterprise Model Architecture Based on Process-Driven. Computer Engineering and Application, 2001.12
6. Wang Lei, Zhou Bosheng: The Study of Enterprise Models. Computer Engineering and Application, 2001.12
7. Zhou Bosheng: Discussion of the Polymorphism of Complex Enterprise Process Models. 2nd Annual Seminar of Process Engineering and Integrated Technology, Beijing, 1999.8
8. Zhou Bosheng, Zhang Li: Discussion on Virtual Enterprise Resource Planning Systems. Academic thesis of a National Nature Science Foundation project, 1999.12
9. Zhang Li, Xu Hongxia, Zhou Bosheng: Enterprise Modeling and Its Flexible Support Environment. 1st CENNET Workshop on Digital Manufacturing and Business, 2002.4
HCM – A Model Describing Cooperation of Virtual Enterprise

Yan Zhang and Meilin Shi

Department of Computer Science and Technology, Tsinghua University
{zyan,shi}@csnet4.cs.tsinghua.edu.cn
Abstract. Based on the three most important characteristics of cooperation in a virtual enterprise, namely that it is hierarchical, dynamic and self-organizational, we propose a Hierarchical Cooperative Model (HCM) to describe the overall cooperative relations and cooperative ways in a virtual enterprise. A four-tuple is used to formally describe a cooperative actor, in which a limited Markov chain describes the dynamic trend of cooperation. Finally, we propose a mixed strategy to implement cooperation at the collaborative level, the second logical level of HCM. HCM can describe cooperative relations both statically and dynamically and is able to predict the overall cooperation status in a virtual enterprise, which is very helpful for system decision making.
1 Introduction

Globalization of trade and customization of products have greatly changed both the macro and micro competitive environments of enterprises, in which geographically dispersed enterprises often form short-term or long-term business coalitions to meet various market objectives quickly and flexibly. Advances in modern information technologies, such as the Internet and Workflow Management Systems (WfMS), have made it possible for the enterprises in a business coalition to cooperate with each other. This kind of business coalition is called a virtual enterprise (VE), the product of both economic drive and technical support. One of the key technologies to guarantee the agility of a VE under network environments is Computer Supported Cooperative Work (CSCW). Unlike their lumbering traditional counterparts, VEs are so flexible that they can be of any size or type and can reconfigure themselves quickly and temporarily in response to variant market demands. Without the support of CSCW, enterprises cannot form alliances and collaborate with each other electronically. As cooperation is the soul of a VE, more and more researchers in CSCW and other fields are interested in how to support VEs better.

Researchers have been working on the problem of VE transaction modeling using different methods over the last ten years. Several typical approaches are as follows. Amjad Umar and Paolo Missier analyzed the interaction of VEs qualitatively and considered that VEs can be enabled by extending the existing and evolving electronic commerce infrastructure [1]. Hasan Davulcu et al. proposed a formal framework based on Concurrent Transaction Logic (CTL) for modeling and reasoning about interactions in a VE [2]. Jan Øyvind Aagedal et al. introduced some key enterprise modeling concepts from the Reference Model for Open Distributed Processing Enterprise Viewpoint to characterize the nature and conflicts of interactions between roles in an enterprise [3]. Justus Klingemann et al. adopted a continuous-time Markov chain to describe the outsourced services of a VE [4]. Panos K. Chrysanthis viewed the establishment of a VE as dynamically expanding and integrating workflows in decentralized, autonomous and interacting workflow management systems; he used ACTA, a first-order predicate logic formalism with a precedence relation, to specify his VE workflow model [5].

The work mentioned above has mainly focused on VE transaction modeling, and it has solved the problem of VE transaction description. However, previous work has not considered the characteristics of cooperation at different business layers, but has only generally discussed cross-organizational cooperation in VEs, and research on the description of the cooperation relationships and cooperation ways of a whole VE is far from adequate. In this paper, a Hierarchical Cooperative Model (HCM) is proposed to describe the overall collaborative relations of a VE from the viewpoint of VE cooperation. HCM aims to provide the overall cooperative status of a VE for enterprise managers, which is significant for decision making in the network economy age. The paper is divided into five sections. We first explain why a hierarchical structure is chosen, by analyzing the differences between cooperation across VEs and cooperation within a VE. Then in section 3 we give both a natural and a formal description of HCM. In section 4, an implementation strategy for HCM is discussed. Finally, there is a brief conclusion in section 5.

Y. Han, S. Tai, and D. Wikarski (Eds.): EDCIS 2002, LNCS 2480, pp. 257-266, 2002. © Springer-Verlag Berlin Heidelberg 2002
2 Why HCM Is Hierarchical

Cooperation is the soul of a VE. According to whether the cooperative partners belong to the same VE, cooperation can be divided into two types: cross-VE cooperation and inner-VE cooperation. As a VE is autonomous, each VE can be regarded as an autonomous domain; consequently, cooperation can also be divided into cross-domain cooperation and inner-domain cooperation. HCM is designed as a hierarchical structure because cross-domain and inner-domain cooperation are quite different. Although cooperation across VEs essentially amounts to building a new VE, the differences between the two kinds of cooperation make it necessary to deal with them separately. Cooperative partners across domains and within a domain differ in granularity of business process, resource share degree, information share degree, cooperation repetition frequency, group cooperation mechanism, management system, etc. Table 1 shows the differences between them.
Table 1. Comparison between Cross-domain Cooperation and Inner-domain Cooperation

Level of      Granularity of    Resources     Information   Cooperation  Group              Management
Cooperation   Business Process  Share Degree  Share Degree  Repetition   Cooperation        System
                                                            Frequency    Mechanism
Cross-domain  Thick             Low           Low           Lower        Central control    Heterogeneous
Inner-domain  Thin              High          High          Higher       Equal cooperation  Homogeneous
The granularity of business processes across autonomous domains is often thicker than that of business processes within the same domain. The business process of a VE can be decomposed into sub-processes at different business layers, and sub-processes can finally be decomposed into activities. Cooperative partners across domains usually accomplish business flows at a higher layer, such as the main flow or its direct sub-flows, which possess relative independence and integrity. Comparatively, entities inside a domain often cooperate to fulfill the tasks of business flows at a lower layer. Resource and information share degrees are closely related to the granularity of the business process. Commonly, as cooperation across domains has a thicker granularity of business process, its resource and information share degrees are lower than those of cooperation within a domain. For example, when a VE outsources one of the services of its main business flow to another VE, the two cooperative partners only need to exchange some basic control data and application data. The closer the cooperation is to the activity level, the higher the resource and information share degrees are; when cooperation is at the activity level, the quality and depth of resource and information sharing are both high, since the relations between activities are very close and most resources and information are communal. Cooperation repetition frequency mostly decides the cooperation term. A long-term business coalition often has a high cooperation repetition frequency, while temporary cooperation has a low one. Long-term cooperation relationships are usually built up inside a VE between entities with good mutual trust, whereas cooperation across VEs is often temporary and changes quite quickly due to dynamic market objectives. The group cooperation mechanism also differs between cooperation across domains and cooperation within a domain.
The former is usually initiated by one VE, which is active throughout the cooperation and responsible for organizing and coordinating the whole business process; so cooperation across VEs is under central control. The cooperative entities of the latter cooperate in an equal environment and fulfill their tasks according to the business process definition; there is no active side, and the entities are self-organized. Obviously, different VEs will probably adopt heterogeneous management systems, as they have different business requirements and historical backgrounds, but within a VE member enterprises often possess homogeneous management systems, or even share one management system. On the basis of the comparison above, the conclusion can be drawn that cooperation across a VE and within a VE has many significant differences that should not be neglected, as they were in the models of previous work. These differences cause different cooperative strategies and methods to be selected for cooperation across and within VEs. That is why HCM is designed as a hierarchical structure.
3 Description of HCM

3.1 Description in Natural Language
Fig. 1. Hierarchical Collaboration Model (HCM)
As shown in Fig. 1, collaboration in a VE is modeled as a hierarchical structure. The biggest circle represents a VE, which is an autonomous domain. The middle one represents the member enterprises of the VE. The smallest circle represents the workshops comprising a member enterprise of the VE. Considering the particular role of final consumers, we use a solid node to represent them; a final consumer can cooperate with any entity at each layer [6]. HCM is able to embody the three most important characteristics of collaborative work in a VE:

- Hierarchical
- Dynamic
- Self-organizational

Firstly, cooperation in a VE is hierarchical. The differences between cooperation across VEs and within a VE make it necessary to differentiate cooperation at different business layers and to select different strategies and methods for each layer. Secondly, the hierarchical structure is not static but dynamic, which means that partners at the same layer can group freely according to variable market goals; a partner at one layer can even join a team at another layer and cooperate with its team members. Thirdly, the workshops of an enterprise at the innermost layer are self-organizational: they can organize themselves into a cooperation structure automatically in response to sub-goals, which result from task allocation under resource restrictions. Cooperation among enterprise entities at different layers varies in coupling tightness. The characteristics of cooperation at each layer dictate that entities at an outer (higher) business layer are more loosely coupled than those at an inner (lower) layer. For example, entities at an outer layer may use heterogeneous management systems to manage their production and sales processes, while entities inside a VE may use homogeneous management systems or even share the same system. Cooperation across
domains may only require final results from the cooperative partners; cooperation inside a domain, however, may need not only final results but also intermediate data for process instance monitoring, control, data auditing, etc. According to the interaction involved, there are two typical cooperative ways in a VE, namely outsourcing and combination. If two enterprises are coupled by outsourcing, they have little interaction during the process in which the service provider executes the task requested by the service requester. If they are organized for combinative production, interaction is essential during their cooperation process.

3.2 Formal Description of HCM

Definition 1: The entities at different layers that comprise a VE are called cooperative actors, denoted by ai. To meet the needs of the market, the cooperative actors must cooperate with each other and utilize the available resources to accomplish the tasks allocated according to market demands.

Definition 2: The specification which describes cooperation between cooperative actors is a Service. It specifies the cooperative goal, cooperative behavior, cooperative lifetime, cooperative result, etc.

A cooperative actor ai can be formally represented by a four-tuple <id, t, L, Co>. Let id be an identifier of ai, which specifies a cooperative actor uniquely. Let t be a task which is undertaken by the cooperative actor. Let L be a row vector denoted by [L1, L2, …, Lm], where m is the number of cooperation layers of the VE; L1 represents the outermost layer, Lm represents the innermost layer, and the cooperation layer moves inward from L1 to Lm. From L we can know the location of ai in the cooperation by using some data coding method. If ai needs to shift from one layer to another, the new location is obtained by right-multiplying L by the transition matrix PL, and the values of L can be collected for analyzing the cooperation states at the different layers. Let Co be a p×q matrix, where p equals the number of cooperative partners and q equals the number of selected cooperative parameters. For example, let q = 3.
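Before the elements of Co are specified, the four-tuple and a layer shift obtained by right-multiplying L by a transition matrix PL can be sketched as follows; all concrete values, including the matrix PL, are hypothetical:

```python
# Hypothetical sketch of a cooperative actor a_i = <id, t, L, Co> with m = 3
# cooperation layers; L is a one-hot row vector locating the actor's layer.

def mat_vec_right(L, PL):
    """Right-multiply the row vector L by the matrix PL, i.e. compute L * PL."""
    m = len(L)
    return [sum(L[i] * PL[i][j] for i in range(m)) for j in range(m)]

actor = {
    "id": "a1",
    "t": "machining task",
    "L": [0, 1, 0],                            # currently at layer L2 of [L1, L2, L3]
    "Co": [["a2", "outsourcing", "svc-7"]],    # one row per cooperative partner
}

# This PL shifts an actor one layer inward (L1 -> L2 -> L3, innermost stays).
PL = [[0, 1, 0],
      [0, 0, 1],
      [0, 0, 1]]

actor["L"] = mat_vec_right(actor["L"], PL)
print(actor["L"])  # → [0, 0, 1]
```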
The row vector of Co is [aij, Couplingij, ServiceIDij], where aij is the id of the partner that cooperates with ai, Couplingij is the coupling way between ai and aj, and ServiceIDij is the id of the service that ai and aj should comply with. We introduce two limited Markov chains, whose transition matrices are Pa and Pcop, to describe how the cooperative structure of the VE changes when market objectives change. Any two enterprise entities either cooperate with each other or not; in other words, cooperation between them has two states. If there is some cooperation relationship between them, let the state value be 1; if there is no cooperation relationship between them, let the state value be 0. If the two entities have a cooperation relationship, then when they face new market requirements, the probability of retaining the cooperation relationship is p11 and the probability of giving up cooperation is p12.

EPr/TN [] is an extended predicate transition net which introduces inhibit arcs and erase arcs into the traditional Pr/T net. Eraser arcs may lead to the loss of tokens, while inhibit arcs prevent some transitions from firing; however, unlike eraser arcs, inhibit arcs preserve the whole number of tokens. Having been augmented with such rich semantics, its formal description capabilities are improved significantly and the degree of marking (node) explosion is controlled. With PESAT [] we can edit the net graph, check the syntax, analyse the reachability, and simulate and browse the operation, which forms the Petri-net-based description and analysis of protocols. Nevertheless, when we analyse the reachability of an EPr/TN net, we find redundant concurrent successor markings [], which are the adverse side effect of introducing inhibit arcs and erase arcs, where inconsistency of token placement causes conflicts between normal (input/output) arcs and inhibit arcs. In other words, since transitions are permitted to occur concurrently, the results of reachability analysis may include a great deal of meaningless status nodes. To manage this situation, it is necessary to add one mechanism
which introduces the concept of
time, that is to say, the principle of synchronisation. Therefore we propose one new type of EPr/TN net by adding new semantics in this paper. Since the occurrence of redundant concurrent successor markings is caused by the uncertainty of the transitions' firing order, the authors tried to introduce interlocks [] to amend it. In [], inhibit arcs were used as locks, which fully applied the complex semantics of the EPr/TN net while narrowing the flexibility of description. Here we propose one synchronised EPr/TN net system S

] we say they are equivalent, denoted by "≡". Two context relations of operations have been defined as follows []:

Definition (context-equivalent relation "Ù"): Given two operations Oa and Ob, Oa and Ob are context equivalent, expressed as Oa Ù Ob, iff DC(Oa) ≡ DC(Ob).
Intention Preservation by Multi-versioning in Distributed Real-Time Group Editors
Definition. (Context preceding relation "Ö"): Given two operations Oa and Ob, Oa is context preceding Ob, expressed as Oa Ö Ob, iff DC(Ob) ≡ DC(Oa) + [Oa], where "+" expresses the concatenation of two operation lists.

2.2 Transformation Functions and Properties

Operational transformation can be represented as graph transformation, such that its effect is visualised and easier to understand []. Basic notations are presented in Fig. 1. With the concurrent execution notation, Oa and Ob are concurrent and defined on the same context, although originally they may be generated from different contexts. We assume that all operations are enforced to execute in the causal order. Any two causally consecutive operations will not be transformed such that they are defined on the same context. With the consecutive execution notation, however, Oa may be causally concurrent with or preceding Ob, denoted by Oa || Ob and Oa → Ob respectively. We will use Oa ∘ Ob to denote the consecutive execution of Oa and Ob.
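As a toy illustration (our own sketch, not part of the paper), the two context relations can be modeled by representing a context DC(O) as the list of executed operation ids, with "+" as list concatenation:

```python
# Contexts are modeled as lists of executed operation ids; "+" is list
# concatenation, as in the definitions above (names are our own).

def context_equivalent(dc_a, dc_b):
    # Oa "context equivalent" Ob  iff  DC(Oa) = DC(Ob)
    return dc_a == dc_b

def context_preceding(dc_a, dc_b, oa):
    # Oa "context preceding" Ob  iff  DC(Ob) = DC(Oa) + [Oa]
    return dc_b == dc_a + [oa]

dc = ["O1", "O2"]
print(context_equivalent(dc, ["O1", "O2"]))              # True
print(context_preceding(dc, ["O1", "O2", "Oa"], "Oa"))   # True
```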
[Figure: graphical notations — an operation O; a context (document state) OR; a list of operations OL; the context-preceding relation Ö of operations and the equivalence relation ≡ of contexts; the effect of an operation Oa executed on context OR; the effect of concurrent execution of Oa and Ob (Oa Ù Ob) on context OR; the effect of consecutive execution of Oa and Ob (Oa Ö Ob) on context OR.]

Fig. 1. Basic notations for operational transformation
User intention preservation is achieved in all existing schemes by transforming an operation with respect to concurrent operations in order to permit its integration. This transformation was entitled differently: L-Transformation in adOPTed [8] and Forward Transposition in SOCT2 [9]. We will adopt the Inclusion Transformation used in REDUCE's GOT algorithm [12]. This is illustrated in Fig. 2(a). Two context-equivalent concurrent operations can be inclusion-transformed and then executed consecutively. In other words, an inclusion transformation function IT(Ob, Oa) transforms an operation Ob against a context-equivalent operation Oa in such a way that the impact of Oa is effectively included. Consider the earlier example: it is necessary to inclusion-transform an operation first against one and then against another concurrent operation at the executing site.

It has been shown that the following two properties, TP1 and TP2, of the transformation are the necessary and sufficient conditions for ensuring convergence (i.e. the final document states are identical at all participating sites) in multi-user environments [].

Definition. (Transformation property TP1): Oa ∘ Ob′ ≡ Ob ∘ Oa′, where Oa Ù Ob, Ob′ = T(Ob, Oa), Oa′ = T(Oa, Ob), and T is the transformation function.

Definition. (Transformation property TP2): For any O, T(T(O, Oa), Ob′) = T(T(O, Ob), Oa′).
Liyin Xue, Mehmet Orgun, and Kang Zhang

TP1 guarantees that the execution of concurrent operations in different orders will result in the same document state. TP2 ensures that the transformation of operation O along different paths (in the case of multiple concurrent operations) will yield the same resulting operation.
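As a concrete and deliberately naive illustration (the function names and the position-shifting tie rule below are our own, not the paper's), a character-wise inclusion transformation satisfies TP1 for inserts at different positions, while a same-position pair breaks it:

```python
# Character-wise Insert operations on a string, a minimal inclusion
# transformation, and a check of TP1 (function names and the tie-breaking
# rule are our own, not the paper's).

def apply(doc, op):
    _, pos, s = op
    return doc[:pos] + s + doc[pos:]

def it_insert(ob, oa):
    """Transform Insert ob against a context-equivalent Insert oa so that
    oa's impact is included; ties (equal positions) shift ob to the right."""
    _, pb, sb = ob
    _, pa, sa = oa
    return ("Insert", pb if pb < pa else pb + len(sa), sb)

doc = "abc"
oa = ("Insert", 1, "x")   # intended effect: "axbc"
ob = ("Insert", 2, "y")   # intended effect: "abyc"
left = apply(apply(doc, oa), it_insert(ob, oa))
right = apply(apply(doc, ob), it_insert(oa, ob))
print(left, right)        # axbyc axbyc: TP1 holds for this pair

# Two inserts at the same position are spatially conflicting; the naive
# tie-breaking rule then violates TP1 and the sites diverge:
oc = ("Insert", 1, "y")
l2 = apply(apply(doc, oa), it_insert(oc, oa))
r2 = apply(apply(doc, oc), it_insert(oa, oc))
print(l2, r2)             # axybc ayxbc: divergent
```

This is exactly the situation the multi-versioning part of the paper addresses: a correct scheme either transforms only spatially compatible pairs or accommodates the conflicting ones in different versions.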
[Figure: graphical depictions of the transformation functions: (a) inclusion transformation IT(Ob, Oa); (b) exclusion transformation ET(Oa, Ob); (c) the Transpose function, with Ob′ = ET(Ob, Oa) and Oa′ = IT(Oa, Ob′); (d) the LTranspose function, where Oi′ is a transformed form of Oi; (e) list inclusion transformation Oa′ = LIT(Oa, OL).]

Fig. 2. Transformation functions
In order to facilitate the transformation process, other functions were introduced in some of the existing schemes. In REDUCE, two consecutively executed operations (which were originally generated concurrently) can be transformed such that they are context equivalent, as is illustrated in Fig. 2(b). In other words, an ET(Oa, Ob) function transforms Oa against a context-preceding operation Ob in such a way that the impact of Ob is effectively excluded. Fig. 2(c) illustrates SOCT2's backward transposition function [9] and GOTO's transpose function [11], either of which swaps the execution order of two consecutive operations. Similarly, a list of operations OL can be transformed into a new list OL′ by calling the procedure LTranspose(OL), such that OL ≡ OL′ []. The procedure transforms and circularly shifts the operations in OL, i.e. its last operation becomes the first one (Fig. 2(d)). Furthermore, an operation O can be inclusion-transformed against a list of operations OL, i.e. LIT(O, OL) (Fig. 2(e)).

2.3 Intention Violation Problem in the Existing Schemes

All operations are enforced to execute in some total order at all participating sites in the GOT, SOCT3, and SOCT4 algorithms [], such that the convergence (identical state) of a document can always be guaranteed; therefore TP1 and TP2 are not required. The price paid for this is the undo/redo overhead in GOT and a central ordering mechanism in SOCT3 and SOCT4.

We observe that the above global serialisation approaches have one more serious problem, namely intention violation, in addition to the aforementioned overhead. This problem arises when multiple concurrent operations insert a character or string
at the same position or delete the same character or string. For example, two users find that there is a spelling error in "grup editor". If they concurrently insert the character "o" before the character "u", the combined effect will be "grooup editor", which satisfies neither of them. Some systems (e.g., Grove) automatically merge them such that the final result becomes "group editor". The problem is that it is difficult to design such an intelligent algorithm applicable to various situations. The above circumstances involve only two concurrent operations. If the target regions of multiple operations overlap, the intention violation problem becomes more serious, since they may have more complex temporal relationships. A specifically ordered and merged effect may satisfy none of the users. In fact, these operations may have different intentions, and the system does not have the knowledge of how to integrate them in such a way that the final effect will be agreed upon among the users. If concurrent operations are allowed to execute in any order, the operational transformation approach must ensure that different execution orders of properly transformed operations produce identical results, as in the adOPTed [8] and SOCT2 [9] algorithms. The complication is that the transformation functions must satisfy TP1 and TP2. The verification of the satisfaction of these properties is not trivial, and if it is not done, the convergence of the replicas cannot be guaranteed [11]. Furthermore, the aforementioned intention violation problem still remains. Concurrent operations targeting different parts of a text object will generally not cause a conflict as long as the operational transformation approach is employed; therefore they can be executed in any order. We observe that the aforementioned intention violation problem is caused by concurrent operations that are intended to modify the same region of the text.
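The "grup editor" scenario can be replayed with a few lines of illustrative code (the helper below is ours, not the paper's): a position-shifting transformation makes both sites converge, but to a combined effect that satisfies neither user:

```python
# Sketch of the "grup editor" example: two users concurrently insert "o"
# at the same position; shifting the remote insert past the local one makes
# the sites converge, but to "grooup editor".

def insert(doc, pos, s):
    return doc[:pos] + s + doc[pos:]

doc = "grup editor"
# Both users insert "o" before "u" (index 2).
site1 = insert(doc, 2, "o")      # local op: "group editor"
site1 = insert(site1, 3, "o")    # remote op, shifted past the local insert
site2 = insert(doc, 2, "o")
site2 = insert(site2, 3, "o")
print(site1, "|", site2)         # "grooup editor" at both sites
```

The outcome is convergent yet intention-violating, which is the distinction the paper draws between convergence and intention preservation.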
If we can prevent them from being applied to the same object, then the problem can be avoided. The multi-version approach provides such a solution. Generally, our approach to the convergence and intention preservation issues is to devise an algorithm that does not require any global order for the execution of concurrent operations, while the transformation functions satisfying TP1 and TP2 can be easily designed. At the same time, it also preserves each user's intentions.
3
Our Approach: An Integrated Operational Transformation and Multi-versioning Scheme
In object-based collaborative graphics editing systems, when two concurrent operations change the same attribute of the same object to different values, intention violation occurs. It is impossible for the system to accommodate such conflicting intentions in the same object. The only way to preserve the intentions of both operations is to make a new version of the object, and then apply the two conflicting operations to the two versions separately. The execution effect is that no conflicting operations are applied to the same version. This is the multi-version approach proposed in Tivoli [5] and GRACE [10]. Operations targeting different attributes of the same object can be considered as compatible. The effects of compatible operations can be applied to the same object
without causing intention violation. In a highly concurrent real-time cooperative editing environment, a group of operations may have rather arbitrary and complex compatible or conflict relationships; their combined effect can be expressed by a set of maximal compatible groups. Each maximal compatible group, denoted by mcg, contains a set of mutually compatible operations. For every two maximal compatible groups, there exists at least one pair of operations that are conflicting with each other. A maximal compatible group will not be a subset of any other.

Each version created is accordingly called a maximal version, denoted by MCG, which consists of a maximal compatible group and a unique version identifier. The identifier of any object version G, denoted by Vid(G), consists of a set of identifiers of operations, each of which either created G or has been applied to G and is conflicting with an operation that has been applied to some other version. A version is said to be a sub-version of another if the identifier of the former is a superset of that of the latter.

The multi-versioning scheme proposed in GRACE is only conditionally correct. An extended and generalised scheme, called the contextual intention oriented multi-versioning scheme, has been proposed in [15]. It supports unconstrained editing on the versions created.

The aforementioned conflict and compatible relationships, a combination of temporal and semantic relationships, are referred to as intentional relationships in the contextual intention oriented scheme. The semantic part is called the spatial relationship, which is application dependent. The following definition is for object-based collaborative graphics editors.

Definition. (Spatial relations): Given operations Oa and Ob generated in a collaborative editing session, Oa | Ob =def Tgt(Oa) ∩ Tgt(Ob) ≠ ∅ ∧ AttType(Oa) = AttType(Ob); Oa X Ob =def ¬(Oa | Ob). The relations | and X are called spatial conflict and spatial compatible, respectively. Here Tgt(O) denotes the identifier of the targeted object (i.e. the object which O is generated to target), and AttType(O) is the attribute type of the targeted object which O is intended to modify. Tgt(Oa) ∩ Tgt(Ob) ≠ ∅ means that the two operations' targeted objects share, or are derived from, the same base object.

In the contextual intention oriented scheme, any concurrent operations with the spatial conflict relationship are referred to as being in basic intentional conflict and are guaranteed not to be applied to the same version, such that intention violation will not occur.

Intentional conflict relationships (denoted by ⊗) are differentiated into direct conflict relationships (denoted by ⊗D) and indirect conflict relationships (denoted by ⊗I). The direct conflict relation is introduced to detect those operations that cause the creation of new versions. For any two operations, if there exists a pair of conflicting operations between their targeted versions (or generation contexts), or either of them intentionally conflicts with an operation in the other's generation context, then they are indirectly conflicting with each other. No new version will be created due to this indirect conflict. Given any two operations, if they are not intentionally conflicting with each other, they are intentionally compatible, denoted by ~. Formal definitions of the intentional relations can be found in [15]. For simplicity, we sometimes use "conflict (compatible)" and "intentionally compatible (conflict)" interchangeably in the sequel; the exact meaning should be clear from the context.
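A minimal sketch of this idea for graphics objects (an assumed representation, not the paper's code: an object is a dict of attributes, an operation an (attribute, value) pair). Concurrent operations changing the same attribute to different values are accommodated in separate versions, while compatible ones share a version:

```python
# Two concurrent operations setting the same attribute to different values
# conflict and yield two versions; operations on different attributes are
# compatible and merge into one version (representation is hypothetical).

def apply_concurrent(base, op_a, op_b):
    attr_a, val_a = op_a
    attr_b, val_b = op_b
    if attr_a == attr_b and val_a != val_b:
        va = dict(base); va[attr_a] = val_a
        vb = dict(base); vb[attr_b] = val_b
        return [va, vb]              # one version per conflicting intention
    merged = dict(base)
    merged[attr_a] = val_a
    merged[attr_b] = val_b
    return [merged]                  # compatible: a single shared version

rect = {"fill": "white", "width": 10}
print(apply_concurrent(rect, ("fill", "red"), ("fill", "blue")))  # 2 versions
print(apply_concurrent(rect, ("fill", "red"), ("width", 20)))     # 1 version
```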
Consider the editing scenario in Fig. 3(a). Although both O2 and O4 are generated to target the base object with the same identifier (but different states) and are spatially compatible, they are defined as indirectly conflicting, since there exists a pair of operations, O1 and O3, in their respective targeted versions (or generation contexts), O1 ∈ GC(O2), O3 ∈ GC(O4), which are directly conflicting with each other. The versions created are given in Fig. 3(b).
[Figure: (a) the editing scenario at sites i and j; (b) the intentional relationships (~, ⊗D, ⊗I) among the operations, with Tgt(O1) = Tgt(O2) = Tgt(O3) = Tgt(O4) = {OB}, and the versions created: MCG1 = <{O1, O2}, {OB, O1}> and MCG2 = <{O3, O4}, {OB, O3}>.]

Note: The first component of each version (either MCG1 or MCG2) is its maximal compatible group, and the second its identifier. For simplicity, we use a set of operations to denote the set of identifiers of the operations in the versions. OB is the operation that created the base object; {OB} represents the identifier of the base object.

Fig. 3. Direct and indirect conflicts
Apparently, the general principle of multi-versioning for graphics objects with independent attributes is applicable to the case of text object editing: if concurrent operations with overlapping target areas are accommodated in different versions, intention violations will not occur. The challenges here are how to define spatial relationships for text objects and how to integrate the operational transformation technique into the multi-versioning process.

4 Spatial Relations of Operations

We have observed that intention violation occurs when the target regions of concurrent operations overlap. All the possibilities of an overlap are exhaustively included in the following definition (note: for all the notations of relations defined for text objects, we add a subscript T in the sequel).

Definition. (Spatial conflict relation): Given any two operations Oa and Ob, where Tgt(Oa) ∩ Tgt(Ob) ≠ ∅, they are spatially conflicting with each other, expressed as Oa |T Ob, if Oa Ù Ob and one of the following holds:
(1) Type(Oa) = Type(Ob) = Insert and P(Oa) = P(Ob); or
(2) Type(Oa) = Type(Ob) = Delete and (P(Oa) ≤ P(Ob) < P(Oa) + L(Oa) or P(Ob) ≤ P(Oa) < P(Ob) + L(Ob)); or
(3) Type(Oa) = Insert ∧ Type(Ob) = Delete and P(Ob) ≤ P(Oa) ≤ P(Ob) + L(Ob); or
(4) Type(Oa) = Delete ∧ Type(Ob) = Insert and P(Oa) ≤ P(Ob) ≤ P(Oa) + L(Oa),
where Type(O) is the type of operation O (i.e. Insert or Delete), P(O) is the position parameter of operation O, and L(O) is the length parameter of operation O (for Insert it is the length of the string to be inserted; for Delete it is the number of characters to be deleted).

Definition. (Spatial compatible relation): Given any two operations Oa and Ob with Tgt(Oa) ∩ Tgt(Ob) ≠ ∅, if they are not spatially conflicting with each other, they are spatially compatible, expressed as Oa XT Ob.

If two operations spatially conflict with each other, then it is non-trivial to design transformation functions satisfying the transformation properties TP1 and TP2 []. Our solution to this is to guarantee that no version contains concurrent operations with the spatial conflict relationship, such that the properties can be easily satisfied and the operations can be freely transformed as long as they are arranged in accordance with the causal order. In other words, any transformation of an operation against spatially conflicting operations is avoided.

The aim of defining spatial relations is to prevent concurrent operations with such a relationship from being applied to the same object. However, since the object state to which an operation is to be applied at a remote site may be different from its generation context at its originating site, it is not certain whether it conflicts with any concurrent operations that have been applied to the object.

In order to determine the spatial compatibility of any two operations Oa and Ob, we first need to transform them such that Oa Ù Ob. However, whether an operation can be transformed against another depends on their spatial compatibility. It seems we are facing a dilemma; it is, however, possible to resolve it. Consider the relationships between a pair of dependent operations Oa and Ob with Oa → Ob, that is, Oa is causally preceding Ob (or Ob is dependent on Oa). Since they are generated from different contexts, the definition of spatial conflict is not directly applicable. If their target areas overlap, it may not be possible to exclusion-transform Ob against Oa, even if operations were allowed to execute out of the causal order. Fortunately, two dependent operations targeting the same object are defined as intentionally compatible no matter what spatial relationship they have. If both of them are concurrent with another operation Oc, GC(Oa) = GC(Oc), and Oa |T Oc, then the spatial relationship between Ob and Oc cannot be determined directly from the definitions, since Oc cannot be inclusion-transformed against Oa such that Ob Ù Oc. Nevertheless, with the contextual intention oriented scheme, we know that Ob and Oc are intentionally (indirectly) conflicting, because Oa conflicts with Oc and Oa is in Ob's generation context. Therefore it is not necessary to determine their spatial relationship. If Oa XT Oc, Oc can be inclusion-transformed against Oa such that Oc Ù Ob, and the spatial relationship between Oc and Ob can be easily determined based on the definitions.

For example, in Fig. 4, O1 and O2 are concurrent and share the same generation context, i.e. "abcd"; based on the definition of spatial conflict, they are spatially conflicting. O1 and O3 are also concurrent with each other, but with different generation contexts. O1 intentionally conflicts with both O2 and O3. O4 is spatially and intentionally compatible with both O1 and O2, but spatially conflicting with O3.
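Under our assumptions about the comparison operators lost from the definition above, the spatial conflict test for two context-equivalent text operations can be transcribed directly (operations are (type, position, length) triples; all names are ours):

```python
# Spatial conflict test for two context-equivalent text operations,
# following the overlap cases of the definition above (reconstructed
# comparison operators; this is a sketch, not the paper's code).

def spatially_conflict(oa, ob):
    ta, pa, la = oa
    tb, pb, lb = ob
    if ta == tb == "Insert":
        return pa == pb                                   # same position
    if ta == tb == "Delete":                              # ranges overlap
        return pa <= pb < pa + la or pb <= pa < pb + lb
    if ta == "Insert":                                    # ob is Delete
        return pb <= pa <= pb + lb
    return pa <= pb <= pa + la                            # oa is Delete

print(spatially_conflict(("Delete", 1, 1), ("Delete", 1, 2)))  # True
print(spatially_conflict(("Insert", 0, 1), ("Delete", 2, 1)))  # False
```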
[Figure: (a) an editing scenario at three sites on the initial object state "abcd": O1 = Insert("x"), intended effect "axbcd"; O2 = Delete(·), intended effect "acd"; O3 = Delete(·), intended effect "ad"; O4 = Insert("y"), intended effect "abcyd". (b) the spatial relationships (O1 |T O2, O3 |T O4; O4 XT O1, O4 XT O2) and intentional relationships (~T, ⊗DT, ⊗IT) among the operations.]

Fig. 4. Illustration of spatial and intentional relationships
With the introduction of multi-versioning, it is apparent that our solution to the problems of intention violation and divergence is not to design transformation functions that always satisfy the transformation properties regardless of the values of the parameters of the operations involved in the transformation. Instead, we aim at defining transformation functions that satisfy the properties with respect to a subset of their domains (values of the input parameters); they are guaranteed not to be applied to other values of the input parameters. In other words, inclusion and exclusion transformation of an operation against another operation can be completed only when they are not spatially conflicting with each other. Similarly, the Transpose function is applicable only when the two input operations are spatially compatible. With the LTranspose function, the following precondition is required: OL[|OL|] is spatially compatible with OL[i] for all i = 1, ..., |OL| − 1. With the above consideration, the design of the transformation functions becomes straightforward.
5 The Process of Transformation and Multi-versioning
An operation is said to be causally ready for execution if all the operations preceding it have been executed [12]. An operation can be executed only if it is causally ready. When executing operation Onew at a remote site, there may not exist an object with an identifier of Tgt(Onew), since concurrent conflicting operations may have been executed and new versions created. The identifiers of the versions are updated by including the identifiers of the operations involved in direct conflicts. Generally, the operation needs to be applied to a set, denoted by MCGS(Onew), of versions that are sub-versions of the targeted object or version; that is, for any version MCG ∈ MCGS(Onew), its identifier Vid(MCG) must be a superset of that of the targeted object: Tgt(Onew) ⊆ Vid(MCG).

The most significant characteristic of the contextual intention oriented scheme is the contextual intention preservation property, i.e. an operation will only be applied to versions containing its generation context. It is sufficient to apply the operation Onew to a set of target versions, each of which contains the whole generation context of the operation. The target version set, denoted by MCGS′(Onew), is a subset of MCGS(Onew). Therefore we need to reconstruct the operation's generation context
GC(Onew) to locate the target versions, and then apply Onew to each target version in order to create potential new versions.
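The superset test Tgt(Onew) ⊆ Vid(MCG) that selects the candidate sub-versions can be sketched as follows (the dict-based version store is an assumed representation):

```python
# Select every version whose identifier Vid(MCG) is a superset of the
# target identifier Tgt(Onew); the store layout is hypothetical.

def candidate_versions(versions, tgt):
    """versions maps a frozenset identifier Vid(MCG) to a version object."""
    return [v for vid, v in versions.items() if tgt <= vid]

versions = {frozenset({"OB"}): "v0",
            frozenset({"OB", "O1"}): "v1",
            frozenset({"OB", "O2"}): "v2"}
print(candidate_versions(versions, frozenset({"OB", "O1"})))  # ['v1']
```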
5.1 Reconstruction of Generation Context
The generation context of operation Onew can be reconstructed from MCGS(Onew), which can be easily located by comparing the identifiers of the current versions with Tgt(Onew). All operations in GC(Onew) causally precede Onew. However, they may be mixed up with other operations concurrent with Onew in the current target versions in MCGS′(Onew), whose effects must be excluded so that the generation context can be recovered. Therefore, for each version MCG belonging to MCGS(Onew), we need to divide its maximal compatible group mcg (we assume in the sequel that its operations are arranged in a list according to the causal order) into two parts, i.e. the preceding history list PR and the concurrent history list CC. All operations in PR causally precede Onew, denoted by PR → [Onew], and Onew is concurrent with all the operations in CC, denoted by [Onew] || CC. This process is similar to the operational transformation process in the GOTO algorithm [11]. It is illustrated with our graphical notations in Fig. 5(a).

[Figure: (a) the separation of preceding operations from concurrent ones: mcg ≡ PR + CC, with PR → [Onew] and CC || [Onew]; (b) the separation of compatible operations from conflicting ones: CC, defined after GC(Onew), is split into CP and CF, with E(Onew) = LIT(Onew, CP), CP ~T [Onew], and [Onew] ⊗T CF.]

Fig. 5. Operational transformations in the multi-versioning process
The major process involved in the above transformation is to shift (transpose) the preceding operations to the left of the list and the concurrent operations to the right, such that PR and CC are separated. Since we assume that concurrent operations with spatial conflict relationships cannot be applied to the same version, the transformation functions introduced previously are applicable. Applying this process to all the versions belonging to MCGS(Onew) will result in a set PRS of preceding history lists PRi, i = 1, ..., |MCGS(Onew)|. Since not all those versions contain the full generation context, we need to compare all the preceding history lists in order to obtain the generation context and ignore those versions that contain only part of the context. Onew will only be applied to the set of target versions that contain the full generation context, that is, MCGS′(Onew).
5.2 Determination of Intentional Relationships
There are three possible results of applying Onew to a target version MCG with a compatible group mcg. If {Onew} is compatible with mcg, MCG will be updated such that the new compatible group becomes mcg ∪ {Onew}. If Onew conflicts with all the operations in the mcg, then a new version will be created whose compatible group contains only one operation, i.e. {Onew}, in addition to the operation creating the base object. If Onew is compatible with some of the operations in the mcg, then a new version will be created whose compatible group contains {Onew} and all those operations compatible with Onew. Therefore we need to determine the intentional relationships between {Onew} and the operations in a maximal compatible group.

We have already transformed the mcg of any target version into two parts: the generation context of Onew and a concurrent list CC. All those operations contained in the generation context are intentionally compatible with Onew, and CC[1] is context equivalent to Onew. Now we need to determine the intentional relationships between Onew and the operations in the set of concurrent history lists of MCGS′(Onew), denoted by CCS, so that Onew can be properly executed.

If Onew is spatially compatible with all the operations in the list CC, then simply list-inclusion-transforming Onew against CC will obtain the execution form E(Onew) in the local version context. If Onew is spatially conflicting with all the operations in the list CC, then it is neither possible nor necessary to transform Onew against any one of the operations in CC.

Generally, we need to transpose each of the concurrent history lists into two consecutive lists, i.e. the compatible history list CP and the conflict history list CF, in terms of the intentional relationships between its operations and Onew. All operations in CP are compatible with Onew, denoted by CP ~T [Onew], and Onew conflicts with all the operations in CF, denoted by [Onew] ⊗T CF. In the meantime, Onew can be inclusion-transformed against CP such that its execution form E(Onew) and CF[1] are context equivalent, where E(Onew) = LIT(Onew, CP). This process is illustrated in Fig. 5(b).

With independent graphics objects having independent attributes, the spatial relationship between two operations can be determined immediately, without referring to other operations. In the case of text objects, however, it can be determined only when the two operations are defined on the same context, although our definition of spatial relations between two operations is independent of other operations. Therefore, in order to determine these relationships, operations must be transformed such that they are context equivalent. The process illustrated in Fig. 5(b) can only be completed step by step, as in the function CP_CF(L, O) → <CP, CF, DCFSet, EO> in Algorithm 1. Its technical details are explained as follows.

Since all operations in each CC are causally ordered, GC(CC[1]) ⊆ GC(Onew) must be true according to the contextual intention preservation property. From the definitions of intentional relationships, if Onew XT CC[1], then Onew ~T CC[1]. In order to check the intentional compatibility between Onew and CC[2], Onew must be inclusion-transformed against CC[1] such that TOnew Ù CC[2], where TOnew = IT(Onew, CC[1]); TOnew stands for "temporary Onew", an intermediate result during the transformation. This process can continue until a conflict is found or the list CC is exhausted.
Algorithm 1. CP_CF(L, O) → <CP, CF, DCFSet, EO>
Input: O is a causally ready operation; L is a list of operations arranged in accordance with both the causal order and the contextual order; O || L[i] for all i ∈ {1, ..., |L|}, and O Ù L[1].
Output: CP and CF are lists of operations, in the causal order, that are intentionally compatible and conflicting with O, respectively; DCFSet is the set of operations directly conflicting with O; EO is the execution form of O, with CP[|CP|] Ö EO and EO Ù CF[1].
{
  CP = []; CF = [];
  (CFSet and DCFSet store the sets of operations intentionally conflicting and directly conflicting with O, respectively)
  CFSet = {}; DCFSet = {}; EO = O;
  If |L| = 0 then return <CP, CF, DCFSet, EO>;
  For i = 1 to |L|:
    If CFSet = {}:
      If EO XT L[i] then EO = IT(EO, L[i]);
      Else CFSet = CFSet ∪ {L[i]}; DCFSet = DCFSet ∪ {L[i]};
    Else if ∃Ox (Ox ∈ CFSet ∧ Ox → L[i]) then CFSet = CFSet ∪ {L[i]};
    Else:
      n = |CFSet|; (the current number of conflicting operations)
      LTranspose(L[i − n, i]);
      If EO XT L[i − n] then EO = IT(EO, L[i − n]);
      Else CFSet = CFSet ∪ {L[i − n]}; DCFSet = DCFSet ∪ {L[i − n]};
  End for;
  n = |CFSet|;
  CP = L[1, |L| − n]; CF = L[|L| − n + 1, |L|];
  Return <CP, CF, DCFSet, EO>;
}
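A Python skeleton of Algorithm 1, under strong simplifying assumptions (operations are opaque ids; the predicates xt, it, and hb are supplied by the caller; LTranspose degenerates to a plain rotation because the stub transformations below do not change operations):

```python
# Skeleton of CP_CF: partition L into compatible (CP) and conflicting (CF)
# suffix lists, collect direct conflicts, and compute the execution form.
# xt(a, b): spatial compatibility; it(a, b): inclusion transformation;
# hb(a, b): a causally precedes b. All of these are caller-supplied stubs.

def cp_cf(L, O, xt, it, hb):
    L = list(L)
    cf_set, dcf_set, eo = set(), set(), O
    for i in range(len(L)):
        if not cf_set:
            if xt(eo, L[i]):
                eo = it(eo, L[i])
            else:
                cf_set.add(L[i]); dcf_set.add(L[i])
        elif any(hb(ox, L[i]) for ox in cf_set):
            cf_set.add(L[i])                      # indirect conflict
        else:
            n = len(cf_set)
            L[i - n:i + 1] = [L[i]] + L[i - n:i]  # LTranspose as a rotation
            if xt(eo, L[i - n]):
                eo = it(eo, L[i - n])
            else:
                cf_set.add(L[i - n]); dcf_set.add(L[i - n])
    n = len(cf_set)
    return L[:len(L) - n], L[len(L) - n:], dcf_set, eo

# Only "C1" conflicts; nothing is causally related; transformation is a no-op.
xt = lambda a, b: b != "C1"
it = lambda a, b: a
hb = lambda a, b: False
print(cp_cf(["A", "C1", "B"], "O", xt, it, hb))
# (['A', 'B'], ['C1'], {'C1'}, 'O')
```

The conflicting operation "C1" is rotated to the end of the list, so CP and CF fall out as a prefix and a suffix, mirroring the algorithm's final slicing step.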
Suppose CC[i] is the first operation that spatially conflicts with Onew; then we have TOnew ⊗DT CC[i] (⊗DT denotes direct conflict), where TOnew = LIT(Onew, CC[1, i − 1]) and CC[1, i − 1] denotes the list of operations in CC with indices from 1 to i − 1. Now TOnew cannot be inclusion-transformed against CC[i], so the rest of the list must be exclusion-transformed against CC[i], if that is possible, such that they are context equivalent to TOnew. If CC[i] → CC[i + 1], then by definition Onew is indirectly conflicting with CC[i + 1]. Suppose CC[j] (|L| ≥ j > i) is the first operation concurrent with CC[i]; then it must also be concurrent with all those operations between CC[i + 1] and CC[j − 1] (inclusive) in the list. Therefore CC[j] must be spatially compatible with all of them and can be LTransposed such that it circularly shifts to the position with an index of i in the list; otherwise they could not have been applied to the same version. Obviously, the transformed CC[j], with its index changed to i, is
context equivalent to TOnew. Their spatial relationship can be easily determined. If this operation is found to be spatially conflicting with Onew, then it (intentionally) directly conflicts with Onew. Otherwise, it is intentionally compatible with TOnew, and the latter can be inclusion-transformed against the former. This process can continue until the list is exhausted. Within Algorithm 1, we use the set CFSet to temporarily store the set of operations that have been found to be intentionally conflicting with Onew. Any operation causally following them in the list (L) is indirectly conflicting with Onew. All operations directly conflicting with Onew are kept in the set DCFSet, which is necessary for identifying the versions resulting from the execution of Onew. The order of the operations in both sets is irrelevant to the transformation. The result of applying Onew to a set of maximal versions sharing the same base object is a new version set consisting of a set of original versions (with their identifiers updated) and a set of new versions with compatible groups being compatible with Onew. Since a newly created version may not be a maximal version (that is, its compatible group may be a subset of that of some other version), it is necessary to remove non-maximal versions. This process is the same as the one in the multi-versioning of graphics objects with independent attributes [15].
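The final pruning step can be sketched as follows (hypothetical helper; compatible groups are modeled as plain sets of operation ids):

```python
# Drop every version whose compatible group is a proper subset of another
# version's group, leaving only maximal versions.

def maximal_versions(groups):
    groups = [set(g) for g in groups]
    return [g for g in groups if not any(g < other for other in groups)]

print(maximal_versions([{"O1"}, {"O1", "O2"}, {"O3"}]))
# keeps {"O1", "O2"} and {"O3"}; {"O1"} is subsumed
```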
6
Conclusions
This paper started with an examination of the intention violation problem in the existing operational transformation schemes, which occurs when concurrent operations are generated to target the same region of a text object. There are multi-versioning schemes that can preserve individual users’ concurrent conflicting intentions in a consistent way. However, they are proposed for intention preservation in collaborative editing environments, where a document consists of a set of independent objects with independent attributes. In order to support unconstrained collaborative editing on text objects, this paper has proposed an integrated operational transformation and multi-versioning scheme such that individual users’ contextual intentions are always preserved, that is, concurrent compatible intentions are preserved by operational transformation as in some of the existing schemes, whereas concurrent conflicting intentions are accommodated in different versions. Technically, the way in which operational transformation is integrated into the multi-versioning process is discussed in detail. The scheme has been implemented in Java in our research prototype called POLO, which is a real-time group editor supporting unconstrained editing of documents consisting of independent objects such as rectangles, ellipses, and lines, in addition to textboxes that are our major concern in this paper. The preservation of individual users’ intentions is only a part of the consistency maintenance process. Mechanisms are necessary to coordinate the users to reach a group intention [14, 17].
References 1. P. Dewan. Architectures for collaborative applications. In M. Beaudouin-Lafon (ed.), Computer Supported Co-operative Work, John Wiley & Sons, 1999, pp. 169-193. 2. C.A. Ellis and S.J. Gibbs. Concurrency control in groupware systems. In Proc. of ACM SIGMOD Conference on Management of Data, May 1989, pp. 399-407. 3. S. Greenberg and D. Marwood. Real time groupware as a distributed system: concurrency control and its effect on the interface. In Proc. ACM Conference on CSCW, November 1994, pp. 207-217. 4. L. Lamport. Time, clocks, and the ordering of events in a distributed system. In CACM 21(7), July 1978, pp. 558-565. 5. T.P. Moran, K. McCall, B. van Melle, E.R. Pedersen, and F.G.H. Halasz. Some design principles for sharing in Tivoli, a white-board meeting support tool. In S. Greenberg, S. Hayne, and R. Rada (eds.), Groupware for Real-time Drawing: A Designer's Guide, McGraw-Hill, 1995, pp. 24-36. 6. J.P. Munson and P. Dewan. A concurrency control framework for collaborative systems. In Proceedings of ACM CSCW'1996, pp. 278-287. 7. A. Prakash. Group editors. In M. Beaudouin-Lafon (ed.), Computer Supported Co-operative Work, John Wiley & Sons, 1999, pp. 103-133. 8. M. Ressel, D. Nitsche-Ruhland, and R. Gunzenhäuser. An integrating, transformation-oriented approach to concurrency control and undo in group editors. In Proceedings of the ACM Conference on Computer Supported Cooperative Work, Nov. 1996, pp. 288-297. 9. M. Suleiman, M. Cart, and J. Ferrie. Serialization of concurrent operations in a distributed collaborative environment. In Proceedings of the ACM Conference on GROUP, Phoenix, November 1997, pp. 435-445. 10. C. Sun and D. Chen. A multi-version approach to conflict resolution in distributed groupware systems. In Proceedings of the International Conference on Distributed Computing Systems, April 2000. 11. C. Sun and C.A. Ellis. Operational transformation in real-time group editors: Issues, algorithms, and achievements.
In Proceedings of the ACM Conference on CSCW, Nov. 1998, pp. 59-68. 12. C. Sun, X. Jia, Y. Zhang, Y. Yang, and D. Chen. Achieving convergence, causality-preservation, and intention-preservation in real-time cooperative editing systems. In ACM Transactions on Computer-Human Interaction, 5(1), March 1998, pp. 63-108. 13. N. Vidot, M. Cart, J. Ferrie, and M. Suleiman. Copies convergence in a distributed real-time collaborative environment. In Proceedings of the ACM Conference on CSCW, Dec. 2000, pp. 171-180. 14. L. Xue, M. Orgun, and K. Zhang. A group-based time-stamping scheme for the preservation of group intentions. In Proceedings of the 4th International Conference on Distributed Communities on the Web, Sydney, Australia, April 2002. 15. L. Xue, M. Orgun, and K. Zhang. A generic multi-versioning algorithm for intention preservation in real-time group editors. Macquarie Computing Reports, No. C/TR02-01, Macquarie University, March 2002. 16. L. Xue, K. Zhang, and C. Sun. Conflict control locking in distributed cooperative graphics editors. In Proceedings of the 1st International Conference on Web Information Systems Engineering (WISE 2000), Hong Kong, IEEE CS Press, June 2000, pp. 401-408. 17. L. Xue, K. Zhang, and C. Sun. An integrated post-locking, multi-versioning, and transformation scheme for consistency maintenance in real-time group editors. In Proceedings of the 5th International Symposium on Autonomous Decentralised Systems, Texas, USA, IEEE CS Press, Mar 2001.
Supporting Group Awareness in Web-Based Learning Environments Bin Hu, Andreas Kuhlenkamp, and Rolf Reinema Fraunhofer Institute for Secure Telecooperation (FhG-SIT), Rheinstrasse 75, D-64295, Darmstadt, Germany, Phone: +49 6151 869 – 399, Fax: +49 6151 869 – 224 {hu, kuhlenkamp, reinema}@sit.fraunhofer.de
Abstract: In e-Learning systems, group awareness is an important issue. A tutor or a student needs to know the activities, knowledge, and contexts of others in order to support learning processes effectively. In this paper, we present an awareness component model together with a notification mechanism designed to support group awareness within web-based learning environments. Furthermore, some of the key methods and techniques used to implement the model are presented. A prototype has been developed and deployed within the e-Qualification Framework (e-QF) project. The objective of e-QF is to provide a platform that supports the processing and provisioning of teaching material in the form of so-called Web-based trainings (WBTs). The platform can be accessed at the same time by tutors, authors, content providers, and students. In comparison to other approaches aiming at supporting group awareness in web-based learning environments, our solution offers a higher degree of flexibility, tailor-ability, and reliability.
Key words: e-Learning, group awareness, notification mechanism, web, component models.
1 Introduction

Today, with the progress of network and Internet technologies, e-Learning systems and platforms are becoming popular, and more and more people are using them [7]. Typically, such environments support an integrated mix of synchronous and asynchronous learning activities, in combination with opportunities for ad-hoc communication and collaboration among their users, in particular tutors and students. A well-designed e-Learning system will provide this type of environment; it also needs to incorporate well-established teaching methodologies and proven educational philosophies, and has to enhance them with a rich mix of interactive media. As in most cooperative systems [12][13][14][15][16][17], support for group awareness is essential in web-based learning systems. Teaching and learning are obviously group activities, and the necessity for communication and coordination occurs quite often in these processes. It is very important to be aware of the ideas, activities, and learning contexts of other students as well as tutors. Group Y. Han, S. Tai, and D. Wikarski (Eds.): EDCIS 2002, LNCS 2480, pp. 525-536, 2002 © Springer-Verlag Berlin Heidelberg 2002
526
Bin Hu, Andreas Kuhlenkamp, and Rolf Reinema
awareness means to have an understanding of the activities of others [1]. It has been shown in the past that group awareness plays an important role in workgroups (tutors and students also form a kind of workgroup), i.e. such workgroups are more successful if they maintain awareness of the state of the team, its tasks, and its environment [2]. In e-Learning systems, at least the following types of group awareness are relevant:

Activity awareness: Group activities include synchronous and asynchronous activities. Activity awareness means that tutors and students should know each other's focuses, actions, interests, and traces, so that they can contact each other, raise questions, ask for explanations, or initiate discussions during the course of the learning process. Activity awareness is a basic requirement in a web-based learning environment in order to support communication and coordination, in particular among students and with tutors.

Status awareness: Information on the actual status of group members is very useful when it comes to deciding whether somebody can be contacted or not and how, i.e. what the most appropriate means are to interact with somebody at a certain point in time. However, having knowledge about the availability of others is necessary but not sufficient.

Social awareness [18]: This means to be aware whether somebody else is currently busy and therefore does not want to be disturbed, or for any other reason is unwilling to interact with others regardless of his/her presence.

Personalization awareness: The ability to personalize services is also required in e-Learning. Awareness information can be gathered according to the requirements of a user by the definition of awareness rules, which allow filtering out only the required information.
Process awareness: In order to follow predefined learning procedures, it is useful to provide process awareness, which gives members a sense of where their pieces fit into the whole picture, what the next step is, and what needs to be done to move the process a step forward.

Awareness information supports coordination and is essential to the success of collaborative activities [3]. More and more cooperative software tools provide awareness information [4][5][6][8][9][10][11]. One example is ShrEdit [1], a multi-user synchronous editing tool, which provides a so-called shared feedback approach. DOOR is another example, which supports awareness data management based on an "extended client/replicated server" architecture for asynchronous groupware [3]. In order to support a mix of synchronous and asynchronous learning activities, suitable methods and mechanisms supporting group awareness are strongly required. In the following, we introduce our awareness component model, which is being used in the e-QF platform.
Supporting Group Awareness in Web-Based Learning Environments
527
2 The Awareness Component Model

2.1 Dealing with Awareness Data in the e-QF Platform

The e-QF platform offers a web-based training/learning environment. The user group includes students, tutors, and course builders, who can communicate and collaborate via a set of provided software tools. These software tools operate on group information, including awareness data. From a user's point of view, awareness information can be divided into two types: initiative awareness information and passive awareness information. Initiative awareness requires users to take specific actions in order to request awareness information; for example, a user wants to get information about the interests of another user. Passive awareness information is passed to a user without the user taking any specific action; for instance, the system keeps track of meeting invitations and forwards them to the related invitees automatically (see Fig. 1). In order to avoid users being disturbed by awareness information that is not relevant in their current context, a customized notification mechanism is applied, i.e. it can be defined what type of information will be delivered in which context.
Fig. 1. Awareness information window in the communication tool of the e-QF
In the e-QF platform, both synchronous and asynchronous awareness modes are supported. Synchronous awareness requires the system to provide information on the current status and activities of its users, together with e.g. the status of meetings or documents, and to be able to trace their changes. Asynchronous awareness encompasses gathering history information, monitoring notifications, and perceiving changes in the environment.
Awareness information can be shared among users via different tools, such as simple information windows, audio/video conference tools, chat rooms, shared whiteboards, or email.

2.2 The Awareness Component Model

One of the key components within the e-QF platform is an awareness component model, which is composed of an awareness component together with a notification mechanism (see Fig. 2).
Fig. 2. The awareness component model in the e-QF
In Fig. 2, i denotes the interface component, f the functional component, and d the data component. A notification server is used to exchange messages among these components. The interface component i provides the interface to a user. It receives awareness information from a graphical user interface and returns notifications by setting and getting the attributes of the awareness component. To issue awareness information, the respective tools transfer the users' requirements to the specific methods encapsulated in the component, which parse them and send them to the relevant objects. Users can receive passive awareness information via the registered information and a personalized definition of the filter, either synchronously or asynchronously, in accordance with their authorizations and roles. Functional components f can gather awareness information, invoke related methods or classes, and respond with the result. An example of such a functional component is one that traces and keeps track of the activity record of a user in order to make other authorized users aware of his/her current focus and context. In our model, awareness data, in particular in asynchronous mode, needs to be kept in a persistent data source, such as a relational database or an LDAP directory. The data component focuses on the persistent management of awareness information. In this component, awareness information can be stored in different tables, such as Log, Conference, UserInfo etc. Their logical relationship to other static information
sources is always maintained by this component, in order to satisfy more comprehensive and deeper awareness requirements from users. Another important task of this component is concurrent access control and the access to multiple heterogeneous data sources. In fact, the data component encapsulates drivers for different data sources (e.g. MS Access, Netscape LDAP). Additional data sources can be added easily; this only requires providing additional data driver classes in the component. The notification mechanism plays a key role in the awareness component model. For this purpose, every component has to register an alias in the name space of the notification server. Components can use a PTP (point-to-point) and/or a Pub/Sub (publish-subscribe) mode to exchange information. In our model, the PTP mode is based on the concept of awareness data queues. Awareness data is always sent to a specific queue, and a receiver extracts the information from the queue established to hold its messages, as in the communication between i and f or d. In the Pub/Sub mode, each side addresses awareness data to a certain topic. Publishers and subscribers are generally anonymous and may dynamically publish or subscribe to the content hierarchy. The system takes care of distributing the information on such a topic from multiple publishers to multiple subscribers. Generally, every component includes the following basic elements:

Attributes: the characteristics of each component; they can be accessed by the relevant methods.

PTP class: encapsulates the point-to-point mode of Java JMS. Through message queues, the awareness information can be transferred in synchronous and asynchronous mode. This includes the send, receive, and onMessage methods. The objective of this class is to guarantee that the information is sent to the specific object, which can deliver and return it in a flexible manner.
Pub/Sub class: is mainly composed of the Publish/Subscribe mode provided by Java JMS and includes the same sending and receiving methods as the PTP class. It enables the component to distribute information to multiple subscribers, synchronously and asynchronously, in accordance with certain topics. Such topics retain the information only until it has successfully been delivered to the corresponding subscribers. Fig. 3 shows the architecture of the awareness component model.
Fig. 3. The architecture of the awareness component model
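The point-to-point side of this architecture can be sketched roughly as follows. This is a minimal Python stand-in for the Java JMS PTP mode described above, not the actual e-QF API; the function names and the in-memory queues are illustrative, and the queue names echo the named queues the platform uses for the d and f components.

```python
import queue

# One named queue per receiving component, mirroring the PTP mode:
# awareness data is always sent to a specific queue, and the receiver
# extracts the information from the queue established to hold it.
queues = {"DBqueue": queue.Queue(), "Actqueue": queue.Queue()}

def send(queue_name, msg):
    """PTP send: address the awareness data to one specific queue."""
    queues[queue_name].put(msg)

def receive(queue_name, timeout=None):
    """PTP receive: block until a message arrives, or time out."""
    try:
        return queues[queue_name].get(timeout=timeout)
    except queue.Empty:
        return None

# The i component logs a user action for the d component to persist:
send("DBqueue", {"user": "tutor1", "action": "opened WBT chapter 3"})
print(receive("DBqueue"))  # {'user': 'tutor1', 'action': 'opened WBT chapter 3'}
```

The blocking `receive` with an optional timeout corresponds to the synchronous mode discussed in Section 2.3.1; an asynchronous consumer would instead register a listener that is invoked on arrival.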
As illustrated by the above figure, the awareness information is transferred among different components, which are composed of Java beans. The Java JMS technology has been used to implement the lower-level notification mechanism. Although the transferring level encapsulates both the PTP and the P/S mode, this does not mean that both modes must be used in the same application; rather, an appropriate transferring mode is chosen per case. In the e-QF platform, named queues (DBqueue for d, Actqueue for f) are used for transferring awareness information to the d and f components, and the Pub/Sub mode based on topics enables the i component to be aware of the cooperative information of group members synchronously and asynchronously, such as status, activities, and notifications. In fact, these topics are organized in a tree structure, and every node symbolizes a specific topic. The awareness information is sent not only to the node itself but also to all of its children, depending on the settings of the transferring level. For instance (in Fig. 4), the topic node A has two children, topic B and topic C. If the awareness information is sent to the A node, it will also be distributed to B and C, using the preorder traversal algorithm, if the setting of the transferring level is TRUE.

Algorithm (binary tree): TopicSending(t, nodes, mode, msg)
  Input:
    t     : the pointer to the structure of the topic tree
    nodes : TRUE includes the children, otherwise FALSE
    mode  : synchronous or asynchronous
    msg   : awareness information
  Output: none
  Body {
    if (t != NULL) {
      TranSending(t->data, nodes, mode, msg);   /* deliver to this topic        */
      if (nodes) {                              /* descend only if the setting  */
        TopicSending(t->lchild, nodes, mode, msg);   /* of the transferring     */
        TopicSending(t->rchild, nodes, mode, msg);   /* level is TRUE           */
      }
    }
  }

The benefit of this architecture is that it makes the users aware of the most relevant information. For that, users need to define the topics (nodes) they are interested in via the user interface when they register on the system. In this structure there exists a problem.
For example, if a user has registered for topics B and C, then when the information is sent to A as well as to its children, the user receives the same information twice. To solve this problem, we assign a message ID to every piece of information sent. When the user gets the information, the message ID is kept in the Log table, and redundant information with the same ID is discarded by the analysis layer in the i component.
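The fan-out over the topic tree and the message-ID de-duplication can be sketched together as follows. This is a compact Python illustration of the scheme, not the Java JMS implementation; the class names, the children list (in place of lchild/rchild), and the subscriber registry are illustrative.

```python
class Topic:
    """A node in the topic tree; information sent to a node is also
    distributed to all of its children (pre-order traversal)."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

class Subscriber:
    """Keeps already-seen message IDs (the role of the Log table) and
    discards redundant copies of the same information."""
    def __init__(self):
        self.seen_ids = set()
        self.received = []

    def on_message(self, msg_id, msg):
        if msg_id in self.seen_ids:   # same ID => duplicate, discard
            return
        self.seen_ids.add(msg_id)
        self.received.append(msg)

def topic_sending(topic, include_children, msg_id, msg, registry):
    """Deliver to this topic's subscribers, then recurse into children
    only if the transferring-level setting is TRUE."""
    if topic is None:
        return
    for sub in registry.get(topic.name, []):
        sub.on_message(msg_id, msg)
    if include_children:
        for child in topic.children:
            topic_sending(child, include_children, msg_id, msg, registry)

# A user registers for both child topics B and C of A:
b, c = Topic("B"), Topic("C")
a = Topic("A", [b, c])
user = Subscriber()
registry = {"B": [user], "C": [user]}
topic_sending(a, True, msg_id=1, msg="meeting at 10:00", registry=registry)
print(user.received)  # ['meeting at 10:00']  -- second copy via C discarded
```

The de-duplication set stands in for the Log table: the first delivery (via B) records the ID, so the second delivery (via C) is dropped before it reaches the user.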
Fig. 4. The naming space of the topic
Another important feature of the analysis layer in the i component is supporting personalized awareness by defining user-owned filters. Abstractly, if a set U = {u1, u2, ..., un} represents the received awareness information of a user, and a set V = {v1, v2, ..., vm} denotes the awareness information which the user is currently interested in, then the filter rule is: if ui ∈ V then receive, else discard (1 ≤ i ≤ n). In our model, the awareness information selector is a string whose syntax is based on a subset of the SQL92 conditional expression syntax, but we manage it by a properties file in XML for more convenient modification by users. For instance, if a user wants to receive the conference notification "conference_name LIKE '%e-Qf%'", he/she can add it as an item in the properties file using a GUI:

conference_name LIKE '%e-Qf%'

The method requires the information to be transferred with name/value pairs, which can be added automatically by the system according to the user-defined topics (name) and keywords (value). In some cases the keywords are assigned default values, e.g. the status and activities information. The system tracks and manages the keyword set in order to support the filter definition.

The interface in the model offers set-attributes and get-attributes operations for the attributes of each component. Awareness information can be delivered or gathered between the two layers. Fig. 5 shows the process flow of awareness information in an interface component i.

2.3 The Features of the Model

2.3.1 Supporting Synchronous and Asynchronous Awareness

The awareness component model, which is particularly designed for web-based systems with a mix of synchronous and asynchronous cooperative activities, supports gathering and delivering of awareness information. It can also react on the received information. In this model, synchronous and asynchronous modes are both available in PTP or P/S mode. In synchronous mode, every component explicitly includes a subscriber and a publisher, which fetch the information from the destination by
Fig. 5. The process flow of the interface component
calling the receive method. The receive method can block until the information arrives, or it can time out in case the information does not arrive within a specified time limit. In asynchronous mode, a component registers an information listener, similar to an event listener. Whenever the information arrives at the destination, the message-driven method of the component is started and processes the information.
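The selector-based filtering performed by the analysis layer can be illustrated with a small sketch. This is a Python illustration under the assumption that selectors are restricted to a single LIKE condition over the message's name/value pairs; in the actual system this task is delegated to JMS-style message selectors, and the function names here are illustrative.

```python
import re

def like_to_regex(pattern):
    """Translate a SQL LIKE pattern ('%' = any run of characters,
    '_' = any single character) into an anchored regular expression."""
    out = []
    for ch in pattern:
        if ch == '%':
            out.append('.*')
        elif ch == '_':
            out.append('.')
        else:
            out.append(re.escape(ch))
    return '^' + ''.join(out) + '$'

def matches(field, like_pattern, properties):
    """The filter rule: receive the notification only if the named
    property matches the LIKE pattern; otherwise discard it."""
    value = properties.get(field)
    return value is not None and re.match(like_to_regex(like_pattern), value) is not None

# Hypothetical notification carrying name/value pairs:
notification = {"conference_name": "Weekly e-Qf status meeting"}
print(matches("conference_name", "%e-Qf%", notification))   # True
print(matches("conference_name", "%EDCIS%", notification))  # False
```

A per-user list of such (field, pattern) items, e.g. read from an XML properties file, then implements the rule "if ui ∈ V then receive, else discard" described above.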
2.3.2 Flexibility and Tailor-Ability

The awareness component model is based on a message exchange mechanism. A component may be added or deleted flexibly during run-time, without affecting the underlying system or platform. To this end, the component encapsulates classes, attributes, and methods which can be modified according to the requirements of the underlying application. For example, developers can specify whether they want to receive notification messages in a synchronous or asynchronous manner and how the messages should be delivered. In the awareness model, a data component is provided, which gives access to various external data sources, e.g. an LDAP directory. New drivers for other external data sources can be added flexibly, freeing the developers of upper-layer components from taking care of different data sources and different storage formats.
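The pluggable-driver idea behind the data component can be sketched as follows. All names here are illustrative; the real system wraps concrete drivers for e.g. MS Access and Netscape LDAP rather than the in-memory stand-in shown.

```python
class Driver:
    """Common interface every data-source driver must implement."""
    def store(self, table, record):
        raise NotImplementedError
    def load(self, table):
        raise NotImplementedError

class InMemoryDriver(Driver):
    """Stand-in for a concrete data-source driver (e.g. a database)."""
    def __init__(self):
        self.tables = {}
    def store(self, table, record):
        self.tables.setdefault(table, []).append(record)
    def load(self, table):
        return self.tables.get(table, [])

class DataComponent:
    """Upper layers talk only to the data component; supporting a new
    data source just means registering another driver class."""
    def __init__(self):
        self.drivers = {}
    def register(self, name, driver):
        self.drivers[name] = driver
    def store(self, source, table, record):
        self.drivers[source].store(table, record)

d = DataComponent()
d.register("memory", InMemoryDriver())
d.store("memory", "Log", {"msg_id": 1, "event": "login"})
print(d.drivers["memory"].load("Log"))  # [{'msg_id': 1, 'event': 'login'}]
```

Because upper-layer components address drivers only through the `DataComponent` interface, adding e.g. an LDAP driver does not change any of their code.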
2.3.3 Distributed

The awareness component model is distributed. Components can reside in different places and communicate with each other via messages.
2.3.4 Reliability

The notification mechanism is realized by using Java JMS. It is ensured that a message is delivered once and only once. Lower levels of reliability are available for applications that can afford to miss messages or to receive duplicated messages.
2.3.5 Reusability and Heterogeneous Platform Supporting

The design of the component allows its usage in other applications. The messages have a basic format, which is quite simple but flexible enough, allowing developers to create messages that match formats used by non-JMS applications on other platforms.

2.4 The e-QF Platform

Based on the awareness component model, a conference reservation and management tool (Fig. 6) has been developed for the support of group communication in the e-QF platform. It includes:

User management: The tool can manage the information of group members in a conference, such as their roles, status, background etc.

Conference management: Conference management comprises the following functions:
• Support for ad-hoc conferences: A user can invite other users who are currently online to instantaneously join a meeting.
• Support for scheduled conferences: It permits a group member to choose other members and invite them to a scheduled conference, which can be held at any time in the future. The invited members can be notified via email, and when the scheduled date of the conference has arrived, the conference link will appear in the awareness information window.
• Support for viewing, modifying and deleting conferences: Users can view and modify detailed conference information, such as the scheduled time or the roles of the attendants, modify conference settings, or delete a previously scheduled conference.

Interactive tools: Several means to communicate and collaborate are supported. On the client side, e.g. MS NetMeeting can be used to support audio/video communication. Additionally, a chat room is provided, together with the ability to share an application or to jointly edit a drawing on a whiteboard.
Fig. 6. The conference tool
3 Future Work

So far, the awareness component model has been applied in the e-QF platform. To provide a larger degree of extensibility, additional work needs to be done. This includes an extension of the notification mechanism, in order to support transactions over asynchronous awareness information, together with an analyzing component for awareness information based on knowledge of the nature of the underlying cooperation process. Furthermore, the integration of the H.320 protocol is planned to support ISDN-based video-conferencing devices. Telephony services for audio conferencing will be added as well. Additionally, we plan to use XML as the general message exchange format. The benefit gained by this is a better extensibility of the awareness model to more complex and heterogeneous environments, and the applicability of the model within different application domains by defining corresponding DTDs.

Acknowledgments

Finally, we would like to thank our project partners for their help and contribution to our work.
References
1. Paul Dourish, Victoria Bellotti: Awareness and coordination in shared workspaces. Conference proceedings on Computer-supported cooperative work (1992) 107 – 114.
2. Alberto Espinosa, Jonathan Cadiz, Luis Rico-Gutierrez, Robert Kraut, William Scherlis, Glenn Lautenbacher: Coming to the wrong decision quickly: why awareness tools must be matched with appropriate tasks. Proceedings of the CHI 2000 conference on Human factors in computing systems (2000) 392 – 399.
3. Nuno P., J. Legatheaux Martins, Henrique Domingos, S. Duarte: Data management support for asynchronous groupware. Proceedings of the ACM 2000 Conference on Computer supported cooperative work (2000) 69 – 78.
4. Samuli Pekkola, Mike Robinson, Markku-Juhani O. Saarinen, Jonni Korhonen, Saku Hujala, Tero Toivonen: Collaborative virtual environments in the year of the dragon. Proceedings of the third international conference on Collaborative virtual environments (2000) 11 – 18.
5. Mike Fraser, Steve Benford, Jon Hindmarsh, Christian Heath: Supporting awareness and interaction through collaborative virtual interfaces. Proceedings of the 12th annual ACM symposium on User interface software and technology (1999) 27 – 36.
6. Charles Steinfield, Chyng-Yang Jang, Ben Pfaff: Supporting virtual team collaboration: the TeamSCOPE system. Proceedings of the international ACM SIGGROUP conference on Supporting group work (1999) 81 – 90.
7. Igor Hawryszkiewycz: Creating and supporting learning environments. Proceedings of the Australasian computing education conference (2000) 134 – 138.
8. Chris Greenhalgh, Steve Benford, Gail Reynard: A QoS architecture for collaborative virtual environments. Proceedings of the seventh ACM international conference on Multimedia (1999) 121 – 130.
9. Changtao Qu, Wolfgang Nejdl: Constructing a web-based asynchronous and synchronous collaboration environment using WebDAV and Lotus Sametime.
Conference Proceedings on University and College Computing Services (2001) 142 – 149.
10. John C. Tang, Nicole Yankelovich, James Begole, Max Van Kleek, Francis Li, Janak Bhalodia: ConNexus to Awarenex: extending awareness to mobile users. Proceedings of the SIGCHI conference on Human factors in computing systems (2001) 221 – 228.
11. Yiming Ye, Stephen Boies, Paul Huang, John Tsotsos: Smart distance and WWWaware: a multi-agent approach. Proceedings of the fifth international conference on Autonomous agents (2001) 176 – 177.
12. Kevin Palfreyman, Tom Rodden: A protocol for user awareness on the World Wide Web. Proceedings of the ACM 1996 conference on Computer supported cooperative work (1996) 130 – 139.
13. Weixiong Zhang, Randal W. Hill: A template-based and pattern-driven approach to situation awareness and assessment in virtual humans. Proceedings of the fourth international conference on Autonomous agents (2000) 116 – 123.
14. Scott E. Hudson, Ian Smith: Techniques for addressing fundamental privacy and disruption tradeoffs in awareness support systems. Proceedings of the ACM 1996 conference on Computer supported cooperative work (1996) 248 – 257.
15. Uta Pankoke-Babatz, Anja Syri: Collaborative workspace for time deferred electronic cooperation. Proceedings of the international ACM SIGGROUP conference on Supporting group work: the integration challenge (1997) 187 – 196.
16. Shinkuro Honda, Hironari Tomioka, Takaaki Kimura, Takaharu Ohsawa, Kenichi Okada, Yutaka Matsushita: A virtual office environment based on a shared room realizing awareness space and transmitting awareness information. Proceedings of the 10th annual ACM symposium on User interface software and technology (1997) 199 – 207.
17. Markus Sohlenkamp, Greg Chwelos: Integrating communication, cooperation, and awareness. Proceedings of the conference on Computer supported cooperative work (1994) 331 – 343.
18. Tollmar, K., Sandor, O., and Schemer, A.: Supporting social awareness @Work: Design and experience. Proceedings of CSCW ’96 (1996) 298 – 307.
Raison d’Etre Object: A Cyber-Hearth That Catalyzes Face-to-Face Informal Communication Takashi Matsubara1, Kozo Sugiyama2, and Kazushi Nishimoto2 1
Hitachi Ltd., Yoshida 292, Totsuka, Yokohama, 244-0817, Japan
[email protected] 2 Japan Advanced Institute of Science and Technology, Asashidai 1-1, Tatsunokuchi, Nomi, Ishikawa 923-1292, Japan {Sugi, Knishi}@Jaist.ac.jp
Abstract. We propose a new concept, raison d’etre objects, and a new ware, a cyber-hearth, that affords snugness in face-to-face communication in a shared informal place such as a refreshing room or lounge. We carried out observation experiments on the behavior of individuals in such a place and found interesting tendencies: most people unconsciously pay attention to physical objects, watching or handling them, as an excuse for entering or staying there. This might be because participants are unusually close to each other in terms of proxemics. We developed a prototype cyber-hearth, IRORI, that incorporates raison d’etre objects with a facility for enhancing conversations, employing the metaphor ‘hearth’ (‘irori’ in Japanese) as a total design principle, since ‘irori’ is well recognized as a snug, traditional informal place in Japan. We preliminarily evaluated IRORI by conducting a user experiment. The results of the experiment suggest that IRORI attains snugness and is therefore effective for catalyzing face-to-face informal communication.
1 Introduction
Nowadays the importance of face-to-face informal communication is well recognized [1]. Informal communication in an organization is an indispensable element for attaining the targets of the organization, harmonizing with formal activities. Most research on informal communication support so far has concentrated on awareness support with video connections etc. in distributed environments, due to the fact that multi-site offices are expanding widely [2,3,4]. However, it is also important to study how to facilitate or catalyze communications in a shared informal place such as a refreshing room or lounge, because we often get essential information and good ideas through informal communication in a relaxed and natural atmosphere. In other words, it is essential for participants to interact face-to-face in the same time and space when dealing with tacit knowledge that is difficult to transmit via electronic media [5]. In this research, we develop a prototype system called IRORI based upon the following hypothesis: “Snugness is the most important factor for facilitating or catalyzing communication in a shared informal place such as a refreshing room or lounge. An effective way of realizing snugness is that the system incorporates raison Y. Han, S. Tai, and D. Wikarski (Eds.): EDCIS 2002, LNCS 2480, pp. 537-546, 2002 © Springer-Verlag Berlin Heidelberg 2002
538
Takashi Matsubara, Kozo Sugiyama, and Kazushi Nishimoto
d’etre objects as system facilities and the ‘hearth’ (‘irori’ in Japanese) metaphor as a total system design, where an ‘irori’ is a traditional fireplace in Japan.” The considerations behind this hypothesis are as follows. The purpose of using a formal place such as a meeting room is usually limited and well shared among the participants, whereas in the case of a shared informal place it is usually neither limited nor well shared. This means that time is needed for participants to adjust their different intentions and interests. Therefore, proxemics pressures [6] make each participant uncomfortable about joining and staying, all the more if there is no active conversation in the space. To avoid proxemics pressures, or to attain snugness, each participant needs some reason to justify joining and staying there: usually by unconsciously appealing to other participants that “I am paying attention to physical objects”. Also, a conversation is often started by talking about the very object that some participant is touching or seeing. This means that objects provide comfortableness or snugness to participants; we call this the effect of raison d’etre via objects, and such objects raison d’etre objects. Thus, for attaining snugness it is important that the system incorporates these effects.
(a) a hearth
(b) a prototype cyber-hearth IRORI
Fig. 1. A real hearth and a cyber-hearth. The left picture [7] shows a traditional hearth in Japan, called ‘irori’, that is used in a snug living room. The right picture shows the prototype cyber-hearth IRORI we developed. We use the ‘hearth’ metaphor for designing our system IRORI
It is convenient to have an adequate metaphor when designing a system. Fortunately, we have a good instance for the metaphor, called ‘irori’, which is a traditional hearth used in a snug living room with various objects (see Fig. 1(a)). This informal place ‘irori’ is effective for facilitating informal communication in a relaxed and natural atmosphere. Thus, we adopted this metaphor to develop our system. In this paper we call a system that incorporates the effects of raison d’etre via objects and the ‘hearth’ metaphor a cyber-hearth (see Fig. 1(b)). It should be noted here that this metaphor is an extension of the concept of raison d’etre objects into a wider,
A Cyber-Hearth That Catalyzes Face-to-face Informal Communication
539
comprehensive concept emphasizing mental and social aspects as well as physical and informational aspects. Therefore, a cyber-hearth relates to wares for supporting ‘Ba’ [5] and cooperative buildings [8]. We took a three-step approach: observation, implementation, and evaluation.
2 Observation Experiments in a Shared Informal Space

We carried out observation experiments to investigate the behavior of individuals in a shared informal place: what are important factors and triggers for facilitating informal communication? As the place for the observation experiments we chose an open shared informal place (about 3.5m x 3.5m) situated in the center of a graduate student room (16m x 16m; 16 booths for students acquainted with each other) at JAIST, where there were a round table (diameter 90cm), four chairs around it, and other things. The place had formed spontaneously in advance of the experiment, just as a relaxing space with no special purpose, and was used frequently by several alternating participants to talk to each other, read magazines and so on. Therefore it was the best place for our observation purpose. We arranged three video cameras and other things such as toys, boards, and magazines (see Fig. 2).
[Fig. 2 labels: community message board, video camera, cards, magazines, newspaper, magic snake, Rubik’s cube, whiteboard scanner, notebook PC, whiteboard; 1.5 m / 3.0 m]
Fig. 2. The arrangement in a shared informal place used for our observation experiments.
We recorded the behavior of individuals with three video cameras for 7 days and analyzed the recordings. We first selected scene segments that include 5 or more minutes of talk by 3 or more participants but do not include taking food or drinks. We obtained 16 scene segments. For each scene segment we carefully checked and described the movements of individuals (i.e., entering, staying, leaving, handling, seeing etc.),
Takashi Matsubara, Kozo Sugiyama, and Kazushi Nishimoto
the things participants saw or handled, the triggers by which conversations started or ended, and so on. An example description is as follows. “Participants A, B, and C. A is reading a magazine and B is handling cards. C enters and a conversation between B and C starts. A sometimes joins the conversation while reading the magazine. In silence, each participant looks at different points. Participant A notices writings on the whiteboard, both B and C look at it together, and the conversation continues. B searches around him and picks up a magazine, but does not read it though turning over its pages. During the conversation, B closes the magazine. The conversation continues intermittently….” Fig. 3 shows scenes in the shared informal place.
Fig. 3. Three pictures made from different directions at the same time in a shared informal place situated in the center of a graduate student room where the observation experiment was being conducted.
As a result of the observation experiments we found the following remarkable relationships between physical objects and participants’ behavior: 1. Behavior in entering: While approaching the table, most individuals check who is around the table. However, after arriving there, a conversation usually does not start right away. They first begin to touch objects such as cards or magazines on the table, or to look at the boards, as if they came to the place for touching the objects or seeing the boards. 2. Behavior in staying: Participants begin to talk about the object itself that someone is touching or seeing. Otherwise, a conversation starts gradually and intermittently without a clear direction. Sometimes the conversation is active and sometimes inactive. In both cases most participants continue to handle or look at the objects simultaneously. It seems that every participant is waiting for a timing to join, begin, or re-begin a conversation rather than being interested in just handling the objects.
3. Time for which participants touch objects: It is remarkable that participants around the table were touching some object 69.6% of the time on average. The results of the experiment support our hypothesis described above well. From the above relationships, it is suggested that the existence of physical objects gives participants chances for starting a conversation and reasons for justifying “enter there” and “stay there”. The necessity of raison d’etre objects can also be supported from the viewpoint of proxemics. People possess unspoken proxemic rules for appropriate distances in daily relationships [6]. It has been proposed that these distances fall into five zones according to the possibility of starting a conversation [9]. Among them the following zones are interesting for us: 1. Zone of conversation (50cm~1.5m): When this zone is entered, a conversation is mandatory. 2. Zone of proximity (1.5m~3m): It is possible to enter this zone without starting a conversation for the time being. Our shared informal place used for the observation experiment has a critical size at which it is ambiguous whether starting a conversation is mandatory or not (see Fig. 2). This might be why the raison d’etre effects of objects were observed in the place. Fig. 4 illustrates possible models of mutual awareness among participants in the shared informal place.
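The two proxemics zones cited above can be expressed as a simple classifier. This is an illustrative sketch of our own, not part of the paper's system; the function name and the handling of distances outside the two zones are assumptions.

```python
# Illustrative sketch (not from the paper): map an interpersonal distance
# to one of the proxemics zones cited from [9].
def proxemics_zone(distance_m: float) -> str:
    """Classify an interpersonal distance (in meters) into a proxemics zone."""
    if 0.5 <= distance_m <= 1.5:
        return "conversation"  # entering this zone makes a conversation mandatory
    if 1.5 < distance_m <= 3.0:
        return "proximity"     # one may enter without starting a conversation
    return "other"             # outside the two zones discussed here

print(proxemics_zone(1.0))  # conversation
print(proxemics_zone(2.0))  # proximity
```

The 3.5 m x 3.5 m place straddles the boundary of these zones, which is the "critical size" argument made above.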
[Figure 4 labels: handling, seeing, being aware, conversation]
Fig. 4. Possible models for the effects of justifying via objects. Both participants are aware of each other’s behavior of handling (left) or seeing (right). This situation provides an excuse (or raison d’etre) for each to enter or stay there.
3 A Prototype Cyber-Hearth IRORI
In order to develop a prototype system to facilitate communication in a shared informal place, we adopted two design principles. One of them is a general principle: our system should be like “irori” both physically and mentally because “irori” was a comfortable, traditional shared informal place with a long history. It has been used for multiple informal purposes such as cooking, eating, lighting, warming, and talking.
We hope that our system will become a digital “irori” in offices and schools in the near future. The other principle is that our system should incorporate the raison d’etre effects of objects. Various objects, including fire, charcoal, and a kettle, also exist around a traditional “irori” and are used for the above purposes. In our view, these objects play the role of easing “enter there” and “stay there” like a catalyst. The first principle can be broken down into the following requirements: (I1) the physical design of our system should be in an “irori” style, and (I2) our system should afford comfort or relaxation but avoid tiredness. For the second principle, we can break down the “stay there” effect into three requirements: (S1) the system should have elements (i.e., objects) that the user can touch, control, and/or see, (S2) the user should be able to readily start or stop touching the elements at will, and (S3) the user’s behavior in touching them should not look unnatural. We can break down the “enter there” effect into three requirements: (E1) the system should provide information that the user cannot see anywhere else in the surroundings, (E2) objects and their contents should change frequently so that it is useful for the user to look at them, and (E3) the user’s behavior in looking at them should not look unnatural. We implemented a prototype system called IRORI according to the above requirements. The system offers the following facilities: 1. Physical arrangement for a shared informal place in “irori” style: A traditional “irori” has been continuously refined in terms of snugness (i.e., communicability, familiarity, and comfortableness) over a long time. Therefore, seeking similarity to an “irori” style in designing our system is meaningful. Fig. 5 shows the IRORI system, whose appearance is similar to an “irori”. This facility satisfies requirement I1. 2. Direct manipulation of water and vapors in 3D space by fingers: This facility is intended to satisfy requirements I2, S1, and S2. We use water and vapors in 3D space instead of the fire and charcoal of an “irori”. Water and vapors are shown on a big plasma display (PDP) covered by a touch panel. The movement of the vapors can be controlled with the fingers. We may have a natural tendency to be fond of touching water and vapors, which have relaxing effects. Moreover, water and vapors change dynamically and in complicated ways, and therefore never make us feel tired. 3. Dynamic presentation of Web contents hidden in vapors for enhancing conversations: This facility is intended to afford topics for enhancing conversations, whereas a real hearth cannot provide such a facility via fire and firewood (or charcoal). Each vapor has a connection to a Web page. The rule for the connection can be defined according to the user’s convenience. Currently each vapor is linked to a page that some member of an organization accessed previously; each member is identified by color. When the user touches a vapor and specific conditions are attained, the page is displayed on another horizontal PDP installed nearby. This facility satisfies requirements E1, E2, and E3. 4. Non-simple operation for searching and controlling vapors: The specific conditions for displaying Web pages are not known to the user, who has to search for and find them. Therefore, this facility satisfies requirement S1. IRORI consists of three parts: a main part for displaying water and vapors on a PDP, a part for displaying Web pages on another PDP, and a proxy server. These parts are connected via a network (see Figs. 1(b), 5 and 6).
[Figure 5 labels: proxy server, client, shared informal place]
Fig. 5. Structure of the IRORI system. Each vapor on the screen of the horizontal PDP is linked to a URL that a client had accessed previously
Fig. 6. Snapshots of the screen of the horizontal PDP
4 Preliminary User Experiment for Evaluating IRORI
Evaluation experiments were carried out preliminarily in terms of “stay there” (see Fig. 7). Our main concern is to evaluate how intensely the user of IRORI feels snugness compared with cases without IRORI. Moreover, we are interested in the relationships among objects, conversation, and snugness. We prepared three different experimental environments: (a) BASE: an environment with only a table and nothing on it, (b) LEAFLET: an environment with a table and leaflets on it, and (c) the IRORI environment. Five groups of subjects were formed, each consisting of three subjects. A session for each group was planned according to the following schedule (126 minutes):
1. Explanation of the experiments to the subjects
2. Communication in environment BASE (15 min)
3. Rest (3 min)
4. Communication in environment LEAFLET (15 min)
5. Rest (3 min)
6. Explanation of the IRORI system (5 min)
7. Use of IRORI (not recorded) (5 min)
8. Communication in environment IRORI (15 min)
9. Rest (5 min)
10. Questionnaire (replayed) (60 min)
Fig. 7. Three subjects using IRORI in the evaluation experiment
Scenes from all the sessions were recorded by a video camera. For each session, the three subjects, who were acquainted with each other, were asked to communicate in their own way; i.e., they were not explicitly requested to have a conversation. After the sessions, all
subjects were requested to reply to a questionnaire while watching the replay of the video records. In the questionnaire each subject was asked to give evaluation values (0: none, 1: lowest, …, 5: highest) for the following four evaluation items at every 30 seconds: (a) CONVERSATION: intensity of interest in the conversation, (b) LOOK: intensity of interest in the object being looked at, (c) TOUCH: intensity of interest in the object being touched, and (d) SNUGNESS: intensity of snugness. The following suggestions were obtained from statistical analyses of these values. 1. In the BASE environment, a high correlation between CONVERSATION and SNUGNESS appears in most cases. This means that in the BASE environment a conversation is important for improving snugness, which accords with everyday experience. 2. In the IRORI environment the correlation between CONVERSATION and SNUGNESS becomes lower than in the BASE case, but SNUGNESS is higher than in the other two environments. This means that the behavior of touching objects (i.e., the water and vapors of IRORI) compensates for the lack of conversation and makes snugness higher than in the other two environments. This result is very important because it suggests that the raison d’etre effect of objects works in the IRORI environment. 3. The correlation between TOUCH and SNUGNESS in the IRORI environment is higher than the same correlation in the LEAFLET environment. TOUCH and LOOK in the IRORI environment are relatively higher than in the other cases. These results also suggest the raison d’etre effect of objects in the IRORI system.
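The kind of per-subject analysis described above can be sketched as a Pearson correlation between two series of 30-second ratings. This is an illustrative sketch with hypothetical rating values, not the study's actual data or analysis code.

```python
# Sketch of correlating two time series of 30-second questionnaire ratings.
# The rating values below are hypothetical, for illustration only.
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

conversation = [3, 4, 4, 2, 1, 3, 5, 4]  # CONVERSATION ratings, one per 30 s
snugness     = [3, 4, 5, 2, 2, 3, 4, 4]  # SNUGNESS ratings
print(f"r = {pearson(conversation, snugness):.2f}")
```

A high r between CONVERSATION and SNUGNESS corresponds to suggestion 1 above; comparing r across the BASE, LEAFLET, and IRORI environments corresponds to suggestions 2 and 3.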
5 Concluding Remarks
We have investigated a system to facilitate face-to-face informal communication among people of an organization that occurs spontaneously in a shared informal place such as a refreshment room or lounge. Our research is based on the hypothesis that snugness is the most important factor for informal communication support, and that it is effective to develop the system by incorporating the newly proposed concept of the ‘raison d’etre object’ and the “irori” metaphor. We made observation experiments to analyze the behavior of participants in a shared informal place and found two kinds of effects of justifying via objects: “enter there” and “stay there”. We then developed a prototype system called IRORI that incorporates both the raison d’etre object and the “irori” metaphor. We carried out a preliminary user experiment to evaluate the IRORI system based on questionnaire methods. The results of the experiments suggest that the user of the IRORI system feels more snugness than in the other environments.
References
1. Matsushita, Y., Okada, K.: Collaboration and Communication. Kyoritsu-shuppan (1995)
2. Tang, J., Rua, M.: Montage: Providing Teleproximity for Distributed Groups. Proc. CHI’94, ACM, Boston (1994) 37-43
3. Obata, A., Sasaki, K.: OfficeWalker: A Virtual Visiting System Based on Proxemics. Proc. ACM 1998 Conference on Computer Supported Cooperative Work, Seattle (1998) 1-10
4. Dourish, P., Bly, S.: Portholes: Supporting Awareness in a Distributed Work Group. Proc. CHI’92, ACM (1992) 541-547
5. Nonaka, I., Konno, N.: The Concept of ‘Ba’: Building a Foundation for Knowledge Creation. California Management Review 40(3) (1998) 40-54
6. Hall, E. T.: The Hidden Dimension. Doubleday, New York (1966)
7. http://www.kt.rim.or.jp/~noir_/irori.html
8. Streitz, N. A., Geißler, J., Holmer, T.: Roomware for Cooperative Buildings. Proc. CoBuild’98, Darmstadt (1998) 4-21 (Springer LNCS 1370)
9. Nishide, K.: Distance Between a Human and a Human. Architect and Business 5 (1985) 95-99 (in Japanese)
The Neem Platform: An Extensible Framework for the Development of Perceptual Collaborative Applications P. Barthelmess and C.A. Ellis Department of Computer Science, University of Colorado at Boulder, Campus Box 430, Boulder, CO 80309-0430, USA. {barthelm,skip}@colorado.edu
Abstract. The Neem Platform is a research test bed for Project Neem, concerned with the development of socially and culturally aware group systems. The Neem Platform is a generic framework for the development of augmented collaborative applications, mainly targeting synchronous distributed collaboration over the internet. It supports rapid prototyping, as well as Wizard of Oz experiments to ease development and evolution of such applications. A novelty of the work is its focus on the dynamic aspects that distinguish group interaction under a Perceptual interface paradigm. Participants’ multimodal interactions such as voice exchanges, textual messages, widget operations and eventually gestures, eye gaze and facial expressions are made available to applications, that apply situated reasoning, using this rich contextual information to dynamically adapt their behavior. The system presents itself multimodally as well, through a set of virtual participants - automated entities that are perceived by human participants as having personalities and emotions, making use of animation and voice generation.
1 Introduction
The Neem Platform is a generic framework for the development of collaborative applications. It supports rapid development of augmented synchronous distributed group applications. It is based on technology - perceptual interfaces [25] - that goes beyond conventional GUI-based interfaces by allowing interactions based on how humans communicate among themselves. Perceptual interfaces explore verbal and nonverbal human behaviors through the use of multiple modalities, such as speech, gestures, and gaze, both for collection and for presentation. Perceptual interfaces take psychosocial aspects into consideration and aim at presenting a system in such a way as to make it an acceptable social actor. This is particularly relevant in the context of group collaboration, since in this context the bulk of communication is already performed by humans among themselves - the objective of a group system is in fact to support such human-to-human interaction. It is therefore natural to use a similar communications-based
Y. Han, S. Tai, and D. Wikarski (Eds.): EDCIS 2002, LNCS 2480, pp. 547-562, 2002. © Springer-Verlag Berlin Heidelberg 2002
paradigm to integrate augmentation functionality in a seamless and transparent way. It is thus desirable, on the one hand, for a system to extract information from an ongoing interaction among humans, rather than from direct commands given through conventional GUIs, and on the other hand to present the system’s contributions through mechanisms similar to those used by human participants, i.e., through a complex combination of speech, gestures, facial expressions, gaze, etc. The Neem Platform is a research test bed for the University of Colorado’s Project Neem, which is concerned with the development of socially and culturally aware group collaboration systems. The use of perceptual interfaces is a cornerstone of this project. One of the research hypotheses of the project is that social mediation systems can benefit from the low impedance provided by perceptual interfaces, which blur to some extent the distinction between human and system participation in an interaction. In the present work, we concentrate on describing the technical aspects related to perceptual interfaces, and only hint at the deeper social and cultural issues, which are detailed elsewhere [9, 10]. Central to the design of the platform is the project’s focus on the dynamic nature of group work in general, and on social and cultural aspects in particular. The platform is therefore designed to make rich contextual information available to applications and to provide mechanisms for analysis and reasoning based on it, so that applications can produce timely and appropriate responses that obey the situated social and cultural rules of a target group. The platform supports and encourages application design based on dynamic, suitable reactions, situated to specific groups of users.
1.1 Platform Characteristics
The Neem platform provides the communications infrastructure that binds participants, both human and virtual, together under a perceptual perspective. The goal of the platform is to ease the development of real-time distributed multipoint perceptual-based applications that focus on context-appropriate dynamic reactions. To this end, an evolvable component-based infrastructure has been built. Functionality embedded in the platform makes development of group-aware perceptual applications roughly equivalent to the development of singleware (single-user applications). Development is also supported by embedded support for Wizard of Oz experiments. Wizard of Oz is a traditional technique for testing new features in the field by having a human participant impersonate a virtual one, thus allowing for faster development cycles than would be possible if everything had to be coded. This technique is usually used by natural-language-based systems. Here we extend its use to explore issues in group interaction. The platform is an extensible, open-ended, application-neutral infrastructure that offers the communication services necessary to integrate new functionality on two levels: 1. Integration of new devices and/or modalities. Perceptual functionality can be added to the platform as technology becomes available. Once integrated
into the platform, this functionality can be used by any application built on top of it. 2. Development of tailor-made application layers. A variety of applications that involve groups of people interacting through a computerized, augmented system can be developed on top of the platform, making use of its communications and perceptual interface support.
1.2 Neem Applications
Neem applications are built on top of the platform and provide those components that are unique to an application, namely user interface components and back-end functionality. A meeting application can, e.g., use interface elements that support creating an agenda and tracking topics as they are discussed by a group. At the same time, such a meeting application might include back-end functionality that, e.g., issues warnings whenever the time allocated for a topic is exceeded by some percentage. In this case, either a visual warning can be used, or an animated character might produce the warning. Clearly, application-specific interface elements and augmentation functionality are highly dependent on the cultural and social aspects of a group of users. Even for the same domain, such as meetings, what constitutes appropriate support depends on the type of meeting that is being targeted (formal, informal), how groups are organized (e.g. hierarchy), and a multitude of social and cultural rules that determine what is to be expected from each participant, including the system, which has to present itself as a virtual participant and must therefore stick to the social conventions of the group. The need at the application level for cultural adaptation is addressed by a clear separation between the generic functionality supplied by the platform and the specific functionality that has to be developed for each different kind of application. The platform thus offers a mechanism for reuse of generic functionality and supports rapid application development by encapsulating communications details. We envision using the Neem Platform to develop end applications in different areas: – Meeting support - many different meeting models exist that support informal to formal group collaboration scenarios. Each model would correspond to a different application layer built on top of the platform, each of them exploring alternative, perhaps conflicting social theories.
– Distance education - the rich environment potentially provided by perceptual-based user interfaces can be used to support learning applications in which students’ learning styles are accounted for. – Universal access - the ease of integration of devices makes it possible in principle to support a larger population of users who rely on specific modalities for information production or consumption (haptic or speech for blind users, gesture-based for deaf users, and so on).
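The agenda-timer example given earlier in this section can be sketched as a small back-end check. This is a hypothetical illustration of ours: the function name, the 20% threshold, and the message wording are assumptions, not part of the Neem platform.

```python
# Hypothetical sketch of a meeting-application back-end check: warn when a
# topic has overrun its allocated slot by more than some percentage.
def check_topic(elapsed_min: float, allocated_min: float, threshold: float = 0.2):
    """Return a warning string if the overrun exceeds `threshold`, else None."""
    overrun = (elapsed_min - allocated_min) / allocated_min
    if overrun > threshold:
        # The application could render this visually or via an animated character.
        return f"Topic exceeded its {allocated_min}-minute slot by {overrun:.0%}"
    return None

print(check_topic(13, 10))  # overrun of 30% triggers a warning
print(check_topic(9, 10))   # within the slot: no warning
```

Whether the returned warning is rendered as text or voiced by an animated character is exactly the kind of fission decision discussed later in the paper.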
1.3 Organization of the Paper
In the rest of this paper, we describe in further detail the functionality of the Neem Platform. We begin by presenting Related Work (Section 2), followed by the presentation of features of Perceptual interfaces and the connection to multimedia and multimodal technology (Section 3). Section 4 overviews the architecture of the platform. A prototype of the platform has been implemented and is described in Section 5. The paper ends with Summary and Future Work (Section 6) and References.
2 Related Work
2.1 CSCW Toolkits
The Neem Platform is a toolkit meant to be used by developers, rather than by users. Therefore, in the following discussion, we do not consider the latter, user-tailorable solutions. Flexible CSCW toolkits differ in the functionality they target and therefore in the kinds of application aspects that they facilitate. As an example of this variety, consider: Prospero [7, 6], which concentrates on distributed data management and consistency control mechanisms; Intermezzo [8], which focuses on the coordination aspects of collaboration in support of fluid interactions, offering user awareness, session management, and policy control; GroupKit [23], which offers a basic infrastructure that includes distributed process coordination, groupware widgets, and session management; and Roussev, Dewan and Jain [24], who propose a component-based approach based on extensions to JavaBeans, focusing on the reuse of existing single-user application code, which is converted for multi-user operation with minimal code changes. The Neem Platform focuses on real-time distributed multipoint dynamic collaborative systems based on a perceptual interface paradigm.
2.2 Perceptual Interface Based Systems
From a perceptual interface research perspective, Neem differs from typical work because of its focus on group interaction, as opposed to the single-user focus that is typical in the area. Even in systems that target groups of users (e.g. [5, 15, 20]), the focus is on multimodal command, in which speech and pen, for instance, are used to replace more conventional interface devices. Neem uses an opportunistic approach in which the system dynamically adapts based on reasoning over a context of (mostly human-to-human) interaction, as opposed to receiving direct (multimodal) commands from individual users. In this sense Neem is more closely related to work such as that presented by: Jebara et al [12], in which the system acts as a mediator of the group meeting, offering feedback and relevant questions to stimulate further conversation; Isbister et al [11], whose prototype mimics a party host, trying to find a safe common topic for guests whose conversation has lagged; and Nishimoto et al [18], whose agent enhances
the creative aspects of conversations by entering them as an equal participant with the human participants and keeping the conversation lively. CMU’s Janus project [26] is somewhat related, in its aim to make human-to-human communication across language barriers easier through multilingual translation of multi-party conversations and access to databases to automatically provide additional information (such as train schedules or city maps) to the user. While Neem shares the interest in human-to-human mediation, its goals are more ambitious than keeping a bi-party conversation going. Neem targets social and cultural aspects and is therefore concerned with a more detailed view of how groups work and how collaborative systems can contribute.
2.3 Extensibility Mechanisms
From the perspective of extensibility, different strategies are used by CSCW toolkits. In Prospero [7], extensibility derives from a reflective mechanism that makes use of facilities provided by the host language used, CLOS. The guiding paradigm is that of Open Implementation [14], which proposes gaining flexibility by breaking the encapsulation that is traditional in object-oriented development, making objects amenable to meta-level control mechanisms. Intermezzo allows code (written in an extended version of Python) to be dynamically downloaded and executed, either at the time it is downloaded or as a response to object and database change events described via a pattern-matching language. GroupKit [23] supports programmatic reaction not only to system-generated events, but also to application-specific ones. Events can trigger application notifiers or handlers. In particular, such notifiers can be associated with changes to environments - dictionary-style shared data models that the system automatically keeps consistent across replicas. Flexibility and extensibility in Neem result from its foundation on a core architectural coordination model. In this model, decoupled components interact indirectly through message exchanges. A meta-level mechanism mediates access to a Linda-like tuple space [4]. This functionality, which we call mediation, allows for explicit control over component cooperation through message-rewrite rules. The approach is related to data-centered architectures and is therefore related to some extent to a great number of similar approaches (see [19] for a comprehensive survey). Neem differs from these approaches by providing an explicit locus of (meta-level) coordination control, the mediation functionality mentioned above.
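The mediation idea can be illustrated with a minimal sketch: components post messages to a shared space, and meta-level rewrite rules may transform (or drop) each message before subscribers see it. This is our reading of the concept for illustration only, not Neem's actual implementation; all class and rule names are assumptions.

```python
# Minimal sketch (not Neem's code) of mediation over a shared message space:
# rewrite rules run at a meta level before messages reach subscribed components.
class MediatedSpace:
    def __init__(self):
        self.rules = []     # functions: message -> message, or None to drop it
        self.handlers = {}  # message type -> list of subscriber callbacks

    def add_rule(self, rule):
        self.rules.append(rule)

    def subscribe(self, mtype, handler):
        self.handlers.setdefault(mtype, []).append(handler)

    def post(self, message):
        for rule in self.rules:  # mediation: rewrite before delivery
            message = rule(message)
            if message is None:
                return           # a rule suppressed the message
        for handler in self.handlers.get(message["type"], []):
            handler(message)

space = MediatedSpace()
# A rewrite rule that tags warning messages with a priority field.
space.add_rule(lambda m: {**m, "priority": "high"} if m["type"] == "warning" else m)
space.subscribe("warning", lambda m: print(m["priority"], m["text"]))
space.post({"type": "warning", "text": "topic time exceeded"})
```

The point of the design is that the posting component and the subscribers never reference each other directly; coordination policy lives entirely in the rewrite rules.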
3 Perceptual Interfaces
Perceptual interfaces are based on a paradigmatic shift from structured, command-based GUI interfaces to more natural ones based on how humans interact among themselves. These new kinds of interfaces have been extensively studied, among others by Reeves and Nass at Stanford’s Center for the Study of Language and Information (e.g. [22, 21]). Their findings support the notion that
provided that an interface mimics real life, even if imperfectly, principles that explain perception in real life can be applied straightforwardly to computers, i.e., that people’s reactions to computers are fundamentally social and perceptual [22]. Reactions to animated characters, for instance, tend to be similar to reactions to real participants, and even gender, ethnicity, and similar factors play similar roles independently of the obviously artificial nature of such characters and their imperfections in movement and voice. While perceptual interfaces are not new, their use in a collaborative system for extraction, analysis, and reasoning about human-to-human interactions in support of social and cultural awareness is unique to Neem. Perceptual interfaces are associated with other technologies, particularly multimedia and multimodal ones, which can be related to each other to form a 4-tiered structure (Figure 1). Multiple media channels (e.g. audio, video) broaden human perception of others. Information carried over these channels is analyzed according to multiple modalities (e.g., natural language from voice and text, gestures, prosody, facial expressions, gaze). This rich information is then used to build context and awareness of the interaction at the perceptual level. This context allows the system to elicit reactions that are grounded in psychosocial aspects, allowing the system to be perceived as a meaningful social actor in its own right [22]. While in the present work we concentrate on discussing support for perceptual features (the first three levels just described), socially and culturally aware systems add a fourth layer that is concerned with the social dynamics of a group. Here we only hint at the issues that surround the development of such systems.
[Figure 1 tiers, bottom to top: Multimedia; Multimodal (meaning); Perceptual (context awareness); Social/Cultural (social dynamics)]
Fig. 1. Perceptual interfaces.
The use of multiple communication channels (multimedia) has been extensively researched in the last decade and has resulted mainly in improved information presentation capabilities, through the addition of video, animation, and sound to the interface. Multimodal systems take this concept one step further. A multimodal system is able to automatically model the content of the information at a high level of abstraction. A multimodal system strives for meaning [17]. While individual modalities provide a wealth of information that is commonly not present in conventional interfaces, the combination of modalities provides
still richer information. Fusion is the process that combines individual modality streams into a single one, based on time and discourse constraints [13]. Fusion unleashes the full power of multimodal communication, allowing information in each mode to complement the others. A classic example of such combination was Bolt’s pioneering system, ’Put-That-There’ [3], which combined speech recognition with gesture analysis, allowing users to point to objects and locations and issue verbal commands to have them moved. The combination of modalities during group interaction opens up possibilities for analysis never before available. It is, for instance, possible to combine facial expressions with voice analysis to determine whether a user looked (or sounded) angry or happy while (or just before) issuing some utterance. Fusion therefore has the potential for adding context to the interaction that would otherwise go unnoticed. If fusion combines different modes, fission does exactly the opposite. Given a message that needs to be conveyed, it is possible and desirable to pause and consider the most effective way of conveying it: through some conventional user interface mechanism, through a voice message, or through an animated character that emotes. In other words, multimodal interfaces provide an opportunity for alternative renderings of the same information, so that it imparts the importance and content appropriately, taking into account social and cultural rules. Fusion and fission mechanisms have the potential to liberate users from having to adapt to system-mandated input and output mechanisms. Users can be free to choose the modality that best fits their styles, both when producing information and when being presented with information produced by a system. There is therefore a potential for adaptation to user needs and styles.
To fully realize user adaptation, besides having some form of user modeling, a system would have to support translations between modes, which may not be trivial. Communication between a visually oriented user (say, a deaf person) and users who prefer to speak would require translation between a spoken language and a visual, gesture-based language, a task that is far from trivial. Perceptual group applications nonetheless have the potential to offer an integration framework for such technology once it becomes available, thus making possible the development of universally accessible systems.

Figure 2 depicts the information flow in Neem's perceptual interface. Participants interact in a distributed collaboration environment, and their actions are sensed and interpreted; multiple interpreted streams are combined during fusion. Reasoning analyzes the events in context, i.e., it takes past interaction into consideration. A response is generated (which may include doing nothing), and fission determines the most effective way to react, given the available modes and taking user characteristics into consideration. This potentially complex sequence of multiple-modality actions is finally rendered at one or more participants' stations (reaction).
[Fig. 2 diagram: workstations in a distributed collaboration environment feed a sensing and interpretation stage; interpreted streams are combined by fusion, reasoning produces a response, and fission maps the response to a reaction rendered back at the workstations.]
Fig. 2. Perceptual information processing.
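The cycle of Fig. 2 can be caricatured as a chain of stages. This is a hypothetical sketch: each stage is a plain function here, whereas in Neem the stages are distributed components, and all names and payloads are invented:

```python
def sense(raw):
    """Capture a participant action from some channel."""
    return {"user": raw["user"], "channel": raw["channel"], "data": raw["data"]}

def interpret(event):
    """Per-modality interpretation (trivialized to normalization)."""
    return {**event, "meaning": event["data"].lower()}

def fuse(events):
    """Combine interpreted streams (simplified to a merge)."""
    return {"users": [e["user"] for e in events],
            "meanings": [e["meaning"] for e in events]}

def reason(context, fused):
    """Analyze in the context of past interaction."""
    context.append(fused)  # remember for later turns
    return {"respond": "bored" in fused["meanings"]}

def fission(response):
    """Pick an output modality; None means do nothing."""
    return ("speech", "Shall we take a break?") if response["respond"] else None

context = []
raw_events = [
    {"user": "p1", "channel": "face", "data": "BORED"},
    {"user": "p2", "channel": "voice", "data": "flat"},
]
reaction = fission(reason(context, fuse([interpret(sense(r)) for r in raw_events])))
print(reaction)
```

The point is the shape of the pipeline, not the logic of any one stage: each stage consumes the previous stage's messages and may choose to stay silent.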
4 Architecture

We now turn our attention to the platform's architecture. Briefly, the platform is built around a distributed message-brokering infrastructure that binds system components together. We first present a conceptual view of the architecture, then proceed to a more detailed view of the framework, and finally discuss the implementation of a prototype.

4.1 Conceptual Architecture
The Neem platform is an infrastructure that binds together components obeying a uniform abstract representation. A message-brokering infrastructure handles communication among components (Figure 3). Flexibility and extensibility in Neem result from its foundation on a core architectural coordination model, in which decoupled components interact indirectly through message exchanges. A meta-level mechanism mediates access to a Linda-like tuple space [4]. This functionality, which we call mediation, allows explicit control over component cooperation through message-rewrite rules. In this paper we describe this functionality only briefly and refer the reader to [1, 2] for details.

Neem components are black boxes that generate and/or service events. Events in the system are reified as messages, so Neem components can be seen as message-enabled objects. A component's signature consists of the messages it generates and the messages it services. Neem messages are frames (sets of pairs). Each message type that flows through the system can be serviced by zero or more components, which are not aware of each other's existence. Adding new functionality thus consists basically of adding new components that service existing messages in new ways (Figure 4). A group mood evaluation component, for instance, would intercept messages that have to do with users' emotional state (facial expressions, prosodic voice features, etc.). At the same time, these same messages can be processed in a different context by a fusion mechanism that integrates them into the larger context of the interaction.

[Fig. 3 diagram: Neem Interface Components (NICs) and Neem Augmentation Components (NACs) attached to a common coordination control layer.]

Fig. 3. Neem conceptual architecture. Components above the dotted line are application-specific, while those below that line are generic and reusable.

Components are uniformly used both for the development of core platform functionality and for application layers. The only difference from a developer's point of view is the potential for reuse of components at each of these levels: while platform components are expected to be of general use, application-specific components are tied to a specific solution and therefore have less chance of reuse. In the following paragraphs we describe mostly generic functionality, which therefore corresponds to reusable platform elements rather than application-level ones.

Components can be further distinguished as interface or augmentation components. Even though conceptually similar (both are message-enabled component types), Neem Interface Components (NICs) are characterized by their attachment to one or more interface devices, which makes them suitable for collecting and relaying interface events generated by each participant in the form of standard messages. A Neem Augmentation Component (NAC), on the other hand, does not have this constraint and is purely a message-processing device.

[Fig. 4 diagram: NICs and NACs exchanging messages (M) through the coordination control layer.]

Fig. 4. Extensibility by composition. Components are unaware of the recipients of the messages they generate (a is unaware that b is a recipient of message M). Adding new functionality consists of adding new components that process existing messages in new ways: component c can be added at any time to provide new services for events represented by message M. In this picture, arrows pointing down represent messages that are generated, arrows pointing up represent messages serviced by a component, and bi-directional arcs represent both input and output.

NICs provide means for integrating multimedia devices, such as conventional monitors, keyboards, mice, consoles, audio, and video. Other, less conventional devices (e.g., Virtual Reality (VR) goggles or haptic devices) can also be integrated through NICs. All that is required to integrate a new device is a set of NICs that interface with the device, extract events commanded by users, and modify its state (for devices with output capabilities) according to commands received as messages. A NIC may, for instance, attach to an audio source (e.g., a microphone) and do speech-to-text conversion or extraction of prosodic features, or attach to a video source and do gesture or facial expression extraction. NICs also react to messages they receive, causing changes to the associated state of the interface, for instance rendering textual messages, graphics, or full animations, including gesture and/or voice, at users' stations.

A single NIC can attach to and service multiple devices, as is typical, for instance, in conventional GUI-based settings, where a single component attaches to a video monitor, keyboard, and mouse. Conversely, one device can be tapped by multiple NICs. Information from a video source can, for example, be extracted by a set of NICs, each specializing in one modality, e.g., facial expressions or gestures.

Wizard of Oz functionality is supported straightforwardly by NICs that offer an interface through which a human participant can activate the generation of messages that cause other components to react. One can, for instance, send messages to components that control animated characters, making them move, speak, emote, and so on. Similarly, any other component can be made to react by issuing appropriate messages from a wizard interface. While Wizard of Oz experiments are common in natural language processing systems, here we extend this use to explore issues in group interaction.

NICs can be remotely launched on and removed from participants' stations dynamically. The set of NICs active at any station at any time can be controlled by the participants themselves, by a Wizard, or by back-end functionality (i.e., some NAC). Dynamic activation and deactivation of distributed components add a time dimension to the interface, potentially allowing for the best possible use of screen real estate by keeping active just the elements that are relevant at each stage of an interaction and replacing them as needed.

Neem Augmentation Components (NACs) provide mostly back-end functionality, i.e., they are mostly responsible for processing the multiple modality streams, e.g., parsing natural language streams, fusion, fission of different streams, and so on. NACs are also responsible for providing support for reasoning about the perceived context of an ongoing interaction and for generating appropriate responses. The responses themselves depend on the specific application built on top of the platform.

NACs typically collaborate on refining messages. Some NACs receive and process messages that represent participants' actions directly. These NACs typically apply an initial transformation that is further refined by other NACs, obeying the cycle depicted in Figure 2. At the end of the cycle, one or more responses might have been generated. Responses are implemented as messages that take effect as NICs react, causing changes to one or more participants' interfaces.

NICs and NACs encapsulate potentially complex functionality, such as natural language parsing, dialog management, fusion and fission engines, complex reasoning modules, and so on. The apparent simplicity of a Neem component reveals just its interface to the infrastructure. Functionality at this level leverages existing technology, as will be presented in Section 5.
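The composition mechanism sketched in Fig. 4, decoupled components servicing message types without knowing each other, amounts to a publish/subscribe broker. The following is a minimal hypothetical sketch (all component names and message types are invented, and the real mediation layer additionally supports message-rewrite rules, which are omitted here):

```python
from collections import defaultdict

class Broker:
    """Minimal stand-in for Neem's message-brokering infrastructure:
    components register for message types and never see each other."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, msg_type, handler):
        self.handlers[msg_type].append(handler)

    def publish(self, msg_type, frame):
        # Zero or more components may service each message type.
        for handler in self.handlers[msg_type]:
            handler(frame)

broker = Broker()
log = []

# Two NAC-like components service the same message type independently:
# one tracks mood, one feeds the fusion mechanism.
broker.subscribe("facial-expression", lambda f: log.append(("mood", f["emotion"])))
broker.subscribe("facial-expression", lambda f: log.append(("fusion", f["user"])))

# A NIC-like component relays an interface event as a standard message.
# Adding functionality = adding another subscriber; existing ones are untouched.
broker.publish("facial-expression", {"user": "p1", "emotion": "happy"})
print(log)
```

The extensibility claim of Fig. 4 falls out of the data structure: a new component is one more entry in a handler list, invisible to every component already present.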
5 Prototype

A prototype implementing the architecture just described has been developed. Development leverages existing, field-tested technology based on open standards as much as possible (Figure 5). The messaging infrastructure is implemented as two distinct environments, a collaboration environment and a multi-agent environment, connected through a coupler component. The distributed collaboration environment supports participants' interaction through NICs, and the multi-agent environment supports back-end augmentation functionality, such as multimodal processing and reasoning, which is typically provided by NACs.

Distributed collaboration environment. This environment provides message delivery to groups of distributed participants (or rather to the NICs through which they interact), either through broadcasts or through selective delivery. It is implemented by DC-MeetingServer and also embeds support for session management and multimedia processing.
[Fig. 5 diagram: at each station, NICs are built on an ActiveX wrapper, COM interface, and H.323 stack and connect to DC-MeetingServer (messaging, session management, multimedia processing); NACs connect through the DARPA API to the DARPA Communicator hub; a coupler and coordination mediator bind the two environments.]
Fig. 5. The Neem prototype. The dotted line indicates a (conceptual) division between platform components (at the bottom) and application layer ones (top components).
DC-MeetingServer is a commercial H.323 Multipoint Conference Unit (MCU) produced by Data Connection Limited. H.323 is a family of multimedia conferencing protocols published by the International Telecommunication Union; a variety of server and client software based on these protocols is readily available on many platforms.

Multi-agent environment. This environment is organized in a hub-and-spoke configuration. The mediator (hub) controls the flow of information among the other components (spokes) and keeps a state that can dynamically influence this flow. The spokes can trade information among themselves through the hub, and spokes and hub can either reside on the same machine or be distributed. This is the environment that supports NACs. It embeds the mediator that handles the coordination-model functionality and is implemented by the DARPA Communicator architectural platform, based on MIT's Galaxy architecture [16]. DARPA Communicator's coordination is based on a hub script that specifies which servers (components) should be activated, according to matching rules.

There are important semantic differences between the coordination model described in this paper and the model implemented by the DARPA Communicator. The hub is based on a fixed (and predefined) set of servers that are activated as messages posted to the hub match certain patterns. Our model requires that certain messages be delivered to multiple active components, and furthermore,
that these active components may vary dynamically throughout an interaction. To compensate for these differences, we introduced functionality in the coupler that provides for dynamic multicasting and selective delivery.

Coupler. The coupler binds the two distinct environments together. It translates between message formats and is responsible for 1) relaying collaboration events to the multi-agent environment for analysis, and 2) propagating messages originated at the multi-agent environment among those components whose signatures match the messages. The latter functionality complements the hub's by providing the (conceptually equivalent) tuple-space message distribution mechanism. For efficiency reasons, this mechanism is based on message push rather than on a database polled by components, and it exploits the programming style used in applications, which makes it easy to map messages to components. The result is conceptually equivalent to the described tuple-space-based mechanism, even if perhaps less flexible.

About ten thousand lines of code (mainly C/C++) implement the connection and translation between environments, as well as a highly abstracted API that is used to develop application-layer components. The operational environment involves a variety of operating systems: Linux (running the DARPA Communicator), Windows 2000 (running DC-MeetingServer), and Windows XP (on the workstations).

Multimodal support in this initial phase consists of console I/O (monitor, keyboard, mouse) as well as natural language through typed and spoken messages. Natural language text output and animation, including voice production, can be used as output modalities, besides the activation of conventional widgets. Natural language processing capabilities running on the back end are provided by language processing modules produced by the University of Colorado's Center for Spoken Language Research (CSLR) under the CU Communicator project [27].
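The coupler's push-based, signature-matched delivery to a dynamically varying set of components might be sketched as follows (a hypothetical illustration in Python, whereas the actual coupler is C/C++; names are invented):

```python
class Coupler:
    """Sketch of push-based selective delivery: messages are multicast to
    whichever components currently declare the message type in their
    signature, and components may attach and detach at run time."""
    def __init__(self):
        self.components = {}   # name -> (serviced message types, inbox)

    def attach(self, name, serviced_types):
        self.components[name] = (set(serviced_types), [])

    def detach(self, name):
        self.components.pop(name, None)

    def push(self, msg_type, frame):
        """Deliver to every currently attached component whose signature
        includes msg_type; return the names of the recipients."""
        delivered = []
        for name, (types, inbox) in self.components.items():
            if msg_type in types:
                inbox.append((msg_type, frame))
                delivered.append(name)
        return delivered

coupler = Coupler()
coupler.attach("mood-nac", ["mood-click"])
coupler.attach("fusion-nac", ["mood-click", "utterance"])
first = coupler.push("mood-click", {"user": "p2", "mood": "bored"})
coupler.detach("mood-nac")   # the active set varies during an interaction
second = coupler.push("mood-click", {"user": "p3", "mood": "bored"})
print(first, second)
```

Contrast with a polled tuple space: the coupler decides the recipients at send time, which is the efficiency trade-off described above.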
Currently, conventional interface components are developed in Visual Basic. Speech-to-text is built on top of SAPI (the Speech API), with which a variety of speech-to-text engines are compliant; we currently use IBM's ViaVoice 9.0 engine. Animation is currently built using Haptek's VirtualFriends. Whiteboard, file transfer, application sharing, and audio and video communication are provided directly by the H.323 infrastructure.

5.1 Proof-of-Concept Application
A proof-of-concept application layer has been developed to validate the platform. This application includes the following interface components (NICs): a chat tool for textual messages, a meeting agenda that registers topics and keeps track of time, a speak queue that handles requests to talk, and a mood tool through which participants can anonymously register their emotions (bored, confused, etc.).

A simple NAC illustrates context-aware reaction. This NAC monitors the clicks on the NIC instances that are active at the different participants' workstations, through which participants can express their 'mood' with respect to
the ongoing interaction, by clicking on, e.g., 'bored', 'confused', or 'take-a-break' buttons. Depending on what the majority of participants has expressed over a period of time, a message is produced suggesting some action, for instance taking a break or switching topics. If a single participant repeatedly clicks on a button but their mood does not correspond to what the others are expressing, a message is sent privately, informing this participant of the different view expressed by the other participants. These messages are delivered through two virtual participants (Kwebena and Kwaku), whose animated characters are displayed on participants' stations.

A Wizard of Oz interface to the animated characters allows them to be controlled remotely, basically by having them say strings typed through the Wizard interface. These messages can be directed to the whole group, to sub-groups, or to individual participants.

Experience developing these components shows that the platform does indeed support the expected rapid application development cycle and allows for consistency of the shared interface elements, dynamic context-dependent system reaction, and multimodal support, in tune with the goals of the project.
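The mood NAC's decision logic, majority suggestion plus private notification of outliers, might be sketched as follows (an invented illustration of the described behavior, not the deployed component; the threshold and wording are assumptions):

```python
from collections import Counter

def mood_response(clicks, quorum=0.5):
    """clicks maps participant -> most recent mood. Suggest a group
    action when a majority share a mood; privately notify outliers."""
    tally = Counter(clicks.values())
    mood, count = tally.most_common(1)[0]
    responses = []
    if count / len(clicks) > quorum:
        # Hypothetical mapping from moods to suggested actions.
        suggestion = {"bored": "switch topics", "take-a-break": "take a break"}
        responses.append(("group", f"Many participants feel {mood}: perhaps "
                                   f"{suggestion.get(mood, 'adjust the pace')}."))
        for who, m in clicks.items():
            if m != mood:  # outlier gets a private message
                responses.append((who, f"Most others currently feel {mood}."))
    return responses

print(mood_response({"p1": "bored", "p2": "bored", "p3": "confused"}))
```

In the platform, each returned pair would become a message rendered by the animated-character NICs, addressed either to the whole group or to a single participant.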
6 Summary and Future Work

The Neem Platform is a research test bed for the University of Colorado's Project Neem, which is concerned with the development of socially and culturally aware group collaboration systems. The use of perceptual interfaces is a cornerstone of this project. One of the research hypotheses of the project is that social mediation systems can benefit from the low impedance provided by perceptual interfaces, which blur to some extent the distinction between human and system participation in an interaction.

The Neem Platform is a generic framework for the development of collaborative applications. It supports rapid development of augmented synchronous distributed group applications based on perceptual interfaces, a technology that aims at superseding conventional GUI-based interfaces by allowing interactions based on how humans communicate among themselves. The platform is an extensible, open-ended, application-neutral infrastructure that offers the communication services necessary to integrate new functionality on two levels: 1) integration of new platform functionality, such as new devices and modalities, and 2) development of specific applications. The variety of needs at the application level and the requirement for cultural adaptation are addressed by a clear separation of the generic functionality provided by the platform from the specific functionality that has to be developed for each different kind of application. The platform thus offers a mechanism for reuse of generic functionality and supports rapid application development by encapsulating communication details.

A prototype of the platform has been implemented. Actual development leverages as much as possible existing, field-tested technology, such as H.323
MCUs, the DARPA Communicator architecture, and commercial speech and animation engines. Future work will enhance the platform's capabilities by expanding its multimodal functionality to include facial analysis and gestures, including American Sign Language capabilities, in a robust way. Different applications are being developed that deal with the challenging aspects of building culturally and socially adequate tools and virtual participants. Planned applications include business meetings and distance education.
References

1. P. Barthelmess and C.A. Ellis. Aspect-oriented composition in extensible collaborative applications. In Proceedings of the 2002 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA'02), Las Vegas, June 2002. Accepted for publication.
2. P. Barthelmess and C.A. Ellis. The Neem platform: An evolvable framework for perceptual collaborative applications. Technical report, University of Colorado at Boulder, Department of Computer Science, 2002.
3. R.A. Bolt. "Put-that-there": Voice and gesture at the graphics interface. In SIGGRAPH '80 Proceedings, volume 14, 1980.
4. N. Carriero and D. Gelernter. Linda in context. Communications of the ACM, 32(4), 1989.
5. P.R. Cohen, M. Johnston, D. McGee, I. Smith, S. Oviatt, J. Pittman, L. Chen, and J. Clow. QuickSet: Multimodal interaction for simulation set-up and control. In Proceedings of the Fifth Applied Natural Language Processing Meeting, 1997.
6. Paul Dourish. Consistency guarantees: Exploiting application semantics for consistency management in a collaboration toolkit. In Proceedings of the ACM Conference on Computer Supported Cooperative Work, 1996.
7. Paul Dourish. Using metalevel techniques in a flexible toolkit for CSCW applications. ACM Transactions on Computer-Human Interaction, 5(2), 1998.
8. W. Keith Edwards. Coordination Infrastructure in Collaborative Systems. PhD thesis, Georgia Institute of Technology, College of Computing, 1995.
9. C.A. Ellis. Neem project: An agent-based meeting augmentation system. Technical report, University of Colorado at Boulder, Department of Computer Science, 2002.
10. C.A. Ellis, P. Barthelmess, B. Quan, and J. Wainer. Neem: An agent-based meeting augmentation system. Technical report, University of Colorado at Boulder, Department of Computer Science, 2001.
11. Katherine Isbister, Cliff Nass, Hideyuki Nakanishi, and Toru Ishida. Helper agent: An assistant for human-human interaction in a virtual meeting space. In Proceedings of the CHI 2000 Conference, 2000.
12. Tony Jebara, Yuri Ivanov, Ali Rahimi, and Alex Pentland. Tracking conversational context for machine mediation of human discourse. In AAAI Fall 2000 Symposium on Socially Intelligent Agents: The Human in the Loop, 2000.
13. Michael Johnston, Philip R. Cohen, David McGee, Sharon L. Oviatt, James A. Pittman, and Ira Smith. Unification-based multimodal integration. In Proceedings of the Thirty-Fifth Annual Meeting of the Association for Computational Linguistics and Eighth Conference of the European Chapter of the Association for Computational Linguistics, 1997.
14. Gregor Kiczales. Beyond the black box: Open implementation (Soapbox). IEEE Software, 13(1), 1996.
15. D. McGee and P. Cohen. Creating tangible interfaces by augmenting physical objects with multimodal language. In Proceedings of the International Conference on Intelligent User Interfaces (IUI 2001), 2001.
16. Mitre Corporation. Galaxy Communicator Documentation, 2002. Available at http://communicator.sourceforge.net/sites/MITRE/distributions/GalaxyCommunicator/docs/manual/index.html.
17. Laurence Nigay and Joëlle Coutaz. A design space for multimodal systems: Concurrent processing and data fusion. In Proceedings of INTERCHI '93, 1993.
18. Kazushi Nishimoto, Yasuyuki Sumi, and Kenji Mase. Enhancement of creative aspects of a daily conversation with a topic development agent. In Coordination Technology for Collaborative Applications: Organizations, Processes, and Agents, volume 1364 of Lecture Notes in Computer Science, 1998.
19. George A. Papadopoulos and Farhad Arbab. Coordination models and languages. In The Engineering of Large Systems, volume 46 of Advances in Computers. Academic Press, 1998.
20. Rameshsharma Ramloll. MICIS: A multimodal interface for a common information space. In ECSCW'97 Conference Supplement, 1997.
21. Byron Reeves and Clifford Nass. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. CSLI Publications, 1996.
22. Byron Reeves and Clifford Nass. Perceptual user interfaces: Perceptual bandwidth. Communications of the ACM, 43(3), 2000.
23. Mark Roseman and Saul Greenberg. Building real-time groupware with GroupKit, a groupware toolkit. ACM Transactions on Computer-Human Interaction, 3(1), 1996.
24. Vassil Roussev, Prasun Dewan, and Vibhor Jain. Composable collaboration infrastructures based on programming patterns. In Proceedings of the ACM Conference on Computer Supported Cooperative Work, 2000.
25. M. Turk and G. Robertson, editors. Communications of the ACM, volume 43, March 2000. Special Issue on Perceptual User Interfaces.
26. Alex Waibel. Interactive translation of conversational speech. IEEE Computer, 29(7), 1996.
27. W. Ward and B. Pellom. The CU Communicator system. In IEEE Workshop on Automatic Speech Recognition and Understanding, 1999.
Author Index Aalst, W.M.P. Van der 45 Andreoli, Jean-Marc 429 Anson, M. 360 Arregui, Damian 429 Bai, Shuo 180 Barthelmess, Paolo 547 Cao, Cungen 104 Cao, Jian 80, 456 Cao, Yin 360 Chau, K.W 360 Chen, Jessica 347 Chen, Pin 332 Cook, Dave 396 Creutzburg, Rainer 396 Dai, Kaiyu 499 Dietrich, Suzanne W 403 Dong, Y. 90 Dongen, B.F. Van 45 Eder, Johann 1 Ellis, Clarence Skip 547 Fan, Yushun 16 Feng, Qinagze 104 Fillies, Christian 130 Frost, Guy 396 Galis, Alex 232 Guo, Xin 232 Gruber, Wolfgang 1 Gruhn, Volker 315 Gu, Fang 104 Han, Jun 332 Han, Yanbo 168 Hemmje, Matthias 155 Holappa, Jarkko 487 Hu, Bin 525 Hu, Jinmin 80 Hu, Songling 180 Huang, Lican 370
Jiang, Jinlei 418 Jin, Y. 403 Kim, Hong-Gee 130 Kim, Sung Wang 141 Kuhlenkamp, Andreas 525 Lai, Jin 16 Lee, Jaeho 141 Li, Dong-Dong 180 Li, Mingshu 90 Li, Sujian 117 Lim, Ha Chull 141 Lin, Chuang 30, 64 Lin, Yi 381 Liu, Jinagxun 80 Liu, Yue 117 Luo, Junzhou 303 Ma, Jun 155 Marinescu, D.C. 64 Matsubara, Takashi 537 Maxim, Michael 220 Mellor, L. 396 Mota, Telma 232 Nishimoto, Kazushi 537 Olivotto, Georg E. 1 Orgun, Mehmet 510 Pacull, Francois 429 Park, Sung-Hoon 280 Qu, Yang 30, 64 Reinema, Rolf 525 Ren, F 64 Schoepe, Lothar 315 Shen, Derong 208 Shen, Jun 303 Shi, Meilin 257, 418 Shi, XiaoAn 381
Si, Jinxin 104 Sihvonen, Markus 487 Smith, Bob 130 Song, Baoyan 208 Sugiyama, Kozo 537 Sun, Hongwei 193 Sundermier, Amy 403 Tian, Wen 104 Todd, Chris 232 Urban, Susan D. 403 Venugopal, Ashish 220 Wang, Dan 208 Wang, Fengjin 168 Wang, Gouren 208 Wang, Guangxing 478 Wang, Haitao 104 Wang, Jing 193 Wang, Jiye 30 Wang, Yinling 499 Wei, Yin-Xing 456 Wiedeler, Markus 267 Wikarski, Dietmar 130 Willamowski, Jutta 429 Woo, Sung-Ho 289 Wu, Zhaohui 370
Xu, Hongxia 247 Xu, Ming 444 Xu, Xiquing 499 Xue, Liyin 510 Yang, Kun 232 Yang, Qu 64 Yang, Sung-Bong 289 Yang, Yun 303 Yang, Zhifeng 117 You, Jinyuan 466 Yu, Ge 208 Zeng, Qingtian 104 Zhang, Chunxia 104 Zhang, Jiangping 360 Zhang, Kang 510 Zhang, Li 247 Zhang, Shensheng 80, 456 Zhang, Shusheng 193 Zhang, Yan 257 Zhang, Yaying 466 Zhao, YongYi 478 Zhao, Zhuofeng, 168 Zheng, Yufei 104 Zhou, Bosheng 247 Zhou, Jingtao 193 Zhou, XingShe 381 Zhuang, Yi 444